Dataset fields: title (string, 1–827 characters); uuid (string, 36 characters); pmc_id (string, 5–8 characters); search_term (categorical, 44 values); text (string, 8–8.58M characters).
Single-cell RNA sequencing of the spleen reveals differences in
eaa4c520-d2bc-44c8-b37a-b0af23ad91e8
11954797
Cytology[mh]
Bacterial infections are among the most important infectious diseases in the poultry industry; they can slow growth, increase mortality, and cause severe economic losses. Contaminated poultry eggs and meat are important sources of zoonotic pathogens such as Salmonella, which frequently cause serious public health issues. It is therefore crucial to control bacterial infection in chickens, both to minimize economic losses in poultry production and to reduce health risks for humans. The mechanisms of the host immune response to bacterial infection have been extensively studied. Bacteria colonize their initial sites of entry, enter the bloodstream, disseminate throughout the body, and then grow and multiply to cause serious infection. The spleen is an important immune and blood-filtering organ that is crucial in preventing bacteria from spreading through the organism and in regulating local and systemic immunity. The spleen receives a rich blood supply, with systemic blood flowing through it almost every minute, and its anatomical structure is well suited to filtering blood-borne antigens and inhibiting rapid bacterial multiplication. The spleen is divided into the red pulp and the white pulp. The red pulp mainly contains macrophages with active phagocytic function, which eliminate most pathogenic bacteria from the blood and initiate innate and adaptive immune responses against pathogens. The white pulp mainly contains T and B cells, which mount antigen-specific immune responses that protect the body from blood-borne bacteria. In addition, the spleen contains numerous immune cells and regulates the inflammatory response caused by bacterial infection. Identifying key cells and their dynamic changes in inflamed tissues will help clarify the mechanisms of resistance to bacterial infection and allow further exploration of new therapeutic strategies for bacterial infectious disease. Recent advances in single-cell RNA sequencing (scRNA-seq), which decomposes complex tissues into individual cells, have revolutionized our view of the cellular composition of immune organs in disease studies. The application of single-cell transcriptomics and flow cytometry to immune tissue has been a critical step in identifying immune cell populations associated with bacterial infectious diseases. The interactions between intracellular bacteria and immune cells generate multiple cellular phenotypes that may determine the outcome of bacterial infection. In this study, we compared the immune cell response to Salmonella between two broiler breeds, Beijing-You chickens and Cobb broilers, at the single-cell level. Changes in the proportions of cell subpopulations after infection, and the cell types that contributed most to the inflammatory response, revealed different antibacterial mechanisms in the two breeds. Animals and experimental diets 150 Cobb broiler chicks were purchased from the Poultry Breeding Co., Ltd. (Beijing, China), and 150 Beijing-You chicks were acquired from the Changping Experimental Farm of the Institute of Animal Sciences (Beijing, China). All chickens were maintained in sterilized, ventilated isolation cages (IPQ-type 3 negative-pressure isolators) at the experimental center of China Agricultural University (Beijing, China).
Salmonella infection At 3 days old, 93 Cobb chicks and 93 Beijing-You chicks were randomly chosen and challenged orally with 1 mL PBS containing 5.12 × 10^10 CFU Salmonella typhimurium (ST); the remaining chicks were given 1 mL PBS orally. Survival was recorded after infection. At 7 days old, we randomly selected 20 Cobb chicks and 20 Beijing-You chicks from the non-infected groups; 10 Cobb chicks and 10 Beijing-You chicks were challenged orally with 1 mL PBS containing 6.15 × 10^10 CFU ST, and the remaining chicks were given 1 mL PBS orally. At 1 day post-infection (dpi), we collected the liver to determine the bacterial load and collected the spleen, duodenum, and ileum to prepare paraffin sections. At 28 days old, 15 Cobb chicks and 15 Beijing-You chicks were randomly chosen and challenged orally with 1 mL PBS containing 1.83 × 10^11 CFU ST, and the remaining chicks were given 1 mL PBS orally. At 3 dpi, we randomly selected 3 chickens from each experimental group and collected splenic tissue for later histological analysis. We refer to these chickens hereafter as BYS (Beijing-You with ST infection), BYC (Beijing-You control), CBS (Cobb with ST infection), and CBC (Cobb control). Preparation of pathological tissue sections The collected tissues were fixed with 4% paraformaldehyde, then trimmed, dehydrated, embedded, sectioned, stained with Hematoxylin and Eosin (HE) or Periodic acid-Schiff (PAS), and sealed. Morphological and pathological observations were then performed under a microscope. Liver bacterial load counts We randomly selected 7 livers from each infection group. A 100 mg liver sample was placed in 1 mL of 0.9% physiological saline and homogenized with a tissue homogenizer (Cibo, Shanghai, China). Serial dilutions of 1:1, 1:10, 1:100, 1:1,000, and 1:10,000 in 0.9% physiological saline were used to count the liver bacterial load. We plated 100 μL of each dilution evenly on Bismuth Sulfite Agar medium (AOBOX, Beijing, China) and incubated the plates at 37 °C for 24 h. We counted the colony-forming units, and the result was converted to the number of bacteria contained in 1 g of liver tissue (a worked example of this conversion is sketched after the single-cell sample preparation below). Single-cell sample preparation Fresh spleen tissues were mechanically minced in PBS with 5% Fetal Bovine Serum (FBS) (HyClone, Logan, UT, USA), and a syringe plunger was used to push the spleen fragments through 100-μm and 40-μm cell strainers (Corning, NY, USA). The filtrate containing cells was centrifuged at 500 × g for 10 minutes, the supernatant was discarded, and the cells were washed twice with 5% FBS and then resuspended in RPMI 1640/Dulbecco's modified Eagle medium (DMEM) with 5% FBS. The cells were incubated on ice with mouse anti-chicken CD45-PE (0.1 mg/mL; SouthernBiotech, UAB, AL, USA) for 30 minutes and with 7-AAD (5 μL/10^5 cells; BIOESTABLISH, Beijing, China) for 10 minutes. CD45+ cells were collected by flow cytometry. Two samples in the BYS group yielded too few cells after fluorescence-activated cell sorting to meet the scRNA-seq criteria; the remaining 10 samples were carried forward into the subsequent analysis.
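As a worked illustration of the plate-count conversion described in the liver bacterial load section above, the short R sketch below uses hypothetical colony counts (not data from this study) together with the stated scheme of 100 mg tissue homogenized in 1 mL saline and 100 μL plated per dilution.

```r
# Hypothetical conversion of plate counts to CFU per gram of liver.
# All numbers below are illustrative placeholders, not study data.
plate_counts     <- c(286, 31, 4)        # colonies counted on three countable plates
dilution_factors <- c(100, 1000, 10000)  # dilution of the original homogenate for each plate
plated_volume_ml <- 0.1                  # 100 uL plated per dilution
tissue_mass_g    <- 0.1                  # 100 mg liver homogenized in 1 mL saline

# CFU per mL of homogenate estimated from each plate, then averaged
cfu_per_ml <- plate_counts / plated_volume_ml * dilution_factors

# The 1 mL homogenate holds all bacteria recovered from 0.1 g of tissue
cfu_per_g <- mean(cfu_per_ml) / tissue_mass_g
cfu_per_g
```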
Single-cell RNA sequencing and read processing After obtaining the single-cell suspensions, we assessed the cell viability of all samples; only samples with viability >85%, good single-cell dispersion, and low debris content were used. Subsequently, single-cell gel bead-in-emulsions (GEMs) were prepared in microfluidic channels, with the cell concentration controlled within 700-1200 cells/μL. Within the GEMs, cells were lysed, and the released mRNA was captured by poly-dT primers and reverse transcribed to generate full-length cDNA, which was then amplified and used to construct the libraries. The StepOnePlus Real-Time PCR System was used for qPCR to accurately quantify the effective concentration of each library (required to be no less than 10 nM). The qualified libraries were sequenced on the Illumina platform to obtain raw reads. According to the 10× transcriptome library structure, the data were split into Read1 and the target Read2, which contains the barcode and UMI sequence information. Read2 was then mapped to the chicken reference genome (Gallus gallus 6.0) with STAR (version 2.1.3). We generated a raw gene expression matrix for each sample with Cell Ranger (version 3.1.0); the statistics of the Cell Ranger analysis are shown in Table S1. The 10× scRNA-seq data analysis After importing all the raw count matrices, we constructed Seurat objects with CreateSeuratObject. We filtered out low-quality cells (nFeature < 200, nFeature > 4000, or percent.mito > 15%). Gene expression values were log-normalized, and highly variable genes were identified. The quality control results are shown in Fig. S1A-C. The gene expression data of all samples were then integrated by canonical correlation analysis (CCA), and the gene expression values were scaled. After dimension reduction by PCA, we obtained 23 cell clusters with the clustering resolution set to 0.5 and visualized them with RunUMAP. FindAllMarkers (logfc.threshold = 0.25, p_val_adj ≤ 0.05) was used to identify marker genes in each cluster (Table S2). We annotated each cell cluster by querying the CellMarker database and by referencing known marker genes reported in previous studies. (A minimal code sketch of this workflow is given at the end of the Methods.) Gene set variation analysis and functional enrichment analysis To evaluate pathway activity in individual cells and describe the molecular signature of each cell set, we carried out gene set variation analysis (GSVA) using gene sets from the Molecular Signatures Database and Gene Ontology (GO). Trajectory analysis Trajectory analysis was performed to infer the transition states among cell subpopulations and explore the branches of cell differentiation. To construct dynamic gene expression models, we identified significant genes with differentialGeneTest and ordered the cells based on these differentially expressed genes. After running reduceDimension and orderCells, we placed the cells onto a pseudotime trajectory. Statistical analysis In this study, all animal experiments and samples were processed in parallel. All statistical analyses were carried out in R ( http://www.r-project.org ). We used the beanplot R package with default parameters to draw violin plots without displaying individual data points, because a large number of points would obscure the overall distribution. A two-tailed t test or Mann-Whitney U test was performed, with *p < 0.05, **p < 0.01 and fold change (FC) > 1.5 considered statistically significant.
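As referenced above, a minimal sketch of the Seurat workflow is given below for orientation. The thresholds (nFeature 200-4000, percent.mito < 15%, clustering resolution 0.5, logfc.threshold 0.25) are taken from the text; the sample paths, the mitochondrial gene pattern, and the number of principal components are assumptions rather than reported values.

```r
library(Seurat)

# Hypothetical per-sample Cell Ranger output directories (names and paths illustrative)
samples <- list(BYS1 = "BYS1/filtered_feature_bc_matrix",
                BYC1 = "BYC1/filtered_feature_bc_matrix")  # ... remaining samples

# Per-sample quality control with the thresholds stated in the Methods
objs <- lapply(names(samples), function(s) {
  counts <- Read10X(samples[[s]])
  obj <- CreateSeuratObject(counts, project = s, min.features = 200)
  obj[["percent.mito"]] <- PercentageFeatureSet(obj, pattern = "^MT-")  # adjust pattern to chicken mitochondrial gene names
  subset(obj, subset = nFeature_RNA > 200 & nFeature_RNA < 4000 & percent.mito < 15)
})

# Log-normalization and highly variable genes per sample
objs <- lapply(objs, function(x) FindVariableFeatures(NormalizeData(x)))

# CCA-based integration across samples
anchors <- FindIntegrationAnchors(object.list = objs)
spleen  <- IntegrateData(anchorset = anchors)

# Scaling, PCA, clustering at resolution 0.5, and UMAP visualization
spleen <- ScaleData(spleen)
spleen <- RunPCA(spleen)
spleen <- FindNeighbors(spleen, dims = 1:30)   # number of PCs is an assumption
spleen <- FindClusters(spleen, resolution = 0.5)
spleen <- RunUMAP(spleen, dims = 1:30)

# Cluster marker genes, filtered by adjusted p-value as in the text
markers <- FindAllMarkers(spleen, logfc.threshold = 0.25, only.pos = TRUE)
markers <- subset(markers, p_val_adj <= 0.05)
```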
Beijing-You chickens showed a more intense immune response than Cobb broilers during bacterial infection. To compare the bacterial resistance of Beijing-You chicks and Cobb broilers, we recorded their survival curves after infection with ST at 3 days of age (n = 93 chickens in BYS; n = 93 chickens in CBS). CBS chickens showed a resistance advantage during bacterial infection, with a higher survival rate (97.85%) than BYS chickens (88.17%) (Fig. 1A; Table S3).
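For readers who want to reproduce this kind of survival comparison, a hedged R sketch using the survival package is shown below. The per-bird event times are simulated to roughly match the reported outcomes (11/93 deaths in BYS, 2/93 in CBS), since the underlying individual data are not given here, and the 14-day observation window is an assumption.

```r
library(survival)

# Build a hypothetical per-bird data frame (93 birds per group, deaths at illustrative times)
make_group <- function(n, n_dead, label) {
  data.frame(group  = label,
             time   = c(sample(2:7, n_dead, replace = TRUE), rep(14, n - n_dead)),
             status = c(rep(1, n_dead), rep(0, n - n_dead)))  # 1 = died, 0 = censored (survived)
}
set.seed(1)
surv_df <- rbind(make_group(93, 11, "BYS"), make_group(93, 2, "CBS"))

# Kaplan-Meier curves and a log-rank test between the two breeds
fit <- survfit(Surv(time, status) ~ group, data = surv_df)
survdiff(Surv(time, status) ~ group, data = surv_df)
plot(fit, col = c("firebrick", "steelblue"),
     xlab = "Days post-infection", ylab = "Survival probability")
legend("bottomleft", legend = c("BYS", "CBS"), col = c("firebrick", "steelblue"), lty = 1)
```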
Previous studies have shown that Salmonella can be detected in the liver at 1 dpi and that the bacterial load begins to decrease at 4 dpi. Given this dynamic change, we evaluated the liver bacterial load of 7-day-old chickens at 1 dpi as an early indicator of resistance to bacteria in chickens. Compared with Beijing-You, Cobb exhibited a significantly lower bacterial load in the liver (B). In addition, we evaluated the effect of ST on the spleen. In the non-infected groups (BYC, CBC), the spleen structure was clear, with no notable necrosis. The white pulp was populated with a substantial number of lymphocytes, the red pulp comprised venous sinuses and reticular splenic sinuses containing reticular cells, macrophages, lymphocytes, and red blood cells, and the splenic nodule structure was distinct. Notably, CBC displayed no significant eosinophil infiltration, while BYC showed minor granulocyte infiltration in the red pulp. After Salmonella infection, BYS showed splenic sinus congestion, disappearance of the splenic corpuscles, and granulocyte infiltration in the red pulp, whereas CBS showed a significant reduction in splenic corpuscles, with red blood cells and eosinophilic fluid visible in the splenic sinus and granulocyte infiltration in the red pulp (C). BYS appeared to exhibit slightly more severe inflammatory damage in the spleen. We also observed the morphology of the ileal villi and the distribution of goblet cells (D) and measured the morphological indices of the villi (E). Compared with Beijing-You, Cobb exhibited a significantly higher number of goblet cells per unit epithelial length, greater villus height, and fewer crypts, but no significant difference in crypt depth (F-I). Spleen immune cellular composition and single-cell transcriptomic profiling of Beijing-You chickens and Cobb broilers. At 3 dpi, we collected 10 chicken spleens and generated scRNA-seq profiles for the 4 groups (n = 1 chicken from BYS, n = 3 chickens from BYC, n = 3 chickens from CBS, n = 3 chickens from CBC). At this time point, spleen inflammation was still at its peak. After quality control and filtering, we integrated all 10 samples, performed dimension reduction and unbiased clustering with Seurat, and obtained 23 initial clusters across 54,487 cells. To analyze the differences in the expression profiles of the clusters, we evaluated the marker genes and drew an expression heatmap based on the top 10 marker genes of each cluster (A). The major cell populations (B and Fig. S2A-B) were identified as T cells (CD3D and CD3E), NK cells (natural killer cells; GNLY and XCL1), B cells (CD79B), macrophages (C1QC or MARCO), dendritic cells (IRF8), plasmacytoid dendritic cells (JCHAIN), erythrocytes (HBBA), and megakaryocytes (ITGA2B), along with an actively proliferating population (TOP2A). The specific distribution of these marker genes confirmed the reliability of the cell assignments (C). The spleen is the largest secondary lymphoid organ; it contains a high proportion of lymphocytes (84.81%) and macrophages (9.40%) (Table S4) and is the center of cellular and humoral immunity in the body. Compared with Beijing-You (84.34% in BYC and 73.84% in BYS), Cobb (81.14% in CBC and 92.80% in CBS) had a higher lymphocyte proportion (D and Table S4). Interestingly, after ST infection, the proportion of lymphocytes significantly decreased and the proportion of myeloid cells significantly increased in Beijing-You (9.36% in BYC and 21.46% in BYS), whereas the proportions in Cobb showed the opposite trend (81.14% and 92.80% lymphocytes, and 11.16% and 2.93% myeloid cells, in CBC and CBS, respectively) (D and Table S4).
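The group-level composition percentages quoted above can be recomputed directly from the per-cell metadata of the integrated object; the sketch below assumes metadata columns named `group` and `cell_type`, which are hypothetical names rather than ones reported in the text.

```r
library(ggplot2)

# Assumes the Seurat object 'spleen' carries per-cell metadata columns
# 'group' (BYC/BYS/CBC/CBS) and 'cell_type' (annotated identity).
meta <- spleen@meta.data

# Percentage of each cell type within each group
composition <- prop.table(table(meta$group, meta$cell_type), margin = 1) * 100
round(composition, 2)

# Simple stacked bar chart of the per-group composition
df <- as.data.frame(composition)
colnames(df) <- c("group", "cell_type", "percent")
ggplot(df, aes(x = group, y = percent, fill = cell_type)) +
  geom_col() +
  labs(y = "Proportion of CD45+ cells (%)")
```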
The overall cellular composition reflected gross alterations in the infected chickens, implying a shift in the cellular microenvironment under infection-induced inflammation. Cytokines and TLR4 signaling pathways play a vital role in the response to bacterial infection. We therefore plotted the cellular origins of key mediators of these pathways (E). Genes related to TLR4 signaling (TLR4, MYD88, IRAK2, TRAF3) and proinflammatory factors (IL1B, IL6, IL8, CSF1, CSF2 and CCL4) derived from macrophages were mainly elevated in the BYS group, whereas the proinflammatory factors IFNG, CCL5 and CCL20, the anti-inflammatory factor IL10 and the interferon regulatory factor IRF4 were elevated in the CBS group and were mainly derived from T and NK cells. These results suggest that macrophages promote the host inflammatory response in infected Beijing-You, whereas lymphocytes (T and NK cells) contribute to producing anti-inflammatory factors in infected Cobb broilers. GSVA was used to score pathway activity in the different groups. After bacterial infection, the IL6- and IL1-mediated signaling pathways and positive regulation of myeloid leukocyte mediated immunity were enriched in Beijing-You, and immunoglobulin production and production of the anti-inflammatory interleukin IL13 were enriched in BYS (F). Tregs in CBS showed stronger immunosuppressive function than those in BYS. T and NK cells are considered the most prevalent cell types in the spleen. A total of 21,495 T cells and 12,577 NK cells were selected and reclustered for downstream analysis, yielding 14 subclusters. T cells were divided into two categories according to the surface markers CD4 and CD8, and based on function-associated markers of T and NK cells, the populations were defined as naïve T cells (Tn; CCR7), memory T cells (Tm; IL7R), regulatory T cells (Treg; ICOS, IL2RB), T helper 2 cells (Th2; GATA3) and cytotoxic T lymphocytes (CTL; GZMA, MHC class I) (A-C). To evaluate the relative abundance of T and NK cell subsets in the two broiler breeds after infection, we examined the percentage of each subpopulation. Intriguingly, the percentage of Treg-2 decreased in the BYS group, whereas in the CBS group the percentage of Treg-1 increased and that of Treg-2 decreased (D). These findings implied that the contributions of Treg-1 and Treg-2 cells differed between the BYS and CBS groups. In addition, CD4+ Treg-1 cells were mainly derived from CBS chickens (D) and, compared with Treg-2 cells, expressed more inhibitory receptors such as CTLA4 and LAG3, as well as more NFKB1 and CCL4 (B). To highlight the functional differences between Treg-1 and Treg-2 cells, we analyzed the expression of immune-related genes involved in metabolism, MHC, redox reactions, inhibitory signaling to T cells, and negative regulation of the inflammatory response. Compared with Treg-2 cells, Treg-1 cells had higher levels of metabolic and MHC genes. CTLA4 and LAG3 were highly expressed in Treg-1 cells and rarely in Treg-2 cells, and CTLA4 was significantly upregulated in Treg-2 cells of the CBS group. In addition, anti-inflammatory factors (TNIP2, NFKBIA, DUSP1) were upregulated in the infected groups, and the expression of metabolism-related and redox-related genes increased after bacterial infection (E).
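The GSVA pathway scoring referred to earlier in this section could be sketched roughly as follows; the expression matrix extraction and the two gene sets shown are illustrative assumptions (real analyses would use full MSigDB/GO gene sets), and recent GSVA releases wrap these arguments in gsvaParam() rather than passing them directly.

```r
library(Seurat)
library(GSVA)

# Expression matrix taken from the integrated object (log-normalized data slot)
expr <- as.matrix(GetAssayData(spleen, slot = "data"))

# Illustrative gene sets only; in practice these would come from MSigDB / GO
gene_sets <- list(
  IL6_IL1_signaling         = c("IL6", "IL1B", "IRAK2", "MYD88"),
  myeloid_mediated_immunity = c("CSF1", "CSF2", "MARCO"))

# Classic GSVA interface: pathway-by-cell activity scores
scores <- gsva(expr, gene_sets, method = "gsva")
scores[, 1:5]
```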
Trajectory analysis was performed to infer the transcriptional transitions within CD4+ T cells. Starting from CD4+ Tn cells, CD4+ T cells transitioned into either the CD4+ Tm/Treg-2/CD4+ CTL branch or the Treg-1/CD4+ CTL branch, with Treg-1 and Treg-2 cells occupying distinct branches (F). We then analyzed the significant genes at branch point 1 and the pathway activity of the cells at the origin and ends of the branches. The genes at the branch origin were mainly enriched in inflammatory response and regulation of immune system process; the cells at branch end 1 (Treg-2) were involved in negative regulation of lymphocyte activation and response to stimulus, whereas the cells at branch end 2 (Treg-1) were involved in negative regulation of inflammatory response and cytokine production (G). Compared to Beijing-You, more B cells transformed into effector B cells to participate in the immune response in Cobb. A total of 12,041 B cells were selected for downstream analysis. After removing contaminating cells, the B cells were classified into 9 clusters (A). According to the highly expressed genes in each subcluster, we annotated 6 subpopulations: naïve B cells, activated B cells, regulatory B cells, follicular B cells, germinal center B cells (GC-B cells) and plasma B cells (B). Cluster 0 (naïve B) highly expressed PROM1, PBRM1 and other genes related to differentiation and proliferation inhibition, and expressed low levels of CD38. The high expression of the MHC class I genes BF1 and BF2 in clusters 1 and 3 (activated B) suggested that this subpopulation is associated with antigen presentation and that these B cells were activated. Cluster 2 (Breg, regulatory B cells) highly expressed SH3BP2 and PTPN22, which are negative regulators of T-cell receptor (TCR) signaling and induce the production of IFNs. Clusters 4 and 5 (follicular B) highly expressed GPR183, a chemotactic receptor that can guide B cells toward the follicular regions. Clusters 6 and 8 (GC-B) highly expressed the GC-B cell characteristic gene BCL6. Cluster 7 (plasma B) highly expressed JCHAIN and ENO1, which stimulate immunoglobulin production, suggesting active antibody synthesis and secretion by plasma cells (B-C). Compared with Beijing-You (18.61% in BYC and 16.02% in BYS), Cobb had a higher proportion of activated B cells (39.41% in CBC and 48.80% in CBS). After infection, the proportion of GC-B cells increased (4.97% in BYC, 5.99% in BYS, 2.24% in CBC, and 21.62% in CBS), indicating that more B cells transformed into effector B cells (Table S4 and D). Plasma B cells are essential effector B cells that release large amounts of antibody as part of the immune response. We therefore conducted GSEA to compare the plasma B cells of the two chicken breeds between bacterial infection and normal conditions. Negative regulation of proinflammatory factor (IL2 and IL1B) production and negative regulation of IFNG production were enriched in the BYS and CBS groups, and the antimicrobial humoral immune response mediated by antimicrobial peptides was upregulated after bacterial infection. After infection, plasma B cells of BYS chickens were mainly involved in antigen processing and presentation of peptide antigen via MHC class II, while those of CBS chickens were mainly involved via MHC class I (E). Meanwhile, MHC class I (BF1 and BF2) and MHC class II (DMA) genes were highly expressed and actively participated in antigen presentation, and JCHAIN and the BCR-signaling-pathway-related gene SH3BP5 were highly expressed after infection. Plasma B cells also upregulated the expression of the anti-inflammatory factors NFKBIA, METRNL and DUSP1 after infection, promoting the antibacterial response (F).
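The GSEA implementation is not named in the text; one hedged possibility is to rank genes by fold change between infected and control plasma B cells with Seurat and test gene sets with the fgsea package, as sketched below. The object name, metadata columns, and gene-set file are assumptions.

```r
library(Seurat)
library(fgsea)

# 'bcells' is assumed to be the reclustered B-cell Seurat object with metadata
# columns 'cell_type' and 'group' (hypothetical names).
plasma <- subset(bcells, subset = cell_type == "Plasma B")
Idents(plasma) <- "group"

# Differential expression between infected and control Cobb plasma B cells
deg <- FindMarkers(plasma, ident.1 = "CBS", ident.2 = "CBC",
                   logfc.threshold = 0, min.pct = 0.1)

# Rank genes by log fold change ('avg_logFC' in older Seurat versions)
ranks <- sort(setNames(deg$avg_log2FC, rownames(deg)), decreasing = TRUE)

# GO Biological Process gene sets from MSigDB (file name illustrative)
go_sets <- gmtPathways("c5.go.bp.symbols.gmt")
gsea_res <- fgsea(pathways = go_sets, stats = ranks, minSize = 15, maxSize = 500)
head(gsea_res[order(gsea_res$padj), ])
```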
The differential expression of PTPRJ in Mac-IL1B may contribute to the difference in Salmonella resistance between Cobb and Beijing-You. A total of 5,123 myeloid cells were selected, and a few clusters contaminated with T cells were removed. After reclustering, we obtained 12 clusters, mainly comprising monocytes, four macrophage subsets, two conventional dendritic cell subsets, and plasmacytoid dendritic cells (A). Proinflammatory factors, including IL1B, IL8, S100A12 and S100A6, were highly expressed in monocytes and Mac-IL1B cells. Mac-C1QC cells specifically expressed C1QC, which enhances phagocytosis and proinflammatory cytokine secretion. MARCO was specifically expressed in clusters 7 and 9 (Mac-MARCO); as a scavenger receptor expressed on macrophages, MARCO binds gram-negative and gram-positive bacteria and enhances the phagocytic ability of macrophages. LGALS2, which promotes immune escape by recruiting tumor-associated macrophages and promoting their polarization toward M2, was specifically expressed in cluster 10 (Mac-LGALS2). cDCs (cDC-CCL1 and cDC-ITGA4) highly expressed IRF8, which is crucial for their survival. The marker genes of these populations were specifically distributed among the myeloid cells (B-C), and their specific high expression and distribution again confirmed our classification (C). We ordered the cells in pseudotime to infer the differentiation trajectory of myeloid cells. Beginning with the proinflammatory clusters (monocytes and Mac-IL1B), the myeloid cells bifurcated into either the anti-inflammatory macrophage cluster (Mac-C1QC, Mac-MARCO) or the cDC cluster (cDC-CCL1 and cDC-ITGA4) (D). Along the pseudotime, we detected four phases. Proinflammatory genes (S100A12, CSF1) were activated in phase 1; CSF1 can promote the transformation of monocytes into macrophages and the proliferation of inflammatory macrophages. Accordingly, the proportion of Mac-IL1B cells (M1) peaked, and the proinflammatory interleukin IL1B and the chemokine CCL4 were upregulated, in phase 2. The critical transcription factors of dendritic cell differentiation (IRF4, IRF8) were upregulated in phase 3, possibly reflecting the transition of monocytes into dendritic cells, and the anti-inflammatory genes C1QB and MARCO were upregulated in phase 4 (E). The proinflammatory cells accounted for the majority of myeloid cells in BYS, whereas the anti-inflammatory cells were the most prevalent in CBS, and the proportion of anti-inflammatory macrophages in Cobb increased significantly after bacterial infection (F). Along the pseudotime order, the expression of PTPRJ increased from monocytes, peaked in Mac-IL1B, and then decreased (G).
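The pseudotime analysis described above appears to follow the Monocle 2 interface named in the Methods (differentialGeneTest, reduceDimension, orderCells); a hedged sketch applied to the myeloid subset is shown below. The CellDataSet construction and the `cell_type` column in pData are assumptions.

```r
library(monocle)  # Monocle 2

# 'cds' is assumed to be a CellDataSet built from the myeloid-cell counts
# (e.g. via newCellDataSet) with a 'cell_type' column in pData(cds).
cds <- estimateSizeFactors(cds)
cds <- estimateDispersions(cds)

# Select ordering genes that vary across the annotated myeloid subsets
diff_test <- differentialGeneTest(cds, fullModelFormulaStr = "~cell_type")
ordering_genes <- row.names(subset(diff_test, qval < 0.01))
cds <- setOrderingFilter(cds, ordering_genes)

# DDRTree dimension reduction and pseudotime ordering
cds <- reduceDimension(cds, max_components = 2, method = "DDRTree")
cds <- orderCells(cds)

# Branching trajectory and expression of a gene of interest along pseudotime
plot_cell_trajectory(cds, color_by = "cell_type")
plot_genes_in_pseudotime(cds["PTPRJ", ], color_by = "cell_type")
```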
Consistent with previous studies, PTPRJ shows its highest expression in the mouse spleen, is abundant in macrophages, and is regulated by inflammatory stimuli. PTPRJ, a negative regulator of CEACAM3-mediated phagocyte function, has been reported to regulate and restrain inflammatory responses, and depletion of PTPRJ results in a stronger phagocytic phenotype. In the non-infected groups, PTPRJ expression was lower in CBC (H), suggesting a possible factor underlying the resistance differences between Beijing-You and Cobb.
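To examine this kind of single-gene difference between the non-infected groups, one simple, hedged approach with Seurat is shown below; the `myeloid` object and its `cell_type` and `group` metadata columns are assumed names rather than ones given in the text.

```r
library(Seurat)

# Compare PTPRJ between the two non-infected groups within the Mac-IL1B subset
mac_il1b <- subset(myeloid, subset = cell_type == "Mac-IL1B" & group %in% c("BYC", "CBC"))

VlnPlot(mac_il1b, features = "PTPRJ", group.by = "group")

Idents(mac_il1b) <- "group"
FindMarkers(mac_il1b, ident.1 = "CBC", ident.2 = "BYC",
            features = "PTPRJ", logfc.threshold = 0, min.pct = 0)
```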
The spleen is the largest lymphoid organ in the host; it provides an effective defense against bacteria and viruses and improves host resistance, yet it is not entirely clear how splenic immune cells interact with invading pathogens. In this study, Cobb exhibited more prominent Salmonella resistance, a lower bacterial load, and milder spleen damage than Beijing-You. To explore the possible reasons for these resistance differences after Salmonella infection, we present the first single-cell transcriptome comparison of the inflammatory responses to bacterial infection in the chicken spleen between Beijing-You and Cobb and summarize the main characteristics of the breed differences after Salmonella infection (Table S5). The global capture of CD45+ cells also allowed us to examine the molecular behavior of multiple immune cell populations directly. In each dataset, we identified a series of characteristic genes of chicken spleen immune cell subpopulations, many of which we further refined with successive rounds of multiparametric segregation; this approach was used to identify the most functionally important immune cell subpopulations in splenic tissue. Compared with the average gene expression profile of whole tissue obtained by conventional bulk RNA-seq, single-cell transcriptomics provides more detailed and accurate information on the gene expression profile of each cell. Lymphocytes and mononuclear macrophages are two important leukocyte populations in the spleen, playing pivotal roles in adaptive and innate immunity, respectively. Lymphocytes participate in resisting bacteria through T regulatory (Treg) cell-modulated inflammatory responses, while mononuclear macrophages serve as the primary immune barrier, providing robust protection against bacterial pathogens during infection. Lymphocytes are the most important type of immune cell in the spleen. After bacterial infection, CD4+ effector T cells play a key role in regulating inflammation and maintaining immune homeostasis. We identified Th2 cells with high expression of GATA3 and Tregs with high anti-inflammatory activity. GATA3, a transcription factor that drives the differentiation of Th2 cells, inhibits the production of IFNG, promotes the production of IL-5 and IL-13, and promotes the release of inflammatory cytokines by Th2 cells. Tregs, an important subset of immune cells in birds, play a crucial role in maintaining immune balance and self-tolerance by regulating effector T cells (such as Th2) to inhibit the overexpression of inflammatory factors and protect tissues against damage. Previous studies demonstrated that chicken CD4+CD25+ cells weaken their suppressive properties immediately after inflammation and acquire super-suppressive properties during the later stages of persistent infection.
In the chicken cecal tonsil, Tregs increased steadily and their suppressive properties were enhanced throughout the 4-14 d course of infection. In our study, the percentage of Tregs significantly decreased in Beijing-You after infection, while the percentage of Tregs increased in Cobb, and the immunosuppressive effect of Tregs in Cobb was stronger than that in Beijing-You. After Salmonella infection, the role of Tregs in controlling the inflammatory response thus affects the differences in bacterial resistance between Beijing-You and Cobb. These results suggest that Beijing-You may maintain an intense inflammatory response because of excessive bacterial invasion (its liver bacterial load was higher than that of Cobb), which suppresses the suppressive properties of Tregs; conversely, the increased Tregs and their stronger suppressive effect on CD4+ cells in Cobb may enhance its bacterial resistance relative to Beijing-You. As our understanding of the bacteria-induced inflammatory response has deepened, great progress has been made in revealing antibacterial mechanisms. Currently, most immune function studies focus on monocytes, which have a strong ability to engulf bacteria. During the early stages of infection, bacteria induce macrophage activation and an inflammatory response, with production of proinflammatory cytokines and chemokines, cell migration, and elimination of damage. In the post-inflammatory stage, macrophages exhibit an anti-inflammatory phenotype and promote the resolution of inflammation, inhibiting long-term inflammatory responses that could lead to tissue damage. Our study showed a significant difference in the proportion of monocytes between the two broiler breeds after bacterial infection: the proinflammatory cells represented the main population in BYS, while the anti-inflammatory cells were predominant in CBS. Along the pseudotime, macrophages transitioned from a proinflammatory to an anti-inflammatory phenotype, consistent with previous reports of complex macrophage phenotype dynamics during Salmonella infection, and implying that Cobb went through a quicker macrophage transition. In addition, we found that PTPRJ, which regulates immune cell function, was mainly and highly expressed in Mac-IL1B, and that in the non-infected groups Cobb showed significantly lower PTPRJ expression than Beijing-You. PTPRJ, a receptor protein tyrosine phosphatase, is highly expressed in macrophages. It normally localizes to the plasma membrane of macrophages, is further elevated by treatment with LPS and other Toll-like receptor ligands, and is regulated by proinflammatory stimuli. PTPRJ dephosphorylates the negative regulatory tyrosine of Src family kinases, indicating a positive role in monocyte activation. Loss of PTPRJ has recently been reported to strongly promote Akt signaling, suggesting that PTPRJ regulates M1/M2 polarization in macrophages. Recent studies have also demonstrated that PTPRJ decreases CEACAM3 phosphorylation, possibly by acting directly on CEACAM3, thereby negatively regulating CEACAM3-mediated phagocytosis and limiting the potential inflammatory response. Previous GWAS studies have shown that PTPRJ is associated with immune defense against Salmonella in chickens.
Individuals with a low heterophil/lymphocyte index have lower expression of PTPRJ, which weakens its inhibitory effect on heterophils and confers a strong anti-Salmonella ability. Although the precise mechanism by which PTPRJ participates in the regulation of inflammation remains to be elucidated, recent research suggests that PTPRJ may play an important role in resistance to Salmonella in chickens. After Salmonella infection, the low PTPRJ expression in Cobb may promote Akt signaling and quickly drive macrophages to polarize toward the M1 phenotype, further accelerating the expression of proinflammatory cytokines and chemokines, thereby rapidly activating the immune system and allowing a quick macrophage transition. Meanwhile, low PTPRJ expression may boost CEACAM3-mediated phagocytosis in macrophages, helping to clear pathogens. This might explain why Cobb has stronger resistance to Salmonella. Although we made every effort to ensure the accuracy and reproducibility of the experiments during design and execution, strictly controlling the feeding conditions, ensuring a similar rearing environment and physiological state, and carefully interpreting the data, the sample size remains a limitation; additional samples would improve the stability and reproducibility of the results and strengthen confidence in the findings. In this study, we observed that Cobb chickens are more resistant to Salmonella typhimurium. We found that Tregs in Beijing-You significantly decreased after infection, while Tregs in Cobb increased and exhibited greater immunosuppressive properties. In addition, macrophages in Cobb transitioned more quickly from the proinflammatory phenotype (Mac-IL1B) to the anti-inflammatory phenotype (Mac-C1QC/Mac-MARCO). Together, the behavior of these immune cells may contribute to excessive inflammation and greater tissue damage in Beijing-You, and hence to its increased mortality. The differential expression of PTPRJ in Mac-IL1B may also underlie the differences in Salmonella resistance between Cobb and Beijing-You. In summary, this study provides the first single-cell transcriptomic analysis of the chicken spleen, comparing the inflammatory responses of Beijing-You and Cobb after bacterial infection and exploring the reasons for the differences in resistance between the two breeds. In recent years, single-cell technology has been widely applied in research on human and animal diseases. Improving the disease resistance of livestock and poultry benefits animal welfare, reduces the economic costs caused by disease prevention, treatment, and animal death, and provides healthier food. This work provides new ideas for exploring inflammatory regulation mechanisms and improving disease resistance. Ethics approval and consent to participate The Animal Ethics Committee of the Institute of Animal Sciences, Chinese Academy of Agricultural Sciences (IAS-CAAS, Beijing, China) approved the animal experimentation (IASCAAS2021-31). Consent for publication Not applicable. This work was supported by the Central Public-Interest Scientific Institution Basal Research Fund (No.
2023-YWF-ZYSQ-07), the Biological Breeding-National Science and Technology Major Project (2023ZD0405302), and the Innovation Program of the Chinese Academy of Agricultural Sciences (CAAS-CSAB-202401). All data supporting our findings are included in the manuscript and the supplementary files. The raw scRNA-seq data can be downloaded from GEO under accession number GSE201153 (scRNA-seq) and are publicly accessible at http://bigd.big.ac.cn/gsa . Qi Zhang: Conceptualization, Formal analysis, Methodology, Software, Visualization, Writing – original draft. Qiao Wang: Conceptualization, Data curation, Funding acquisition, Writing – original draft. Jumei Zheng: Investigation, Validation. Jin Zhang: Investigation, Validation. Gaomeng Zhang: Investigation, Software. Fan Ying: Investigation, Resources. Dawei Liu: Investigation, Resources. Jie Wen: Resources, Supervision, Writing – review & editing. Qinghe Li: Conceptualization, Funding acquisition, Writing – review & editing. Guiping Zhao: Conceptualization, Funding acquisition, Project administration, Resources, Writing – review & editing. The authors declare no competing interests.
Interaction effects of different chemical fractions of lanthanum, cerium, and fluorine on the taxonomic composition of soil microbial community
Bastnasite (CeFCO 3 ) is one of the most widely distributed rare earth minerals worldwide. Lanthanum (La) and cerium (Ce) together account for 70–90% of its total rare earth elements. With the rapidly growing demand for rare earth elements in various fields, the mining and smelting of these elements have caused a series of severe ecological problems in soil environments. In particular, pollution of the farmland around mining areas by these elements can alter the physical and chemical properties of the soil and decrease soil fertility, ultimately affecting the growth and development of crops and threatening human health via food chains. Studies have shown that the high La and Ce contents in bastnasite mine areas have sustained inhibitory effects on soil microbial abundance and function. Fluorine (F) at low concentrations is an essential trace nutrient for soil microbes, while F concentrations above 1000 mg kg –1 can cause significant changes in microbial community structure. To date, studies have investigated the effects of single elements from ionic rare earth mines (e.g., La and Ce) on the microbial community, whereas the effects of the combined contamination of F with either La or Ce, as occurs in bastnasite, on the soil microbial community have not been explored. It is therefore urgent to study the response of soil bacterial and fungal communities to the interaction of La, Ce, and F in combined pollution soils. This is important for the further development of effective strategies to treat farmland contaminated by these elements. Microorganisms show various responses to the stress of different combinations of pollutant elements. Significant variations in the relative abundance of microbial communities, including the complete disappearance of some microbes and the appearance of new ones, are considered appropriate biological indicators for soil pollution evaluation. Regarding the response of the microbiome to pollution caused by La, Ce, or F individually, only a few studies have reported microbial communities that are sensitive or tolerant to the total contents of these three elements. For example, the relative abundance of Cyanobacteria was reduced under the pollution of either La or Ce, while Actinobacteria proved resistant under long-term exposure to excessive La or Ce. Excess F can cause a significant reduction of bacteria and fungi in microbial communities. Furthermore, studies have explored the effect of co-contamination of La, Ce, or F, individually, with another pollutant, especially a heavy metal, on the structure and diversity of soil microorganisms. For example, the synergistic toxicity of high concentrations of La and lead (Pb) significantly increased the relative abundance of Proteobacteria from 19.45 to 29.73%, while the relative abundance of Thaumarchaeota decreased from 2.76 to 1.53%. However, only a few studies have assessed the collaborative effects of La and Ce, for example the synergistic toxic effect of total La and Ce contents on plant metabolism. To date, the effects of combined pollution of two or three of these elements on microbial communities remain unclear. Recent studies indicated that the toxicity of La, Ce, or F depends on the chemical fraction or bioavailability of these elements. For example, different forms of La exhibit distinct toxic effects on the growth of broad beans under hydroponic conditions.
Similarly, the toxic effect of ionic La on Daphnia similis was stronger than that of the chelated form of La. Therefore, it is necessary to explore the effects of the various forms of these elements on soil bacterial and fungal communities. Up to now, studies have mainly focused on variations of the relatively dominant microbial taxa in soil communities co-contaminated by La, Ce, or F, individually, with another pollutant. Few studies have focused on the disappearance or new appearance of microbes with low relative abundance in polluted soils. Although the dominant microbes are important in shaping community composition and are involved in soil biogeochemical cycles and nutrient transformation, the disappearance or appearance of low-abundance microbes also plays an important role in soil ecological processes, e.g., the soil biogeochemical cycle. Therefore, the disappearance or appearance of microbes, as well as significant changes in their relative abundances, should also be taken into consideration when exploring the roles of microbiota in shaping microbial community composition. To date, there is still a lack of research on the response of microbes with different relative abundances to the interaction of different chemical fractions of La, Ce, and F. In particular, no study has examined the responses of sensitive or tolerant microbial communities to the interaction of different chemical fractions of these elements, or the ecological processes and functions affected. This is probably due to the technical challenges posed by complex combined treatments of multiple elements, such as the heavy workload, the control of experimental conditions, and the accurate determination of the various chemical fractions of the elements. At present, high-throughput sequencing is considered one of the most efficient methods to study the different responses of microorganisms to contaminated soils, enabling the detection of disappearance, appearance, sensitivity, or tolerance of microorganisms under the interaction of various chemical fractions of the rare earth elements. In this study, we hypothesized that the interactions of different chemical fractions of La, Ce, and F in combined pollution soils have significant effects on the sensitive and tolerant members of the bacterial and fungal communities. Based on pot experiments with four different combined pollution treatments of two or three elements, i.e., La + Ce (LC), Ce + F (CF), La + F (LF), and La + Ce + F (LCF), high-throughput sequencing was performed on the soil samples with the goals to: (1) investigate the interaction effects of the various chemical fractions of these three elements on microbial community composition; (2) identify the sensitive and tolerant microbes under these interaction effects; and (3) detect the key correlation factors (i.e., the different chemical fractions of these elements) responsible for the sensitive and tolerant microbes.
Soil sample collection and pot experiment
We investigated the pollution level of farmland soil within 5 km of the mining area in Mianning County (Sichuan, China) through a pre-experiment. A total of 15 sampling sites were set up using a mesh sampling method, with three farmland soil samples collected at each site (0–20 cm below the ground surface) and brought back to the laboratory in aseptic bags to determine the contents of La, Ce, and F.
The concentrations of La, Ce, and F ranged from 100 to 400, 200 to 800, and 600 to 2000 mg kg –1 , respectively. Based on the pollution levels obtained in the pre-experiment, we designed low (samples labelled 1 and 2 in Table ), middle (samples labelled 3 in Table ), and high (samples labelled 4 and 5 in Table ) concentration levels for the soil pot experiments. The farmland soil used as the control (CK) was collected about 15 km away from the bastnaesite mine area in Mianning County (Sichuan, China) and contained total La, Ce, and F contents of 55.89, 177.24, and 540.73 mg kg –1 , respectively; these concentrations were lower than those detected in the areas close to the mine zones. Based on the actual polluted concentrations of the three elements, four different combined pollution treatments were performed, i.e., La + Ce (LC), Ce + F (CF), La + F (LF), and La + Ce + F (LCF), in addition to CK. Each pollution treatment pot was filled with 10 kg of control soil and treated with La, Ce, and F (in the above four combinations) supplied as dissolved LaCl 3 ·7H 2 O, CeCl 3 ·7H 2 O, and NaF, respectively, all of analytical grade (Table ). The soils were mixed and homogenized with the added elements before filling the pots, and all pots of the control and experimental groups were then kept at room temperature with the soil at 70% of field water-holding capacity, with three biological replicates for each treatment. After 6 months of aging, 10 g soil samples were collected and stored at – 80 °C for high-throughput sequencing analysis of soil microorganisms, and 100 g soil samples were collected and air-dried for the determination of soil chemical properties.
High-throughput sequencing analysis of soil microorganisms
High-throughput sequencing, also known as "next-generation" sequencing, is considered one of the most efficient methods to study the different responses of microorganisms to contaminated soils. We applied this technology to analyze the response of soil microbial communities to the different combined contaminants (i.e., La, Ce, and F). Soil microbial DNA was extracted from all experimental and control groups using a soil DNA kit (Omega Bio-tek Inc., USA). The quality of the extracted DNA was examined on 1% agarose gels. The primer pair 338F/806R was used to amplify the V3-V4 hypervariable regions of bacterial 16S rRNA genes in a thermocycler PCR system, and the internal transcribed spacer region primers ITS1F and ITS2R were used to amplify fungal DNA. The PCR reactions were conducted using the following program: 3 min of denaturation at 95 °C; 27 cycles (bacteria) or 37 cycles (fungi) of 30 s at 95 °C, 30 s of annealing at 55 °C, and 45 s of elongation at 72 °C; and a final extension at 72 °C for 10 min. The PCR products were examined and extracted from a 2% agarose gel, purified using the AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, USA), and quantified using QuantiFluor™-ST (Promega, USA) according to the manufacturer's protocols. The purified PCR products were sent to Shanghai Majorbio Bio-Pharm Technology Co., Ltd. (Shanghai, China) for sequencing on the Illumina MiSeq platform. The raw sequences have been deposited in the Sequence Read Archive of the NCBI database ( https://www.ncbi.nlm.nih.gov/bioproject/PRJNA1119661 ). Bioinformatics analysis of the high-throughput sequencing data was performed using QIIME 1.9.1.
All raw sequences were demultiplexed and quality-filtered in QIIME. After removal of low-quality reads, paired-end reads were merged into single sequences based on their overlap, with a minimum overlap length of 10 bp and a maximum mismatch ratio of 0.2 within the overlap region; non-conforming sequences were discarded. Samples were distinguished based on the barcode and primer at both ends of the sequences, and the sequence direction was adjusted; no barcode mismatches were allowed and a maximum of two primer mismatches was permitted. These steps were completed using the software fastp and FLASH. In total, 1,923,174 bacterial and 2,155,119 fungal sequences were obtained and clustered into operational taxonomic units (OTUs) at 97% similarity using the UPARSE algorithm in USEARCH 7. The OTUs were assigned to taxonomic ranks of bacteria and fungi at a 70% confidence threshold using the Ribosomal Database Project (RDP) classifier against the 16S rRNA database (SILVA v.138) and the ITS fungal database (UNITE 8.0). Under the combined pollution of different chemical fractions of La, Ce, and F, microbial taxa at the phylum level whose relative abundance decreased by more than 80% compared with the control samples, including taxa that became undetectable, were defined as sensitive microbes, while taxa whose relative abundance increased by more than 80%, including newly detected taxa, were recognized as tolerant microbes.
Determination of soil chemical properties and contents of pollutant elements
Standard laboratory methods were used to analyze the soil chemical properties, including pH and the contents of organic matter (OM), total nitrogen (TN), total phosphorus (TP), total potassium (TK), hydrolyzable nitrogen (AN), available phosphorus (AP), and available potassium (AK). The total La and Ce contents in the soil were determined using a three-acid digestion method (HNO 3 :HCl:HClO 4 = 1:2:2). The six forms of La and Ce were extracted using an improved Tessier method: (a) water-soluble (WS) form, water extraction at normal temperature; (b) exchangeable (EX) form, 1 mol L –1 MgCl 2 extraction; (c) carbonate-bound (CAR) form, 1 mol L –1 CH 3 COONa extraction; (d) iron-manganese-bound (FeMn) form, 0.25 mol L –1 NH 2 OH·HCl extraction; (e) organic-bound (ORG) form, extraction first with 0.02 mol L –1 HNO 3 and 30% H 2 O 2 , and then with 3.2 mol L –1 NH 4 Ac in 20% HNO 3 ; and (f) residual (RES) form, the three-acid digestion method (HNO 3 :HCl:HClO 4 = 1:2:2). The different forms of La and Ce were measured using inductively coupled plasma optical emission spectrometry (ICP-OES). The total content of F was determined using the NaOH fusion-fluoride ion-selective electrode method (GB/T 22104–2008, China). The five forms of F in the soil were extracted using a continuous grading immersion method.
Specifically, (a) WS form, extraction with hot water at 70 °C; (b) EX form, 1 mol L –1 MgCl 2 extraction, shaken at 25 °C for 1 h; (c) FeMn form, 0.04 mol L –1 NH 2 OH·HCl extraction, shaken at 60 °C for 1 h; (d) ORG form, extraction first with 0.04 mol L –1 HNO 3 and 30% H 2 O 2 at 85 °C for 2 h, then additional H 2 O 2 with continued heating at 85 °C for 3 h; after cooling, 25 ml NH 4 Ac was added and the mixture was shaken at 25 °C for 0.5 h; and (e) RES form, the NaOH fusion-fluoride ion-selective electrode method (GB/T 22104–2008, China).
Statistical analysis
The variance and mean differences of the data were analyzed by non-parametric ordination-based analysis and Fisher's least significant difference test using SPSS 19.0 (SPSS Inc., USA). OriginPro 9.0 (OriginLab, Northampton, MA, USA) was used to plot the contaminant element fractions, microbial community composition, and relative abundance changes. Redundancy analysis (RDA) was performed in R 4.0.1 with the vegan package to identify the main correlation factors (i.e., the different chemical fractions of La, Ce, and F) for tolerant and sensitive microbes at the phylum level in the different combined pollution soils; a stepwise model selection based on the lowest AIC value was used to automatically screen the optimal environmental factors in the RDA and resolve collinearity. An envfit analysis (envfit function with 999 permutations) in the vegan package was used to identify significant factors explaining variation in bacterial composition, and the RDA statistics are provided in the supplementary materials (Table S1). The response of the microbes to the interaction of the various chemical fractions of La, Ce, and F was calculated as: relative abundance change = [(Xp – Xc)/Xc] × 100%, where Xp and Xc represent the relative abundances of the microbes in polluted and unpolluted soils, respectively.
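To make the classification rule concrete, the short Python sketch below applies the relative-abundance-change equation and the 80% thresholds (including the undetected and newly detected cases) to label a taxon as sensitive or tolerant. It is a minimal illustration only; the abundance values used in the example calls are hypothetical and are not taken from this study's data.

```python
# A minimal sketch of the relative-abundance-change rule described above.
# The 80% thresholds follow the text; the example abundances are hypothetical.

def relative_abundance_change(xp: float, xc: float) -> float:
    """[(Xp - Xc) / Xc] * 100, with Xp and Xc the relative abundances
    of a taxon in polluted and unpolluted (CK) soil, respectively."""
    return (xp - xc) / xc * 100.0

def classify(xp: float, xc: float) -> str:
    """Sensitive: >80% decrease (or undetected in polluted soil);
    tolerant: >80% increase (or newly detected in polluted soil)."""
    if xc > 0 and xp == 0:
        return "sensitive (undetected in polluted soil)"
    if xc == 0 and xp > 0:
        return "tolerant (newly detected in polluted soil)"
    change = relative_abundance_change(xp, xc)
    if change <= -80:
        return "sensitive"
    if change >= 80:
        return "tolerant"
    return "neither"

# e.g. a phylum at 2.0% relative abundance in CK that drops to 0.3% under
# pollution changes by (0.3 - 2.0) / 2.0 * 100 = -85%, so it counts as sensitive.
print(classify(0.3, 2.0))    # -> "sensitive"
print(classify(0.05, 0.0))   # -> "tolerant (newly detected in polluted soil)"
```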
Chemical properties and contents of La, Ce, and F
Compared with the CK group, the changes in the soil OM, TN, TP, and TK contents were < 2% in the pot experiments. Among the other soil properties, the most significant variations were detected in soil pH in the treatment groups (Fig. S1a). The pH of the CK group was 5.15, while the LC-polluted soils were acidic, with a pH range of 3.28–3.47. The pH values of the CF-, LF-, and LCF-polluted soils increased as the pollutant concentrations increased, ranging from 3.86 to 5.61. However, the pH levels in the CF- and LF-polluted treatments were lower than those in the CK group, with the highest increase (12.26%) observed in the LC- and LCF-polluted soils. In the four treatment groups, the AN contents increased as the pollutant concentrations increased (Fig. S1b), while the AP contents decreased in all four groups of contaminated soils (Fig. S1c). Compared to the CK group, the AK contents increased in the LC- and LF-contaminated soils but decreased in the CF- and LCF-contaminated soils (Fig. S1d). In the four types of combined contamination soils, the contents of most fractions of La, Ce, and F increased as the pollution concentration increased, except for La_RES in the LC- and LF-polluted soils (Fig. a, c). In the LC-polluted soils, the La_EX and Ce_EX contents showed the highest increases, of about 65 and 98 times, respectively (Fig. a). In the CF-contaminated soils, the highest increases were observed in the contents of Ce_CAR and F_WS, by over 72 and 22 times, respectively (Fig. b). The highest increases were observed in La_CAR and F_WS in the LF-polluted soils (Fig. c). In the LCF-contaminated soils, the highest increases were detected in the contents of La_CAR, Ce_EX, and F_WS, by over 40, 50, and 27 times, respectively (Fig. d). Overall, F_WS showed the highest growth rate with increasing F pollution concentration in all the combined contamination soils containing F (i.e., the CF-, LF-, and LCF-polluted soils), while the unstable exchangeable and moderately stable carbonate-bound fractions of La and Ce showed the highest growth rates in the LC-, CF-, and LF-polluted soils, respectively. Notably, the content of La_RES declined with increasing pollution concentration in both the LC and LF treatments.
Variations in the composition of microbial communities
The different contaminated soils had varied effects on the composition of the microbial communities. A total of 28 bacterial and 15 fungal taxa at the phylum level were detected in the unamended CK soil, while in the four groups of polluted soils a total of 31 bacterial taxa at the phylum level were identified, including three newly detected taxa (MBNT15 in the LF5-polluted soil, Fibrobacterota in the CF5-polluted soil, and Fusobacteriota in the LF2- and LF5-polluted soils) (Fig. a); no new fungal taxa were detected at the phylum level in the four groups of polluted soils compared to the unamended CK soil (Fig. b). Compared with the CK soil, three bacterial taxa at the phylum level, including Abditibacteriota, WS4, and FCPU426, were undetected in the LC-, CF-, and LF-polluted soils (Fig.
a), while three fungal phyla (i.e., Blastocladiomycota, Kickxellomycota, and Calcarisporiellomycota) were undetected in the LC-contaminated soil, and Blastocladiomycota also disappeared in the LF- and LCF-polluted soils (Fig. b).
Correlation factors of sensitive microbial community
The relative abundances of nine bacterial taxa at the phylum level, including Planctomycetota, RCP2-54, Nitrospirota, Elusimicrobiota, Dependentiae, GAL15, Abditibacteriota, WS4, and FCPU426, and five fungal taxa at the phylum level, comprising Blastocladiomycota, Kickxellomycota, Calcarisporiellomycota, Mortierellomycota, and Zoopagomycota, all decreased by more than 80% as the contents of the various combinations of La, Ce, and F increased (Fig. ). Among these taxa, Abditibacteriota was undetected in the LC-polluted soils, and La_RES was negatively correlated with Abditibacteriota (Fig. a). WS4 was undetected in the CF-polluted soils, and Ce_EX, Ce_ORG, F_EX, F_FeMn, F_ORG, and F_RES were negatively correlated with WS4 ( p < 0.05; Fig. c). La_WS was negatively correlated with FCPU426, which was undetected in the LF-polluted soils ( p < 0.05; Fig. e). Three fungal phyla, i.e., Blastocladiomycota, Kickxellomycota, and Calcarisporiellomycota, were all undetected in the LC-polluted soils, and most forms of La and Ce, except for La_RES, were negatively correlated with these three phyla ( p < 0.05; Fig. b). In addition, Blastocladiomycota was also undetected in the LF- and LCF-polluted soils, with La_RES and F_RES identified as the correlation factors in the LF-polluted soils, and La_WS, La_EX, Ce_WS, and F_WS in the LCF-polluted soils ( p < 0.05; Fig. f, h). Compared with the CK soil, the relative abundance of Nitrospirota decreased by 100, 95.39, 89.24, and 96.00% in the four groups of combined pollution soils, respectively (Fig. a, c, e, and g). Nitrospirota was undetected in the LC-polluted soils with high pollutant contents, and most of the chemical fractions of La and Ce, except for La_RES, were negatively correlated with this phylum ( p < 0.05; Fig. a). The relative abundance of Elusimicrobiota decreased by 91.13, 83.26, and 90.59% in the LC-, CF-, and LCF-polluted soils, respectively (Fig. a, c, and g), and it was significantly negatively correlated with La_RES in the LC-polluted soils and with Ce_EX, Ce_ORG, F_EX, F_FeMn, F_ORG, and F_RES in the CF-polluted soils ( p < 0.05; Fig. a, c). Moreover, La_RES and F_FeMn were negatively correlated with this phylum in the LCF-polluted soils (Fig. g). The relative abundance of Planctomycetota decreased by 82.72% (Fig. a) and was correlated with the contents of La_ORG, Ce_WS, and Ce_EX ( p < 0.05; Fig. a), while the relative abundance of Dependentiae decreased by 86.98% (Fig. a) and was negatively correlated with most chemical fractions of La and Ce, except for La_RES, in the LC-polluted soils ( p < 0.05, Fig. a). The relative abundance of RCP2-54 decreased by 81.39% (Fig. e) and was negatively correlated with La_WS in the LF-polluted soils ( p < 0.05; Fig. e). Among the fungal phyla, the relative abundances of Mortierellomycota and Zoopagomycota decreased by more than 80% in the LC- and LCF-polluted soils, while Monoblepharomycota was undetected at the higher concentrations of the four groups of combined pollution soils (Fig. b, d, f, and h).
The relative abundance of Zoopagomycota was negatively correlated with La_RES, while Mortierellomycota was negatively correlated with most forms of La and Ce, except for La_RES, in the LC-polluted soils ( p < 0.05, Fig. b, h). In the LCF-polluted soils, Mortierellomycota was negatively correlated with La_WS, La_EX, Ce_WS, and F_WS, whereas Zoopagomycota was negatively correlated with all chemical fractions of La, Ce, and F except for La_WS, La_EX, Ce_WS, F_WS, and Ce_RES ( p < 0.05, Fig. b, h).
Correlation factors of tolerant microbial community
A total of six bacterial phyla, including Gemmatimonadota, Bacteroidota, Patescibacteria, Myxococcota, Armatimonadota, and Abditibacteriota, and four fungal phyla, i.e., Chytridiomycota, Glomeromycota, Rozellomycota, and Basidiobolomycota, showed increases in relative abundance exceeding 80% as the pollutant concentrations increased in the four groups of polluted soils (Fig. ). Specifically, compared with the CK soil, Fibrobacterota was newly detected in the CF-polluted soils, and all forms of Ce and F were positively correlated with the relative abundance of this phylum ( p < 0.05; Fig. c). Additionally, both MBNT15 and Fusobacteriota were detected in the LF-polluted soils, with the former positively correlated with all chemical fractions of La and F and the latter positively correlated only with La_EX, F_WS, F_EX, F_FeMn, and F_ORG ( p < 0.05; Fig. e). Most chemical fractions of La and Ce, except for La_RES, were positively correlated with Bacteroidota in the LC-polluted soils, while all forms of La, Ce, and F were correlated with this phylum in the other three groups of polluted soils ( p < 0.05; Fig. ). The relative abundance of Chytridiomycota increased by 275–2871% in the four groups of polluted soils (Fig. ). In addition, the relative abundance of Rozellomycota increased by 408% and 829% in the LC- and LF-polluted soils, respectively, and the relative abundances of Glomeromycota and Basidiobolomycota increased by 257% and 414%, respectively, in the LF-polluted soils (Fig. b and f). These results revealed the tolerance of both Glomeromycota and Basidiobolomycota to the interaction effect of most chemical fractions of La, Ce, and F, except for La_RES ( p < 0.05; Fig. ).
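The correlation factors reported in this section were identified by redundancy analysis with stepwise factor selection and envfit (see Statistical analysis). As a rough, self-contained illustration of the ordination step only, the numpy sketch below regresses a centred taxa matrix on a centred element-fraction matrix and ordinates the fitted values; all matrices, dimensions, and fraction names in it are made-up placeholders rather than this study's data, and the published results were obtained with the vegan package in R.

```python
import numpy as np

# Illustration of the basic redundancy-analysis (RDA) idea: regress the centred
# community matrix on the centred explanatory matrix, then do a PCA of the
# fitted values. Inputs here are random placeholders, not the study's data.
rng = np.random.default_rng(0)
Y = rng.random((15, 8))   # 15 samples x 8 phyla (relative abundances)
X = rng.random((15, 4))   # 15 samples x 4 fractions (e.g., La_EX, Ce_EX, F_WS, La_RES)

Yc = Y - Y.mean(axis=0)   # centre the community matrix
Xc = X - X.mean(axis=0)   # centre the explanatory (element-fraction) matrix

# Multivariate least-squares fit Yc ~ Xc, keeping the fitted values
B, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
Y_hat = Xc @ B

# PCA (via SVD) of the fitted values gives the constrained RDA axes
U, s, Vt = np.linalg.svd(Y_hat, full_matrices=False)
site_scores = U * s                 # sample scores on the constrained axes
taxa_scores = Vt.T                  # taxa loadings on the constrained axes
explained = s**2 / np.sum(Yc**2)    # share of total variance per constrained axis

print("variance explained by the first two RDA axes:", explained[:2])
```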
The response of microorganisms to pollutant stress is closely reflected in variations of the taxonomic composition and relative abundance of the microbial communities. Based on the different modes of response to pollutant stress, e.g., becoming undetected or markedly decreasing in relative abundance versus being newly detected or substantially increasing in relative abundance, microbial taxa can be categorized as either sensitive or tolerant groups. In our study, these groups of microbes were identified in the combined pollution soils to evaluate the effects of the interactions of different chemical fractions of La, Ce, and F on the taxonomic composition of the soil microbial communities.
Sensitive microbial community
Changes in the taxonomic composition of the microbial community can be used as one of the biological indicators of soil pollution. In particular, the microbes that become undetected or decrease significantly in relative abundance are often involved in important ecological functions in soils.
Undetected microbes
The highest level of pollutant stress on soil microbial communities is represented by the complete disappearance of certain microbes. Our results were consistent with previous reports showing high levels of bioavailability and toxicity for the active WS and EX forms. It was noteworthy that, in our results, not only the active forms of La, Ce, and F but also the relatively stable FeMn, ORG, and RES fractions exerted strong stress effects leading to the disappearance of microbial taxa. A previous study showed that, in polluted soils, the disappearance of some microbes may affect the ecological functions of the soil environment. For example, Blastocladiomycota influences the degradation of organic matter, such as wood fibers, in soil. In our study, this phylum was undetected in the LC-, LF-, and LCF-polluted soils, which may impair the degradation of organic matter in these polluted soils. Studies have shown that Abditibacterium, the dominant genus of the phylum Abditibacteriota (Table S1), can inhabit extreme environments owing to its antibiotic- and toxin-resistant properties. Our results showed that Abditibacterium was undetected in the LC-polluted soils, indicating that the toxicity of La_RES there was stronger than in the other three groups of contaminated soils.
Microbes with significantly reduced relative abundances
The change in microbial relative abundances in contaminated soils is also an important indicator of the level of soil pollution, because these alterations are observable responses to the persistent inhibition exerted by pollutants. Although the relative abundance of Nitrospirota declined in all four combined pollution soils, it was undetected in the highest-concentration LC treatment. This was probably because the synergistic effects of the combined La and Ce pollution enhanced the toxicity of the different chemical fractions, ultimately inhibiting the growth and metabolism of Nitrospirota.
The toxicity levels of the other three types of polluted soils were lower than that of the LC-polluted soil, possibly because F could combine with La and Ce to form relatively stable precipitates, thus reducing the toxicity of these two rare earth elements. In addition, in the CF-, LF-, and LCF-contaminated soils, the abundance of this bacterial phylum decreased in the order LCF > CF > LF. This was probably because F promoted the formation of the relatively stable carbonate-bound fractions of La and Ce in the LF- and CF-polluted soils (Fig. b, c), whereas in the LCF-polluted soil the highest growth rate was observed for the carbonate-bound fraction of La and the growth of the exchangeable fraction of Ce was the highest (Fig. d). Studies have shown that when La and Ce coexist with F, F is more likely to bind with La, which could explain the higher toxicity of LCF compared with CF and LF. Nitrospira was the dominant genus in Nitrospirota, and the decline in the relative abundance of Nitrospira (Table S1) could weaken the nitrification function of soils, as previously reported; ultimately, the nitrification function of the LC-contaminated soil was the most affected by this combined contamination. Previous studies identified GAL15 as the dominant bacterial taxon at the phylum level in ionic rare earth mining soil. Our study showed that, compared with the CK group, the relative abundance of GAL15 decreased as the pollutant concentrations increased in the four treatment groups, and complete disappearance was observed in the treatments with high pollutant concentrations (Fig. ). This was probably because the experimental soils already contained a high content of F (540.73 mg kg –1 ), which enhanced the stress on GAL15 and caused its relative abundance to decrease or even its complete disappearance, suggesting that GAL15 is sensitive to F. Moreover, studies have shown that GAL15 is involved in the inhibition of glycan biosynthesis and metabolism, indicating that a decrease in the relative abundance of GAL15 would enhance the glycan biosynthesis and metabolism of the microbial community. Our results showed that both the active and the stable forms of La, Ce, and F played important roles in the variations of the taxonomic composition of the microbial communities. These results were consistent with previous reports showing that the active or unstable forms of the elements are the key drivers of variations in the taxonomic composition of the soil microbial community owing to their higher bioavailability, while the stable forms of heavy metal elements, such as the FeMn, ORG, and RES fractions, are harmful to certain groups of microbes. Interestingly, Elusimicrobiota was sensitive to the RES form of both La and F, probably because the metabolites of this microbe can promote dissociation of the stable fractions in soils, thus increasing the availability or toxicity of the RES forms of both La and F. We note that further studies are needed to clarify the effects of these chemical fractions on the relative abundance of Elusimicrobiota. In the present study, the decrease in the relative abundances of Planctomycetota, Dependentiae, and RCP2-54 was significantly correlated with the chemical fractions, especially the available forms of La or Ce, owing to their inhibitory effect on cell growth, as previously reported.
Additionally, Gemmataceae and Babeliaceae were the most dominant families of the phyla Planctomycetota and Dependentiae, respectively (Table S1); these families play key roles in the metabolism of organic compounds, such as the hydrolysis of various carbohydrates in soils, as previously reported. These results indicated that the available forms of La or Ce significantly reduced the relative abundances of Planctomycetota and Dependentiae; because these two phyla contribute to the decomposition of organic compounds in soil, their reduced relative abundances may further affect this ecological function. Studies have shown that fungal communities are sensitive to perturbations or changes in the soil ecological system. In our study, the relative abundances of Mortierellomycota and Zoopagomycota decreased by more than 80% in the LC- and LCF-polluted soils. It was noteworthy that the relative abundance of Zoopagomycota was negatively influenced by La_RES, probably because La_RES binds more easily to the functional groups of the cell membranes of Zoopagomycota and disrupts them, resulting in a decrease in the relative abundance of this phylum. Moreover, Mortierella and Syncephalis were the dominant genera of Mortierellomycota and Zoopagomycota, respectively (Table S1). These results were in accordance with previous reports showing that decreased relative abundances of these two genera could affect the mineralization of soil organic carbon, such as the decomposition of humic acid. In general, various microbes show multiple modes of response to the interaction effects of the different forms of the elements in the combined pollution soils, including inhibition, insusceptibility, and promotion. These interaction effects change the microbial community composition by stressing the sensitive taxa or promoting the resistant taxa. Additionally, our study showed that the RES forms of the three elements may have restrained the growth of certain microbes.
Tolerant microbial community
Although pollutants at high concentrations have a significant inhibitory effect on soil microorganisms, some newly detected or tolerant microbes can adapt to the polluted environment via various regulatory mechanisms, such as adsorption of the metal element by the cell wall, efflux by metal-transporting ATPases, or intracellular bioaccumulation of the metal elements. In our study, the correlation factors responsible for the newly detected microbes or for significant increases in relative abundance were investigated to further explore the interaction effects of the different chemical fractions of La, Ce, and F on microbial community composition in the combined pollution soils.
Newly detected microbes
Some researchers have suggested that microbes newly detected in contaminated soils may be tolerant to the pollutants, which has been attributed to changes in soil properties. In our study, Fibrobacterota was newly detected in the CF-polluted treatments (Fig. a). Previous studies showed that Fibrobacterota produces polysaccharides that adsorb metal ions, suggesting that polysaccharides generated by Fibrobacterota may adsorb all chemical fractions of Ce and F, preventing these elements from entering the cells and allowing this phylum to adapt to the CF-polluted treatments as a tolerant bacterium.
Our study showed that Fusobacteriota was detected in the LF-polluted treatments; this is consistent with previous reports of promotion effects of La and F on organisms within a certain concentration range. It was therefore speculated that the concentrations of the various chemical fractions of La and F were within the tolerance range of the phylum Fusobacteriota, making this phylum a tolerant microbe in the LF-polluted soils. It should be noted that Fusobacteriota, a group of core intestinal bacteria, was newly detected at extremely low relative abundance (ranging from 0.003 to 0.007%) only in the LF-polluted soils. Pan et al. showed that rare microbes play important roles in maintaining community diversity and are correlated with multiple ecological functions. Further investigations are necessary to verify the functions of these newly detected microbes.
Microbes with significantly increased relative abundances
Studies have shown that microbes with significantly increased relative abundances display strong tolerance to pollutant stress. In our study, the relative abundance of Gemmatimonadota increased by 79.24, 451.36, 516.06, and 530.89% in the four combined pollution soils, respectively (Fig. ), suggesting the strong tolerance of this phylum in these pollution soils. Previous studies indicated that Gemmatimonadota are key microbial hosts of heavy metal resistance genes and antibiotic resistance genes, suggesting that this phylum contains bacterial taxa tolerant to La, Ce, and F. Our results showed that Gemmatimonas was the predominant genus of the phylum Gemmatimonadota (Table S1), and studies have shown that this genus is involved in the mineralization of organic matter. Therefore, the significant increase in the relative abundance of Gemmatimonas could promote the mineralization of organic matter in the four groups of contaminated soils and enhance soil carbon emissions. Studies have shown that some microbes can produce extracellular polymeric substances that prevent metal ions in contaminated soils from entering the microbial cells. In our results, Mucilaginibacter was the dominant genus of Bacteroidota, and this genus has been reported to produce and secrete extracellular polymeric substances into the surrounding environment to absorb copper (Cu) and zinc (Zn). Therefore, the tolerance of Bacteroidota was probably due to Mucilaginibacter producing extracellular polymeric substances that absorb the various chemical fractions of these three elements. This is consistent with previous reports that F, as an anion, can be absorbed by the extracellular polymeric substances of Mucilaginibacter and that the extracellular polymeric substances produced by this genus can exchange anions through electrostatic interactions, ultimately promoting the resistance of this bacterial taxon in the four groups of polluted soils. Despite the weak mobility of the stable forms of La and Ce in soils, our study revealed different effects of these forms on various microbes. For example, La_RES and Ce_RES were the key drivers of the increase in the relative abundances of Myxococcota and Armatimonadota in the LCF-polluted soil (Fig. g), with Haliangium and Chthonomonas identified as their dominant genera, respectively (Table S1).
Studies have shown that Haliangium can enrich phosphorus and Chthonomonas can produce phospholipids, suggesting that these two genera could absorb or bind La_RES and Ce_RES and ultimately promote bacterial cell proliferation. The taxonomic composition of the fungal community is generally stable or tolerant to pollutants owing to the strong fungal adsorption, filtration, and retention of pollutants. In our study, Chytridiomycota, Glomeromycota, Rozellomycota, and Basidiobolomycota were identified as the tolerant fungal phyla in the four groups of polluted soils (Fig. ). Specifically, the relative abundance of Chytridiomycota increased by 275–2871% in the four groups of polluted soils (Fig. ). Our results revealed the tolerance of this phylum to the interaction effect of most chemical fractions of La, Ce, and F, probably owing to its capability of synthesizing secondary metabolites, its involvement in enzymatic activities, and its regulation of metal-induced protein synthesis, allowing it to form complexes with the different chemical fractions of La, Ce, and F and ultimately promoting fungal cell proliferation. These results were consistent with previous reports showing that these tolerant fungal phyla can adsorb pollutants through their cell walls, protecting the fungi from pollutant stress and allowing them to develop resistance to pollutants. Several limitations of this study should be noted. We only studied the effects of the chemical fractions of the La, Ce, and F pollutant elements on the microbiome and the possible consequences for the corresponding functions of soil ecology. Although we discussed some relatively dominant bacterial and fungal genera, further exploration is necessary to identify the sensitive microbes at the species level. Moreover, it is still necessary to evaluate the effects of the Cl introduced by the analytical-grade LaCl 3 ·7H 2 O and CeCl 3 ·7H 2 O on soil microbial community composition. In summary, in-depth investigations in these areas would strengthen and verify the findings of our study and provide a solid experimental foundation to support the ecological restoration of soils contaminated with La, Ce, and F.
Studies have shown that Abditibacterium, the dominant genus of the phylum Abditibacteriota (Table S1), can inhabit extreme environments owing to its antibiotic- and toxin-resistant properties. Our results showed that Abditibacterium was undetected in the LC-polluted soil, indicating that the La_RES toxicity in this soil was stronger than in the other three groups of contaminated soils.
Microbes with significantly reduced relative abundances
The change in microbial relative abundances in contaminated soils is also an important indicator of the level of soil pollution, because these alterations are observable responses to the persistent inhibitory effects of pollutants. Although the relative abundance of Nitrospirota declined in all four combined-pollution soils, this phylum was undetected at the highest concentration of the LC treatment. This was probably because the synergistic effects of the combined La and Ce pollution enhanced the toxicity of the different chemical fractions, ultimately inhibiting the growth and metabolism of Nitrospirota. The toxicity levels of the other three types of polluted soils were lower than that of the LC-polluted soil, possibly because F can combine with La and Ce to form relatively stable precipitates, thus reducing the toxicity of these two rare earth elements. In addition, in the CF-, LF-, and LCF-contaminated soils, the abundance of this bacterial phylum decreased in the order LCF > CF > LF. This was probably because F promoted the formation of relatively stable carbonate-bound states of La and Ce in the LF- and CF-polluted soils (Fig. b, c), whereas in the LCF-polluted soil the carbonate-bound state of La and the exchangeable state of Ce showed the highest increases (Fig. d). Studies have shown that when La and Ce coexist with F, F is more likely to bind with La, which could explain the higher toxicity of LCF compared with CF and LF. Nitrospira was the dominant genus in Nitrospirota, and the decline in the relative abundance of Nitrospira (Table S1) could weaken the nitrification function of the soils, as previously reported; consequently, the nitrification function of the LC-contaminated soil, among other soil properties, was the most affected by this combined contamination. Previous studies identified GAL15 as the dominant bacterial taxon at the phylum level in ionic rare earth mining soil. Our study showed that, compared with the CK group, the relative abundance of GAL15 decreased as the pollutant concentrations of the four treatment groups increased, and this taxon disappeared completely in the treatments with high pollutant concentrations (Fig. ). This was probably because the experimental soils contained a high content of F (540.73 mg kg−1), which exerted enhanced stress on GAL15 and led to its decreased relative abundance or even complete disappearance, suggesting that GAL15 is sensitive to F. Moreover, studies showed that GAL15 is involved in the inhibition of glycan biosynthesis and metabolism, indicating that the decrease in the relative abundance of GAL15 would enhance the glycan biosynthesis and metabolism of the microbial community. Our study showed that both the active and stable forms of La, Ce, and F played important roles in the variations of the taxonomic composition of the microbial communities.
These results were consistent with those previously reported, showing that the active or unstable forms of the elements were the key drivers of variations in the taxonomic composition of the soil microbial community owing to their higher bioavailability, while the stable forms of heavy metal elements, such as FeMn, ORG, and RES, are harmful to certain groups of microbes. Interestingly, Elusimicrobiota were sensitive to the RES form of both La and F, probably because the metabolites of this taxon can promote the dissociation of the stable fractions from soils, thus increasing the availability or toxicity of the RES form of both La and F. We note that further studies are needed to clarify the effects of these chemical fractions on the relative abundance of Elusimicrobiota. In the present study, the decreases in the relative abundances of Planctomycetota, Dependentiae, and RCP2-54 were significantly correlated with the chemical fractions, especially the available forms of La or Ce, owing to their inhibitory effect on cell growth, as previously reported. Additionally, Gemmataceae and Babeliaceae were the most dominant families of the phyla Planctomycetota and Dependentiae, respectively (Table S1), and play key roles in the metabolism of organic compounds, such as the hydrolysis of various carbohydrates in soils, as previously reported. These results indicated that the available forms of La or Ce significantly reduced the relative abundances of Planctomycetota and Dependentiae, two phyla with the ecological function of decomposing organic compounds in soil. Therefore, the reduced relative abundances of these two phyla may further impair the decomposition of organic compounds in soil. Studies have shown that fungal communities are sensitive to perturbations or changes in the soil ecological system. In our study, the relative abundances of Mortierellomycota and Zoopagomycota decreased by more than 80% in the LC- and LCF-polluted soils. It was noteworthy that the relative abundance of Zoopagomycota was negatively influenced by La_RES, probably because La_RES binds more readily to the functional groups of the cell membranes of Zoopagomycota and causes membrane disruption, resulting in a decrease in the relative abundance of this phylum. Moreover, Mortierella and Syncephalis were the dominant genera of Mortierellomycota and Zoopagomycota, respectively (Table S1). These results were in accordance with those previously reported, showing that the decreased relative abundances of these two genera could affect the mineralization of soil organic carbon, such as the decomposition of humic acid. In general, various microbes show multiple responsive modes to the interaction effects of the different forms of the elements in the combined-pollution soils, including inhibition, insusceptibility, and promotion effects. These interaction effects would change the microbial community composition by stressing the sensitive taxa or promoting the resistant taxa. Additionally, our study showed that the RES forms of the three elements may have restrained the growth of certain microbes.
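The fraction-abundance relationships discussed above are typically assessed with rank correlations. The snippet below is an illustrative sketch, not the authors' pipeline, of how such an association (for example, between La_RES and the relative abundance of a sensitive phylum) could be computed; all values are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' code): rank correlation between a chemical
# fraction and a phylum's relative abundance across samples. Values are hypothetical.
from scipy.stats import spearmanr

la_res = [12.1, 15.4, 18.9, 22.3, 25.0]          # hypothetical La_RES contents (mg kg-1)
zoopagomycota = [0.41, 0.33, 0.21, 0.12, 0.08]   # hypothetical relative abundances (%)

rho, p_value = spearmanr(la_res, zoopagomycota)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")  # a negative rho indicates an inhibitory association
```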
Although pollutants at high concentrations showed significant inhibitory effects on soil microorganisms, some newly detected or tolerant microbes could adapt to the polluted environment via various regulatory mechanisms, such as adsorption of the metal elements by the cell wall, efflux by metal-transporting ATPases, or intracellular bioaccumulation of the metal elements. In our study, the correlation factors responsible for the newly detected microbes or for the significant increases in the relative abundances of microbes were investigated to further explore the interaction effects of the different chemical fractions of La, Ce, and F on the microbial community composition in the combined-pollution soils.
Newly detected microbes
Some researchers have suggested that microbes newly detected in contaminated soils may be tolerant to pollutants, which was attributed to changes in soil properties. In our study, Fibrobacterota were newly detected in the CF-polluted treatments (Fig. a). Previous studies showed that Fibrobacterota produce polysaccharides that adsorb metal ions, suggesting that the polysaccharides generated by Fibrobacterota adsorbed all chemical fractions of Ce and F, ultimately preventing these elements from entering the cells and allowing these microbes to adapt to the CF-polluted treatments and become tolerant bacteria. Our study showed that Fusobacteriota were detected in the LF-polluted treatments; these results were consistent with those previously reported, showing the promotion effects of La and F on organisms within a certain concentration range. Therefore, it was speculated that the concentrations of the various chemical fractions of La and F were within the tolerance range of the phylum Fusobacteriota, making this phylum a tolerant taxon in LF-polluted soils. It was noted that Fusobacteriota, a group of core intestinal bacteria, were newly detected at extremely low relative abundance (ranging from 0.003 to 0.007%) only in LF-polluted soils. Pan et al. showed that rare microbes play important roles in maintaining community diversity and are correlated with multiple ecological functions. Further investigations are necessary to verify the functions of these newly detected microbes.
Microbes with significantly increased relative abundances
Studies have shown that microbes with significantly increased relative abundances exhibit strong tolerance to pollutant stress. In our study, the relative abundance of Gemmatimonadota increased by 79.24, 451.36, 516.06, and 530.89% in the four combined-pollution soils, respectively (Fig. ), suggesting the strong tolerance of this phylum in these polluted soils. Previous studies indicated that Gemmatimonadota are key microbial hosts of heavy metal resistance genes and antibiotic resistance genes. These studies suggested that the phylum Gemmatimonadota contains La-, Ce-, and F-tolerant bacterial taxa.
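The percent increases reported above follow the usual relative-change calculation against the control (CK) soil; the short sketch below illustrates it with hypothetical abundances, not the study's actual data.

```python
# Illustrative sketch (hypothetical numbers): percent change in a phylum's relative
# abundance between the control (CK) soil and a polluted treatment,
# computed as (treatment - CK) / CK * 100.
ck_abundance = 0.53          # hypothetical relative abundance (%) in the CK soil
treated_abundance = 2.92     # hypothetical relative abundance (%) in a polluted soil

percent_change = (treated_abundance - ck_abundance) / ck_abundance * 100
print(f"Change in relative abundance: {percent_change:+.2f}%")  # e.g. +450.94%
```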
Our results showed that Gemmatimonas was the predominant genus of the phylum Gemmatimonadota (Table S1), and studies have shown that this genus is involved in the mineralization of organic matter. Therefore, the significant increase in the relative abundance of Gemmatimonas could promote the mineralization of organic matter in the four groups of contaminated soils and enhance soil carbon emissions. Studies have shown that some microbes can produce extracellular polymeric substances to prevent metal ions in contaminated soils from entering the microbial cells. Our results showed that Mucilaginibacter was the dominant genus of Bacteroidota, and this genus has previously been reported to produce and secrete extracellular polymeric substances into the surrounding environment to absorb copper (Cu) and zinc (Zn). Therefore, the tolerance of Bacteroidota was probably due to Mucilaginibacter, its dominant genus, producing extracellular polymeric substances that absorb the various chemical fractions of these three elements. These results were consistent with previous reports showing that F is an anion that can be absorbed by the extracellular polymeric substances of Mucilaginibacter and that the extracellular polymeric substances produced by this genus can exchange anions through electrostatic interactions, ultimately promoting the resistance of this bacterial taxon in the four groups of polluted soils. Despite the weak mobility of the stable forms of La and Ce in soils, our study revealed different effects of these forms on various microbes. For example, La_RES and Ce_RES were the key drivers of the increases in the relative abundances of Myxococcota and Armatimonadota in the LCF-polluted soil (Fig. g), with Haliangium and Chthonomonas identified as their dominant genera, respectively (Table S1). Studies have shown that Haliangium can enrich phosphorus and that Chthonomonas can produce phospholipids, suggesting that these two genera could absorb or bind both La_RES and Ce_RES and ultimately promote bacterial cell proliferation. The taxonomic composition of the fungal community is generally stable or tolerant to pollutants owing to the strong fungal adsorption, filtration, and retention of pollutants. In our study, Chytridiomycota, Glomeromycota, Rozellomycota, and Basidiobolomycota were identified as the tolerant fungal phyla in the four groups of polluted soils, respectively (Fig. ). Specifically, the relative abundance of Chytridiomycota increased by 275–2871% in the four groups of polluted soils (Fig. ). Our results revealed the tolerance of this phylum to the interaction effects of most chemical fractions of La, Ce, and F, probably owing to its capability of synthesizing secondary metabolites, its involvement in enzymatic activities, and its regulation of metal-induced protein synthesis, which allow it to form complexes with the different chemical fractions of La, Ce, and F and ultimately promote fungal cell proliferation. These results were consistent with those previously reported, showing that these tolerant fungal phyla can adsorb pollutants through their cell walls to protect the fungi from pollutant stress and develop resistance to pollutants.
Several limitations of this study were noted. We only studied the effects of the chemical fractions of the La, Ce, and F pollution elements on the microbiomes and the possible effects of the microorganisms on the corresponding ecological functions of the soil.
Although we discussed some relatively dominant bacterial and fungal genera, further explorations are necessary to identify the sensitive microbes at the species level. Moreover, it is still necessary to further evaluate the effects of the Cl introduced by the analytical-grade LaCl3·7H2O and CeCl3·7H2O on the soil microbial composition. In summary, in-depth investigations in these areas would strengthen and verify the findings of our study and provide a strong experimental foundation to support the ecological restoration of La-, Ce-, and F-contaminated soil.
Our study revealed the sensitive and tolerant microbes with multiple responsive modes to the various interactions of different chemical fractions of La, Ce, and F in farmland soils.
La_RES was the key correlation factor responsible for the disappearance of microbes or the significant increases in their relative abundances in the LC-polluted soil; Ce_EX and the four chemical fractions of F (i.e., F_EX, F_FeMn, F_ORG, and F_RES) were identified as the stress factors for the disappearance or decreased relative abundances of microbes in the CF-polluted soil. Furthermore, La_WS showed the most toxic effect on bacterial taxa at the phylum level, while all chemical fractions of La and F caused the novel appearance of microbes in the LF-polluted soil. Both La_RES and F_FeMn were detected as the stress factors in the LCF-polluted soil. Our study further indicated that the LC-polluted soil was the most toxic to sensitive microbes among the four soils, because La and Ce acted synergistically to enhance the toxicity of their chemical fractions. The undetected and newly detected microbes, as well as the microbes with significant changes in relative abundance, would affect the mineralization of soil organic matter or nitrogen transformation. Therefore, the interacting groups of the various chemical fractions of La, Ce, and F would inhibit or enhance the growth of different microbial communities. In future studies, it is necessary to explore the interactions of the various chemical fractions of La, Ce, and F on the ecological functions of sensitive and tolerant microbes at the genus level, in order to provide a theoretical basis for the ecological restoration of La-, Ce-, and F-contaminated soil.
Novel Nanozyme-Based Multicomponent in situ Hydrogels with Antibacterial, Hypoxia-Relieving and Proliferative Properties for Promoting Gastrostomy Tube Tract Maturation
Enteral feeding is the preferred method of mid- to long-term nutritional support for patients with a functional gastrointestinal system but difficulty swallowing. Enteral administration has the advantages of maintaining the intestinal microbiota balance and reducing the risk of bacterial translocation and associated bacteremia. Gastrostomy is the most commonly used enteral feeding technology and has been used in patients with neurological diseases, such as cerebrovascular disease, retardation, and dementia, as well as in patients with head and neck cancer. International guidelines recommend gastrostomy for patients who require enteral access for more than 4 to 6 weeks, as gastrostomy is considered a safe technique. However, some complications, including bleeding, tube dislodgement, peristomal site infection, and peristomal leakage, have been reported, which cause considerable pain and risk for patients. Among these complications, gastrostomy tube dislodgement and peristomal site infection deserve particular attention. Peristomal site infection is the most common complication following gastrostomy, with an incidence ranging from 4% to 30%, and even reaching 65% in some studies. Nevertheless, deep tissue infections are difficult to detect in the early stages. Currently, the main approach to managing peristomal site infection is the periodic administration of antibiotics to reduce the incidence of infection. Inadvertent tube dislodgement is also a common complication, with an incidence rate of approximately 4%–13%. It is estimated that gastrostomy tract formation occurs within the first 2 weeks after gastrostomy, but complete tract maturation may be delayed for up to 1 month after the procedure. If tube displacement occurs in an immature tract, the stomach and anterior abdominal wall separate, resulting in clinical risks such as free perforation, misplacement of the blindly reinserted tube into the peritoneal cavity, and leakage into the peritoneal cavity. However, tube replacement can be performed blindly at the bedside once the tract is mature. Thus, accelerating tract maturation is an effective way to reduce the clinical risks caused by tube displacement. Currently, in clinical practice, traditional dressings for post-gastrostomy wounds, such as split gauze, require daily replacement and are ineffective in preventing infection, with postoperative wound infection rates reaching as high as 47.05%. Hydrogel dressings can increase wound moisture, but their mechanical properties and lack of antimicrobial action are limiting factors. Incorporating antimicrobial drugs into wound dressings is a common strategy, but excessive use of antibiotics can lead to antibiotic resistance, reducing their effectiveness in treating infections. Until now, there has been little research on promoting tract maturation or preventing infection with nonantibiotic agents after gastrostomy. Thus, the development of new dressing coatings for tubes that can prevent infection and accelerate tract maturation is of great significance for reducing the complications of gastrostomy. Oxygen is an important factor in wound treatment. During healing, processes such as collagen synthesis and cell proliferation require sufficient oxygen to provide energy. It has been reported that collagen synthesis requires an oxygen tension of approximately 30–40 mmHg, while the requirement for cell proliferation varies with cell type but is generally similar.
However, local wound hypoxia, due to poor oxygen diffusion and permeation caused by damaged blood vessels and tissue tightness, is a major reason for poor wound healing. Additionally, bacterial colonization is inevitable in chronic wounds; it attracts leukocytes and leads to elevated levels of pro-inflammatory cytokines. Consequently, the inflammatory response is directly triggered and sustained, resulting in elevated levels of reactive oxygen species (ROS), such as H2O2 and superoxide (O2−), which are detrimental to wound healing. Excessive ROS can lead to the over-cross-linking of collagen, resulting in a stiff and non-functional extracellular matrix. Moreover, the oxidative stress triggered by excessive ROS not only impedes cell migration but also impairs angiogenesis by damaging endothelial cells and disrupting the vascular network. Some methods to topically apply oxygen to tissue have been proposed, such as using hyperbaric dissolved oxygen or perfluorodecalin. However, these techniques can only improve the oxygen level of the wound and cannot regulate the excessive level of ROS. Nanozymes, such as iron oxide, manganese oxide, and cerium oxide, are nanoscale metal oxides that have received widespread attention in recent years. Nanozymes can catalyze the decomposition of H2O2 to generate oxygen, thereby scavenging ROS and supplying oxygen. Among them, nano-manganese dioxide (n-MO), which possesses good safety and strong catalytic ability in acidic environments, is suitable for application in the gastric acid environment after gastrostomy. Due to their excellent biocompatibility and flexibility, hydrogels are widely used in wound healing. Hyaluronic acid (HA), a glycosaminoglycan composed of repeating disaccharide units of D-glucuronic acid and N-acetylglucosamine, is a natural component of the extracellular matrix (ECM), connective tissue, epithelial tissue, and nerve tissue. As an important component of the ECM, HA is widely used in tissue engineering and wound treatment research and has the potential to promote cell migration. Polylysine (PLL) is a synthetic peptide produced by polymerizing lysine monomers that has good safety and biocompatibility. PLL has good cell membrane affinity owing to its positive charge and has thus been used to improve cell adhesion in damaged blood vessels. Moreover, PLL possesses broad-spectrum antibacterial activity against both gram-negative and gram-positive bacteria. Additionally, PLL induces little drug resistance because it acts by interacting with microbial membranes. Sodium alginate has unique properties, such as being nontoxic, biodegradable, biostable, and viscosifying, which have led to its wide use as a wound dressing material. In the presence of cations such as Ca2+, sodium alginate crosslinks to form a hydrogel with a three-dimensional network. Alginate hydrogels (AL) can provide a moist microenvironment, absorb wound exudate, and promote cell proliferation, all of which facilitate wound healing. Considering that single-component hydrogels are limited by having few functions, composite materials have been developed to fabricate multifunctional hydrogels. However, the problems of insufficient mechanical strength and inconvenient use still exist in some composite hydrogels. In situ hydrogels, which are convenient to apply and suitable for wounds of different sizes and shapes, have been widely studied as wound dressings.
Therefore, we designed a novel ternary in situ hydrogel composed of HA, PLL, and alginate with ideal mechanical properties, adhesion characteristics, and antibacterial activity to promote tract maturation and prevent infection after gastrostomy. Considering the above, n-MO-doped multifunctional in situ hydrogels (MO-HPA hydrogels) were developed with antibacterial, ROS-scavenging, and hypoxia-relieving activities to promote tract maturation and reduce complications after gastrostomy. To the best of our knowledge, this is the first use of multifunctional hydrogels to prevent gastrostomy infection and promote tract maturation. Compared with existing wound dressing solutions, such as silver-impregnated dressings, alginate dressings, and hydrocolloids, which often suffer from limitations such as insufficient antibacterial efficacy, lack of oxygen-generating capacity, and suboptimal mechanical strength, our MO-HPA hydrogels offer a significant advancement. PLL-modified HA (HP), with cell migration-promoting and antibacterial properties, was synthesized. Then, n-MO-doped in situ ternary hydrogels crosslinked by HP and alginate, with good mechanical and adhesion properties, were developed. The physical and chemical properties of the hydrogels, such as morphology, rheology, gel formation, and mechanical properties, were evaluated in vitro. The ROS-scavenging and O2-supplying functions of the MO-HPA hydrogels were confirmed by in vitro and cell assays, and their abilities to resist bacteria and promote cell migration were examined in vitro. Finally, the effects of the in situ hydrogel on accelerating healing and promoting tract maturation were evaluated in a mouse wound model and a rabbit gastrostomy model, respectively.
Materials
Hyaluronic acid (HA, Mw = 1200 kDa) was obtained from Shandong Haiyu Forida Co., Ltd. (Shandong, China). Sodium alginate (SA, Mw = 480 kDa, M/G ≈ 1) was purchased from Shanghai Macklin Biochemical Co., Ltd. (Shanghai, China). Epsilon-polylysine (Mw = 4 kDa) was procured from Zhengzhou Qihuateng Co., Ltd. (Zhengzhou, China). 1-(3-Dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride (EDC), N-hydroxysuccinimide (NHS), and bovine serum albumin (BSA) were obtained from Sigma-Aldrich (St. Louis, Missouri, USA). Potassium permanganate (KMnO4) was received from Sinopharm Co., Ltd. (Shanghai, China). Tris(4,7-biphenyl-1,10-o-phenanthroline)ruthenium dichloride (RDPP) (98%) was procured from Leyuan Biological Co., Ltd. (Hangzhou, China).
Synthesis and Characterization of Nano-MnO2
Nano-MnO2 (n-MO) was prepared by the redox method with some modifications (Wang, Song, Zhu, Zhang, and Liu, 2018). Briefly, KMnO4 solution (7 mg/mL) was added dropwise to BSA solution (15 mg/mL) and stirred at room temperature. Next, the mixed solution was dialyzed (MWCO 8–14 kDa) against deionized water (DI water). Finally, the product (n-MO) was obtained by lyophilization and stored at 4 °C. The ultraviolet absorption spectra of KMnO4, BSA, and n-MO were acquired with an ultraviolet spectrophotometer (UV-1800PC, MAPADA, Shanghai, China). The morphology of n-MO was examined by transmission electron microscopy (TEM, JEOL 2100 PLUS, Tokyo, Japan). The particle size and zeta potential were determined by dynamic light scattering (DLS, 90Plus PALS, Brookhaven, New York, USA). The ability of n-MO to catalyze H2O2 to produce oxygen was determined using the oxygen-sensitive probe tris(4,7-biphenyl-1,10-o-phenanthroline)ruthenium(II) dichloride (RDPP).
Briefly, RDPP ethanol solution (3 mM) was added to PBS (pH 4.0 or pH 7.4) with or without different concentrations of H2O2. After the addition of n-MO, the fluorescence spectra of the samples were acquired with a fluorescence spectrophotometer (F97XP, Lengguang Technology, Shanghai, China) at an excitation wavelength of 455 nm and an emission wavelength of 615 nm.
Synthesis and Characterization of PLL-Modified HA
PLL-modified HA (HP) was synthesized as described in a previous study with some modifications. In brief, 1.2 g of HA was dissolved in 60 mL of PBS (pH 5.5) to obtain a HA solution. Then, 19.1 mg of EDC and 21.7 mg of NHS were dissolved in DI water (40 mL) and added to the HA solution under stirring. The reaction continued for 1 h at 4 °C. Afterward, different amounts of PLL (13.3 mg, 26.6 mg, or 66.7 mg) were added to the reaction solution. After the pH was adjusted to 7, the reaction proceeded for another 4 h at room temperature. Subsequently, the solution was dialyzed against water for 48 h and then lyophilized to obtain HP with different ratios of the starting materials (referred to as HP10, HP20, and HP50). The modified HP materials with different ratios of starting materials were qualitatively confirmed by attenuated total reflection infrared spectroscopy (ATR, Thermo Scientific™ Nicolet™ iS50, Shanghai, China). The chemical composition of the optimized HP was analyzed with a 1H nuclear magnetic resonance (1H NMR, AVANCE II 400 MHz, Switzerland) spectrometer.
Preparation of the Multifunctional MO-HPA Hydrogels
n-MO-doped multicomponent in situ hydrogels composed of HP and alginate (MO-HPA) were prepared using the following procedures. Appropriate amounts of SA and HP were dissolved in water to obtain solution A. n-MO was dispersed in CaCl2 solution (1.5%) to obtain solution B. The in situ hydrogels were prepared by mixing solution A and solution B at a ratio of 5:1. The hydrogels were named MO-HPA0.5, MO-HPA1.0, and MO-HPA1.5 according to the HP20 concentration in solution A (0.5%, 1.0%, and 1.5%, respectively). HPA hydrogels without n-MO were prepared by the same procedure, except that solution B contained only CaCl2.
Characterization of MO-HPA Hydrogels
The internal morphology of the MO-HPA hydrogels was observed by scanning electron microscopy (SEM, JSM 7001F, Japan) after lyophilization. The gelation times of the hydrogels were measured by the vial tilting method. Briefly, solutions A and B were injected into a vial simultaneously at room temperature, and the time at which the hydrogel formed and stopped flowing was recorded. The swelling behaviors of the hydrogels were also evaluated. MO-HPA hydrogels (about 3 mg) were weighed (W0) before immersion in 10 mL of PBS (pH 7.4) at 37 °C. The hydrogels were removed from the PBS solution at specified time intervals and weighed (W1) after removing the water on the surface of the hydrogels. The swelling ratio was calculated using formula (1) (see the calculation sketch following the hemolysis assay below):
(1) Swelling ratio (%) = (W1 − W0) / W0 × 100
Injectability and Self-Healing Properties
Solution A was stained with bromophenol blue and injected into solution B using a 23G needle to investigate the injectability and formability. The letters “DH” and “UJS” were written smoothly by injection, and the letter “J” was lifted to evaluate the self-healing ability. Hydrogel blocks, stained or not with bromophenol blue, were cut into separate pieces.
Subsequently, the cut interfaces were joined without external intervention, and the self-healing behaviors of the hydrogels were observed with a digital camera; images were acquired at predetermined times. These studies were conducted at room temperature.
Cytocompatibility and Cell Migration Capability
The cytocompatibility of the hydrogels was assessed by the MTT method. 3T6-Swiss albino cells were purchased from Procell Life Science & Technology Co., Ltd., Wuhan. Briefly, 3T6-Swiss albino fibroblast cells (mouse fibroblasts) were seeded in 96-well plates (1×10^4 cells/well) and cultured in high-glucose Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal bovine serum for 24 h at 37 °C with 5% CO2 and 95% relative humidity. Afterward, the culture medium was replaced with DMEM containing a series of concentrations of sterilized n-MO, MO-HPA0.5, MO-HPA1.0, or MO-HPA1.5. After culturing for 24, 48, or 72 h, the culture medium was discarded, and the cells were washed with PBS. Cell activity was detected with MTT solution, and the absorbance at 570 nm was recorded by a microplate reader (BioTek 800 TS, Vermont, USA) to calculate cell viability. To investigate the cell migration capability conferred by treatment with the MO-HPA hydrogels, cell scratch tests were performed. Specifically, 3T6 cells were seeded in 6-well plates (1×10^6 cells/well) and cultured for 24 h. Afterward, the cell monolayer was scratched vertically with a 200 μL pipette tip and washed with DMEM. Next, the control group was treated with DMEM, while the experimental groups were treated with DMEM containing different concentrations of the hydrogels (MO-HPA0.5, MO-HPA1.0, or MO-HPA1.5) for 24 h. Then, the scratched area was photographed using an inverted fluorescence microscope (Nikon TI-DH, Tokyo, Japan) to evaluate wound healing efficacy.
In vitro and Intracellular Oxygen Production Efficacy of MO-HPA
The in vitro oxygen production efficacy of the hydrogels with or without n-MO (MO-HPA or HPA) under different pH conditions was examined using the fluorescent probe RDPP. In brief, RDPP ethanol solution (3 mM) was diluted with PBS (pH 7.4, 4.0, or 1.5) with or without H2O2. After the addition of MO-HPA1.0 or HPA1.0 hydrogels, the fluorescence spectra of the samples were acquired at set times using a fluorescence spectrophotometer (F97XP, Lengguang Technology, Shanghai, China) at an excitation wavelength of 455 nm and an emission wavelength of 615 nm. NIH-3T6 cells were seeded in 6-well plates (1×10^5 cells/well) under hypoxic conditions for 24 h. Then, the cells were treated with 5 µM RDPP (diluted in blank DMEM) for 4 h. After washing with PBS, the cells were incubated with HPA1.0 or MO-HPA1.0 for 24 h. Finally, the cells were observed with an inverted fluorescence microscope (ECLIPSE Ti, Nikon, Tokyo, Japan).
Antibacterial Properties
The antibacterial properties of the MO-HPA hydrogels were evaluated using Escherichia coli (E. coli, a gram-negative bacterium) and Staphylococcus aureus (S. aureus, a gram-positive bacterium). The inhibition of E. coli by the hydrogels was determined by the spread plate method. In brief, equal proportions of hydrogels at different concentrations were added to activated E. coli suspension. Subsequently, the E. coli suspension (10^8 CFU/mL) was spread on the medium.
After culturing in a constant-temperature (37 °C) shaker for 24 h, the inhibition performance of the hydrogels was determined by observing and counting the E. coli colonies. The antibacterial properties of MO-HPA against S. aureus were detected following a similar method.
Rheology, Mechanical Properties, and Adhesion Ability
The rheological properties of the hydrogels (MO-HPA0.5, MO-HPA1.0, and MO-HPA1.5), including time sweep, strain sweep, oscillatory frequency sweep, and the coefficient of shear viscosity, were evaluated with a rheometer (DHR-2, TA Instruments, Massachusetts, USA). The mechanical properties of the MO-HPA hydrogels were measured with a universal material testing machine (MTS, CMT2103, Minnesota, USA). The compression capabilities of the hydrogels, cut into cylinders (10 mm × 5 mm), were tested at a speed of 20 mm/min at 100 N. The adhesion properties of the MO-HPA hydrogels were quantitatively evaluated by a lap shear test. Briefly, hydrogel layers (10 mm × 10 mm) were adhered between the surfaces of two pieces of fresh porcine skin (30 mm × 30 mm). Subsequently, the MTS was used to perform the tensile test with a 50 N force-measuring element at a speed of 10 mm/min at room temperature.
Hemolysis Assay
Rabbit blood was collected in a heparinized tube. After centrifugation, the cell precipitates were collected, washed several times with PBS (pH 7.4), and then dispersed in PBS to obtain an erythrocyte dispersion. MO-HPA1.0 at different concentrations (0.5, 1.0, 2.5, 5.0, 10, and 20 mg/mL) was dispersed into 0.5 mL of the erythrocyte dispersion as the experimental groups. DI water and PBS were used as the positive and negative controls, respectively. Each group was incubated at 37 °C for 2 h, and the cell morphology was observed under a microscope.
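As a brief, hedged illustration of the swelling measurement defined in formula (1) above (not the authors' code), the sketch below computes the swelling ratio from hypothetical weight readings taken at several immersion times.

```python
# Minimal sketch of the swelling-ratio calculation in formula (1):
# Swelling ratio (%) = (W1 - W0) / W0 * 100. All weights and times are hypothetical.
def swelling_ratio(w0: float, w1: float) -> float:
    """Return the swelling ratio (%) from the initial weight W0 and swollen weight W1."""
    return (w1 - w0) / w0 * 100

initial_weight = 3.0                                # W0 (same unit as W1)
timepoints_h = [0.5, 1, 2, 4, 8, 24]                # hypothetical sampling times (h)
swollen_weights = [4.2, 5.1, 6.0, 6.8, 7.1, 7.2]    # hypothetical W1 readings

for t, w1 in zip(timepoints_h, swollen_weights):
    print(f"{t:>4} h: swelling ratio = {swelling_ratio(initial_weight, w1):6.1f} %")
```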
aureus were detected following a similar method.

The rheological properties of the hydrogels (MO-HPA 0.5 , MO-HPA 1.0 and MO-HPA 1.5 ), including the time sweep, strain sweep, oscillatory frequency sweep and shear viscosity, were evaluated with a rheometer (DHR-2, TA Instruments, Massachusetts, USA). The mechanical properties of the MO-HPA hydrogels were measured with a universal material testing machine (MTS, CMT2103, Minnesota, USA). The compression capabilities of the hydrogels, cut into cylinders (10 mm × 5 mm), were tested at a speed of 20 mm/min at 100 N. The adhesion properties of the MO-HPA hydrogels were quantitatively evaluated by a lap shear test. Briefly, hydrogel layers (10 mm × 10 mm) were adhered between the surfaces of two pieces of fresh porcine skin (30 mm × 30 mm). Subsequently, the MTS machine was used to perform the tensile test with a 50 N load cell at a speed of 10 mm/min at room temperature.

Rabbit blood was collected in a heparinized tube. After centrifugation, the cell precipitates were collected, washed several times with PBS (pH 7.4) and then dispersed in PBS to obtain an erythrocyte dispersion. Solutions of MO-HPA 1.0 at different concentrations (0.5, 1.0, 2.5, 5.0, 10, 20 mg/mL) were dispersed into 0.5 mL of the erythrocyte dispersion as the experimental groups. DI water and PBS were used as the positive and negative controls, respectively. Each group was incubated at 37 °C for 2 h, and the cell morphology was observed under a microscope.

Animals
Balb/c mice (male, 6–8 weeks, 20–30 g) and rabbits (male, 5–6 months, 1.5–2.5 kg) were provided by Jiangsu University Animal Center (Zhenjiang, China). All animal protocols in this study were approved by the Institutional Animal Care and Use Committee of Jiangsu University (UJS-IACUC-2022091302) and met the guidelines of the National Research Council's Guide for the Care and Use of Laboratory Animals.

Wound Healing of Full-Thickness Skin Defects in Mice
To evaluate the effect of MO-HPA 1.0 on wound healing, a full-thickness skin defect model was established. In brief, after anesthesia, an area of about 0.5 cm × 0.5 cm was marked on the mid-back of Balb/c mice and excised with surgical scissors to create a full-thickness skin defect wound. Then, all the mice were randomly divided into two groups. One group of mice was treated with the hydrogels (MO-HPA 1.0 ) once a day, and the other group was treated with normal saline as the control group. Wound healing was observed at preset time points. Wound closure was evaluated with ImageJ. The wound area measured on day 0 was recorded as S 0 , the wound area measured on days 1, 3, 5, 7, and 14 was recorded as S t , and the wound area ratio was calculated by the following formula (2):

(2) $$\text{Wound area ratio}\ (\%) = \frac{S_t}{S_0} \times 100\%$$

Acceleration of Tract Maturation in a Rabbit Gastrostomy Model
The ability of MO-HPA 1.0 to enhance tract maturation was evaluated in a rabbit gastrostomy model. Because endoscopic equipment is inconvenient to use in animal laboratories, a surgical and suturing method was used to establish the rabbit gastrostomy model. In brief, the rabbits were randomly divided into two groups. A small opening was created in the rabbit abdominal cavity near the upper edge of the stomach, and then a very small hole was cut in the stomach. Afterward, a thin tube with a ball head was carefully inserted into the stomach of the rabbit through the abdominal wall.
In the control group, the tube was left untreated, whereas the tube in the treatment group was coated with MO-HPA 1.0 . The skin around the incision was then carefully sutured. The rabbits were sacrificed on days 0, 7, and 14 after gastrostomy, and the maturity of the tract was determined anatomically.

Histological Examination and Immunohistochemical Analysis
For histological analysis, rabbits were sacrificed on days 0, 5, and 14. The stomach wound samples around the tube were retrieved and fixed in 10% formalin for histological analysis, Masson staining and immunohistochemistry (HIF-1α). The collagen volume fraction and HIF-1α-positive density of the wound tissues were analyzed and calculated using ImageJ software. In addition, the gastric tissue homogenates collected on days 7 and 14 were centrifuged, and the inflammatory factors (TNF-α, IL-1β, and IL-6) were determined with ELISA kits.

Statistics
The experimental data are expressed as the mean ± standard deviation. Statistical analyses were performed by one-way ANOVA followed by Tukey's multiple comparison tests. A value of p less than 0.05 was considered to indicate statistical significance.
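As a sketch of the statistical workflow described above (one-way ANOVA followed by Tukey's multiple comparison test), the Python snippet below runs both steps. The study does not state which software was used, so SciPy and statsmodels here are simply one convenient option, and the three groups and their values are hypothetical placeholders, not data from this work.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-animal readouts for three groups.
control   = np.array([33.1, 35.4, 31.8, 34.0])
mo_hpa_10 = np.array([45.9, 47.2, 46.5, 48.1])
mo_hpa_15 = np.array([44.8, 46.0, 45.1, 47.3])

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(control, mo_hpa_10, mo_hpa_15)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey's HSD post hoc test for all pairwise comparisons (alpha = 0.05).
values = np.concatenate([control, mo_hpa_10, mo_hpa_15])
groups = (["control"] * len(control) + ["MO-HPA 1.0"] * len(mo_hpa_10)
          + ["MO-HPA 1.5"] * len(mo_hpa_15))
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```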
Synthesis and Characterization of Nano-MnO 2
In the modified redox synthetic method, BSA served as both the biological template and the reductant. As shown in , n-MO had an average hydrodynamic diameter of approximately 40 nm (based on intensity), a PDI of 0.239 and a zeta potential of −38.0±0.1 mV. The TEM images show that the nanoparticles were evenly dispersed without agglomeration . The particle size observed in the TEM images was approximately 25 nm, which is similar to the size distributions based on number and volume ( Figure S1 ). Moreover, the characteristic peaks of potassium permanganate (at 315 and 545 nm) disappeared, and a broad UV absorption peak at 300–400 nm was observed for n-MO , which is consistent with literature reports.

n-MO is a nanozyme with ROS-scavenging and oxygen-producing activities. RDPP is a widely used fluorescent probe for oxygen detection and quantification. Owing to dynamic quenching, molecular oxygen causes a significant decrease in the fluorescence of RDPP; therefore, molecular oxygen can be detected by measuring the fluorescence intensity. As shown in Figure S2 , in the absence of n-MO, the fluorescence intensity of the RDPP solution remained nearly constant under both pH 7.4 and pH 4.0 conditions, indicating that no oxygen was generated. As the concentration of H 2 O 2 increased, the fluorescence intensity decreased, indicating the production of more oxygen. Specifically, in the groups containing n-MO, the fluorescence intensity showed a noticeable decrease. In the control group without H 2 O 2 , the fluorescence intensity at pH 4.0 decreased by approximately 21%, indicating that n-MO has some capacity to generate oxygen under acidic conditions. In the groups with added H 2 O 2 , the fluorescence intensity decreased markedly with increasing H 2 O 2 concentration, and the fluorescence intensity in the pH 4.0 group was lower than that in the pH 7.4 group. This suggests that the oxygen-producing catalytic ability of n-MO is positively correlated with both the H + concentration and the H 2 O 2 concentration, which is consistent with literature reports. These results demonstrated that n-MO with nanozyme function had been synthesized successfully.

Synthesis and Characterization of PLL Modified HA
The synthesis route of HA-PLL is shown in . Specifically, HA-PLL (HP) was synthesized by a carbodiimide coupling reaction between the carboxyl group of HA and the amino group of PLL. The molecular structures of HA, PLL and HP prepared with different ratios of reactants were confirmed by ATR, as shown in . The peaks at 3321 cm −1 , 1606 cm −1 and 1407 cm −1 belonged to the stretching vibrations of the carboxyl group of HA. In addition, the peaks at 1158 cm −1 , 1077 cm −1 , 1034 cm −1 and 949 cm −1 were characteristic peaks of polysaccharides.
The absorption peaks at 1647 cm −1 and 1566 cm −1 were attributed to vibrations of the peptide group, which were mainly generated by stretching vibrations of the C=O group and in-plane deformation vibrations of the N-H group of the amide bonds of PLL, respectively. In the HP spectrum, the peaks at 1647 cm −1 and 1566 cm −1 were replaced by a peak at 1611 cm −1 . The appearance of a broad band between 3000 and 3400 cm −1 was mainly due to the overlap of the carbonyl stretching vibration peak of PLL, the -NH stretching vibration peak of HA, and the formation of intramolecular or intermolecular hydrogen bonds. The intensity of the -NH absorption peak at 1564 cm −1 in the HP spectrum was significantly lower than that in the PLL spectrum, indicating a decrease in the number of -NH 2 groups due to their reaction with carboxyl groups. Compared to the HA spectrum, the intensity of the amide bond (-CONH) peak at 1320 cm −1 in the HP spectrum increased, indicating the formation of amide bonds during the reaction between PLL and HA. Moreover, the absorption peak of HP 20 was stronger than those of HP 10 and HP 50 , which suggested that HP 20 had the highest degree of crosslinking. Therefore, the optimal ratio of HA to PLL was 1:20, and HP 20 was used for hydrogel preparation.

Furthermore, 1H NMR spectroscopy was performed to verify the structure and grafting rate of HP 20 . The characteristic peaks of PLL appeared at 3.8 ppm (α–CH 2 ), 1.95–1.35 ppm (β, γ-CH 2 ), and 2.9 ppm (NH-CH 2 ). Additionally, the peaks from 3.0–4.0 ppm were assigned to the HA backbone. The grafting rate of HP 20 was quantified by comparing the integration of the PLL NH-CH 2 peak (δ = 2.9) with that of the HA backbone (δ = 3.20–4.20) and was determined to be approximately 25%.

Fabrication, Injectability and Morphology of MO-HPA Hydrogels
The scheme of hydrogel preparation is shown in . In situ hydrogels were formed by homogeneously mixing solution A and solution B, which transformed from a sol state to a gel state . In the presence of Ca 2+ , sodium alginate generated calcium alginate and physically crosslinked with HP 20 to form hydrogels with a three-dimensional network structure. Generally, the structures of the MO-HPA hydrogels were built up by a combination of chemical crosslinking (the carboxyl group of HA with the amino group of PLL), physical crosslinking (HP with alginate), and ionic crosslinking (alginate with Ca 2+ ). Considering the high viscosity of the polymer solution, its injectability was investigated. Solution A (containing HP and SA) was easily injected into solution B through a narrow 23G needle, and the mixture rapidly formed a gel . These results indicated that the hydrogel precursor solutions had satisfactory injectability and formability, which are beneficial for clinical application. The micromorphology of the lyophilized hydrogels (MO-HPA 0.5 , MO-HPA 1.0 , and MO-HPA 1.5 ) was observed by SEM. According to the SEM images, all of the hydrogels exhibited a uniform, dense network with a highly porous structure, with pore diameters of approximately 50 μm . Moreover, the HP 20 concentration played an important role in the microstructure of the gel. With an increasing concentration of HP 20 , the hydrogels became more homogeneous and denser. Specifically, the order of homogeneity and crosslinking density of the hydrogels was MO-HPA 1.5 > MO-HPA 1.0 > MO-HPA 0.5 . Notably, there is a close relationship between the micromorphology and adaptability of the hydrogels.
The homogeneous structure of the gels might contribute to their mechanical properties. Moreover, hydrogels with such microstructures are beneficial for the diffusion and exchange of nutrients, oxygen, and other biomolecules, which implies that they could be further applied for cell proliferation and tissue engineering. Notably, the materials used to synthesize the MO-HPA hydrogels are readily available and relatively cost-effective, which supports the potential for large-scale production. The in situ gelation process is straightforward and can be adapted to a clinical setting, enabling ease of application. Moving forward, our work will focus on optimizing production techniques to ensure consistency in properties such as mechanical strength and gelation time, to meet regulatory standards for medical use.

Gelation Time, Swelling Properties, and Self-Healing Properties
The gelation time was determined by the vial tilting method. As shown in , the sol–gel transformation of MO-HPA with different HP 20 concentrations took place within 65 seconds. Moreover, the gelation time of MO-HPA decreased as the HP 20 concentration increased, and MO-HPA 1.5 showed the shortest gelation time. The accelerated formation of the hydrogels may be caused by increased chain entanglement between HP 20 and alginate. It is worth noting that the gelation time is influenced by temperature; as the temperature decreases, the time required for gel formation increases. Considering that part of the tube is positioned on the surface of the skin, the gelation experiment was conducted at room temperature (around 25 °C).

The equilibrium swelling ratio is an important parameter for evaluating the degree of crosslinking. As shown in , all of the hydrogels with various HP 20 contents reached swelling equilibrium within 300 s. The water uptake of MO-HPA 1.0 was relatively higher, which may be because it is more porous than MO-HPA 0.5 and has a lower crosslinking density than MO-HPA 1.5 , both of which are conducive to the permeation of water molecules. According to the literature, hydrogels with excellent absorbability are beneficial for absorbing excess exudate.

The in situ hydrogel molding characteristics were thus further evaluated. As shown in , the letters "UJS" could be written on the hydrogel without difficulty. Moreover, the letter "J" written with any of the MO-HPA hydrogels could be lifted, indicating the potential mechanical strength of the hydrogels. The macroscopic self-healing ability of the MO-HPA hydrogels was further examined. As shown in , the two semi-blocks (MO-HPA 1.0 ), stained or not with bromophenol blue, started to close within several minutes and completely merged into an intact hydrogel, which was likely due to reversible physical crosslinking and ionic crosslinking. This self-healing property ensures that the hydrogels can reform if they are accidentally damaged by daily tube movement or wound contact after gastrostomy. These results indicated that the multivariate crosslinking system endowed the MO-HPA hydrogels with adaptability, moldability and self-healing characteristics that can match the complex environment of gastrostomy wounds.

Cytotoxicity Test and Fibroblast Migration
Biocompatibility and biosafety are important prerequisites for the clinical application of hydrogels. MTT and hemolysis assays were performed to evaluate cell and blood compatibility. 3T6-Swiss albino cells were used as model cells to assess the cytotoxicity of the n-MO and MO-HPA hydrogels.
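Before the viability results below, it may help to show how MTT absorbance readings are commonly converted into percentages. The blank-corrected calculation sketched here (viability = (A_treated − A_blank)/(A_control − A_blank) × 100) is a widely used convention rather than a formula stated in the study, and the OD570 values are hypothetical.

```python
import numpy as np

def viability_percent(a_treated, a_control, a_blank: float = 0.05):
    """Blank-corrected MTT viability (%) relative to the untreated control."""
    a_treated = np.asarray(a_treated, dtype=float)
    control_mean = float(np.mean(a_control)) - a_blank
    return (a_treated - a_blank) / control_mean * 100.0

control_od570 = [0.82, 0.85, 0.80]   # hypothetical OD570 of untreated wells
treated_od570 = [0.78, 0.81, 0.76]   # hypothetical OD570 of hydrogel-treated wells
print(np.round(viability_percent(treated_od570, control_od570), 1))
```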
As shown in , the viabilities of cells treated with various concentrations of n-MO were above 80% in almost all cases after incubation for 24, 48 or 72 h, which indicated that n-MO has good safety within the concentration range tested. The toxicity of the MO-HPA hydrogels to 3T6-Swiss albino cells was also assessed at 24, 48 and 72 h. As shown in , cell viability remained over 80% during coincubation, demonstrating the good cytocompatibility of the hydrogels. Moreover, the viability values in some groups were even greater than 100%, which may be because HA promotes cell proliferation. It has been reported that hyaluronic acid is a major component of the extracellular matrix, capable of regulating the secretion of growth factors and cytokines and influencing cell adhesion, growth, proliferation, and differentiation. Alginate can serve as a synthetic extracellular matrix material and promotes the proliferation of fibroblasts. This phenomenon was consistently observed in multiple experiments and is also indicative of the biocompatibility of hyaluronic acid and alginate.

Fibroblasts play a crucial role in the replacement of damaged ECM and the promotion of tissue healing. During this process, the migration and proliferation of fibroblasts to the injury site remodel the surrounding microenvironment and promote tissue regeneration. A scratch test was performed with 3T6 cells to determine whether the MO-HPA hydrogels can promote the migration of fibroblasts. As shown in and , compared to the control group, the cell migration rates of the three MO-HPA hydrogel groups (MO-HPA 0.5 , MO-HPA 1.0 , and MO-HPA 1.5 ) were increased by approximately 12.3%, 17.0% and 23.3%, respectively. The MO-HPA 1.5 group displayed the highest cell migration rate (about 41.3%). These results indicated that as the proportion of HP 20 in the hydrogels increased, the gel's ability to promote cell migration was also enhanced.

Cellular Oxygen Production
The principle of n-MO catalysis of H 2 O 2 to generate oxygen is shown in . Oxygen is crucial for collagen synthesis, angiogenesis, and tissue regeneration. Local wound hypoxia due to poor oxygen diffusion and permeation affects the healing process. The n-MO in MO-HPA 1.0 is a nanozyme with oxygen-producing activity. The ability of MO-HPA 1.0 to mimic CAT activity and catalyze the conversion of H 2 O 2 to O 2 was investigated in vitro under pH 1.5, 4.0 and 7.4 conditions, simulating fasting gastric, postprandial gastric and physiological conditions, respectively. RDPP served as a luminescent indicator for oxygen detection, with molecular oxygen reducing the fluorescence intensity of RDPP through a dynamic quenching process. Thus, the fluorescence intensity inversely correlates with the amount of O 2 produced, with larger reductions in fluorescence indicating greater O 2 output. As shown in , the fluorescence intensities of the HPA 1.0 hydrogel groups without n-MO were almost unchanged under all pH conditions. Notably, the MO-HPA 1.0 hydrogel groups with H 2 O 2 showed remarkably reduced fluorescence intensities under acidic conditions (pH 1.5 and 4.0). These results indicated that, in the presence of H 2 O 2 , the MO-HPA 1.0 hydrogel showed a stronger ability to remove H 2 O 2 and promote oxygen production, especially in the presence of H + , which is beneficial for application to gastrostomy wounds under acidic conditions.
In contrast, in the absence of H 2 O 2 , the MO-HPA 1.0 hydrogel produced no oxygen under physiological conditions but showed a small oxygen-producing capacity under acidic conditions, owing to the ability of n-MO to use H + to produce oxygen. The catalytic ability of the MO-HPA 1.0 hydrogels was further evaluated in NIH-3T6 cells under hypoxic conditions. As shown in , the control group and the HPA 1.0 hydrogel group had notable and similar fluorescence intensities, suggesting that the HPA 1.0 hydrogel without n-MO lacked the ability to regulate the intracellular oxygen content. However, the fluorescence intensity in the cells treated with MO-HPA 1.0 was significantly decreased, indicating that the n-MO in the gel elevated the intracellular oxygen content. It is thus suggested that MO-HPA 1.0 was able to regulate the oxygen balance in hypoxic cells via its CAT-mimicking activity.

Antibacterial Ability
The antibacterial effects of the MO-HPA hydrogels against S. aureus and E. coli were evaluated by spread plate assays. As shown in , the numbers of bacterial colonies (both S. aureus and E. coli ) treated with the MO-HPA hydrogels were significantly reduced compared with the control group, which indicated that the MO-HPA hydrogels possessed antibacterial effects against both gram-positive and gram-negative bacteria. Moreover, for E. coli , the antibacterial efficiency of the MO-HPA 1.0 and MO-HPA 1.5 groups increased by 40.1% and 73.4%, respectively, compared to the MO-HPA 0.5 group. Similarly, for S. aureus , the antibacterial efficiency of the MO-HPA 1.0 and MO-HPA 1.5 groups increased by 55.6% and 57.6%, respectively, compared to the MO-HPA 0.5 group. These results indicated that the antibacterial efficiency of the hydrogels was enhanced with increasing HP 20 concentration, and MO-HPA 1.5 exhibited the strongest inhibitory effect . The potential antibacterial mechanism of the MO-HPA hydrogels was considered to be the inhibition and killing effect of PLL, owing to its ability to change bacterial membrane permeability and to the electrostatic interactions between −NH 3 + groups and the functional groups on the bacterial membrane surface.

Rheological and Mechanical Properties
The compressive properties of the MO-HPA hydrogels were further evaluated. As shown in , the strength of the hydrogels increased with increasing compressive force in the initial stage, indicating that all the MO-HPA hydrogels possessed appropriate compressive properties. Specifically, the compressive stress of the MO-HPA 0.5 hydrogel was approximately 0.011 MPa. With increasing HP 20 concentration, the compressive stresses of the MO-HPA 1.0 and MO-HPA 1.5 hydrogels improved to around 0.016 MPa and 0.015 MPa, respectively, indicating an enhancement of the mechanical properties relevant to wound healing applications. Porcine skin tensile experiments were performed to evaluate the adhesion properties of the MO-HPA hydrogels. As shown in , the adhesion of the MO-HPA 0.5 , MO-HPA 1.0 and MO-HPA 1.5 hydrogels to fresh porcine skin increased with increasing HP 20 concentration, to approximately 0.13 MPa, 0.35 MPa and 0.48 MPa, respectively. The potential mechanism was considered to be the presence of the -NH 2 groups in PLL, which can form hydrogen bonds with the H 2 O in porcine skin. In general, adhesive hydrogels are favorable for effective binding to tissue, which is beneficial for biomedical usage.
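A quick back-of-the-envelope check of the lap-shear numbers above: adhesion (shear) strength is simply the peak pull-off force divided by the bonded overlap area. The 10 mm × 10 mm overlap comes from the methods; the peak forces in the sketch below are hypothetical values chosen only to reproduce strengths of the reported magnitude.

```python
def adhesion_strength_mpa(peak_force_n: float, width_mm: float = 10.0, length_mm: float = 10.0) -> float:
    """Lap-shear adhesion strength: peak force / bonded area (N/mm^2 == MPa)."""
    return peak_force_n / (width_mm * length_mm)

for force_n in (13.0, 35.0, 48.0):   # hypothetical peak forces in newtons
    print(f"{force_n:5.1f} N over a 10 mm x 10 mm overlap -> {adhesion_strength_mpa(force_n):.2f} MPa")
```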
Dynamic rheological tests, including time sweep, dynamic strain sweep, frequency sweep and stress relaxation tests, were performed on the MO-HPA hydrogels to evaluate their viscoelastic characteristics. As shown in , the time oscillation scanning curves showed that the storage moduli (G’) of MO-HPA 1.0 and MO-HPA 1.5 were higher than their loss moduli (G”). The tan δ (G”/G’) values of MO-HPA 1.0 and MO-HPA 1.5 were less than 1.0, which indicated that they behave as viscoelastic gels. However, for the MO-HPA 0.5 hydrogel, G’ was close to G”, suggesting poorer mechanical properties compared with those of MO-HPA 1.0 and MO-HPA 1.5 . The shear-thinning behaviors of the MO-HPA hydrogels with different HP 20 concentrations were further examined. As shown in , the viscosity of all the MO-HPA hydrogels decreased with increasing shear rate, which demonstrated shear-thinning behavior and good injectability. The frequency sweep curves of all the MO-HPA hydrogels showed that both G’ and G” increased with increasing frequency, and the value of G’ was larger than that of G” over the frequency range of 0.01–10 Hz . These results indicated that the MO-HPA hydrogels have a stable structure and viscoelasticity, presumably due to physical entanglement in the chemically crosslinked network. These rheological characteristics indicated that the tunable viscoelasticity of the MO-HPA hydrogels is beneficial for meeting the demands of clinical applications. The gel–sol transition point and linear viscoelastic region of the MO-HPA hydrogels were determined by strain sweep and frequency sweep tests. As shown in , G’ remained higher than G” up to 70% strain in the sweep curves, indicating the stable structure of the hydrogel. However, as the shear strain increased further, G” gradually exceeded G’, implying a gel-to-sol phase transition. Notably, MO-HPA 1.0 exhibited a delayed transition point in the curve, indicating a more stable network. Considering that it had the best water absorption performance, the most stable internal structure, and appropriate mechanical properties, MO-HPA 1.0 was therefore used for subsequent research.

Blood Compatibility Evaluation
The blood compatibility of the hydrogels was further verified by hemolysis experiments. As shown in , the positive control group was bright red, indicating the rupture of blood cells. However, all the hydrogel groups were almost colorless and transparent, consistent with the negative control group. This conclusion was also confirmed by UV full-wavelength scanning of the supernatants from all groups . Cell morphology is shown in . It was visually evident that almost all of the red blood cells in the positive control group had ruptured, while those in the hydrogel groups maintained an intact morphology. These results indicated that the hydrogels possess good blood compatibility and do not carry a risk of hemolysis.

Healing of Mouse Full-Thickness Skin Defects
A mouse dorsal full-thickness skin defect model was established to preliminarily evaluate the effect of the MO-HPA 1.0 hydrogel on wound healing. The in situ hydrogel formed rapidly on the wounds of the mice after injection. Representative images of the wound areas treated with normal saline (control group) and MO-HPA 1.0 hydrogels on days 0, 1, 3, 5, 7 and 14 are shown in . Both groups showed a continuous reduction in wound area. Notably, the MO-HPA 1.0 hydrogel-treated group showed nearly complete wound healing on the 7th day, while the control group still exhibited obvious blood clot scabs in the epidermis.
Specifically, as shown in , on day 7, the wound area ratios of the control and MO-HPA 1.0 treatment groups were approximately 28.2% and 18.9%, respectively ( p <0.01), indicating that the treatment effect of the MO-HPA 1.0 group was significantly better than that of the control group. These results suggested that the MO-HPA 1.0 hydrogel promoted wound healing.

Acceleration of Tract Maturation in the Rabbit Gastrostomy Model
A rabbit gastrostomy model was established to further investigate the effect of the MO-HPA 1.0 hydrogels on tract maturation after gastrostomy. We successfully constructed a rabbit gastrostomy model by adopting surgical methods, following improvements reported in the literature. This method does not require the use of an endoscope and is convenient to carry out in the laboratory. Additionally, the gastrostomy rabbits survived well during the experimental period. As shown in , a mature tract along the tube between the gastric and abdominal walls formed in the MO-HPA 1.0 hydrogel group by approximately the 7th day after gastrostomy. However, the control group only began to develop a new tract along the tube in proximity to the peristomal site on the 7th day. Tract maturation in the control group was observed 14 days after gastrostomy, whereas at this time a compact connection between the stomach and the abdominal wall had already formed at the peristomal site in the MO-HPA 1.0 hydrogel group.

Blood vessels, collagen, and myogenic fibers play important roles in tissue healing. H&E and Masson staining were performed to evaluate tissue healing of the peristomal site wounds after gastrostomy. As shown in , the MO-HPA 1.0 hydrogel group showed many more glands (yellow arrows) and fibroblasts (black arrows) than the control group on day 7, indicating the angiogenic effect of the hydrogel. Likewise, as shown in and , on day 7, the collagen content in the control group was approximately 33.2%, whereas the hydrogel group exhibited a collagen content of about 46.9%, indicating a significant increase in the accumulation of collagen in the dermis ( P < 0.001). Additionally, the expression level of HIF-1α in the MO-HPA 1.0 hydrogel group significantly decreased ( P < 0.001), reaching only 30.9% of that in the control group on day 7 ( and ). The level of HIF-1α reflects the degree of hypoxia in the wound tissue. These results indicated that the gel downregulated HIF-1α and ameliorated hypoxic conditions. In general, after treatment with the MO-HPA 1.0 hydrogels, micro-vessels and collagen fibers were visible in the regenerated mature tract tissue 7 days post gastrostomy, demonstrating the excellent injury healing and tissue regeneration effects of the hydrogel.

The levels of inflammatory cytokines, including TNF-α, IL-6 and IL-1β, were also evaluated. As shown in Figure S3 , compared to the control group, the levels of TNF-α, IL-6 and IL-1β were significantly decreased on day 7 and day 14 ( p < 0.01). These results suggested that treatment with MO-HPA 1.0 effectively reduced the levels of the inflammatory cytokines TNF-α, IL-1β, and IL-6 in the wound environment. Taken together, these results indicated that the MO-HPA 1.0 hydrogels effectively accelerated tract maturation, which may be conducive to reducing clinical risks such as free perforation, abdominal infection, and difficulty in reinsertion after accidental catheter detachment.
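As a sketch of how the image-based percentages above are typically obtained (formula (2) for wound closure and an area fraction for Masson-stained collagen), the snippet below works from pixel counts of the kind exported from ImageJ-style measurements. The pixel counts are hypothetical and were chosen only to reproduce numbers of the reported magnitude; they are not the study's data.

```python
def wound_area_ratio(area_day_t_px: int, area_day_0_px: int) -> float:
    """Formula (2): remaining wound area on day t as a percentage of day 0."""
    return area_day_t_px / area_day_0_px * 100.0

def collagen_volume_fraction(collagen_px: int, tissue_px: int) -> float:
    """Collagen-positive pixels as a percentage of the total tissue area."""
    return collagen_px / tissue_px * 100.0

print(f"Day-7 wound area ratio: {wound_area_ratio(18_900, 100_000):.1f} %")            # ~18.9 %
print(f"Collagen volume fraction: {collagen_volume_fraction(46_900, 100_000):.1f} %")  # ~46.9 %
```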
Current wound dressings, including conventional hydrogels, face challenges in treating gastrostomy-related wounds because they cannot effectively address hypoxia or control the oxidative environment at the wound site. Moreover, most existing hydrogel treatments do not possess the antimicrobial properties needed to prevent infection in such highly exposed wounds. The MO-HPA hydrogel, with its unique oxygen-generating capability and ROS modulation, offers an innovative solution that addresses both hypoxia and bacterial contamination, representing a novel approach to the management of gastrostomy-related wounds. The multifunctional MO-HPA hydrogels developed in this study exhibit antibacterial, hypoxia-relieving, and proliferation-promoting properties, which make them suitable for clinical situations beyond gastrostomy wound management. Given the increasing prevalence of multidrug-resistant infections, these hydrogels could offer an effective, non-antibiotic approach to managing chronic wounds such as diabetic ulcers, pressure sores, and other chronic non-healing wounds. These wounds often remain in a persistent inflammatory state and suffer from poor oxygenation, similar to gastrostomy wounds. The antimicrobial capabilities of MO-HPA, coupled with its ability to scavenge ROS and relieve hypoxia, indicate its potential use in treating these complex wound environments. Future studies could focus on evaluating the efficacy of MO-HPA hydrogels in such chronic wound models, thereby expanding their clinical relevance and impact. Additionally, the main components of the hydrogel, such as hyaluronic acid and sodium alginate, are well known for their biodegradability, primarily via enzymatic pathways that ensure their breakdown into biocompatible by-products. Although the short-term efficacy of the MO-HPA hydrogels has been demonstrated in our study, an understanding of their long-term behavior is critical for clinical translation. In future research, the long-term tissue compatibility and potential immune responses of the hydrogel will be evaluated to further support its clinical translation and application.
In this study, the injectable, nanozyme-based, multifunctional in situ hydrogels (MO-HPA) with antibacterial properties, ROS scavenging, and oxygen production capabilities were successfully developed. The hydrogel significantly reduced inflammatory factors and promoted collagen synthesis and fibroblast migration, which are beneficial for wound healing and gastrostomy tract maturation.
It is worth noting that research on promoting peristomal wound healing and tract maturation after gastrostomy is still lacking. This study is the first to investigate the use of in situ hydrogels to promote tract maturation after gastrostomy, which is of practical significance for reducing the complications of gastrostomy. Furthermore, the hydrogel’s unique properties suggest that it could be valuable in treating chronic wounds characterized by hypoxia and persistent inflammation, such as diabetic ulcers and pressure sores, highlighting its potential for broader wound care applications. Additionally, the potential long-term effects, biodegradability, and manufacturing scalability of the hydrogel will be explored in future studies to ensure its suitability for prolonged clinical use.
ChatGPT-4 Omni’s superiority in answering multiple-choice oral radiology questions
9386b05b-7725-4a5d-bad4-4e060f010725
11786404
Dentistry[mh]
Natural Language Processing (NLP) is a field dedicated to understanding and enabling computers to interpret and process human language in textual and spoken forms . In 1950, Alan Turing asked whether a computer program could communicate effectively with humans. This inquiry has evolved into the Turing test, a foundation for developing chatbots . Since the launch of ChatGPT by OpenAI in November 2022, the landscape of generative AI chatbots has experienced remarkable advancements. These developments profoundly impact higher education as academics and students increasingly turn to chatbots like Ernie, Bard (now known as Gemini), and Grok . These text-based AI tools have been trained and refined using various datasets, including books, articles, and websites . The efficacy and precision of these AI tools depend on multiple factors, including their expertise, frequency of model updates, and the complexity of the inquiries they address . It is anticipated that these AI technologies, which offer personalized solutions, may soon supplant traditional search engines . Previous studies have investigated the effectiveness of various AI chatbots in specialty medical exams across multiple fields, including family medicine , sleep medicine , internal medicine , dermatology , ophthalmology , and radiology . A study conducted in Taiwan evaluated the performance of ChatGPT-3.5 on the pharmacy licensing examination, revealing a correct answer rate of 54.4%. Furthermore, another study compared the correct answer rates of ChatGPT-4 and Bard on the American College of Radiology’s Diagnostic Radiology In-Training examination, showcasing that ChatGPT-4 outperformed Bard with rates of 87.11% and 70.44%, respectively. While numerous studies have assessed chatbot performance across various disciplines, data regarding their effectiveness in dentistry remains limited. Recent research indicated that ChatGPT-3.5 and ChatGPT-4 achieved success rates of 61.3% and 76.9%, respectively, on the dental board exam evaluating dental knowledge. AI is transforming education by enhancing e-learning platforms and assisting with tasks such as answering multiple-choice questions and completing assignments . Many students prefer digital learning resources over traditional materials like textbooks . Large language models (LLMs) can particularly benefit dental students preparing for undergraduate and graduate multiple-choice exams. These models can facilitate academic development by generating new questions and case scenarios inspired by prior exams. Concerns continue to arise regarding the accuracy and reliability of AI-generated responses, particularly when addressing specific topics or questions. Oral radiology, a specialized field in dentistry with a limited number of practitioners and a shortage of educators, stands to gain significantly from integrating large language models (LLMs). Consequently, examining their accuracy and reliability is crucial for students in this discipline. By examining the latest advancements in AI tools, particularly ChatGPT-4 and ChatGPT-4o, we can provide valuable insights for dental students specializing in oral radiology and their educators. This analysis will support informed decision-making that enhances learning and teaching methodologies within this specialized field. Notably, the effectiveness of these advanced chatbots in addressing questions related to oral radiology remains to be thoroughly investigated. 
ChatGPT-4o excels in delivering accurate and reliable answers, capable of swiftly tackling complex queries and conducting in-depth analyses. Its enhanced language comprehension allows for responses that are increasingly natural and human-like. Microsoft Copilot, developed using GPT-4, exemplifies this technology in practice, as it effectively understands and responds to intricate prompts, creates more innovative and informative text, and assists users across various tasks. Conversely, ChatGPT-3.5 is recognized as the fastest model in the ChatGPT series, while Bard, integrated into the Google ecosystem, offers detailed and comprehensive responses . To specialize in oral radiology in Türkiye, passing the Dental Specialty Admission Exam (DUS) after completing a five-year undergraduate program in dentistry is crucial. This exam is held annually and consists of 120 multiple-choice questions, with 10 dedicated explicitly to oral radiology. This study aims to assess and compare the performance of ChatGPT-4o, ChatGPT-3.5, Google Bard, and Microsoft Copilot in answering text-based multiple-choice questions related to oral radiology as part of the DUS in Türkiye. Study design and data collection This study aimed to evaluate and compare the performance of ChatGPT-3.5, ChatGPT-4 Omni (4o), Google Bard, and Microsoft Copilot in answering multiple-choice questions related to oral radiology. The dataset utilized pertains to oral radiology, derived explicitly from the open-source question bank of the DUS covering the years 2012–2021 ( https://www.osym.gov.tr/TR,15070/dus-cikmis-sorular.html ). It comprises 123 multiple-choice questions, each featuring five answer options with one correct response. These questions primarily assess theoretical knowledge and diagnostic skills in oral radiology. To maintain authenticity and prevent any bias, the original questions, which were written in Turkish, were input into the chatbots without translation. Inclusion Criteria: All multiple-choice questions from the DUS database that exclusively assessed oral radiology were included. Exclusion Criteria: Questions with non-multiple-choice formats (e.g., open-ended or rank-based) or visual elements (e.g., images, radiographs) were excluded to ensure uniformity in the analysis. Querying procedure The study utilized the following AI chatbots: ChatGPT-3.5 (OpenAI, San Francisco, USA, March 2024 version). ChatGPT-4 Omni (OpenAI, San Francisco, USA, June 2024 version). Google Bard (Google, Menlo Park, USA, April 2024 version). Microsoft Copilot (Microsoft, Redmond, USA, May 2024 update). Each question was individually inputted into the chatbots in Turkish to ensure accurate understanding and minimize contextual contamination. Each interaction was treated as a standalone session to prevent memory effects from influencing subsequent prompts. The complete text of the questions, including punctuation and syntax, was preserved during input. No additional prompt optimization or pre-testing was conducted, allowing for conditions that closely reflect real-world usage. Responses from the chatbots were categorized as either “correct” or “incorrect” based on the official answer key provided by the DUS question bank. The accuracy rate for each chatbot was calculated as the percentage of correct answers out of the total number of questions attempted. 
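To make the querying and scoring procedure concrete, the sketch below shows how such a workflow could be organized in Python. It is a hypothetical illustration, not the authors' actual workflow (the original queries were entered manually in Turkish through each chatbot's interface): `query_chatbot` and `extract_choice` are stand-in helpers introduced only for this example.

```python
# Hypothetical sketch of the per-question querying and scoring workflow described above.
# `query_chatbot` is a stand-in for whatever interface (web UI or API) is used; it is
# passed in as a callable and is NOT a real library function.
import time

def extract_choice(reply):
    """Very crude parser: return the first standalone option letter (A-E) in the reply."""
    for token in reply.replace(")", " ").replace(".", " ").split():
        if token.upper() in {"A", "B", "C", "D", "E"}:
            return token.upper()
    return None

def score_chatbot(questions, answer_key, query_chatbot):
    """Pose each question in a fresh, standalone session and score it against the key."""
    correct, times, word_counts = 0, [], []
    for question, key in zip(questions, answer_key):
        start = time.time()
        reply = query_chatbot(question)          # one standalone session per question
        times.append(time.time() - start)        # response time
        word_counts.append(len(reply.split()))   # word count of the reply
        if extract_choice(reply) == key:
            correct += 1
    accuracy = correct / len(questions)          # proportion of correct answers
    return accuracy, sum(times) / len(times), sum(word_counts) / len(word_counts)
```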
Categories and topics The questions were categorized into 17 oral radiology topics, including but not limited to: Tooth caries radiology, Dental anomalies, Extraoral imaging, Advanced imaging techniques, Oral medicine, Jaw pathologies, Radiobiology. To better analyze the chatbots’ performance, these topics were further grouped into three educational content areas: Fundamental Knowledge: Topics such as physics, radiobiology, and projection geometry. Imaging and Equipment: Topics covering panoramic, intraoral, and extraoral imaging techniques. Image Interpretation: Topics requiring diagnostic reasoning, such as jaw pathologies and systemic diseases. Fig. illustrates the English translation of a case question from the DUS and displays the answer screen. Table provides examples of oral radiology questions across various topics Reliability assessment To ensure consistency, a single observer posed each question twice to each chatbot. The initial responses were used for primary analysis, while the second set was employed to evaluate reliability. The agreement between the two query attempts was assessed using Cohen’s Kappa (κ), resulting in the following values: ChatGPT-4 Omni: κ = 0.86, ChatGPT-3.5: κ = 0.78, and Microsoft Copilot: κ = 0.72. Evaluation metrics Three metrics were assessed to evaluate chatbot performance: Accuracy: The proportion of correct answers provided by each chatbot. Mean Word Count: The average word count of the responses. Response Time: The time taken by each chatbot to generate a response. Word Count Calculation: Responses were copied into Microsoft Word, and word count was measured using the built-in feature. Response Time Measurement: An online stopwatch was used to measure the time from input submission to response completion. Statistical analysis Statistical analyses were conducted using IBM SPSS Statistics version 21.0. Descriptive statistics were computed for all metrics. Comparative studies were carried out: the Kruskal-Wallis test was utilized to compare word counts and response times among the four chatbots, followed by Dunn’s post hoc test for pairwise comparisons of the chatbots. Cochran’s Q test was also applied to evaluate differences in accuracy rates for the same questions across the chatbots. 
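The reliability and comparison statistics described above can be reproduced with standard open-source tools. The following is a minimal sketch, not the authors' SPSS workflow; it assumes correctness is coded as 0/1 per question and per chatbot, with illustrative variable names. Dunn's post hoc test would additionally require a dedicated package (e.g., scikit-posthocs) and is omitted here.

```python
# Minimal sketch (assumed data layout, not the authors' SPSS workflow) of the
# reliability and comparison statistics described above.
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

def reliability_kappa(run1_correct, run2_correct):
    """Cohen's kappa between the two query attempts for one chatbot (0/1 arrays)."""
    return cohen_kappa_score(run1_correct, run2_correct)

def compare_distributions(*samples):
    """Kruskal-Wallis test for word counts or response times across the chatbots."""
    return stats.kruskal(*samples)

def cochrans_q(correct_matrix):
    """Cochran's Q test for paired binary outcomes: rows = questions, columns = chatbots."""
    x = np.asarray(correct_matrix, dtype=float)
    n_questions, k = x.shape
    col_totals, row_totals, grand_total = x.sum(axis=0), x.sum(axis=1), x.sum()
    q = (k - 1) * (k * np.sum(col_totals ** 2) - grand_total ** 2) / (
        k * grand_total - np.sum(row_totals ** 2)
    )
    p_value = stats.chi2.sf(q, k - 1)  # chi-square with k-1 degrees of freedom
    return q, p_value
```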
The study revealed statistically significant differences ( p = 0.000) in the accuracy of responses the four evaluated chatbots provided. ChatGPT-4o achieved the highest accuracy rate at 86.1%, followed by Google Bard at 61.8%. In contrast, ChatGPT-3.5 had an accuracy rate of 43.9%, while Microsoft Copilot recorded 41.5%. 
Additionally, significant variations in the word counts of the responses were noted, with Google Bard producing the most verbose replies and ChatGPT-3.5 the most concise ( p = 0.000). Disparities in response times were also statistically significant, as ChatGPT-3.5 delivered the fastest responses, whereas ChatGPT-4o was the slowest ( p = 0.000) (Table ). Table presents the p-values for the pairwise comparisons among the four chatbots. The comparisons between ChatGPT-3.5 and Google Bard, and between Microsoft Copilot and Google Bard, revealed statistically significant differences in word count. Significant differences in mean response time were observed between ChatGPT-3.5, ChatGPT-4o, and Google Bard. Table analyzes the chatbot responses by oral radiology topic. Jaw pathologies and systemic diseases are the topics with the most questions. Table summarizes the oral radiology questions by educational content. Figure presents the chatbots’ performance across the oral radiology educational content areas. ChatGPT-3.5, ChatGPT-4o, Bard, and Copilot are large language models (LLMs) constructed using deep neural networks and trained on extensive text datasets to understand and process human language effectively . The potential applications of LLMs in medicine garner increasing attention; however, it is crucial to understand their strengths and limitations before implementation. LLMs can aid in selecting appropriate radiological imaging modalities and reporting images under human supervision, enhancing efficiency and quality in radiology . In addition, integrating LLMs into medical education can enrich students’ learning experiences by enabling personalized study plans . Nonetheless, there is a notable lack of data in the literature regarding the performance of LLMs, particularly concerning their potential in dental education. Oral radiology is a specialized branch within clinical dentistry that has a limited number of specialists. Education in this field requires a robust theoretical foundation and practical application to actual cases. However, many institutions face challenges in providing adequate access to diverse cases that enhance learning and in maintaining a sufficient number of specialist trainers for each student. This situation prompts the inquiry into how large language models (LLMs) can contribute to education in this domain. A recent study indicated that ChatGPT-4 achieved the highest accuracy among the tested tools, with a correct response rate of 86.1%. In contrast, ChatGPT-3.5 demonstrated lower accuracy, which may be attributed to its less advanced architecture and training, limiting its ability to address complex scenarios effectively . In a separate study evaluating the performance of ChatGPT-3.5 on the Israeli Internal Medicine National Residency Exam, it answered 36.6% of 133 questions correctly . Similarly, ChatGPT-3.5 demonstrated a correctness rate of 43.9% in another related study by Jeong et al. . A study compared LLM-based chatbots with dental students in oral and maxillofacial radiology. Faculty members developed a series of questions and categorized them into various educational content areas, standardizing the inputs in Korean. The dental students achieved an impressive overall accuracy rate of 81.2%. In contrast, the accuracy rates for the chatbots were as follows: 50.0% for ChatGPT, 65.4% for ChatGPT Plus, 50.0% for Bard, and 63.5% for Bing Chat. 
This study examined chatbots’ performance using publicly available DUS questions, encompassing a wide array of dental topics, without juxtaposing their results with student performance. The variations in accuracy rates observed may be attributed to the different algorithms utilized and the question complexity in the two studies. The most recent and successful chatbot highlighted in this research is ChatGPT-4o, launched on May 13, 2024. By comparison, ChatGPT-3.5 was released in March 2023, Google Bard on March 21, 2023, and Microsoft Copilot on November 1, 2023. ChatGPT-4o boasts enhanced natural language processing capabilities, enabling seamless and rapid responses in both English and non-English texts. In addition to its success in the educational domain and improvements over previous iterations, it offers an integrated online consulting experience for addressing complex cases within telemedicine systems . Within this study, ChatGPT-3.5 demonstrated the fastest response time, likely due to its concise answers, which employed the fewest words among the evaluated chatbots. Google Bard ranked second fastest, generating the highest word count in its replies while maintaining relatively quick response times; however, its accuracy rate did not match that of ChatGPT-4o. Its high accuracy distinguishes ChatGPT-4o, though it typically has longer response times, making it particularly well-suited for high-stakes situations that demand precise information. In contrast, ChatGPT-3.5 and Google Bard offer quicker response times, which may be more appropriate for applications that require fast answers, even if this comes at the expense of some accuracy. Google Bard’s tendency toward higher word counts can benefit users seeking extensive explanations, whereas ChatGPT-4o strikes a well-balanced response length that effectively meets various needs. In Table , ChatGPT-4o demonstrated high accuracy, answering all questions correctly in areas such as “Oral Medicine” (16/16), “Physics” (12/12), and “Radiobiology” (4/4). This suggests ChatGPT-4o is reliable for oral radiology topics encompassing complex and detailed information. The performance disparities among the chatbots were especially pronounced in areas with extensive details, such as “Jaw Pathologies” (20 questions) and “Systemic Diseases” (18 questions). ChatGPT-4o scored 17 and 15 correct answers in these categories, respectively, while the other models achieved lower accuracy rates. These findings underscore the significance of large language models (LLMs) and the need for comprehensive data sets to address advanced clinical issues. The performance of the other chatbots indicates the areas that require improvement, as they exhibited lower accuracy rates. Table reveals that the chatbots had lower accuracy rates in oral radiology topics related to imaging and equipment. These results indicate that LLMs’ performance varies across specific educational content areas and that enhancements are needed, particularly for complex tasks like imaging and equipment. In a study assessing multiple-choice questions from dental licensing exams in the US and UK, ChatGPT-3.5 achieved an accuracy rate of 68.3% for the US and 43.3% for UK questions. ChatGPT-4.0 demonstrated improved accuracy, scoring 80.7% for US questions and 62.7% for UK questions. 
When evaluating performance on the pre-course Advanced Life Support (ALS) Multiple Choice Questionnaire from the European Resuscitation Council (ERC), Copilot attained an accuracy of 62.5%, Bard achieved 57.5%, and ChatGPT-3.5 scored 42.25%. ChatGPT-4.0 excelled with the highest accuracy at 87.5%, while ChatGPT-3.5 provided the quickest responses, averaging three seconds per answer. Although studies involving different language models yield varying accuracy results, it is clear that the latest versions show a general improvement in performance. This study reveals substantial differences among various chatbot models, aiding users in selecting the most appropriate option for their needs. Several factors may explain the variations in accuracy observed. Model Training Data and Architecture: ChatGPT-4.0 likely utilizes a more advanced architecture and a wider range of training data than its predecessors, which may enhance its accuracy. Fine-tuning and Updates: Advanced models like ChatGPT-4.0 may have undergone more rigorous fine-tuning and received updates more frequently than earlier, contributing to improved precision. Response Generation Strategy: Some chatbot models may prioritize swift response times. For example, the faster responses of ChatGPT-3.5, along with its lower accuracy, indicate a strategy focused on speed rather than precision. In summary, ChatGPT-4.0 excels in scenarios where high accuracy is imperative, while ChatGPT-3.5 and Google Bard are more fitting for applications that prioritize speed. Additionally, Google Bard is particularly effective in contexts requiring detailed information. The primary limitation of this study is its exclusive focus on Turkish text-based questions. Exploring various question formats, including open-ended and rank-based questions, would be advantageous. Future research could also benefit from increasing the number of questions, uploading radiological images to different chatbots, and comparing their responses with those of human students. Furthermore, generating new questions based on the uploaded ones could provide valuable insights into the potential support available to students as they prepare for exams. ChatGPT-4o stands out as the most accurate among the four available chatbots. With ongoing advancements in educational content and the underlying architectures of these tools, chatbots are becoming increasingly integral to the academic landscape. They present the potential to facilitate the swift resolution of complex dental and medical scenarios, thereby enhancing outcomes in these vital fields. However, it is crucial to recognize that, at this stage, AI tools still fall short of matching the expertise and nuanced understanding of human specialists. This gap underscores the need for continued research and development in AI technology. Looking ahead, there is considerable curiosity and anticipation surrounding how these innovations will evolve and their implications for the future of education and healthcare.
Incidence and predictors of Woven EndoBridge (WEB) shape modification following treatment of intracranial aneurysms in a large multicenter study
99290d5e-5063-486c-8105-3c42c3712c9a
11850463
Surgical Procedures, Operative[mh]
Intrasaccular flow disruption with the Woven EndoBridge (WEB) device (Microvention, Tustin, California, USA) is a promising technique for treating intracranial aneurysms. Although the FDA recently approved the device for bifurcation aneurysms, it has been in use for more than a decade. Previous studies reported a high adequate occlusion rate and a safe clinical profile . In recent years, there has been rising concern about the rate of aneurysm recanalization and retreatment following WEB embolization due to device shape modification . This phenomenon corresponds to a decrease in WEB height, which can sometimes lead to aneurysmal recanalization. Although the exact cause is not well known, it is thought to be related to clot retraction during the healing process, and high blood flow exposure may exacerbate it . A better understanding of WEB shape modification and its predisposing factors can potentially lead to higher aneurysm occlusion rates. However, previous studies included only small numbers of cases, which restricted the generalizability of their results and produced contradictory findings regarding the shape modification rate and its relevance to aneurysm recurrence and retreatment . The WorldWideWEB consortium was established as the most extensive global retrospective multicenter WEB registry. In the present study, we performed a sub-analysis of the consortium that investigates the shape modification rate of implanted WEB devices and the factors associated with this phenomenon. We also aimed to study the correlation between WEB shape modification and aneurysm retreatment. Patient population A retrospective review of the WorldWide WEB Consortium, a synthesis of prospectively maintained databases at academic institutions in North America, South America, Europe, and Australia, was performed to identify patients with intracranial aneurysms treated with the WEB device between 2011 and 2022. Selection of aneurysms for WEB treatment was based on clinical and anatomical criteria, including aneurysm size and wide-neck morphology. Decisions were made at the discretion of the treating physician. The following information was collected: patient demographics, aneurysm characteristics, antiplatelet regimen, procedural details, complications, and angiographic and functional outcomes. Only adult patients (age > 18 years) with available aneurysm measurements, imaging follow-up, and shape modification rate were included in this study. Both ruptured and unruptured aneurysms in all locations were included. Both bifurcation and sidewall aneurysms were included. Institutional Review Board approval was obtained at all centers included in the consortium. Angiographic and functional outcomes The angiographic outcome was assessed using digital subtraction angiography (DSA). Aneurysm occlusion after treatment, both immediately and at last follow-up, was categorized using the Raymond Roy Occlusion Classification (RROC): complete occlusion (class 1), residual neck (class 2), and residual aneurysm (class 3). Adequate occlusion was defined as either complete occlusion or residual neck without a residual aneurysm. Other angiographic outcomes included immediate blood flow stagnation, patency of branches arising from the aneurysm at last follow-up, and aneurysm recurrence. Immediate blood flow stagnation was defined as a significant slowing of blood flow into the aneurysm sac immediately following WEB device deployment. 
This phenomenon indicates effective disruption of intra-aneurysmal flow but does not necessarily correlate with complete aneurysm occlusion at follow-up. WEB device shape modification was defined as the percentage reduction in the distance between the two WEB markers (distal and proximal) between the initial procedure DSA and imaging at last follow-up. It was then classified into no shape modification (0%), minor shape modification (< 50%), and major shape modification (> 50%). A similar classification was also adopted in previous studies . Functional outcome was assessed using the modified Rankin Scale (mRS) at last follow-up. Complications Thromboembolic complications occurring from the date of the procedure up to the last follow-up were recorded. Intra-procedural thromboembolic complications were identified on DSA as either thrombus formation, slow filling of a previously normally filling vessel, or vessel occlusion. Post-procedural thromboembolic complications were identified using a combination of clinical and radiographic findings. Post-procedural imaging was performed at the discretion of the individual institutions. Routine screening for clinically silent infarcts was not consistently performed. Post-procedural imaging obtained to detect a symptomatic ischemic stroke could include any combination of non-contrast computed tomography (CT), CT angiography, or magnetic resonance imaging. Only ischemic strokes in the territory of the treated vessel were included. An ischemic complication was considered symptomatic if there were patient-reported symptoms or clinical signs attributable to thromboembolism; this included transient or resolving signs and symptoms. Complications were considered permanent if still present at 3-month follow-up. Hemorrhagic complications were identified intra-operatively as contrast extravasation on DSA or on post-procedure imaging. Hemorrhagic complications occurring from the time of the procedure up until the last follow-up were included. Hemorrhages were counted as symptomatic if the patient reported symptoms or demonstrated signs attributable to hemorrhage. Statistical analysis Statistical analysis was performed using R software (version 4.3.1, R Foundation for Statistical Computing, Vienna, Austria). Categorical variables were presented as frequencies and percentages and compared using the Chi-square test, while continuous variables were presented as median (IQR) and compared using the Mann–Whitney U test. Kaplan–Meier curves were used to examine the likelihood of no or minor shape modification, and log-rank tests were used to compare the survival curves across the various groups. To determine how baseline predictors affected the rate of device shape modification at last follow-up, univariable Cox proportional hazards regression was used. Multivariable logistic regression was used to determine the relationship between major shape modification and the outcomes of interest. All variables with p < 0.1 were included in the multivariable regression models to determine the relationship of our covariates of interest to the outcomes. Some key variables were additionally forced into the models based on scientific rationale. Results were deemed statistically significant if p < 0.05. Lastly, we built receiver-operating characteristic (ROC) curves and used the Youden index to determine the optimal cutoff points for “WEB width minus aneurysm width”, aspect ratio, and height-to-width ratio for predicting major shape modification. 
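As an illustration of how this time-to-event analysis could be coded, the sketch below uses Python with the pandas and lifelines packages rather than the R workflow actually used in the study. The input file and column names (follow_up_months, major_mod, daughter_sac, and so on) are assumptions introduced only for the example.

```python
# Minimal sketch of the time-to-event analysis described above (illustrative only;
# the study itself used R). Assumed columns: follow_up_months (months of imaging
# follow-up), major_mod (1 = major shape modification observed), plus candidate
# predictors coded as 0/1 or continuous values.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("web_cohort.csv")  # hypothetical file with one row per treated aneurysm

# Kaplan-Meier estimate of remaining free of major shape modification over time
km = KaplanMeierFitter()
km.fit(durations=df["follow_up_months"], event_observed=df["major_mod"])

# Log-rank test comparing groups, e.g. WEB width minus aneurysm width <= 0.5 vs > 0.5
low = df[df["oversize_le_05"] == 1]
high = df[df["oversize_le_05"] == 0]
lr = logrank_test(low["follow_up_months"], high["follow_up_months"],
                  event_observed_A=low["major_mod"], event_observed_B=high["major_mod"])
print("log-rank p =", lr.p_value)

# Multivariable Cox proportional hazards model for predictors of major shape modification
cph = CoxPHFitter()
cph.fit(df[["follow_up_months", "major_mod", "daughter_sac", "bifurcation",
            "flow_stagnation", "oversize_le_05"]],
        duration_col="follow_up_months", event_col="major_mod")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```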
Baseline characteristics In this multicenter study, a total of 405 patients were evaluated for the incidence and predictors of WEB shape modification following treatment of intracranial aneurysms. Minor and major shape modification occurred in 127 (31.4%) and 41 (10.1%) of cases, respectively. Among these, females represented a majority with a total of 298 cases (73.6%). The median age at which patients presented was 61 years (IQR: 53 to 68), with those experiencing major shape modification being slightly younger at a median of 58 years (IQR: 49 to 64), a difference that was statistically significant ( p = 0.017). The presentation of intracranial aneurysms varied, with incidental/asymptomatic cases being the most common (218 patients, 57.4%). The majority of patients presented with unruptured aneurysms (313 patients, 77.3%) (Table ). Most patients had a pre-treatment modified Rankin Scale (mRS) score of 0–2, comprising 328 (94.5%) and 39 (95.1%) in the minor or no shape modification and major shape modification groups, respectively. Most aneurysms were bifurcation aneurysms (84.9%) and were more frequently located in the middle cerebral artery (MCA) (37.5%), anterior cerebral artery (30.6%), and the vertebrobasilar artery (18.5%). The median maximum aneurysm diameter, height, width, and neck size were 7 mm, 6 mm, 5.7 mm, and 4.1 mm, respectively. A daughter sac was present in 29.1% of aneurysms while an incorporated arterial branch was present in 13.2% of aneurysms. A prior treatment was done in 5.5% of aneurysms (Table ). The median height to width ratio was significantly different between groups, with the minor or no shape modification group having a higher ratio (1.1 (IQR: 0.9 to 1.3)) compared to the major shape modification group (1 (IQR: 0.8 to 1.1)) ( p = 0.004). 
The WEB width minus aneurysm width showed a median difference of 0.9 mm (IQR: 0.1 to 1.4), with a less pronounced difference in the major shape modification group ( p = 0.08). In addition, the median aspect ratio showed a significant difference between the two groups, with the minor or no shape modification group having a higher ratio (1.5 (IQR: 1.1 to 1.9)) compared to the major shape modification group (1.3 (IQR: 1.2 to 1.4)) ( p = 0.028). Treatment outcomes Most procedures were performed through femoral access (83.5%). Imaging follow-up was longer in the major shape modification group, with a median of 19.5 months (IQR: 8 to 26.7 months) compared to 10.0 months (IQR: 6.0 to 16.0 months) in the minor or no shape modification group ( p = 0.001). Immediate flow stagnation was more prevalent in the no or minor shape modification group at 90.7% versus 70.7% in the major shape modification group ( p < 0.001) (Table ). There were significant differences in retreatment rates, with 11/40 (26.8%) patients in the major shape modification group undergoing retreatment compared to 29/359 (8.1%) in the minor or no shape modification group ( p < 0.001) (Fig. ). At the final imaging follow-up, adequate occlusion was achieved less frequently in the major shape modification group (70.7%) compared to the minor or no shape modification group (86.6%), with the difference being statistically significant ( p = 0.014) (Fig. ). No significant difference was found between the two groups in terms of hemorrhagic complications ( p = 1) or thromboembolic complications ( p = 0.983). The cut-off points for “WEB width minus aneurysm width”, aspect ratio, and height to width ratio to predict “Major shape modification” were determined using the Youden index, as documented in the ROC curves in Fig. . Kaplan–Meier survival analyses were conducted to compare the probability of no or minor shape modification over time across various conditions. The presence of a daughter sac was found to significantly influence the likelihood of no or minor shape modification ( p = 0.013). Moreover, when considering the relationship between WEB width minus aneurysm width ≤ 0.5 and shape modification, the analyses revealed a highly significant association, with a p-value of 0.00021. However, the attainment of immediate occlusion status post-treatment did not demonstrate a statistically significant correlation with the incidence of no or minor shape modification ( p = 0.087). Similarly, patients exhibiting immediate flow stagnation after treatment showed no significant correlation with the incidence of no or minor shape modification (Fig. ). Lastly, the overall Kaplan–Meier curve for the entire cohort demonstrates the time-dependent probability of no or minor shape modification following aneurysm treatment. Initially, all 405 patients were at risk, with a 100% probability of no or minor shape modification. However, within the first 25 months, a notable decline in this probability indicates that shape modification events were most frequent during this early period. As time progressed beyond 25 months, the decline in the probability of no or minor shape modification tapered off, implying a reduced rate of these events. This trend continued up to 100 months, where the data showed the probability stabilizing as the number of patients at risk diminished, concluding with only one patient at risk by this final time point (Fig. ). 
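The Youden-index cutoff selection reported here can be illustrated with a short scikit-learn sketch; this is not the authors' R code, and the file and column names are assumptions for the example. Because a smaller WEB-minus-aneurysm width difference is expected to predict major shape modification, the predictor is negated so that higher scores correspond to the positive class.

```python
# Minimal sketch of ROC analysis with Youden-index cutoff selection (illustrative only;
# the DataFrame layout and column names are assumed, not taken from the study).
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve, roc_auc_score

df = pd.read_csv("web_cohort.csv")                # hypothetical cohort file
y = df["major_mod"].values                        # 1 = major shape modification
score = -df["web_minus_aneurysm_width"].values    # negate: smaller oversizing -> higher risk

fpr, tpr, thresholds = roc_curve(y, score)
youden_j = tpr - fpr                              # Youden J = sensitivity + specificity - 1
best = int(np.argmax(youden_j))

print("AUC:", roc_auc_score(y, score))
print("Optimal cutoff (on the negated scale):", thresholds[best])
print("Sensitivity:", tpr[best], "Specificity:", 1 - fpr[best])
```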
Multivariable logistic regression After adjusting the model for sex, age, smoking status, pretreatment mRS, location, aneurysm dimensions, immediate inadequate occlusion, and ruptured aneurysm status, major shape modification was found to be a significant predictor of retreatment (OR: 4.93; CI: 1.74 to 13.8, p < 0.001) (Table ). Multivariable Cox proportional hazards regression model In the multivariable Cox regression model, several predictors were found to be significantly associated with the occurrence of major shape modification in patients. These predictors included daughter sac (HR: 2.75; CI 1.20 to 6.29, p = 0.016), bifurcation aneurysms (HR: 0.18; CI: 0.04 to 0.9, p = 0.036), immediate flow stagnation (HR: 0.31; CI: 0.12 to 0.79, p = 0.014), and WEB width minus aneurysm width ratio ≤ 0.5 (HR: 4.57; CI: 1.59 to 13.2, p = 0.005) (Table ). No shape modification and minor shape modification (< 50%) The Cox proportional hazards regression model was used to determine whether the factors associated with major shape modification also applied to minor shape modification (< 50%) when compared to no shape modification. The model showed that most variables significant in the major shape modification group did not maintain their significance in this comparison. For instance, smoking status remained a significant predictor in both univariable (HR, 1.87; 95% CI: 1.26–2.77; p = 0.002) and multivariable (HR, 1.83; 95% CI: 1.17–2.86; p = 0.008) analyses. However, other variables such as WEB width minus aneurysm width ≤ 0.5 (HR, 1.12; 95% CI: 0.72–1.73; p = 0.62), age (HR, 0.99; 95% CI: 0.97–1.00; p = 0.14), secondary aneurysm location (HR, 1.33; 95% CI: 0.83–2.12; p = 0.23), ruptured aneurysm status (HR, 1.18; 95% CI: 0.72–1.94; p = 0.51), and aneurysm neck size (HR, 0.97; 95% CI: 0.86–1.10; p = 0.68) were not significant in the comparison between no and minor shape modifications (Supplementary Table ). The treatment outcomes were analyzed for patients with no shape modification and those with minor shape modification. The analysis indicated no significant differences in thromboembolic (6.3% vs. 6.3%, p > 0.99) and hemorrhagic complications (2.1% vs. 2.4%, p > 0.99) between the groups. Retreatment was required significantly more often in the minor shape modification group (14% vs. 5.1%, p = 0.004) (Supplementary Table ). The adjusted multivariable logistic regression revealed that minor shape modification had a significant association with retreatment (OR, 4.04; 95% CI: 1.29–14.8; p = 0.022) and inadequate occlusion at last follow-up (OR, 3.95; 95% CI: 1.69–9.91; p = 0.002) (Supplementary Table ). 
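A compact way to reproduce the kind of adjusted logistic regression reported above is sketched below; again this is only an illustration with assumed column names, not the study's R code, and the odds ratios are obtained by exponentiating the fitted coefficients.

```python
# Minimal sketch of an adjusted logistic regression for retreatment (illustrative only;
# column names such as retreatment, major_mod, smoker, ruptured, neck_size are assumed).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("web_cohort.csv")  # hypothetical cohort file

model = smf.logit(
    "retreatment ~ major_mod + age + sex + smoker + ruptured + neck_size",
    data=df,
).fit()

odds_ratios = np.exp(model.params)    # e.g., the OR associated with major shape modification
conf_int = np.exp(model.conf_int())   # 95% confidence intervals on the odds-ratio scale
print(odds_ratios)
print(conf_int)
```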
In the present study, we examined the rate of WEB device shape modification in a large international retrospective cohort. Minor and major shape modification occurred in 31.4% and 10.1% of cases, respectively. Major shape modification was associated with a significantly higher rate of incomplete aneurysm occlusion at last follow-up and a higher retreatment rate compared to no or minor shape modification. Cox analysis underscored the importance of WEB width minus aneurysm width, the presence of daughter sacs, bifurcation aneurysms, and immediate flow stagnation in predicting shape modification events. Moreover, multivariable logistic regression identified major shape modification as a significant predictor of retreatment. When comparing patients with no shape modification to those with minor shape modification, the significant predictors of major shape modification largely lost their significance. Variables such as age, secondary aneurysm location, ruptured aneurysm status, and aneurysm neck size, which were significant in the major shape modification group, were not significant in this comparison. Additionally, the WEB width minus aneurysm width ratio did not maintain its significance between minor and no shape modification groups (HR 1.12, 95% CI 0.72–1.73, p = 0.62). Pierot et al. reported the 1-year , 2-year and 3-year follow-up of combined data from the two WEBCAST (WEB Clinical Assessment of Intrasaccular Aneurysm Therapy) and French Observatory trials in what was considered the largest multicenter WEB database. In those studies, the aneurysm retreatment rate increased from 7.2% at 1 year to 9.2% at 2 years and 11.4% at 3 years after device implantation . In the WEB-IT (WEB Intrasaccular Therapy) trial, adequate occlusion was achieved in 85.6% of cases at 1-year follow-up. Between the 6-month and 1-year follow-up, 11.5% of aneurysms showed some degree of recanalization. Retreatment was needed in 9.8% of cases at 1 year . The concern about intra-saccular WEB shape modification and the consequent increased risk of recanalization and need for retreatment was first raised by Cognard and Januel , and it was further evaluated in other small-sized studies . This phenomenon is defined as a decrease in the height of the device owing to the deepening of the proximal and distal concave device recesses during follow-up . Because both the proximal marker (near the aneurysm neck) and the distal marker (near the aneurysm apex) move toward the center of the device with time, one hypothesis is that the mechanism responsible for this phenomenon is likely associated with clot organization and retraction . However, this issue and its precursors were not addressed in the large WEB trials, leading to an absence of generalizable findings . One prospective study of 51 aneurysms treated with the WEB device showed that during a total follow-up period of 5 years, shape modification was observed in 72.9% of cases. However, shape modification did not correlate with adequate occlusion rates in that series . Conversely, a study by Caroff et al. demonstrated that the absence of WEB shape modification was almost a guarantee of adequate occlusion at follow-up in the 12 aneurysms in their cohort with no WEB shape modification . In the present study, aneurysms with WEB shape modification had a significantly lower rate of adequate occlusion, as no or minor shape modification (< 50%) and major shape modification (> 50%) had adequate occlusion rates of 86.6% and 70.7%, respectively. 
Major shape modification also led to a significantly higher rate of aneurysm retreatment (26.8%) compared to no or minor shape modification (8.1%) at last follow-up. A few previous studies have suggested that oversizing the WEB width by 1–2 mm might significantly lower the rate of WEB shape modification, with no significant correlation with device height . However, those studies were limited by a small number of patients. In the present study, we determined that oversizing the WEB width by 0.5 mm or more is a significant predictor of no or minor shape modification. Conversely, choosing a WEB smaller than the recommended size appears to result in ‘compression,’ a phenomenon associated with inadequate occlusion. This specific finding highlights the need for careful device sizing but cannot be generalized to all cases of WEB shape modification, as shape modification may also result from other mechanisms such as clot retraction and high arterial inflow. Contrary to the findings of Caroff et al., who reported that WEB shape modification mostly occurred in the early stages after device implantation and stabilized after 9 months , we found that shape modification may, in fact, continue until 25 months of follow-up, stabilizing thereafter. Our study also found the presence of a daughter sac to affect shape modification. While the exact mechanism remains speculative, the daughter sac’s irregular morphology could result in increased mechanical stress or differential blood flow patterns, which might accelerate or amplify the shape modification process. Limitations The primary limitations of the current study include its retrospective design and variability in the management of patients across centers. Retrospective studies are subject to incomplete datasets, selection bias, and unidentified confounders. The inclusion of multiple institutions improves the generalizability of the findings but introduces variability in aneurysm measurement, patient management, follow-up protocol, and assessment of aneurysm occlusion or shape modification status, among others. Additionally, while this study identified a strong association between undersized WEB devices and ‘compression’ leading to inadequate occlusion, this observation cannot be generalized to all cases of WEB shape modification. Other factors such as clot retraction and high arterial inflow may also play significant roles in shaping outcomes and warrant further investigation. Also, major shape modification is more likely to occur with longer follow-up durations, which could influence the observed differences between groups. Furthermore, we recognize that a stricter definition of minor shape modification (e.g., 10–50%) might offer a clearer distinction, as the current definition inherently includes cases with no shape modification (0%). 
The current study highlights major WEB device shape modification as a significant determinant of aneurysm occlusion efficacy and retreatment necessity, emphasizing the importance of its consideration in post-embolization patient care and follow-up protocols. Below is the link to the electronic supplementary material. ESM 1 (DOCX 13.4 KB) ESM 2 (DOCX 20.6 KB)
Synergistic Epistasis and Systems Biology Approaches to Uncover a Pharmacogenomic Map Linked to Pain, Anti-Inflammatory and Immunomodulating Agents (PAIma) in a Healthy Cohort
ed9a94c8-064b-47c0-983c-b7a852ca4ecf
11541314
Pharmacology[mh]
Opioid misuse and abuse constitute a remarkable crisis in global public health, particularly in the United States (US). Clinicians are increasingly encouraged to focus on the treatment and prevention of opioid use disorders (OUDs). Notably, methadone, naltrexone, and buprenorphine are the three drugs which the Food and Drug Administration (FDA) has approved for the treatment of OUD (Oesterle et al. ). Opioid agonist therapy (OAT) has been documented as the cornerstone of OUD treatment. OAT is an approach that regulates opioid receptors to lessen cravings and substance use. While OAT preserves opioid dependency, it effectively mitigates the negative consequences of substance abuse and overdose (Lee et al. ). Given the severity of the opioid epidemic in the US and other countries, and the rising number of overdose deaths, there is an urgent need for more sustainable, non-addictive, and effective treatments for OUD. It is indeed thoughtful to encourage neuromodulatory interventions that modulate the neural circuitry of addiction, operating in the dorsolateral prefrontal cortex and deeper structures of the mesolimbic system, to restrain craving and decrease usage (Lee et al. ). Other potential therapies for OUD include targeting distinct dopamine-related addiction system components, identifying susceptible genes and altering gene products, and employing immunizations as immunotherapy to lessen the addictive effects of illicit drugs. Additional clinical evidence is required to confirm the safety and efficacy of these medications in OUD; however, these suggested innovative treatments modulate opioid receptors and offer promise for a more long-lasting OUD therapy (Lee et al. ). Detecting polymorphisms, or sequence variations, which increase disease risk is among the most complicated challenges in human genetics. For uncommon Mendelian single-gene diseases, including cystic fibrosis (CF) or sickle-cell anemia, the association between phenotype and genotype is readily visible, since the mutant-carrying genotypes directly cause the disease. Unfortunately, this type of association is exceedingly difficult to quantify in the case of prevalent, complicated diseases like hypertension, diabetes, or multiple sclerosis. This is because the disease seems to be the outcome of several genetic factors together with socio-economic and spiritual environmental variables. Indeed, gene–gene interaction, or epistasis, is increasingly documented to be essential in the genetic framework of prevalent diseases and disorders (Moore ; Sing et al. ; Thornton-Wells et al. ). This challenging issue also exists in pharmacogenomics studies, which could lead to a personalized approach to, for example, pain management (Consortium ; Wilke et al. ). In the framework of human evolution, prescription drug usage represents a relatively modern phenomenon, offering a substantial opportunity for recently identifiable interactions between adverse environmental conditions (novel medications; epigenetics) and extremely polymorphic genotypes (archaic genes). Polymorphisms in drug-metabolizing enzymes (DMEs) may rank amongst the most common inherited risk factors of disease development if adverse drug reactions (ADRs) and treatment failures are recognized as distinct disorders (Wilke et al. ). Genetic variants with functional roles are not limited to catabolism or metabolism. Genetic diversity affects receptor affinity and a number of intricate processes related to drug disposition, including Absorption, Distribution, Metabolism, and Excretion (ADME). 
Moreover, genetic variations in known or even unknown pharmacodynamic pathways (molecular signal transduction) may produce variances in outcomes that are therapeutically discernible. Even in the context of medications having clinically mild ADRs and somewhat broad therapeutic indices, sophisticated computer algorithms may reveal hitherto undetected gene–gene interactions resulting in phenotypic change by considering these complex additional layers (Wilke et al. ). Multifactor Dimensionality Reduction (MDR) was established for detecting gene–gene or gene–environment connections in datasets containing categorical independent variables, such as single nucleotide polymorphisms (SNPs) and other sequence variations (insertions, deletions, and so forth), in addition to environmental data that can be represented as categorical variables. When studying disorders that impact humans, the dependent variable, or endpoint, needs to be binary, meaning that each individual is classified as either a case or a control. When evaluating pharmacogenomics data, MDR can be applied with "response/non-response" or "toxicity/no toxicity" metrics. MDR may also be used for any dataset that serves two distinct therapeutic objectives (Motsinger and Ritchie ). To date, MDR and its multiple modifications have been employed to investigate an extensive spectrum of phenotypes, including pharmacogenetic traits (Wilke et al. ; Dai et al. ). As healthcare professionals and lawmakers work harder to improve drug safety, scientists and researchers are increasingly motivated to enhance the administration and analysis of population-based toxicogenomic datasets (Lord and Papoian ). Furthermore, high-throughput genotyping efficiency and large cohort investigations are improving as science acquires better tools for uncovering gene–gene interactions, supporting deeper in silico analyses (McCarty et al. ). As these factors interact, omics-linked analytical informatics faces a growing demand for high-quality computational tools with the potential to help uncover formerly unknown gene–gene interactions in the framework of drug toxicities (Wilke et al. ). In the future, using MDR and similar methods, it could be possible to create better gene-based dosage models and promote safer medication prescription practices through the employment of individual drug sensitivity profiles (Hoh et al. ; Culverhouse et al. ; Wilke et al. ). Thus, our deep in silico investigation aimed to broaden the scope by utilizing MDR in a pharmacogenomics-based gene–gene interaction analysis. This was accomplished by designing a signaling pathway panel (PAIma) applied to the Whole-Exome Sequencing (WES) results of randomly selected healthy Western Iranians who do not abuse psychoactive substances (nicotine, amphetamines, antidepressants, etc.) and who were specifically identified as never having taken powerful opioid analgesic medications, enabling a novel stratified population. Thus, we embarked on an exploratory PGx map in this cohort to putatively identify genotypic risk and therapeutic targets to attenuate opioid misuse. Sampling and Data Collection The participants of the current study included 100 healthy individuals who provided their DNA samples to the medical genetic laboratory in Kermanshah Province; thus, all of the samples were Kurdish. During a one-year screening, we recruited healthy individuals who were referred to the laboratory for a WES test. Printed informed consent was obtained from every individual. 
This research received the ethical approval of the University of Isfahan Biomedical Research Ethics Committee [IR.UI.REC.1402.092]. The primary pool comprised 150 people without any significant manifestations of a specific disease, whose blood tests were normal for routine biochemical factors. Based on the included variant annotations, and after matching each related drug with its own PGx variant, 47 unique drugs and 2 drug families (antiandrogens and antiepileptics) were obtained. To determine whether a subject had ever used an opioid, a questionnaire was designed to capture the background of each subject. Specifically, we were able to identify an individual’s past history of ever taking any addictive drugs (methadone, morphine, methamphetamine, nicotine, etc.) and/or lifetime substance use disorder. Following the completion of this questionnaire, 50 individuals were excluded from further investigation due to their use of nicotine (cigarette smoking), celecoxib, rofecoxib, clopidogrel, or lorazepam, or a history of addiction. Accordingly, all of the questionnaires were completed by the subjects, checked by the laboratory staff, and further inspected by a staff pharmacist (Table ). Exclusion/Inclusion Criteria Inclusion The inclusion requirements for an individual were as follows: (1) the subject had to be more than 20 years old, (2) have normal blood test results (based on routine biochemical indexes like CBC, WBC, RBC, FBS, HbA1c, CRP, etc.) and lack any disease-related phenotypes (major genetic manifestations diagnosed by a healthcare specialist, such as common syndromes, metabolic disorders, and musculoskeletal disorders), (3) have no consanguinity in the parents’ marriage, and (4) report lifetime non-usage of psychoactive drugs (see Table ). Exclusion Exclusion criteria included: (1) healthy participants with affected children, who might be heterozygous carriers of a pathogenic mutation related to an autosomal recessive disease; (2) subjects with a consanguineous pedigree in terms of the parent–child relationship; and (3) individuals under the age of 20. The fundamental data for this research were obtained from PharmGKB and consisted of curated pathways classified to reflect agents linked to pain, anti-inflammation, and immunomodulation (PAIma). It is noteworthy that, along with the pathways for anticancer, neurological, and cardiovascular drugs as of 2023, this category, which has 37 signaling pathways (21 curated pathways), is one of the most promising evidence-based pathway categories. PharmGKB pathways are evidence-based diagrams that depict a drug’s pharmacokinetics (PK) and/or pharmacodynamics (PD) with respect to significant (or possibly significant) pharmacogenomic (PGx) associations. WES Tests and NGS Analyzing Strategies A WES test was performed on every individual to identify pathogenic variants. A filter-based approach was used to extract and purify genomic deoxyribonucleic acid (gDNA) from the subjects’ blood samples, which was then analyzed. One gram (1.0 g) of gDNA was employed for the preparation of DNA. Moreover, to create the sequencing datasets, the Agilent SureSelect Human All Exon V7 Kit (Agilent Technologies, USA) was employed, and subsequently the sample attribute sequences and the x-index codes were connected. The DNA was fragmented into 180–280 bp segments using a hydrodynamic shearing method (Covaris, USA). Residual overhangs were blunted by exonuclease/polymerase treatment, and the enzymes were then removed. 
Adapter oligonucleotides were ligated following the adenylation of the DNA fragments’ 3ʹ ends. Additionally, DNA fragments with adaptor molecules ligated at both ends were selectively enriched in a PCR procedure. To prepare the collected libraries for hybridization, index tags were added by PCR amplification. The products were purified using the Beckman Coulter AMPure XP system and then quantified using the Agilent 2100 Bioanalyzer and the Agilent High Sensitivity DNA Assay. The verified libraries were loaded onto the Illumina NovaSeq 6000 sequencer. Next, using an HP server (Generation G9 with a Unix-based operating system), data quality control, analysis, and interpretation were executed. On Ubuntu (ver. 22.04.2), NGS analysis was performed on each FASTQ file using filter-based command-line procedures, including the genomic packages FastQC and IlluQC (quality control), Cutadapt (adapter trimming), alignment, post-alignment processing, BQSR, variant calling, VQSR, annotation with ANNOVAR, and filtering. Three levels of annotation were used, including region-level (cytoBand), gene-level (refGene), and frequency-level (cytoBand, exac03, dbnsfp30a, avsnp150, clinvar_20221231, regsnpintron, and icgc28) databases (Wang et al. ). After intersecting the 128-gene PAIma candidate panel with the Variant Call Format (VCF) files, the Reference Sequence (RS) IDs of the variants were used in the extraction commands, and the genotype of each variant was retrieved for every individual. In Silico and Statistical Analyses The in silico investigations were performed on the 128 candidate PAIma genes to uncover novel interactions at different levels, including Gene–Gene Interactions (GGIs) by MDR (version 3.0.2), signaling pathways by Cytoscape (version 3.10.1), Protein–Protein Interactions (PPIs) via STRING-MODEL (version 12), Gene-miRNA Interactions (GMIs) through miRTargetLink2 (version 2.0), and finally, Protein-Drug Interactions (PDIs) and Protein-Chemical Interactions (PCIs), both via NetworkAnalyst (version 3) (Zhou et al. ). Prior to performing the MDR analysis, we employed PS Power and Sample Size (version 3.1.6) (available at: https://biostat.app.vumc.org/wiki/Main/PowerSampleSize#Windows ) to calculate the power of the study across all refined variants, based on the following design: dichotomous, independent, case–control, Fisher’s exact test, with the statistical indicators α (Type I error probability), p 0 (MAF), n (number of cases), m (ratio of controls to cases), and Ψ (OR). Epistasis and synergism were calculated via MDR analysis using the MDR software (version 3.0.2) (Institute for Quantitative Biomedical Sciences, USA) (Ritchie et al. ). To prepare the appropriate input file for MDR, the genotype frequencies obtained from our case group were used, and all Phase III individuals of the 1000 Genomes Project were considered the control group. To achieve a 1:1 ratio for this case–control analysis, the genotype frequencies of the control group were adjusted to 100 individuals. While the case group did not have any phenotypic disease, we were cognizant of the pharmacogenetic diversity of this specific Iranian population compared with known population pharmacogenomic profiles. The optimal model for predicting susceptibility within the PAIma gene list was selected according to the lowest classification error in a training set (known as vector R, the elements of which index the samples in the data sets), and 10-fold cross-validation was used to assess prediction accuracy. 
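To make the power-analysis design described above more concrete, the following is a minimal, illustrative Python sketch (our own simulation, not the PS Power and Sample Size software used in the study) that approximates the power of a two-sided Fisher's exact test for a binary exposure in a 1:1 case–control design; the inputs (p0 = 0.22, OR = 2.5, 100 cases, 100 controls, α = 0.05) mirror the values reported in the Results, and the function and variable names are assumptions for demonstration only.

```python
import numpy as np
from scipy.stats import fisher_exact

def simulated_power(p0=0.22, odds_ratio=2.5, n_cases=100, n_controls=100,
                    alpha=0.05, n_sims=2000, seed=1):
    """Approximate power of a two-sided Fisher's exact test for a binary
    exposure (e.g., carrying a risk genotype) in a case-control design."""
    rng = np.random.default_rng(seed)
    # Convert the control exposure probability and odds ratio into the
    # case exposure probability: odds_case = OR * odds_control.
    odds_control = p0 / (1 - p0)
    odds_case = odds_ratio * odds_control
    p1 = odds_case / (1 + odds_case)
    rejections = 0
    for _ in range(n_sims):
        exposed_cases = rng.binomial(n_cases, p1)
        exposed_controls = rng.binomial(n_controls, p0)
        table = [[exposed_cases, n_cases - exposed_cases],
                 [exposed_controls, n_controls - exposed_controls]]
        _, p_value = fisher_exact(table, alternative="two-sided")
        rejections += p_value < alpha
    return rejections / n_sims

if __name__ == "__main__":
    # With these inputs the estimate lands near the ~0.80 power quoted in the Results.
    print(f"Estimated power: {simulated_power():.3f}")
```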
The alpha level for our analyses was set at the 95% confidence level ( p < 0.05). Based on the ratio of cases to controls, each genotypic combination was categorized as either low or high risk to support the risk evaluation. Finally, a dendrogram, a Fruchterman-Reingold plot, and a circle graph were generated to illustrate the chosen model according to information theory (Moore et al. ). Every connection among variants indicates the entropy-based risk as a percentage of the overall dataset. Positive percentages indicate a synergistic connection, while scores of 0 or less indicate an antagonistic or redundant one. Notably, in the more accurate models the paired percentage exceeds the individual rates (Hahn et al. ). In summary, entropy measurements are used to quantify the amount of information about case–control status carried by unique characteristics. Importantly, a negative rate denotes redundancy (correlation owing to linkage disequilibrium, LD), whereas a positive information gain denotes a significant synergistic or non-additive effect (epistasis, for instance). Notably, as indicated in the Results section, a red line linking two distinct SNPs suggests a highly synergistic interplay (Hu et al. , ). By integrating pharmacogenomics (PGx), this approach aimed to optimize pain management, enhance safety, and reduce addiction risks. This understanding prompted the utilization of multifactor dimensionality reduction (MDR) to explore a range of phenotypes, including PGx and gene–gene interactions (GGI), in a healthy cohort, thereby personalizing pain management strategies. 
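As a purely illustrative companion to the MDR procedure described above, the sketch below reproduces the core MDR step for one pair of SNPs: each two-locus genotype cell is labeled high or low risk from the case:control ratio in a training split, the resulting one-dimensional attribute classifies held-out samples, and balanced accuracy is reported. The genotype coding (0/1/2), the toy data, and the function names are assumptions for demonstration only; the actual analyses used the MDR 3.0.2 software.

```python
import numpy as np

def mdr_pair_balanced_accuracy(geno_a, geno_b, status, train_idx, test_idx, threshold=1.0):
    """One MDR step: pool two-locus genotype cells into high/low risk using the
    case:control ratio in the training set, then score the test set."""
    high_risk = set()
    for ga in (0, 1, 2):
        for gb in (0, 1, 2):
            cell = (geno_a[train_idx] == ga) & (geno_b[train_idx] == gb)
            cases = np.sum(cell & (status[train_idx] == 1))
            controls = np.sum(cell & (status[train_idx] == 0))
            # A cell is "high risk" when its case:control ratio exceeds the threshold.
            if (controls == 0 and cases > 0) or (controls > 0 and cases / controls > threshold):
                high_risk.add((ga, gb))
    pred = np.array([(a, b) in high_risk for a, b in zip(geno_a[test_idx], geno_b[test_idx])], dtype=int)
    truth = status[test_idx]
    sens = np.mean(pred[truth == 1] == 1) if np.any(truth == 1) else 0.0
    spec = np.mean(pred[truth == 0] == 0) if np.any(truth == 0) else 0.0
    return (sens + spec) / 2  # balanced accuracy

# Toy example: 200 individuals (100 cases, 100 controls), genotypes coded 0/1/2.
rng = np.random.default_rng(0)
status = np.array([1] * 100 + [0] * 100)
geno_a = rng.integers(0, 3, size=200)
geno_b = rng.integers(0, 3, size=200)
idx = rng.permutation(200)
train_idx, test_idx = idx[:150], idx[150:]
print(mdr_pair_balanced_accuracy(geno_a, geno_b, status, train_idx, test_idx))
```

In the real analysis this scoring would be repeated over all SNP pairs (and higher-order combinations) within a 10-fold cross-validation loop, retaining the model with the lowest training error, as described above.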
Data Mining and Analyzing the WES Results Data mining of 21 curated PAIma pathways from the PharmGKB database ( https://www.pharmgkb.org/ ) revealed 55,590 annotations, 900 significant variants affecting FDA-approved drugs, and 128 genes. After several rounds of filtration, 128 genes were retained as the main gene list for the WES test analysis. The PAIma panel of genes includes: ABCB1, ABCC2, ABCC3, ABCC4, ABCG2, AKR1B1, AKR1C3, AMACR, ATF2, ATF3, BATF, CES1, CES2, CNR1, CNR2, CYP1A2, CYP1A1, CYP2A6, CYP2C18, CYP2B6, CYP2C19, CYP2D6, CYP2C9, CYP2C8, CYP2E1, CYP3A, CYP3A4, CYP3A7, CYP3A5, FAAH, FKBP1A, FOS, FOSB, FOSL1, FOSL2, GSTA1, GSTM1, GSTP1, GSTT1, HPGDS, IL2, IMPDH1, IMPDH2, JUN, JUNB, JDP2, JUND, MAFB, MAFA, MAFG, MAFF, MAFK, MAF, MAP2K4, MAP2K3, MAP2K6, MAP3K1, MAP2K7, MAP3K7, MAPK8, MAP3K11, MAPK14, NFATC2, NFATC1, NFATC4, NFKB2, NFKB1, NOS1, NOS2, NOS3, NRL, PLA2G2A, PLA2G4A, PPP3CA, PPIA, PPP3CC, PPP3CB, PPP3R1, PTGDR, PPP3R2, PTGDR2, PTGER1, PTGDS, PTGER2, PTGER4, PTGER3, PTGES, PTGES3, PTGES2, PTGIR, PTGFR, PTGIS, PTGS2, PTGS1, REL, RELB, RELA, S1PR1, S1PR3, S1PR5, SLC22A1, SLC22A11, SLC22A6, SLC22A7, SLC22A8, SLC22A9, SLCO1B1, SLCO1B3, SLCO2B1, SULT1A1, SULT1A3, SULT1A4, SULT1E1, SULT2A1, TBXA2R, TBXAS1, TGFB1, UGT1A10, UGT1A1, UGT1A3, UGT1A7, UGT1A6, UGT1A8, UGT2B15, UGT1A9, UGT2B17, UGT2B7, and UGT2B4. 
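As a rough illustration of this mining step (not the exact PharmGKB export format; the file name, column names, and drug list below are assumptions and should be checked against the current PharmGKB downloads), a filtering pass over a clinical-annotation table could be sketched as follows, keeping rows whose drugs belong to a PAIma-related list and collecting the unique genes and rs-identified variants.

```python
import csv

# Hypothetical file and column names; the real PharmGKB downloads should be
# verified against the current documentation at https://www.pharmgkb.org/.
ANNOTATION_FILE = "clinical_annotations.tsv"
PAIMA_DRUGS = {"codeine", "morphine", "ibuprofen", "celecoxib", "tacrolimus"}  # illustrative subset

genes, variants, rows_kept = set(), set(), 0
with open(ANNOTATION_FILE, newline="", encoding="utf-8") as handle:
    for row in csv.DictReader(handle, delimiter="\t"):
        drugs = {d.strip().lower() for d in row.get("Drug(s)", "").split(";")}
        if drugs & PAIMA_DRUGS:  # keep only PAIma-related annotations
            rows_kept += 1
            genes.update(g.strip() for g in row.get("Gene", "").split(";") if g.strip())
            if row.get("Variant/Haplotypes", "").startswith("rs"):
                variants.add(row["Variant/Haplotypes"])

print(f"{rows_kept} annotations retained, {len(variants)} variants, {len(genes)} genes")
```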
By excluding non-coding and synonymous variants, 54 candidate variants that differed from the reference genome (hg38) were identified, with varying Minor Allele Frequency (MAF) estimates across all 100 WES results. Based on variant function, 48 of the 54 were nsSNVs. The remaining 6 variants were either splicing variants (rs2270860, rs776746, and rs4513095) or highly structure-altering variants affecting the final protein product, including 2 stop-gain variants [mutations that cause a premature termination codon] (rs17863778 and rs145014075) and 1 frameshift variant [a mutation introducing an insertion/deletion that shifts the triplet reading frame] (rs11572078). Moreover, it was found that some nsSNVs had overlapping functions, either functional (missense) or regulatory [promoter, transcription factor binding site, enhancer, and CCCTC-binding factor (CTCF)]. As mentioned earlier, rs17863778 ( UGT1A7 ) and rs145014075 ( CYP2A6 ) are stop-gain variants, rs11572078 ( CYP2C8 ) is a frameshift, and rs2270860 ( SLC22A7 ), rs776746 ( CYP3A ; CYP3A5 ), and rs4513095 ( CES1 ) are annotated as splicing variants (Table ). For the power analysis of this case–control design, drawing on allele frequency data from dbSNP ( https://www.ncbi.nlm.nih.gov/snp/ ), the probability of exposure in controls ( p 0 ) was estimated as 0.22 (calculated as the mean MAF based on Table ). If the true odds ratio (OR) for disease in exposed participants relative to unexposed ones is 2.5, the null hypothesis that this OR equals 1 can be rejected with probability (power) 0.802. The Type I error probability for this test of the null hypothesis is 0.05 (α). Therefore, with statistical power greater than 80%, we assessed this null hypothesis using a Fisher’s exact test or a continuity-adjusted chi-squared (χ2) statistic. Gene–Gene Interactions (GGIs) with MDR The MDR analysis produced an entropy-based SNP-SNP interaction network of the 54 variants in a combined attribute network. The whole-dataset statistics calculated by MDR were: balanced accuracy [the average of the recall obtained on each class] 0.99, sensitivity 0.98, specificity 1.0, χ2 = 192.1569 ( p < 0.0001), precision 1.0, kappa 0.98, and F-measure 0.9899. A dendrogram model represents the synergistic relationships of the 54 final variants with each other (Fig. ). With a node visibility threshold of 0.0667 (100%), SD of 0.0787, maximum betweenness centrality of 28.32, and maximum closeness centrality of 0.76, the dendrogram, Fruchterman-Reingold (Fig. A), and circle (Fig. B) models illustrated interesting synergistic relationships among several SNPs, including [SNP4 ( GSTP1 _rs1138272) and SNP20 ( CYP2C9 _rs1799853)], [[SNP23 ( UGT2B7 _rs28365063) and SNP47 ( ABCC2 _rs717620)]/[SNP30 ( SLC22A7 _rs2270860) and SNP37 ( NOS3 _rs1799983)]], [SNP33 ( SLCO2B1 _rs2306168) and SNP42 ( SLCO1B3 _rs4149117)], and [[SNP39 ( ABCC4 _rs1751034)] > [SNP5 ( GSTP1 _rs1695) and SNP13 ( UGT1A10 _rs1105879)]]. 
Moreover, a synergistic cluster among 21 SNPs was found, including SNP21 ( CYP3A7 _rs2257401), SNP22 ( CYP3A5 _rs776746), SNP41 ( SLCO1B1 _rs2306283), SNP34 ( UGT2B7 _rs7439366), SNP45 ( UGT1A7 _rs17863778), SNP18 ( CYP2D6 _rs1135840), SNP19 ( CYP2E1 _rs2515641), SNP32 ( SLCO1B1 _rs2306283), SNP10 ( UGT1A8 _rs2070959), SNP26 ( ABCC2 _rs3740066), SNP25 ( UGT2B7 _rs7438284), SNP51 ( SLC22A1 _rs628031), SNP24 ( UGT2B7 _rs7439366), SNP2 ( CYP2D6 _rs1135840), SNP35 ( AKR1C3 _rs12529), SNP11 ( UGT1A10 _rs17868323), SNP53 ( UGT1A8 _rs1042597), SNP29 ( CYP2C18 _rs1126545), SNP3 ( CYP2D6 _rs16947), SNP15 ( UGT1A10 _rs1105879), and SNP27 ( CYP2B6 _rs3745274). The synergistic cluster contained 14 unique genes: CYP2D6, UGT1A8, UGT1A10, CYP2E1, CYP3A7, CYP3A5, UGT2B7, ABCC2, CYP2B6, CYP2C18, SLCO1B1, AKR1C3, UGT1A7, and SLC22A1 (Table ). These 14 genes were used as the main input for the further in silico analyses described below: signaling pathways, PPIs, GMIs, PCIs, and PDIs. The best model indicated by MDR was an entropic relationship between SNP1 ( ABCC2 _rs2273697), SNP21 ( CYP3A7 _rs2257401), and SNP22 ( CYP3A5 _rs776746). Cross-validation (CV) of SNP1, SNP21, and SNP22 yielded a training balanced accuracy of 0.9906, training χ2 of 173.3261 ( p < 0.0001), training sensitivity of 0.9811, training accuracy of 0.9906, training specificity of 1.0, training kappa of 0.9811, training precision of 1.0, and training F-measure of 0.9905. The dendrogram and Fruchterman-Reingold models indicated a strong synergy between rs2257401 and rs776746 (19.23%); another synergism was revealed between rs2273697 and rs2257401 (11.28%). According to the overall balanced accuracy, there are other important synergistic relationships among other SNPs, for example among SNP6 ( SULT1A1 _rs1042008), SNP21 ( CYP3A7 _rs2257401), and SNP22 ( CYP3A5 _rs776746). Furthermore, a graphical model uncovered the genotypic relevance among these three SNPs (Fig. ). Notably, rs2273697 and rs2257401 are both nsSNVs and rs776746 is a splicing variant. Three-Dimensional Gene–Gene Interactions (GGIs) The ViSEN software analyzes and visualizes non-linear interactions between discrete characteristics, such as SNPs, that predict a discrete outcome, like the case–control condition used in our study. The ViSEN program quantifies both pairwise and 3-way epistatic interactions using information-gain measures. It visualizes three orders of effects, that is, main effects, pairwise interactions, and 3-way interactions, in one network at the same time. In Fig. , the circular nodes represent attributes, the solid-line edges denote pairwise connections, and the triangles denote 3-way connections. The size of the geometric shapes and the width of their edges indicate their strength (Hu et al. , , ). To gain deeper insight into the SNP-SNP interactions, we applied ViSEN to all 54 SNPs to find three-way interactions. The network visualized by ViSEN used a top 2-way interaction threshold of 0.314 and a top 3-way interaction threshold of 0.0319. This 2-way/3-way interaction network confirmed the MDR results and added 3-way interactions for multiple SNPs. Specifically, SNP17 and SNP6 had the highest 3-way interactions (considering both 2- and 3-way interactions) (Fig. ) (Supplementary Table 1). 
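For readers who wish to see how entropy-based synergy percentages of the kind reported by MDR and ViSEN are typically derived, the following minimal sketch (our own illustration, not the ViSEN source code) computes the pairwise interaction information for two SNPs and a binary status from empirical frequencies; positive values correspond to synergy and negative values to redundancy, expressed here as a percentage of the outcome entropy.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of discrete labels."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def interaction_information(snp_a, snp_b, status):
    """I(A;B;C) = I(A,B;C) - I(A;C) - I(B;C); positive => synergy, negative => redundancy."""
    h_c = entropy(status)
    mi_a = h_c + entropy(snp_a) - entropy(list(zip(snp_a, status)))          # I(A;C)
    mi_b = h_c + entropy(snp_b) - entropy(list(zip(snp_b, status)))          # I(B;C)
    joint_ab = list(zip(snp_a, snp_b))
    mi_ab = h_c + entropy(joint_ab) - entropy(list(zip(joint_ab, status)))   # I(A,B;C)
    return (mi_ab - mi_a - mi_b) / h_c * 100  # percentage of the outcome entropy

# Toy example with a purely epistatic (XOR-like) pattern, which yields strong synergy.
rng = np.random.default_rng(42)
a = rng.integers(0, 2, 500)
b = rng.integers(0, 2, 500)
status = a ^ b  # status depends only on the combination of the two loci
print(f"interaction information: {interaction_information(a, b, status):.1f}% of H(status)")
```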
Protein–Protein Interactions (PPIs) To validate the PPIs among the 14 candidate genes, the STRING-MODEL of these genes was used, and the primary outcome revealed that all of these genes are connected to each other based on strong molecular evidence, with a PPI enrichment p -value lower than 1.0e-16 (Fig. ). Signaling Pathway Analysis (SPA) Employing Cytoscape ver. 3.10.1, the most significant curated signaling pathway containing the 14 genes was the codeine and morphine metabolism pathway, with a p -value of 3.69e-13. Cytoscape also showed that the tamoxifen metabolism pathway was the second most significant curated pathway ( p = 1.66e-12) (Fig. ). Gene-miRNA Interactions (GMIs) GMIs obtained through NetworkAnalyst, with the GMIs adjusted according to miRTarBase v8.0, revealed a First-Order Network comprising several sub-networks, among which the CYP3A5, CYP2E1 , and SLCO1B1 genes had the most important miRNA connections. The GMIs indicated that hsa-miR-355-5p is particularly important owing to its links with all three of the aforementioned genes. The backbone model of the GMIs was selected to show the gene-miRNA connections (Fig. A). Furthermore, a TF-miR coregulatory network was applied to this gene list and notable outputs were obtained. The literature-validated regulatory interaction data were gathered from the RegNetwork ( http://www.regnetworkweb.org/ ) repository. Some transcription factors (TFs) showed multiple interactions with the mentioned genes, including PPARG, which was related to ABCC2, CYP2D6 , and UGT1A10 ; HNF4A, which had associations with ABCC2 , CYP2D6 , and AKR1C3 ; and SRF, which was connected to UGT1A10, UGT1A7 , and UGT1A8 (Fig. B). Protein-Drug Interactions (PDIs) The protein and drug target information utilized by NetworkAnalyst was obtained from the DrugBank database (version 5.0). The PDIs indicated multiple separate subnetworks. One example of a separate subnetwork was paliperidone, an FDA-approved drug with more than one gene target ( CYP2D6 and CYP3A5 ) (figure not shown). Protein-Chemical Interactions (PCIs) The PCI data in NetworkAnalyst were based on the Comparative Toxicogenomics Database (CTD), and the PCI findings showed that the most interactive gene is CYP2D6 and that the highest interaction degree for chemicals is 2. In a linear bi/tripartite model of the PCIs, some chemicals might be candidates for future drug discovery, such as 4-(N-methyl-N-nitrosamino)-1-(3-pyridyl)-1-butanone (linked to CYP2D6 and CYP2E1 ), 4-aminophenylarsenoxide (associated with CYP2D6 and ARK1C ), 1,1,1-trichloro-2-(4-hydroxyphenyl)-2-(4-methoxyphenyl)ethane (interacting with CYP2D6 and CYP2C18 ), benzo(a)pyrene (connected to CYP2D6 and SLC22A1 ), and finally, 2-amino-1-methyl-6-phenylimidazo(4,5-b)pyridine (related to CYP2D6 and ABCC2 ). The other chemicals can be found in Fig. A. Gene-Disease Associations (GDAs) GDAs were investigated to find the most important genes involved in various diseases using NetworkAnalyst, which utilizes literature-curated GDA information from the DisGeNET database. While the Sugiyama model indicated that CYP2D6 had the highest betweenness, both CYP2E1 and CYP3A5 trigger diseases in common with CYP2D6 . For comprehension purposes, adverse drug reactions (ADRs), drug allergy, chemical- and drug-induced liver injury, Parkinson’s disease, schizophrenia, and mood disorders are some of the diseases associated with the CYP2D6 , CYP2E1 , and CYP3A5 genes (Fig. B). 
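As a small companion to the network statistics quoted above (degree and betweenness in the GDA network), the sketch below shows how such centralities can be computed with networkx from a gene–disease edge list; the edges are an invented toy subset that loosely echoes the associations mentioned in the text, not the DisGeNET data analyzed via NetworkAnalyst.

```python
import networkx as nx

# Toy gene-disease edges loosely echoing the associations discussed above;
# the real analysis used literature-curated DisGeNET data via NetworkAnalyst.
edges = [
    ("CYP2D6", "Adverse drug reaction"), ("CYP2D6", "Drug allergy"),
    ("CYP2D6", "Drug-induced liver injury"), ("CYP2D6", "Parkinson's disease"),
    ("CYP2E1", "Drug-induced liver injury"), ("CYP2E1", "Adverse drug reaction"),
    ("CYP3A5", "Drug allergy"), ("CYP3A5", "Adverse drug reaction"),
]

graph = nx.Graph(edges)
degree = dict(graph.degree())
betweenness = nx.betweenness_centrality(graph)

# Rank the gene nodes (not the disease nodes) by betweenness to mimic the hub comparison.
genes = {"CYP2D6", "CYP2E1", "CYP3A5"}
for gene in sorted(genes, key=lambda g: betweenness[g], reverse=True):
    print(f"{gene}: degree={degree[gene]}, betweenness={betweenness[gene]:.3f}")
```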
In summary, we believe that our approach described herein, involving pretesting investigations focused on a putative major signaling pathway category (PAIma, including 21 curated pathways), led to raw big-data annotation (55,590 variant annotations), which in our case was further filtered multiple times to achieve the required refinement (54 functional nsSNVs and regulatory variants remained). These refined variants were related to the initial 128 pharmacogenes and to the WES tests (VCFs) of 100 Western Iranians analyzed against this primary gene list, or gene panel, through Ubuntu 22.04.2. To our knowledge, this is the first study to utilize a clean, lifetime non-psychoactive-drug-using cohort and to apply multiple deep in silico pharmacogenomic layers, with the aim of informing pharmacogenomics guidelines, future NGS analyzers, and company-based sites such as Centogene ( https://www.centogene.com/ ), Fulgent Genetics ( https://www.fulgentgenetics.com/ ), DisGeNET ( https://www.disgenet.org/ ), CeGat ( https://cegat.com/ ), Blueprint Genetics ( https://blueprintgenetics.com/ ), Prevention Genetics ( https://www.preventiongenetics.com/ ), Asper Biogene ( https://www.asperbio.com/ ), Invitae ( https://www.invitae.com/ ), etc. In essence, we applied a new gene–gene interaction strategy designed for MDR analysis (controls vs. pseudo-controls) to the final 54 variants (SNPs) that remained following the preceding analyses. Importantly, the MDR results showed a synergistic cluster containing 21 SNPs and 14 related protein-coding genes. Briefly, for readers’ comprehension and because of the complexity of these heuristic results, we illustrated a high-level diagram indicating our final finding(s) at each main step of this investigation, from the 55,590 PGx annotations (raw data) to the MDR and ViSEN results (3 and 2 SNPs, respectively) (Fig. ). 
For the current investigation, we performed multiple analyses based on the filter-based findings of our newly introduced PAIma panel (derived from WES results) and a novel strategy for the MDR analyses. Additionally, we performed multiple in silico analyses that included 3-way GGIs, PPIs, SPA, GMIs, PDIs, PCIs, and GDAs. The MDR analyses revealed a synergistic cluster containing 21 SNPs related to 14 genes. In addition, ViSEN indicated rs145014075 ( CYP2A6 ) and rs1042008 ( SULT1A1 ) as having the highest 3-way interactions. The GMIs revealed that hsa-miR-355-5p is particularly important. Interestingly, the TF-miR coregulatory network analysis uncovered the metabolizing gene CYP2D6 as being highly impactful. The PDIs identified paliperidone as the most highly connected FDA-approved drug, associated with CYP2D6 and CYP3A5 . The PCIs revealed that CYP2D6 had the most relationships with chemicals. Lastly, the GDAs highlighted that CYP2D6 had the highest betweenness. The GDAs also showed that CYP2E1 and CYP3A5 trigger diseases in common with CYP2D6 , such as ADRs, drug allergy, chemical- and drug-induced liver injury, Parkinson’s disease, schizophrenia, and mood disorders. Furthermore, our results point to a smaller set of potentially actionable SNPs (compared with the 54 SNPs) for drug prescribing within the PAIma pathways among Iranians, obtainable by genotyping an array of 10 SNPs including rs1135840, rs16947, rs1135840, rs138417770 ( CYP2D6 ), rs776746 ( CYP3A5 ), rs145014075 ( CYP2A6 ), rs2515641 ( CYP2E1 ), rs11045819, rs2306283 ( SLCO1B1 ), and rs1042008 ( SULT1A1 ). As mentioned earlier, our results related to CYP2D6, CYP3A5, CYP3A7, CYP2A6, ABCC2, and SULT1A1 indicate important roles in pain, inflammation, and immunity processes. There are remarkable reports in the literature highlighting associations of the pharmacokinetics and pharmacodynamics of these genes with pain management in healthy people (Ohno et al. ; Novalbos et al. ; Lohela et al. ). However, few studies have investigated the relationships of genetic polymorphisms of these genes with pain management. For instance, a recent systematic review by Zobdeh et al., covering 25 papers (out of 6547 originally identified publications) on drug–gene relationships relevant to drug safety, yielded some remarkable results. 
These authors found important medication–gene interactions in pain management, including ibuprofen with CYP2C9 ; celecoxib with CYP2C9 ; piroxicam with CYP2C8 and CYP2C9 ; diclofenac with CYP2C9 , CYP2C8 , UGT2B7 , and ABCC2 ; meloxicam with CYP2C9 ; aspirin with SLCO1B1 , CYP2C9 , and CHST2 ; amitriptyline with CYP2C19 and CYP2D6 ; imipramine with CYP2C19 ; nortriptyline with CYP2D6 , CYP2C19 , and ABCB1 ; and lastly, escitalopram with HTR2C , CYP2C19 , and CYP1A2 (Zobdeh et al. ). Interestingly, in a randomized clinical trial by Pickering et al., polymorphisms of 23 receptors and enzymes were investigated, and associations were found with pain alleviation among 47 Caucasian healthy volunteers. These investigators described the association of the SULT1A1 SNP (rs224534) with paracetamol anti-nociception (Pickering et al. ). All of these studies are consistent with our investigation. In contrast to the PGx associations of PAIma panel genes among healthy cohorts, Mejía-Abril et al. enrolled 85 healthy individuals in 3 clinical trials and reported that there were no associations between the genetic polymorphisms of CYP2D6, CYP3A5, ABCB1, CYP3A4, ABCC2, UGT1A1, SLCO1B1, CYP1A2, CYP2A6 , CYP2C8, CYP2B6, CYP2C19, CYP2C9 , and SLC22A1 and the adverse effects of dexketoprofen as an NSAID (Mejía-Abril et al. ). We are cognizant that using powerful opioids and other psychoactive drugs, even at an addictive rate, will not alter DNA antecedents or the presence of such polymorphisms, but will indeed affect miRNA transcription epigenetically for at least two (F2) generations (Hamilton and Nestler ). Other reports have acknowledged the significance of inter-individual genetic variation in buprenorphine metabolism, revealing variable treatment response; these genetic variations cause treatment failure in some patients and make them highly vulnerable to relapse. Accordingly, Ettienne et al. made a strong case for clinical pharmacogenomics studies, such as the one revealed herein, to profoundly affect opioid prescribing based on one’s inherited genetic variations and the subsequent drug response (Ettienne et al. ). Specifically, in their study, PGx testing demonstrated that African-American patients presented a cytochrome P450 3A4 (CYP3A4) ultra-rapid metabolizer phenotype necessitating a higher-than-suggested daily dosage of buprenorphine (32 mg) for suitable OUD management. Compared with the patient’s relapse rate under usual dosing, the pharmacogenetically guided dose recommendation showed a significant reduction in relapses, which is an important recovery outcome. It is noteworthy that in the current study, albeit with a rather modest population requiring much larger samples and a variety of ethnic groups before any generalizations can be made, the results of this first-ever approach and the subsequent map of 14 genes seem quite parsimonious. Certainly, confirmation of these results would open a new pathway in recovery science to assist in prescribing methadone and buprenorphine to reduce harm in addicted populations and even to treat ongoing pain (Lee et al. ). The take-home message herein is to highlight the effect of PGx testing on OUD management outcomes. Finally, one important feature of these findings, related to pharmacogenomic gene–gene interactions, is their potential future usefulness as a more accurate way for clinicians to prescribe opioid medications for pain and/or harm reduction (Adams et al. ; Suarez et al. ; Johnson et al. ). 
This may involve, prior to prescribing these powerful addictive pharmaceuticals, especially to a genetically or epigenetically pre-addicted population, genotyping the variants denoted herein as an initial panel (McLellan et al. ; Blum et al. ). However, we believe that both time and cost savings might be achieved via molecular techniques such as multiplex PCR and real-time PCR before resorting to WES (which takes weeks to months to prepare and to interpret an individual’s drug susceptibilities). We believe that these results are encouraging and may provide a gene–gene map of heuristic value to help reduce the global public health opioid crisis (Blum et al. ; Bakkali et al. ; McKenzie-Brown et al. ; Muir et al. ; De Aquino et al. ). While our study of one hundred participants yielded significant findings, a larger cohort employing this PGx approach is needed to refute or confirm these potentially interesting results. Based on this investigation, we propose that the flexibility of pharmacogenomics could enable the prescribing clinician to use these findings, for example by developing a panel of the 14 annotated genes as a guide to assist in prescribing powerful opioids not only for pain management but also for recovery maintenance. Following confirmation of our results, albeit in an Iranian population, this type of work could serve as a model going forward. While we tried our best to control the power of the study, it is highly advisable that future studies performing MDR consider the MAFs. The most interesting finding of this investigation supports a personalized medicine approach that links our WES results with known FDA-approved drugs involved in PAIma pathways, by narrowing the 55,590 variant annotations down to the 21 actionable alleles found in a synergistic cluster of GGIs. Simply put, the novelty of our primary findings, coupling WES and the PAIma panel with MDR analyses, may lead to the development of true personalized medicine and enable higher precision in prescribing the related FDA-approved analgesics, such as opioid-type drugs, pending similar population studies in other countries. As such, we are cognizant that ethnicity (race) may vary accordingly, and our panel of 14 annotated genes may differ from country to country. The goal of the current study was to create an updated and comprehensive PGx gene panel (PAIma) focusing on pain, based on GGI using MDR and ViSEN. Our study considered pain pathways and their related variants, genes, and drugs (methadone, morphine, nicotine, and amphetamine). Moreover, in an attempt to capture novel personalized PGx medicine, we investigated the susceptibility and predisposition of healthy people to pain and psychiatric medications by displaying their associated PGx-related variants. We highly recommend translating this work to clinical utility by using the PAIma panel (128 pharmacogenes) in NGS analyses, targeted therapy, and prescribing the right pharmaceuticals at effective doses, including analgesics, antipsychotics, NSAIDs, chemotherapeutic agents, antiviral medications, and transplantation anti-rejection agents. Undertaking analyses along the lines of our strategy might prove worthwhile to the entire scientific community in reducing the well-described global public health opioid misuse crisis.
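To make the proposed clinical translation more concrete, the sketch below shows how a prescriber-facing tool might flag actionable variants from a patient's genotyping results against a small panel such as the one suggested above. This is a minimal illustration only: the risk alleles, prescribing notes, and patient calls are hypothetical placeholders, not validated clinical annotations from this study.

```python
# Illustrative sketch only: screening genotype calls against a small
# actionable-variant panel. All annotations below are hypothetical.

# Hypothetical panel: rsID -> (gene, risk allele, prescribing note)
ACTIONABLE_PANEL = {
    "rs1135840":   ("CYP2D6",  "C", "consider dose adjustment for CYP2D6 substrates"),
    "rs776746":    ("CYP3A5",  "T", "expresser status may alter opioid dosing"),
    "rs145014075": ("CYP2A6",  "A", "reduced-function allele reported in this study"),
    "rs1042008":   ("SULT1A1", "T", "possible altered analgesic sulfation"),
}

def flag_actionable_variants(genotypes: dict) -> list:
    """Return human-readable flags for panel variants carrying a risk allele."""
    flags = []
    for rsid, (gene, risk_allele, note) in ACTIONABLE_PANEL.items():
        call = genotypes.get(rsid)          # e.g. "CT"; None if not genotyped
        if call and risk_allele in call:
            flags.append(f"{gene} {rsid} ({call}): {note}")
    return flags

if __name__ == "__main__":
    patient = {"rs1135840": "CT", "rs776746": "CC", "rs1042008": "TT"}
    for flag in flag_actionable_variants(patient):
        print(flag)
```

A real implementation would of course draw its alleles and recommendations from curated PGx resources rather than a hard-coded dictionary; the point here is only the shape of a panel-based pre-prescription check.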
Finally, it is quite possible that adoption of our technique could provide a higher level of personalized medicine and precision opioid-type therapy and, as such, may become the new norm. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 15 KB)
ERAP Inhibitors in Autoimmunity and Immuno-Oncology: Medicinal Chemistry Insights
8d0bb73b-4a4b-4c4a-8232-a8066bfcf259
11284793
Pharmacology[mh]
After decades of biologic-based treatments in immunology and immuno-oncology, small molecules are advancing to overcome treatment resistance and to address previously untapped or inaccessible targets. Inhibition of ERAPs, proteases of the antigen processing and presentation pathway that are linked to the risk of developing cancer or MHC-I opathies, is an attractive treatment strategy. This perspective provides guidance for the design and development of ERAP inhibitors with improved profiles and highlights challenges for their characterization.
Therapeutic Relevance of ERAP Inhibition
1.1 Biological Role of ERAP Enzymes and Implication in Diseases
The endoplasmic reticulum aminopeptidases (ERAPs) are intracellular M1 family proteases involved in the antigen processing and presentation pathway . These enzymes have attracted much attention since their role in antigen presentation was discovered by N. Shastri (murine ERAAP), , York and Goldberg (ERAP1), , and by P. van Endert (ERAP2). In cells, they are located in the endoplasmic reticulum (ER), where they truncate the N-terminus of extended peptide precursors, generating epitopes that bind to the major histocompatibility class I (MHC-I) complex for presentation at the cell surface ( A). In addition to their role in the production of antigenic peptides, ERAP enzymes have been shown to degrade some antigens to lengths too short to bind the MHC-I ( B). Both enzymes thus work together to shape the immunopeptidome. Cell-based experiments and in vivo studies in mice and humans have shown that ERAP1 and ERAP2 thereby regulate adaptive and NK-mediated immune responses. − It is noteworthy that a related enzyme, IRAP (insulin-regulated aminopeptidase), is implicated in antigen cross-presentation . , Large-scale genetic association studies, complemented by preclinical experiments, have linked polymorphisms in ERAP with various human pathologies (Supporting Information (SI), Table S1 ). ERAP enzymes are strongly associated with predisposition to the so-called “MHC-I opathies”, i.e., inflammatory diseases with a strong genetic link to the MHC-I antigen presentation pathway (HLA in humans). , MHC-I opathies include Behçet’s disease, ankylosing spondylitis, birdshot uveitis, and psoriasis. For example, ERAP1/2 is the second strongest genetic risk factor for ankylosing spondylitis after HLA-B27. ERAP alleles with reduced activity or ERAP deficiency protect against the disease. Ninety percent of patients carrying HLA-B27 alleles have single nucleotide polymorphisms (SNP) in ERAP1. , ERAP2 is also a risk factor for ankylosing spondylitis. HLA-B*51 is strongly associated with the risk of Behçet’s disease, and ERAP1 polymorphism affects the peptidome bound to HLA-B*51. Interestingly, a meta-analysis also showed that HLA-B*27 is a new susceptibility gene associated with Behçet’s disease, especially in patients with coexisting Behçet’s disease and ankylosing spondylitis. , ERAP2 also synergizes with ERAP1 to shape the immunopeptidome in patients with Behçet’s disease. Both ERAP1 and ERAP2 are associated with psoriasis. − Patients suffering from psoriasis have a higher ERAP1/2 ratio than healthy donors, and this association depends on the presence of HLA-C*06:02. ERAP1 and most particularly ERAP2 are involved in birdshot uveitis, a disease associated with HLA-A29. , In Crohn’s disease, the MHC-II involvement is well described.
However, MHC-I involvement has recently been reported, and an association with ERAP2 polymorphism has been identified. In oncology (SI, Table S1 ), altered ERAP1–2 expression in human tumors determines immune evasion. Deletion of ERAP enhances tumor rejection in a mouse model of lymphoma. Altered ERAP1 activity due to polymorphic variation has been associated with clinical outcome in cervical cancer and in non-small cell lung cancer. , In HPV-driven cancers, ERAP1 function correlates with tumor-infiltrating immune cells. ERAP1 depletion resulted in the presentation of a cryptic tumor epitope and elicited strong T-cell-mediated cytotoxic responses against cancer cells. ERAP2 is highly elevated in oral squamous cell carcinoma (OSCC), its expression inversely correlates with overall survival, and its deletion prevents cancer cell migration and invasion. ERAP2 is also associated with the immune infiltration of tumors and is a strong predictor of overall survival in cancer, particularly in patients receiving checkpoint inhibitor therapy. ERAP inhibitors could thus be used as a strategy to overcome resistance to immune checkpoint inhibitors and as a personalized cancer immunotherapy. An ERAP1 inhibitor (GRWD5769) is currently in a phase 1/2 clinical trial in patients with advanced solid tumors, both as monotherapy and in combination with the PD-1 inhibitor cemiplimab. Several studies have also linked ERAP variants with infectious diseases, particularly viral infections (SI, Table S1 ). Other intra- or extracellular functions have been uncovered for ERAP1 and 2. For example, ERAP1 has been identified as a previously unknown player in the Hedgehog (Hh) signaling pathway, and its genetic inhibition suppresses Hh-dependent tumor growth both in vitro and in vivo , suggesting that ERAP1 is a promising therapeutic target for Hh-driven tumors. More recently, ERAP2 has been shown to inhibit the Hh signaling pathway that promotes pyroptosis in rheumatoid arthritis. ERAP1 has also been implicated in cytokine receptor shedding and in the inflammatory response, as it induces macrophage phagocytosis and is involved in NO synthesis. , Given the importance of ERAP enzymes in the antigen processing and presentation pathway, two therapeutic applications in particular are being explored . In immuno-oncology, an ERAP inhibitor could help restore immune-mediated cancer cell destruction by promoting neoantigen presentation ( A). In the treatment of autoimmune diseases, particularly MHC-I opathies, ERAP inhibitors could help to tame the immune system by shaping the immunopeptidome, acting upstream of antigen presentation ( B). In both cases, acting on ERAP enzymes targets the early events of immune system activation, before antigen presentation . This strategy is complementary to other therapeutic intervention points using, depending on the disease, checkpoint inhibitors, antibodies targeting inflammatory cytokines or their receptors, or specific intracellular activated pathways .
1.2 Genes, Structure, and Substrate Preferences of ERAP
ERAP genes are located on chromosome 5q15 ( A). The two genes are highly polymorphic. For ERAP1 , at least 10 distinct haplotypes have been identified in the human population ( A). The single nucleotide variants (SNV) can combine to encode protein variants (allotypes) that differ, for example, in their enzymatic activity. For ERAP2, two SNVs lead to the main haplotypes 2A and 2B.
The latter results in the generation of a premature stop codon, leading to nonsense-mediated mRNA decay and no expression of ERAP2 protein ( A). , ERAP enzymes are members of the oxytocinase subfamily of M1 Zn-metallopeptidases. ERAP1 and ERAP2 share 52% sequence identity and are also closely related to IRAP (46 and 44% identity to ERAP1 and ERAP2, respectively) (SI, Figure S1A ). Differences in the amino acids of the catalytic pocket of the 3 enzymes are reflected in their substrate selectivity (SI, Figure S2 ). ERAP1 has orthologues in many animal species, and its sequence is highly conserved in rodents, cynomolgus, and dog (SI, Figure S1B ). It shares 86% sequence identity with the murine ERAAP enzyme. ERAP2 also has orthologues in several animal species but is not present in rodents (SI, Figure S1B ). ERAP enzymes are structured in 4 domains . Domain II contains both the catalytic Zn 2+ coordinated by the HEXXH(X) 18 E motif and the substrate recognition GAMEN loop. ERAP1 was first crystallized with bestatin, a nonselective aminopeptidase inhibitor, as well as in its apo form. There are now 14 structures available in the PDB (SI, Table S2 ). ERAP1 could be crystallized in both “closed” and “open” forms ( B). Comparison of the two structures revealed a major conformational change of the protein that allows domain IV and domain II to interact in the “closed” state. , This conformational change affects its catalytic activity. For example, in the open conformation, Tyr438 adopts a nonoptimal position for catalytic activity ( B). A first allosteric site (site A, also called the “malate site”, B), located about 25 Å away from the catalytic site, has been shown to regulate the catalytic activity of ERAP1 by binding the C-terminus of long substrates. , Conformational dynamics studies based on small-angle X-ray scattering showed that allosteric modulators were able to induce the closure of ERAP1. Activators and inhibitors, including nonhydrolyzable substrates, have allowed the discovery of another allosteric site (allosteric site B, B). , It is not yet clear whether this site can also accommodate the C-terminus of long substrates. In contrast to ERAP1, no “open” conformation of ERAP2 has yet been observed (SI, Table S2 ). Recently, however, two independent studies have disclosed compounds with an atypical binding to an allosteric site adjacent to the main catalytic pocket of ERAP2. , This suggests that some conformational mobility may also exist for ERAP2. Finally, the analysis of the X-ray structures of ERAP1 and ERAP2 also supports the cooperative action of heterodimers. Consistent with the sequence variation in the catalytic subpockets (SI, Figure S2 ), ERAP1 and ERAP2 have distinct but complementary substrate preferences. ERAP1 preferentially hydrolyses peptides longer than 8 residues with bulky hydrophobic side chains in S1 to produce peptides that fit into the binding groove of MHC-I molecules. , The trimming mechanism of ERAP1 has been extensively studied and has led to the “molecular ruler” concept, which highlighted the key role of allosteric site A in setting the distance between the amino- and carboxy-termini of the trimmed peptides. In contrast to ERAP1, ERAP2 preferentially trims positively charged amino acids and shorter peptides.
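The complementary substrate preferences just described can be condensed into a coarse rule of thumb. The sketch below encodes only the rules stated in the text (ERAP1: peptides longer than 8 residues with a bulky hydrophobic N-terminal residue; ERAP2: shorter peptides with a positively charged N-terminal residue); it is a toy illustration under these simplified assumptions, not a predictive model of trimming, which in reality is sequence- and allotype-dependent.

```python
# Toy illustration of the coarse ERAP1/ERAP2 substrate-preference rules
# described in the text; not a predictive model of peptide trimming.

BULKY_HYDROPHOBIC = set("FWYLIM")   # bulky/hydrophobic residues favored by ERAP1 at P1
POSITIVE = set("KR")                # positively charged residues favored by ERAP2 at P1

def preferred_trimmer(peptide: str) -> str:
    """Guess which aminopeptidase would preferentially trim the N-terminal residue."""
    p1 = peptide[0]
    if len(peptide) > 8 and p1 in BULKY_HYDROPHOBIC:
        return "ERAP1"
    if p1 in POSITIVE:
        return "ERAP2"
    return "neither/weak"

# SIINFEKL is the model epitope discussed in the text; the other two
# peptides are invented examples used only to exercise the rules.
for pep in ["FMNKTEAQLVR", "KSDLQTAV", "SIINFEKL"]:
    print(pep, "->", preferred_trimmer(pep))
```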
1.3 Overview of ERAP Inhibitor Series and Discovery Strategies
In the past decade, key steps such as solving ERAP structures with or without ligands, leaps in potency and selectivity, discovery of new binding pockets or allosteric sites, and optimization of drug-likeness have allowed the transition from pharmacological tools to leads and to the first clinical candidate, GRWD5769 ( A). Approximately 15 chemical series of inhibitors have been reported (SI, Table S3–S4 ). The majority of them (61%) have been rationally designed ( B). A smaller fraction (26%) has been identified by high-throughput on-target screening, while in silico strategies are also reported (4%) ( B). Fragment screening led to the discovery of micromolar ERAP1 inhibitors (SI, Table S5 ). Finally, protein-templated reactions were also successfully applied (4%; KTGS: kinetic target-guided synthesis) ( B). The most populated chemical series are phosphinic peptidomimetic inhibitors and structure-based designed diaminobenzoic acids, both targeting the catalytic site. Other catalytic site inhibitors include suberones, hydroxamic acids, α-hydroxy-β-amino acids, and pyridinones (SI, Table S4,S6 ). Allosteric inhibitors or activators were also disclosed and allowed identification of some modulation pockets specific to ERAP1. These include phenyl-sulfamoyl-benzoic acid derivatives and other carboxylic acids. All of these inhibitors vary widely in potency, selectivity, and physicochemical properties. In this perspective, we have selected relevant ERAP inhibitors focusing on their potency and selectivity, which are the main challenges, as well as their discovery strategy and their binding mode (allosteric or catalytic). As many inhibitors are not fully characterized in terms of physicochemical properties, we have analyzed the seven most populated chemical series and highlighted the main drivers for optimization as a function of ADME properties, selectivity, and ligand efficiency. We aim to provide the medicinal chemistry community with guidance for the design of the next generation of ERAP inhibitors with improved profiles and to highlight the challenges that remain for their design and their characterization both in vitro and in vivo .
Pan ERAP Inhibitors
2.1 Phosphonic and Phosphinic Derivatives
DG002 and DG013 are the first rationally designed inhibitors of ERAP enzymes. These compounds are pseudodi- or tripeptides bearing a phosphinic group that acts as a transition-state analogue in both ERAP1 and ERAP2. While DG002 is a submicromolar inhibitor of ERAP1 and ERAP2, regardless of its stereochemistry, DG013A [ R , S , S ] was the first nanomolar inhibitor of both ERAP1 and 2 (IC 50 = 33 and 11 nM, respectively). DG013A and analogues also inhibit the hydrolysis of a 10-mer fluorogenic ERAP1 substrate peptide, WEVYEKC DNP ALK ( DG013A , ERAP1, IC 50 = 55 nM). This inhibitor enhances SRHFLAFSFR epitope presentation on HeLa-B27 MHC-I molecules. It also rescues GSW11 peptide neoantigen presentation in CT26 cells. In melanoma cells, DG013A reduces the presentation of the ERAP1-dependent model antigenic epitope SIINFEKL from its LEQLESIINFEKL precursor. In these cells, the inhibitor also slightly modulates the immunopeptidome presented on the cell surface. DG013A also reduces the expression of HLA-B27 free heavy chain in HeLa-B27, the differentiation of Th17 cells, and the secretion of IL-17A from CD4+ T cells. This effect was confirmed in PBMC from SpA patients. ERAP phosphinic inhibitors usually retain some activity on the related IRAP enzyme and other M1 family metalloproteases. For example, DG013A is an inhibitor of both IRAP and the neutral aminopeptidase APN (IC 50 = 30 nM and IC 50 = 3.7 nM, respectively). , About 30 stereospecific phosphinic pseudotripeptide analogues with modified S1′ and S2′ groups showed high potency ( DG046 , DG011A , 1 – 6 ). While DG046 , DG011A , and 1 – 3 are more selective for ERAP2 than for ERAP1, they lack selectivity toward IRAP . DG011A , displaying l -Ser in S2′ and l -Leu in S1′, shows the best selectivity for ERAP2 (>50-fold and ∼13-fold selectivity toward ERAP1 and IRAP, respectively). Introduction of aryl groups in S1′ drastically reduces the activity toward ERAP1 ( 2 – 3 ), and aromatic amino acids in S2′ (Trp, Phe, Tyr) yield low nanomolar ERAP2 inhibitors ( DG046 , 1 – 2 ). Interestingly, DG046 , with l -Phe in S2′ and a propargyl group in S1′, is very potent against all 3 enzymes (ERAP1, IC 50 = 43 nM; ERAP2, IC 50 = 37 nM; IRAP, IC 50 = 2 nM). The most potent ERAP1 inhibitor ( 4 , IC 50 = 33 nM), with an l -Phe group in S2′ and an extended para -isoxazolyl phenol in S1′, unfortunately showed poor selectivity over ERAP2 (IC 50 = 56 nM) and was even more potent against IRAP (IC 50 = 4 nM). While shifting the position of the hydroxyl group ( 5 ) had no effect on selectivity, replacing the phenol with chlorobenzene ( 6 ) in the S1′ pocket improved selectivity over ERAP2 (IC 50 = 345 nM) but not over IRAP (IC 50 = 34 nM). Four compounds in the series were crystallized with either ERAP1 or ERAP2 ( , Table S2 ). As expected, the phosphinic group of DG013A coordinates the active site Zn 2+ ion, and its two oxygen atoms are further stabilized by hydrogen bonding interactions with Glu371 and the hydroxyl group of Tyr455 ( A, PDB 4JBS ). The homophenylalanine makes hydrophobic interactions with the conserved Phe450 that lines the base of the S1 specificity pocket.
The leucine side chain is stabilized by hydrophobic interactions with Val367, which defines the bottom of a shallow hydrophobic S1′ pocket. Finally, the tryptophan residue is stacked between Tyr455 and Tyr892. All major residues involved in DG013A binding, except Tyr892, are conserved in ERAP1, ERAP2, and IRAP, explaining the poor selectivity of this compound. Recently, analogues 3 and 5 were successfully crystallized with ERAP2. Interestingly, the P1′ groups showed different orientations near the ERAP2 active site ( B, PDB 7PFS ). These structures suggest that the S1′ pocket in ERAP2 may be inhibitor dependent and may be useful for further optimization of this class of compounds. In 2019, Giastas et al. reported the first high-resolution (1.60 Å) crystal structure of the closed conformation of ERAP1, in complex with the analogue DG046 ( C, PDB 6Q4R ). In this complex, the homophenylalanine side chain adopts a different orientation and forms a T-shaped π–π interaction with Phe433. The propargyl group facing the S1′ site adopts a spatial conformation that optimizes its π–π interactions with the aromatic cloud of the zinc-coordinating His353. The phenylalanine residue of DG046 interacts via T-shaped aromatic interactions both with Tyr438 of ERAP1 and intramolecularly with the phenyl group of the homophenylalanine side chain. The absence of an aromatic residue at S2′ of ERAP1 (Ser869), in contrast to ERAP2 (Tyr892) and IRAP (Tyr961), prevents the P2′ group from π-stacking with any aromatic group from domain IV; it therefore adopts a completely different rotamer conformation and a more distant placement of its C-terminal carbonyl group (∼3 Å further from Zn 2+ ), compared to the corresponding group of DG013A when bound to ERAP2. ERAP phosphonic and phosphinic acid inhibitors from an in-house library previously used for APN inhibitor discovery ( 7 , 8 ) were reported to be submicromolar ERAP2 inhibitors with moderate selectivity (3–60×). Introduction of a hydrogen-bond acceptor (HBA) group, such as NO 2 , in the most potent compound 7 allows interaction at P1 with Arg895/Gln447 (Ala872/Arg330 in ERAP1), as suggested by docking ( D). Conversely, the introduction of a basic group at the P1′ position is not tolerated by ERAP2 due to electrostatic repulsions involving Arg366 (Met349 in ERAP1) ( 8 ).
Specific Learning
Rational design using phosphinic analogues of the tetrahedral transition state allows achievement of low nanomolar potency on ERAP. Selectivity, especially toward IRAP, is still a challenge, but key differences between ERAP1 and ERAP2 can be exploited to achieve 1.5 log separation in potency in both directions.
2.2 Diaminobenzoic Acid Derivatives (DABA)
Papakyriakou et al. rationally designed and synthesized a novel family of zinc aminopeptidase inhibitors with a diaminobenzoic acid scaffold based on l -homophenylalanine ( 9 – 11 , , ). This residue is accommodated in the S1 pocket of ERAP1 through π-stacking interactions with the conserved aromatic residues (Phe433, Phe450, and Phe544), as shown by docking ( A). The reported compounds exhibit micromolar activity against either ERAP1, ERAP2, or IRAP. Compound 9 was the most potent ERAP1 inhibitor (IC 50 = 2 μM), with 10-fold selectivity over ERAP2 (IC 50 = 25 μM) but poor selectivity over IRAP (IC 50 = 10 μM). Analogue 10 , with a methyl ester group in S1′ instead of a carboxylic acid function, is equipotent against all 3 enzymes (IC 50 = 2.6 μM ERAP1, 9 μM ERAP2, and 6 μM IRAP). Tryptophan-based analogue 11 is more potent at IRAP (IC 50 = 1.3 μM).
Docking of 9 in ERAP1 showed that the C-terminal methyl ester interacts with Ser869 and that the lysine moiety points toward a putative S2′ pocket, interacting electrostatically with Asp435 and Asp439. The fact that these three residues are not conserved in ERAP2 (Glu452, Asn454, and Tyr892) and IRAP (Ser546, Phe550, and Tyr961) may explain the 10-fold selectivity of 9 for ERAP1 ( A). Further optimization led to submicromolar ERAP1 or ERAP2 inhibitors ( 12 – 16 , , ). Analogue 12 , with L-Nle and L-Trp-OBn in the S1 and S1′–S2′ pockets, respectively, displayed submicromolar activity on IRAP (IC 50 = 0.1 μM) and similarly low micromolar activity on ERAP1 and 2 (IC 50 = 0.9 and 1.6 μM, respectively). Analogue 13 , with an L-Arg in S1, was more potent on ERAP2 (IC 50 = 0.5 μM), with ∼20-fold selectivity over ERAP1 (IC 50 = 9.6 μM) and 2-fold selectivity over IRAP (IC 50 = 0.97 μM), consistent with the ERAP2 preference for basic side chains at this position. Compound 13 was cocrystallized with ERAP2 ( B). The amide carboxylic oxygen of arginine coordinates Zn 2+ , and the free terminal NH 2 is further involved in electrostatic interactions with the Zn-coordinating residue Glu393, as well as Glu371 and Glu337. As predicted, arginine penetrates deep into the S1 specificity pocket of the enzyme, and the guanidinium nitrogen is stabilized by interactions with Asp198 and Glu200. The tyrosine group is stacked (though not parallel) between the phenyl rings of Tyr455 and Tyr892 in the S2′ pocket ( B). Regioisomers 14 – 16 , with S1′–2′ groups introduced in the meta position of the free aniline, showed different selectivities. l -Norleucine derivatives were submicromolar inhibitors of ERAP2, while homotyrosine analogue 16 ( C) exhibited promising ERAP1 activity (IC 50 = 0.8 μM) with some selectivity over ERAP2 and IRAP (10-fold and 7-fold, respectively).
Specific Learning
A new ZBG is introduced but provides less potent inhibitors than the phosphinic acids. 2-log selectivity toward ERAP2 or 1-log toward ERAP1 could be achieved, but lack of selectivity toward IRAP remains an issue. The stability of the aniline with respect to oxidative metabolism, which could lead to electrophilic quinonimines, has not been evaluated.
2.3 Aminosuberone Derivatives
Aminobenzosuberones are M1 aminopeptidase inhibitors in which the ketone hydrate is the ZBG. Ten derivatives were reported as nanomolar inhibitors of APN and Pf AM1 with micromolar activities on ERAP1, ERAP2, and IRAP ( , A). Compound 17 was the best inhibitor against ERAP2 (IC 50 = 0.39 μM). The bromine atom was essential for activity on ERAP2 (compound 17 vs 18 ), while the introduction of a second bromine atom provided a nanomolar inhibitor of IRAP ( 19 ) . 18 was crystallized with Pf AM1. Extrapolation of this binding mode to ERAP1 ( A) suggests that the ketone hydrate group may interact with the catalytic zinc ion and the surrounding residues Glu183, Glu376, Glu320, and Glu354 in the S1′ pocket. 18 further interacts with Tyr438 of the S1 pocket.
Specific Learning
Target hopping from inhibitors active on other M1 proteins delivers a low nanomolar IRAP inhibitor. While the ZBG is original, no X-ray structure with ERAP1 or 2 is available to guide the design of the next generation of suberone-based inhibitors.
2.4 Pyrazole-5-carbohydrazides
A virtual screening on ERAP1 yielded approximately 2500 ligand–enzyme complexes. These were further visually filtered for interaction with the catalytic Zn 2+ ion, the presence of an aromatic ring in the S1 pocket, and additional interactions in the S1′ and S2′ pockets.
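The visual triage just described (keeping only poses that engage the catalytic zinc, place an aromatic ring in S1, and make additional S1′/S2′ contacts) can be expressed as a simple programmatic filter. The sketch below works on hypothetical, pre-computed pose annotations rather than on the output of any particular docking program, so the field names and data are assumptions made for illustration.

```python
# Hypothetical post-docking triage mirroring the visual filtering criteria
# described in the text; pose annotations are assumed to be pre-computed.

poses = [
    {"ligand": "hit_001", "zn_contact": True,  "aromatic_in_S1": True,  "extra_contacts": {"S1p", "S2p"}},
    {"ligand": "hit_002", "zn_contact": True,  "aromatic_in_S1": False, "extra_contacts": {"S1p"}},
    {"ligand": "hit_003", "zn_contact": False, "aromatic_in_S1": True,  "extra_contacts": set()},
]

def passes_triage(pose: dict) -> bool:
    """Keep poses that coordinate Zn2+, fill S1 with an aromatic ring,
    and make at least one additional contact in S1' or S2'."""
    return (
        pose["zn_contact"]
        and pose["aromatic_in_S1"]
        and len(pose["extra_contacts"]) >= 1
    )

shortlist = [p["ligand"] for p in poses if passes_triage(p)]
print(shortlist)   # -> ['hit_001']
```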
In vitro testing of the 24 remaining hits identified pyrazole-5-carbohydrazide 20 as a low micromolar inhibitor of ERAP1 (IC 50 = 43 μM, ). A scaffold similarity search identified two analogues, 21 and 22 , with IC 50 values of 27 and 18 μM, respectively. The slight increase in potency can be explained by a potential CH···O hydrogen or halogen bond with the hydroxyl group of Ser342 ( B). Compound 22 is not selective over ERAP2 (IC 50 = 34 μM) or IRAP (IC 50 = 28 μM).
Specific Learning
An original nonpeptidic scaffold is disclosed, but no insight is provided on how to improve potency and selectivity.
2.5 Bestatin Analogues
The inhibitory activity of several archetypical aminopeptidase inhibitors against ERAP1, ERAP2, or other aminopeptidases has been reported in the literature . Bestatin, also called ubenimex, is an inhibitor extracted from Streptomyces olivoreticuli . It has been reported as an inhibitor of LTA 4 hydrolase and of several aminopeptidases such as APN (IC 50 = 3 μM), and as a potent (low nanomolar) inhibitor of leucine aminopeptidase LAP3 . , It is currently being evaluated in clinical trials in oncology. Bestatin was later shown to be also a micromolar inhibitor of ERAP1 and a weak inhibitor of ERAP2 and IRAP. , The X-ray structure of the ERAP1–bestatin complex shows that bestatin binds the catalytic zinc through its adjacent carbonyl and hydroxyl groups ( B). The carboxylic acid makes a hydrogen bond with Gly317, while the amino group forms hydrogen bonds with Glu376, Glu183, and Glu320 of the S1 pocket via a water molecule. The phenyl ring stacks with Phe433 in the S1 pocket, while the isobutyl group faces the S1′ pocket. Tyr438 forms a hydrogen bond with the carbonyl function of the amide. Amastatin, another statin-containing peptide, also derived from Streptomyces sp., inhibits APN, IRAP, and ERAP1 in the micromolar range . − Tosedostat acid (CHR-79888) is a reported aminopeptidase inhibitor currently being evaluated in clinical trials in oncology. It is a very potent inhibitor of LTA 4 hydrolase (IC 50 = 8 nM), APN (IC 50 = 30 nM), and LAP3 (IC 50 = 5 nM). , Tosedostat is a poor ERAP1 inhibitor (IC 50 = 18 μM) and a moderate ERAP2 inhibitor (IC 50 = 770 nM) . Leucinethiol is widely used in cellular assays as a nonselective aminopeptidase inhibitor. It is a low nanomolar inhibitor of APN and IRAP and a submicromolar inhibitor of both ERAP1 and ERAP2 . , The SAR established in the phosphinic pseudopeptide series was used to extend the bestatin main chain by incorporating Tyr or Trp residues to target S2′, while retaining the leucine isobutyl side chain for adequate S1′ filling. This work resulted in micromolar inhibitors . Incorporation of multiple alcohols at P1 to better fill the S1 pocket resulted in nonselective compounds with submicromolar potency on IRAP, such as 23 . Interestingly, when the aliphatic alcohols at P1 were replaced by aryl ethers, nanomolar IRAP inhibitors ( 24 , ) were obtained that still retained micromolar potency on ERAP1 and ERAP2. The incorporation of nitrogen atoms in the P1 substituent, with the synthesis of amides and amines such as 25 , allowed elimination of the ERAP1 activity. Compound 23 was cocrystallized with ERAP1 . It shows the same binding mode as bestatin, with the primary amine interacting with Glu183, Glu320, and Glu376, the main chain making several polar contacts with the GAMEN motif, and the 3 pockets filled as expected.
Specific Learning
The X-ray structure of the bestatin–ERAP1 complex reveals important interactions between inhibitor and enzyme. However, these are not strong enough to obtain nanomolar potencies on ERAP, while selectivity within the M1 family remains problematic. Still, in this family, the filling of the S1 pocket appears to be the key to achieving selectivity for IRAP.
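Selectivity across ERAP1, ERAP2, and IRAP is quoted throughout this section as fold ratios or log separations of IC50 values. The short helper below makes that arithmetic explicit, using the IC50 values reported above for DG013A as a worked example; it simply assumes IC50 values expressed in nM and is a convenience for comparing the published numbers, not part of any reported workflow.

```python
import math

def pic50(ic50_nM: float) -> float:
    """Convert an IC50 in nM to pIC50 = -log10(IC50 in mol/L)."""
    return -math.log10(ic50_nM * 1e-9)

def fold_selectivity(ic50_target_nM: float, ic50_offtarget_nM: float) -> float:
    """How many times weaker the compound is on the off-target enzyme."""
    return ic50_offtarget_nM / ic50_target_nM

# IC50 values (nM) for DG013A as reported in the text above.
dg013a = {"ERAP1": 33.0, "ERAP2": 11.0, "IRAP": 30.0}

for enzyme, ic50 in dg013a.items():
    print(f"{enzyme}: pIC50 = {pic50(ic50):.2f}")

print("ERAP2-vs-ERAP1 selectivity:", fold_selectivity(dg013a["ERAP2"], dg013a["ERAP1"]))
print("ERAP2-vs-IRAP  selectivity:", fold_selectivity(dg013a["ERAP2"], dg013a["IRAP"]))
```

A "1.5 log separation", as mentioned in the phosphinic-series Specific Learning, corresponds to roughly a 30-fold difference in IC50 on this scale.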
ERAP1 Selective Inhibitors
3.1 Thimerosal
Thimerosal, identified by virtual screening, displays submicromolar inhibition of ERAP1 while being inactive on ERAP2, IRAP, and LAP3 . Docking suggested two potential interactions between the mercury atom and the hydroxyl groups of Ser316 and Ser869, respectively ( A). In contrast, such interactions were not observed with the corresponding residues Pro333 and Tyr892 of ERAP2. These two serine residues are also not conserved in IRAP. Thimerosal showed a dose-dependent effect on antigen presentation by bone marrow-derived dendritic cells (BMDC) treated with ovalbumin and exposed to OT-I CD8+ T cells (ED 50 = 930 nM). This effect was shown to be mediated by ERAP1, as thimerosal is inactive in ERAP –/– BMDCs.
Specific Learning
While thimerosal is selective for ERAP1, it has little potential for optimization.
3.2 Sulfonylguanidines and Ureas
A high-throughput screening of 350 000 compounds allowed the identification of ERAP1 inhibitors . Sulfonylguanidine 26 ( B) was also a micromolar inhibitor (IC 50 = 28 μM) of the hydrolysis of the WRCYEKMALK decapeptide by ERAP1 . Docking in ERAP1 suggests that it binds at the catalytic site, with its tryptamine group extending toward the ERAP1-specific residue Thr350 and its fluorinated group binding next to the S1 site ( B). This docking pose was further supported by the decrease in activity of 26 on the T350A-mutated ERAP1. However, 26 was shown to be inactive in a cellular antigen presentation assay and exhibited off-target effects in control cells, potentially affecting peptide or MHC expression, peptide loading, or intracellular trafficking.
Conversely, urea 27 ( C), which inhibits the hydrolysis of WK10 by ERAP1 (IC 50 = 6.9 μM, ), reduces the presentation of the ERAP1-dependent model antigenic epitope SIINFEKL (ED 50 = 45 μM) in HeLa cells. N -Acetylpiperazine and homocycloleucine moieties are important for activity, while chlorine is preferred over methyl, fluorine, or methoxy groups ( 28 , IC 50 = 3.6 μM, ). Docking suggests that the cyclohexyl group binds near the ERAP1-specific residue T350 and the N -acetyl oxygen binds the catalytic Zn 2+ ion ( C). The tolyl group of 27 is located in a hydrophobic pocket, consistent with the slightly increased activity of the chlorine analogue 28 . Specific Learning The high-throughput screening provided a diversity of putative ZBGs. The binding mode needs to be verified experimentally. 3.3 Clerodane Acid Derivative Compound 29 , a clerodane acid derivative from the Dodonea viscosa tree, was identified through a screening of a large compound collection at GSK . This screening was designed to discover binders of the regulatory site of ERAP1, which binds the C-terminus of substrates. They therefore screened for compounds that increased the rate of hydrolysis of the short substrate by ERAP1. 29 behaved as a submicromolar activator of ERAP1 (EC 50 = 0.63 μM) while being a weak inhibitor of ERAP2 (IC 50 = 158 μM). When larger peptide substrates are used, such as the nonapeptide YTAFTIPSI (from HIV), which is normally destroyed by ERAP1, or the 15mer peptide SGLEQLE-SIINFEKL, which is able to deliver the mature SIINFEKL model epitope, 29 behaves as a micromolar inhibitor (IC 50 = 1 and 1.3 μM, respectively). In cells, 29 reduces the presentation of the ERAP1-dependent model antigenic epitope SIINFEKL (IC 50 = 2 μM). 29 was cocrystallized with ERAP1 (PDB 6TR6 ). The structure confirmed that 29 is located at the regulatory site A, 25 Å away from the catalytic Zn 2+ . The carboxylic acid forms ionic and hydrogen bonds with Tyr684, Lys685, and Arg807. This was further confirmed by the decrease in potency of 29 on the ERAP1 Y684F or ERAP1 K685A variants. Specific Learning High-throughput screening of unbiased libraries delivered ERAP1 activators, while no structural hint was available. The screening cascade, including first an assay using a short substrate followed by an assay on a large substrate, helped to rationalize the mode of action. Structural data later validated the possibility of targeting the regulatory allosteric site A, but the lack of analogues of the natural hit precludes the development of this series. 3.4 Benzofuran Carboxylic Acids Benzofuran ERAP1 inhibitors were discovered by a fluorescence-based screening of a one-million-compound library using the 8-mer L-Rho-Succ-FKARKF substrate. Hits were further evaluated in an orthogonal mass-spectrometry-based inhibition assay using the EFAPGNYPAL substrate. Compounds active in both assays were further filtered by LLE, thermal shift assay on ERAP1, selectivity toward ERAP2 and APN, and early ADME profiling. The benzofuran carboxylic acid series was selected for further optimization (compound 31 , ). The SAR highlighted the importance of the carboxylic acid function for the activity. The best inhibitors in the series were disubstituted by small alkyl groups in the α-position of the carboxylic acid. In particular, 31 showed high potency (IC 50 = 34 nM, L-Rho-Succ-FKARKF substrate) and was selective over ERAP2 and APN (IC 50 > 30 μM), .
Interestingly, in strong contrast to DG013 or leucinethiol, benzofuran inhibitors were weak inhibitors, or even activators, of ERAP1 hydrolysis of the short substrate L-Rho-(D)-Q, suggesting that they are allosteric modulators of ERAP1. Docking studies identified a putative binding of 31 to the allosteric site A of ERAP1. The carboxylate group of 31 makes key interactions with Lys685 and Arg807, which are involved in the C-terminal binding of peptides . The p -chlorophenyl group and the cyclohexyl are in close proximity to the lipophilic pocket bounded by Phe674, Leu677, Ile681, Leu734, Val737, and Phe803 . The cyclohexyl group fits very well into this pocket, which explains why 31 is the most potent analogue of the series. Specific Learning The screening cascade included first an assay with a long fluorescent substrate, then confirmation of activity using an unrelated substrate by MS. Finally, a small substrate was used to test for putative differences in mode of action (allosteric or catalytic). This is the first drug-like series of inhibitors targeting allosteric site A. 3.5 Aryl-sulfamoyl-benzoic Acid Derivatives and Their Tetrazole Isosters Several families of sulfonamides have been identified as ERAP1 inhibitors. A first series of sulfonamides was inspired by the IRAP inhibitor 32 , which showed micromolar activity on ERAP1 ( , A). , Other sulfonamides were discovered by virtual screening at the catalytic site. Replacement of the thiophene ring with a substituted phenyl slightly improved the activity ( 33 ) ( , A). Tetrazole and catechol-like groups were hypothesized to bind the catalytic zinc ion. No further information on selectivity and binding is available. Arylsulfamoyl-benzoic acid activators of the L-AMC hydrolysis by ERAP1 were identified by screening. , Compound 37 ( , B) showed micromolar activation of ERAP1 (AC 50 = 4.7 μM, L-AMC) and was inactive on both ERAP2 and IRAP. Consistent with an allosteric behavior, 37 was an inhibitor of long substrate hydrolysis (IC 50 = 5.3 μM, WRCYEKMALK substrate) and reduced the presentation of the ERAP1-dependent model antigenic epitope SIINFEKL (ED 50 = 1 μM) in cells. The SAR pointed out the importance of the carboxylic acid function and the NH of the sulfonamide group. Replacement of the piperidine ring by N -methyl-piperazine was tolerated . Docking in ERAP1, outside the catalytic site, revealed a putative binding mode in the allosteric site B, at the interface between domains II and IV. In particular, the carboxylate function interacts with Lys551 of domain III and makes hydrophobic interactions with Trp921 and Pro682 ( B). The binding hypothesis was confirmed by the decrease of the activity of 37 on the mutant ERAP1 K551A . Surprisingly, a close analogue bearing the trifluoromethyl group in the para-position of the sulfonamide ( 40 ) lost activity on ERAP1 and gained inhibitory activity on ERAP2. The crystal structure of 40 -bound ERAP2 shows that the inhibitor binds in the catalytic site without interacting with Zn 2+ (SI, Figure S3 ). A closely related series of allosteric phenyl-sulfamoyl-benzoic acid inhibitors has been patented by Gray Wolf Therapeutics ( C, ). The activity was assessed using decapeptide substrates. A first patented series (representative examples 41 – 43 ) consisted of phenyl-sulfamoyl benzoic acids substituted with aryl groups at the R1 position . Compounds substituted by 2- or 3-piperazines or pyrrolidines ( 44 – 50 ) are more potent, reaching nanomolar activities .
A compound from the latter series (GRWD5769) is currently being evaluated in a clinical trial. While its structure is not yet disclosed, it is probable that it belongs to, or is a close analogue of, compounds 44 – 50 . Some macrocycles in which the 2 aryl rings are linked ( 51 – 53 ) have also been patented. Specific Learning This is the first well-populated series of nanomolar allosteric inhibitors of ERAP1. However, the binding of the piperidine analogues needs to be confirmed experimentally. As the series moves into the clinic, we expect publications with comprehensive structure–activity data in cell models as well as in vivo data.
ERAP2 Selective Inhibitors 4.1 Phosphinic or Carboxylic Acids Compared to ERAP1, far fewer ERAP2 selective inhibitors have been reported so far. These include a few inhibitors with limited selectivity and potency, such as phosphinic compounds, carboxylic acids ( 54 – 56 , ), and a hydroxypyridone (SI, Table S6 ). While most of the phosphinic derivatives described so far are nonselective, the analogue DG011 showed a 1.8 log selectivity toward ERAP2 (89 nM). In MOLT-4 leukemia cell lines, DG011 induced a significant shift in the immunopeptidome such that more than 20% of the detected peptides were either novel or significantly upregulated. Specific Learning While full selectivity for ERAP2 or ERAP1 has not yet been achieved for phosphinic compounds, this is the first indication that aromatic groups in S′ or S2′ confer some selectivity for ERAP2. An in-house library of 1920 compounds designed to target metalloenzymes was screened against ERAP2. Among the hits, two carboxylic acid compounds, 54 and 55 (IC 50 ERAP2 = 22 and 9.7 μM, respectively), were selected for their potency, availability, and selectivity (IC 50 ERAP1 > 100 μM; IC 50 IRAP ≈ 100 μM) ( A). Docking of hit 54 in ERAP2 revealed that the carboxylic acid is predicted to coordinate the catalytic Zn 2+ , the amide carbonyl and the phenyl substituent interact with Tyr455 and Phe450 in the S1 pocket, the indole N -H is involved in H-bonding with the phenol of the gating residue Tyr892 in S2′, and its aromatic core T-stacks with Trp363 from S1′ ( A). Hit 55 was also predicted to bind the catalytic site. Analogues such as 56 , bearing a phenethyl instead of the N -ethylpyrrolidine, proved to be substrate-dependent ERAP2 modulators ( B). They activate the hydrolysis of the short substrate R-AMC (1.42–1.84-fold at 100 μM) and conversely inhibit the hydrolysis of the long substrate KSIINFEKL (IC 50 ERAP2 = 89–100 μM). Docking of 56 shows that it is surprisingly located within the catalytic site but far enough away from the catalytic zinc ion to shape the catalytic site appropriately for improved hydrolysis of short substrates. 56 adopts a U-conformation with multiple π-stacking interactions ( B) with Tyr892, Trp363, and Tyr455. The carboxylic acid and sulphonamide groups interact with the basic residues Lys397 and Arg895, respectively. Specific Learning The screening on ERAP2 provided the first discovery of substrate-dependent modulators of ERAP2. Although no regulatory site for ERAP2 has been disclosed, these findings suggest that ERAP2 selective compounds can also be obtained by allosteric inhibition, avoiding interactions with Zn 2+ , around which the environment is highly conserved throughout the M1 family. 4.2 Hydroxamic Acid Triazoles from KTGS Kinetic target-guided synthesis was used to discover the first nanomolar selective ERAP2 inhibitors from 6 dual azide/hydroxamic acid warheads, targeting the zinc ion in the catalytic site, and 175 diverse alkynes.
In this experiment, ERAP2 catalyzed the irreversible 1,3-dipolar cycloaddition between the azides and those alkynes that were sufficiently bound to ERAP2, in the right configuration and proximity, providing 19 triazole hits. They showed dose–response inhibition of ERAP2, either as a 1,4-/1,5-triazole mixture or as defined regioisomers. Among these, several ligands were derived from the same propargyl sulphonamide-thiophene motif, including hit 57 ( A). Optimization of 57 led to several low nanomolar analogues ( 58 – 60 ). The most potent and selective ERAP2 inhibitor, 58 ( K i R-AMC = 4 nM; K i R-SIINFEKL = 42 nM), was cocrystallized with the target ( B). The hydroxamic acid chelates the catalytic Zn 2+ ion and interacts with the phenol of Tyr455. 58 adopts a U-shape, allowing the pyridine ring to π-stack with Tyr892. The methylated phenol group occupies the ERAP2 S1′ pocket (Trp363, Val367, Glu400, and Lys397). In HEK cells, the close analogue 59 both engages ERAP2 and inhibits SIINFEKL antigen presentation. Its good in vitro ADME profile translates into good in vivo exposure and a C max in the range of the target engagement concentration. Thus, 59 is an excellent pharmacological tool to explore the role of ERAP2 and a promising lead. Specific Learning Protein-templated reactions allowed the discovery of the first nanomolar, highly selective ERAP2 inhibitors. Binding to the S′ pocket provides the opportunity to achieve 2- to 3-log selectivity over ERAP1 and IRAP.
Properties of ERAP Inhibitors 5.1 Molecular Properties of ERAP Inhibitors ERAP inhibitors can be divided into two functional categories, catalytic site inhibitors or allosteric inhibitors (for ERAP1), and two structural categories, peptidomimetics or nonpeptidic inhibitors. The distribution of 6 physicochemical parameters (molecular weight (MW), cLogP, hydrogen bond acceptors and donors (HBA, HBD), polar surface area (PSA), and number of rotatable bonds (nRotB)) for the 7 most populated chemical series of ERAP inhibitors shows that about 50% of the inhibitors have a MW between 400 and 500 g/mol, but large peptidic inhibitors can reach 700 g/mol (SI, Figure S4 ). The cLogP range is broad: 75% of the inhibitors have a cLogP below 4, and 25% have a negative cLogP. The latter group is populated mainly by the highly zwitterionic phosphinic compounds. Both HBD and HBA counts are relatively low, with 30% of the inhibitors having an HBD of 2 and an HBA of 7. 90% of the inhibitors have a PSA between 75 and 175 Å 2 . The number of rotatable bonds is widely distributed (between 1 and 16), as the scaffolds vary from large linear compounds to small compounds or even macrocycles. A more refined analysis of these 6 parameters was performed as a function of the chemical scaffold and binding site . Most inhibitors, regardless of structural class, have an MW between 300 and 500 Da. Thus, they occupy a relatively small portion of the substrate binding pocket. The peptidomimetic series have a larger MW range, reaching a maximum MW of 650 Da. Catalytic site inhibitors have a lower average cLogP than allosteric site inhibitors. Peptidomimetics have a lower cLogP (between −2 and 2) compared to nonpeptidic inhibitors (above 2). cLogP is the highest for allosteric inhibitors but does not exceed 5. Most of the phosphinic derivatives have a negative logP due to the high polarity of the phosphinic group and their zwitterionic character (phosphinic function and amino group).
Due to the properties of zinc binding groups, HBA and HBD values are higher for catalytic site inhibitors. Among peptidomimetics, bestatin-like compounds have the highest HBA values. The PSA for all series is between 50 and 200 Å 2 . Not surprisingly, the PSA is higher for peptidomimetics and catalytic site inhibitors (including hydroxamic acids). Due to their scaffold, peptidomimetics also have a higher number of rotatable bonds (between 8 and 16). Conversely, hydroxamic acids have a lower number of rotatable bonds (3–6) for the same PSA range. As expected, allosteric site inhibitors have a lower nRotB because they bind to a single pocket rather than the 3 specificity sites targeted by most catalytic site-directed series. Given that a limited nRotB (i.e., below 10) and a PSA below 140 Å 2 are important predictors of good oral bioavailability regardless of molecular weight, we observe that a first set of ERAP inhibitors has promising properties (cluster A, ). Allosteric ERAP1 inhibitors such as N -aryl-sulfonamide analogues ( C) or benzofurans and some ERAP2 inhibitors are predominantly negatively charged at physiological pH. According to Martin, such negatively charged compounds should have an even lower PSA (below 75 Å 2 ) to have a good probability of showing bioavailability >10% in rodents or a measurable Caco-2 permeability, which is the case for almost all of them. Phosphinic ERAP inhibitors, which also carry an amino group, are zwitterionic at physiological pH, which may affect their cell permeability. Nevertheless, almost all inhibitors described so far are within the “possible to be oral space” as defined by Doak et al. Most peptidomimetics have a cLogD below 1 or even negative, while nonpeptidic inhibitors have a cLogD between 1.5 and 3 . Phosphinic, DABA, and sulfonamide inhibitors have a significantly wider distribution due to the higher number of compounds in the family ( A). For all series, solubility is predicted to be greater than 1 μM except for the benzofuran series, which have (apart from the carboxylic acid) aromatic and aliphatic groups that may impair solubility ( B). Consistently, benzofurans have the highest predicted permeability ( C) and phosphinic and hydroxamic acids have the lowest. This can be attributed to the higher nRotB for the former and the high HBA count for the latter. Permeability and solubility are also determined by properties that are not easily calculated, such as K m / k cat for pumps, crystal structures, and lattice enthalpy. Experimental determination of absorption and solubility is of great interest, especially for phosphinic and hydroxamic acids. Specific Learning Several series show promising physicochemical properties. Clear differences can be seen between allosteric and nonpeptidic catalytic inhibitors on the one hand and pseudopeptidic catalytic inhibitors on the other hand. Experimental determination of absorption and solubility is scarce. 5.2 Potency, Ligand Efficiency, and Selectivity of ERAP Inhibitors Among the catalytic-site ERAP1 inhibitors, phosphinic peptides are the most potent, achieving nanomolar IC 50 s ( A). This is due to an extended interaction surface and the high binding energy of the phosphinate to zinc. However, it is also possible to achieve nanomolar potencies with sulphonamide and benzofuran carboxylic acids, which are allosteric inhibitors ( A). It is also important to note that the highly populated phosphinic and DABA series are nonselective and inhibit ERAP1 and ERAP2 in similar potency ranges. In contrast, the first-in-class potent hydroxamate ERAP2 inhibitors were able to achieve pIC 50 > 8 with excellent selectivity ( A). Lipophilic ligand efficiency (LLE) is a useful driver for optimization in order to limit potency gains based solely on hydrophobic interactions, which can hamper the optimization of bioavailability. However, it is necessary to contextualize its interpretation.
Indeed, LLE may be suitable in series where the pharmacophore is highly polar, but the ADME properties are then compromised by high solvation energies. It is therefore difficult to use LLE to compare inhibitors with different physicochemical properties and sizes, for chemical series with negative cLogP, and for targets that require highly polar ligands. , In this context, LLE values for phosphinic compounds (gray areas in B panels) cannot be compared with others because these compounds are highly hydrophilic with negative logP values ( B and B). All series already show an LLE above 3 or 4, indicating a good balance between lipophilicity and potency. LLE ≥ 6–7 is a good target value for a drug candidate (i.e., cLogP 2–3 and nanomolar potency). Of note, the ERAP1 bestatin compounds and the ERAP2 hydroxamic acids have LLE values close to 6. Specific Learning Both allosteric and orthosteric inhibitors can reach nanomolar potencies. Of the most populated chemical series disclosed to date, phosphinic acid inhibitors are the most potent pan-ERAP inhibitors (gray filled circles, ). Although the development of specific inhibitors for ERAP1 or ERAP2 is highly challenging, sulfonamide allosteric inhibitors achieve a 3-log difference in favor of ERAP1 (red filled circles, ), and hydroxamic acids, the most potent ERAP2 inhibitors (blue filled circles, ), achieve a selectivity gap of almost 4 log units. The LLE in the phosphinic acid series is irrelevant for membrane permeability because these are acidic, polar compounds.
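To make the property and efficiency metrics discussed above concrete, the short Python sketch below computes the six descriptors (MW, cLogP, HBA, HBD, PSA, nRotB) with RDKit and derives LLE and a log-unit selectivity gap from potency values. This is a minimal illustration, not a reproduction of the analysis behind the figures: the SMILES string, IC 50 values, and helper names are hypothetical placeholders, and the standard definitions LLE = pIC50 − cLogP and selectivity = pIC50(target) − pIC50(off-target) are assumed.

```python
# Minimal sketch (hypothetical inputs): physicochemical profiling and
# ligand-efficiency metrics of the kind discussed above. Requires RDKit.
from math import log10
from rdkit import Chem
from rdkit.Chem import Descriptors

def property_profile(smiles: str) -> dict:
    """Compute the six descriptors used in the text (MW, cLogP, HBA, HBD, PSA, nRotB)."""
    mol = Chem.MolFromSmiles(smiles)
    return {
        "MW":    Descriptors.MolWt(mol),
        "cLogP": Descriptors.MolLogP(mol),
        "HBA":   Descriptors.NumHAcceptors(mol),
        "HBD":   Descriptors.NumHDonors(mol),
        "PSA":   Descriptors.TPSA(mol),
        "nRotB": Descriptors.NumRotatableBonds(mol),
    }

def pic50(ic50_molar: float) -> float:
    """pIC50 from an IC50 expressed in mol/L."""
    return -log10(ic50_molar)

def lle(ic50_molar: float, clogp: float) -> float:
    """Lipophilic ligand efficiency: LLE = pIC50 - cLogP."""
    return pic50(ic50_molar) - clogp

def selectivity_log_units(ic50_target: float, ic50_offtarget: float) -> float:
    """Selectivity window in log units (positive values favor the target)."""
    return pic50(ic50_target) - pic50(ic50_offtarget)

if __name__ == "__main__":
    smiles = "CC(C)Cc1ccc(cc1)C(C)C(=O)O"   # ibuprofen, used only as a stand-in ligand
    props = property_profile(smiles)
    # Simple oral-bioavailability flags mirroring the cutoffs cited in the text.
    props["nRotB_PSA_ok"] = props["nRotB"] <= 10 and props["PSA"] <= 140
    print(props)
    # Hypothetical potencies: 30 nM on the target, 20 uM on the off-target.
    print("LLE (target):", round(lle(30e-9, props["cLogP"]), 2))
    print("Selectivity (log units):", round(selectivity_log_units(30e-9, 20e-6), 2))
```

Used this way, a compound with an IC 50 of 30 nM (pIC50 = 7.5) and a cLogP near 2 would land in the LLE ≥ 6 region highlighted above as a reasonable drug-candidate target.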
Discussion and Perspectives lists specific properties of ERAP enzymes that medicinal chemists need to consider in the design and selection of inhibitors to fully exploit their therapeutic potential in immuno-oncology and autoimmune diseases. ERAP enzymes have high structural mobility and a large number of binding sites (ERAP1). Among the inhibitors targeting the catalytic site, several potent compounds have been described in pseudopeptidic or nonpeptidic series exploring different zinc-binding groups such as phosphinic acids, carboxylic acids, hydroxamic acids, DABA, or the more “exotic” suberone.
Two main families of inhibitors for the ERAP1 allosteric pockets have been disclosed; they contain a conformationally constrained carboxylic acid group, which is essential for binding the basic residues in these pockets. In most cases, their potency is lower than that of catalytic site inhibitors. A low nanomolar allosteric sulfonamide ERAP1 inhibitor is currently the first ERAP inhibitor chemotype to be administered in humans. It is not yet clear what level of potency is required for in vivo activity in a specific disease. For compound optimization, LLE should be used cautiously for highly polar pseudopeptide inhibitors. A large number of ERAP substrates have already been reported. Therefore, it is suggested that several substrates be used to reflect the variability of ERAP activities in antigen processing. Some compounds have also been discovered that act as activators of the hydrolysis of small substrates by ERAP. This highlights the complex regulation of the enzyme activity. To further explore the mode of action, the screening cascade should include both short model substrates and larger (9–10 amino acids), more physiological substrates, especially when screening for allosteric inhibitors. It is not yet known how different the effect of allosteric or catalytic site inhibitors will be on the immunopeptidome and CTL activation. Preliminary studies have shown that an allosteric inhibitor has a different effect on the immunopeptidome than genetic silencing of ERAP1. In the future, both types of inhibitors will help to explore the different functions of ERAP and their utility as therapeutic targets. ERAP enzymes belong to the large family of metalloproteases, making selectivity among homologous proteases (especially the M1 family) an important design and selection criterion for drug candidates. As shown above for the different inhibitor families, achieving selectivity between ERAP1 and ERAP2 is challenging. In addition, the selectivity of the ERAP inhibitors toward other proteins of the metalloprotease class or toward other drug protein classes is not yet fully understood. Selectivity toward IRAP, the close analogue of ERAP1 and ERAP2, is usually measured, but selectivity within the M1 family is reported for only a few series. At least APN, a prototypical enzyme of the M1 family, should be included. Catalytic site inhibitors are expected to be more selective for metalloproteases than for other protein classes due to structural specificities. Conversely, it is not yet clear how selective allosteric inhibitors can be toward other drug target classes. Most of the inhibitors have not been fully characterized for ADME properties. To reach ERAPs in the organism, compounds must cross both cell and ER membranes. Only allosteric selective ERAP1 sulfonamide inhibitors (data not published) and nanomolar selective ERAP2 inhibitors have been evaluated for in vivo PK. Much work remains to be done to improve the permeability of the ERAP inhibitors . To translate in vitro data to in vivo , several issues need to be addressed . There are nonsynonymous polymorphisms of the ERAP genes in the human population. For better relevance, specific ERAP allotypes associated with a disease could be used in the screening cascade. However, while the catalytic site is rather conserved in the different ERAP allotypes, the allosteric sites might be more variable. Few inhibitors have been tested against the murine enzyme ERAAP (the orthologue of ERAP1).
More data are needed, especially before compounds can be tested in preclinical disease models. Alternatively, patient-derived cells could be used to measure the effect of inhibitors on modulating the immunopeptidome. Because ERAP2 has no orthologue in rodents, transgenic animals expressing ERAP2 are highly desirable to assess the efficacy of ERAP inhibitors. Broadly humanized models, such as the HIS-mouse, are also important for translation to humans, as the activity of ERAP inhibitors modulates antigen presentation and downstream events starting with recognition by CTLs. In recent years, the concept of using small molecules to interfere with intracellular targets has emerged in both immunology and immuno-oncology, as exemplified by JAK inhibitors for rheumatoid arthritis, an orally available small molecule targeting sTNF (SAR441566), or inhibitors of various intracellular negative regulators of the antitumor immune response (MAP4K1, DGKα, EP4, ...). , ERAP enzymes, which are involved in the antigen presentation pathway, may become targets for small-molecule drugs in immunology and immuno-oncology upstream of the current interventions. As small-molecule treatments, ERAP inhibitors are expected to allow for improved compliance due to oral bioavailability. Having shorter half-lives, they are by design more maneuverable than biologics. They are also less likely to display the hyper-pharmacology associated with high-affinity biologics that target immune checkpoints. At this stage, it is not yet clear which level of selectivity for ERAP1 or ERAP2 is required. This can even be disease-dependent. Focusing on selective ERAP1 or selective ERAP2 inhibitors could preserve some antigen processing and presentation, which could hopefully reduce side effects such as reduced defense against infection. In any case, ERAP enzymes shape only a fraction of the immunopeptidome, suggesting that inhibiting them may be a safer immunomodulatory approach. ERAP-targeting drugs should be selective against IRAP. Indeed, while IRAP is also involved in antigen cross-presentation, inhibition of its other functions, such as regulation of glucose uptake (as it colocalizes and interacts with the GLUT4 transporter) or degradation of various peptides such as angiotensin III, oxytocin, and neurokinin, may be undesirable in cancer or autoimmune diseases or would require specific monitoring. At this stage, in the absence of information on how the polymorphism modifies the disease phenotype and response to inhibitors, it is expected that the first ERAP1 inhibitors will be active on all haplotypes. This will allow compounds to be tested in different disease contexts. For ERAP2 selective agents, however, clinical development will require prior screening of patients for the presence of the coding variant to ensure the presence of the protein. In autoimmune diseases, ERAP inhibitors are likely to be used either in combination with other drugs or as monotherapy. In ankylosing spondylitis, psoriasis, and psoriatic arthritis, they could be evaluated in patients who have had an inadequate response or intolerance to previous conventional disease-modifying antirheumatic drugs (DMARDs), either synthetic (methotrexate, apremilast, JAK inhibitors) or biologics, such as anti-IL-17 or -23 inhibitors. In Behçet’s disease, depending on the form and level of resistance, an ERAP1 inhibitor could be used in combination with colchicine or anti-TNF agents, and in birdshot uveitis with immunosuppressants such as mycophenolate.
Because MHC-I-opathies are complex conditions, patient HLA typing should be performed to analyze clinical outcomes. In cancer immunotherapy, ERAP1 or ERAP2 inhibitors will be used in combination. Initially, combinations with anti-PD-L1 or anti-CTLA4 inhibitors are expected to be used in melanoma, bladder cancer, or lymphoma. In addition to all of the efforts made to obtain crystal structures of ERAP in both apo and liganded forms, as well as to develop relevant assays for antigen presentation, CTL activation, immunopeptidome analysis, and neoantigen formation, the discovery of 15 different chemical families of ERAP inhibitors has allowed further progress in ERAP inhibition and validation of these targets. In the future, we will need more potent, selective, and penetrant compounds to answer the pending questions in nonclinical models and to further define the best indications for ERAP inhibitors. The development of humanized in vivo models is also essential. Overall, the potential of ERAP inhibitors in cancer immunotherapy and in autoimmune diseases is very promising and may offer new strategies for patients.
Lactic acid bacteria (LAB) are excellent candidates for manipulation as mucosal vaccine carriers. LAB are resistant to the acidic conditions of the gastrointestinal system and can effectively deliver vaccines to the intestinal area. One of the LAB widely applied as a vaccine carrier is Lactococcus lactis ( L. lactis ). Naturally, L. lactis enhances the immune response to pathogens by inhibiting their colonization of the gastrointestinal tract and boosting the mucosal immune system of the intestine . Further significant benefits of using L. lactis as a mucosal vaccine carrier are its ability to pass through the intestinal tract without colonization, its Gram-positive status (it does not contain endotoxins), its safety for consumption, its easily manipulated genetic material, its ease of handling, its rapid growth, its ability to express stable recombinant proteins (antigens), and its low production costs due to the lack of protein purification . In addition, the peptidoglycan of L. lactis has benefits as an adjuvant: besides being a location for antigen expression, peptidoglycan can bind to various pattern recognition receptors (PRRs) . Peptidoglycan can interact with Toll-like receptor 2 (TLR2), NOD-like receptors ( e.g. , NOD1 and NOD2), and C-type lectin receptors ( e.g. , Dectin-1), thereby triggering an innate immune response to the L. lactis -based vaccine . L. lactis used as a mucosal vaccine carrier, here called an L. lactis -based vaccine, can overexpress antigens using the NICE (nisin-controlled gene expression) system to control protein expression . The mechanism of nisin induction in the NICE system involves the histidine kinase NisK, which captures the nisin-induced signal and undergoes autophosphorylation, transferring the phosphate group to the NisR response regulator protein, thereby activating the NisA promoter . The L. lactis -based vaccine can stimulate the immune response when administered orally or nasally. When administered orally, the L. lactis -based vaccine travels to the gut and the Peyer's patches . In the intestine, M cells transport the antigen carried by L. lactis across the luminal epithelium via a transcytosis mechanism to dendritic cells or other antigen-presenting cells (APCs) in the space between follicles of the Peyer's patches, known as the interfollicular region (IFR). APCs then present the antigen peptides to B and T lymphocytes to induce an adaptive immune response . As previously described, there are many benefits of using L. lactis , especially as a mucosal vaccine carrier and producer of recombinant proteins such as antigens that can activate innate and adaptive immune responses. This research may facilitate future scientific advancements in utilizing L. lactis as a mucosal vaccine carrier/ L. lactis -based vaccine. We believe conducting further research on L. lactis as a vaccine delivery system is essential. This paper aims to review critical points of current knowledge on the promising characteristics of the L. lactis -based vaccine and to suggest its implications for vaccine design. This descriptive study uses a systematic literature review (SLR) methodology. The systematic literature review was informed by data extrapolated from a concurrent comprehensive analysis, which sought to integrate and appraise the implementation of L. lactis as a vector for mucosal vaccine delivery systems, referred to here as the L. lactis -based vaccine.
The search and selection of literature, in the form of scientific articles, followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol . Identification Strategy The article search was conducted in three databases, namely Crossref, PubMed, and Scholar, using Harzing's Publish or Perish application with keywords such as " Lactococcus lactis " OR “ L. lactis ” AND "vaccine" OR “Vaccines” and "immunity" AND “mucosal" OR “Mucosal”. Study Selection Articles had to meet the following inclusion criteria: 1) research articles published in the last ten years (2013–2023); 2) English-language articles indexed in the databases used; 3) true experimental studies; 4) original articles; 5) studies meeting PICO criteria (population: research using L. lactis to enhance the immune response; intervention: giving recombinant L. lactis in in vitro and in vivo trials; comparison: animal trials without treatment (control) or given L. lactis without the gene insert; outcome: improvement of the immune system); 6) vaccines for infectious diseases; 7) critical evaluation score of ≥ 50%. In contrast, the exclusion criteria included: 1) review articles, theses, or protocols; 2) in-silico studies; 3) combinations with adjuvants; 4) articles not available in full text; 5) studies not related to vaccine delivery systems; 6) studies focusing only on a probiotic; 7) vaccines for animals; 8) clinical trials; 9) non-living bacterial delivery systems. Data Assessment Data quality analysis was conducted using critical evaluation tools, specifically the Joanna Briggs Institute critical appraisal checklist for quasi-experimental studies. The checklist consists of 9 questions, where a "yes" answer is worth 1 point, and "no," "unclear," or "not applicable" answers are valued at 0 points. The results of this analysis are supported by a quality analysis of the journals, considering their quartile rankings and impact factor values . Data Analysis Descriptive statistical methods were employed to summarize the research attributes incorporated within this systematic review. The data are presented using Microsoft Excel, VOSviewer, and R Studio. 
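To make the appraisal rule above concrete, the following short Python sketch (illustrative only; it is not part of the original review, and the record data and helper names are hypothetical) scores a study against the nine-item Joanna Briggs Institute checklist and applies the ≥ 50% inclusion cut-off.

```python
# Illustrative sketch (not from the original study): applying the JBI
# quasi-experimental appraisal cut-off described above. Answers are
# hypothetical placeholders.

def jbi_score(answers):
    """Return the appraisal score as a fraction of the 9 checklist items.

    Each "yes" counts 1 point; "no", "unclear", and "not applicable" count 0.
    """
    assert len(answers) == 9, "JBI quasi-experimental checklist has 9 items"
    return sum(1 for a in answers if a == "yes") / len(answers)

def passes_appraisal(answers, threshold=0.5):
    # Articles with a critical evaluation score >= 50% are retained.
    return jbi_score(answers) >= threshold

# Hypothetical example: 6 of 9 items answered "yes" -> ~0.67 -> retained.
example = ["yes", "yes", "no", "yes", "unclear", "yes", "yes", "no", "yes"]
print(jbi_score(example), passes_appraisal(example))
```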
The initial search resulted in 2,729 articles discussing the use of L. lactis as a mucosal vaccine carrier. After removing duplicates, 1,883 articles remained. A quick screening of titles and abstracts reduced this number to 146 articles. Further screening against the exclusion criteria resulted in 24 articles. All articles were assessed using the Joanna Briggs Institute critical appraisal, and 24 articles were included in the analysis of this study. describes the L. lactis -based vaccines. Mice are preferred as experimental animals because they are easier to handle. Extracellular expression of the antigen protein, achieved by adding signal peptides, is preferred because the immune system can directly recognize and present the antigen. L. lactis , as a mucosal vaccine carrier, can also be combined with additional adjuvants. When challenged, the authors also reported protection of experimental animals against bacterial or viral infections. The results of screening the PubMed, CrossRef, and Scholar databases using the corresponding keywords yielded a total of 2,729 articles. An initial analysis was conducted to eliminate duplicate titles, reducing the total number to 1,883 articles. Further screening of titles and abstracts narrowed the selection to 146 articles available in full text. Additional exclusions were made based on content relevance, resulting in a final selection of 24 articles for analysis ( ). L. lactis is a nonpathogenic Gram-positive bacterium widely used in the dairy industry. L. lactis has been widely explored for its potential as a vector for delivering therapeutic molecules such as vaccine antigens. The application of L. lactis as a mucosal vaccine delivery system has been widely investigated over the past two decades, demonstrating its versatility in expressing heterologous proteins, cytokines, and enzymes. L. lactis has been proposed as a safe platform for mucosal vaccine delivery, and it can be genetically modified to express specific antigens on its surface, intracellularly, or extracellularly . L. lactis has a good safety history because of its use in food fermentation. It survives in the digestive tract for 2-3 days and does not attack the intestinal mucosal surface . L. lactis does not have lipopolysaccharides, so it does not strongly stimulate the host's immune response and is safe when given repeatedly [ , , ]. L. lactis also possesses immunomodulatory abilities as a probiotic bacterium. It can enhance the activity of phagocytic cells, which engulf and destroy pathogens . Expression of recombinant proteins such as antigens is very effective in L. lactis . This is due to the tight control achieved with the addition of the inducer nisin . Antigens can be expressed intracellularly, extracellularly, or on the surface of L. lactis by adding signal peptides to the gene of interest . Antigens expressed intracellularly require immune cells to degrade or phagocytose the L. lactis -based vaccine before the antigen can be taken up. 
This differs from antigens expressed extracellularly or on the surface of the L. lactis -based vaccine, which immune cells can recognize directly, so that memory responses or antibodies to the antigens can be produced faster . In addition, L. lactis -based vaccines also stimulate the production of proinflammatory cytokines, which are essential molecules in regulating immune responses and inflammation. Among the cytokines produced with the help of L. lactis are IFN-γ and TNF-α. IFN-γ plays a significant role in activating immune cells, such as macrophages and T cells, which combat infection. TNF-α is also essential in mediating the inflammatory response that helps to destroy pathogens and repair damaged tissues [ - ]. Protein Expression System and Location The nisin-controlled gene expression (NICE) system employed by L. lactis is a highly effective and easy-to-use gene expression method. The procedure involves introducing a specific quantity of the nisin inducer (0.1-5 ng/ml) into the growth medium . The NICE system operates through a signal transduction mechanism involving two primary proteins, NisK and NisR. NisK is a sensor protein located on the membrane, while NisR is a response regulator protein in the cytoplasm. Nisin as an inducer can interact with NisK, causing it to undergo autophosphorylation. The phosphate group is then transferred to NisR, activating it. NisR binds to the PnisA promoter on the plasmid and initiates transcription of the downstream target gene. The uniqueness of the NICE system lies in its ability to control protein expression with high precision, allowing for efficient and controlled protein production. This mechanism makes L. lactis a valuable tool in biotechnology research and applications, particularly in vaccine production [ , - ]. The antigen of an L. lactis -based vaccine can be expressed on the cell surface. One method for designing the gene of interest involves adding a signal peptide, a technique extensively explored for its potential in developing L. lactis -based vaccines. The studies in and demonstrate successful antigen expression on the surface of L. lactis , which can stimulate immune responses [ , , , ]. The pgsA signal peptide is particularly effective for expressing proteins on the surface of L. lactis . The pgsA protein facilitates protein translocation across the cytoplasmic membrane, allowing for effective secretion into the extracellular space or anchorage to the cell wall . Several studies in , such as those by , have successfully utilized the pgsA signal peptide in the constructed L. lactis /pNZ8110-pgsA-NA, highlighting its critical role in facilitating protein display on the surface of L. lactis -based vaccines. and also report several studies concerning extracellular protein expression in L. lactis . The signal peptide used for extracellular protein expression differs from the signal peptide used for protein expression on the cell surface. The Usp45 signal peptide has been employed to enable the secretion of target proteins by L. lactis . The Usp45 signal peptide is added to the N-terminus of the target protein to facilitate its secretion by L. lactis . In the study by , the pNZ8124:sip vector containing the lactococcal Usp45 signal peptide sequence (SP usp45) fused to the PnisA promoter was successfully constructed, allowing the target protein to be expressed extracellularly by L. lactis . 
reported the successful utilization of the Usp45 signal peptide for the extracellular expression of the Helicobacter pylori Lpp20 antigen using L. lactis . This study also demonstrated that extracellular vaccination with H. pylori Lpp20 was more effective. Other researchers, such as [ , , , - , , , ], have also shown that the Usp45 signal peptide can be used to express extracellular recombinant proteins in L. lactis . Lactococcus lactis Strain Several strains of L. lactis are commonly used in vaccine delivery systems. shows that L. lactis strains NZ9000 and NZ3900 are widely used as mucosal vaccine carriers. Both strains cannot grow on media containing only lactose as a carbon source due to the deletion of the LacF gene, necessitating a plasmid carrying the LacF gene operon. The LacF gene detection system is a selection system that determines whether cells carry the plasmid or not. In addition, both strains have the PnisA promoter, allowing for tightly controlled protein expression [ , , ]. and show that L. lactis NZ9000 has been used to express viral proteins, bacterial antigens, and fusion proteins. This demonstrates its versatility in vaccine development. L. lactis NZ9000 has been used as a live bacterial vaccine platform to present antigens from pathogens such as Group A Streptococcus , Helicobacter pylori , and influenza [ , , ]. In addition, L. lactis NZ9000 has been used to express antigens from pathogens such as Brucella melitensis , human papillomavirus , the hepatitis A VP1-P2a antigen, and the neuraminidase protein from influenza A, demonstrating its potential in developing vaccines against viral infections. This strain can deliver antigens to mucosal sites and induce mucosal and systemic immune responses. One of the advantages of using L. lactis NZ9000 is its ability to stimulate both humoral and cellular immune responses. Studies in have shown that oral and mucosal immunization with L. lactis NZ9000 expressing specific antigens can elicit strong antibody responses, including IgG and IgA, and activate T cells, which promote a strong immune response against a variety of pathogens. In addition to the NZ9000 strain, L. lactis NZ3900 has also been widely used in the development of L. lactis -based vaccines, as shown in . L. lactis NZ3900 has been genetically engineered to maximize the expression of vaccine proteins. For example, demonstrated that L. lactis NZ3900 was used to deliver the highly conserved region of the spike S2 antigen for oral and nasal immunization in BALB/c mice. Other studies have also reported success in developing L. lactis -based vaccines that express bacterial or viral antigens in L. lactis NZ3900, such as antigens from H. pylori [ , , ], pertussis toxin and filamentous hemagglutinin from Bordetella pertussis , antigens from enterotoxigenic Escherichia coli , and M-protein antigens derived from Group A Streptococcus pyogenes . This shows that this strain can express antigens from bacteria or viruses. Doses and Route Dosage is a critical component in vaccine development and administration. Dosage is crucial as it ensures the vaccine's efficacy and safety. The vaccine dose determines the amount of antigen given to the body to trigger a strong immune response with the fewest side effects. Dosage determination begins with preclinical studies, followed by several phases of clinical trials [ - ]. shows that the dose of an L. lactis -based vaccine is measured in colony-forming units (CFU) and, besides the number of CFU, can be adjusted according to the experimental animals used . 
Dosages for L. lactis -based vaccines can start from as low as 10⁶ CFU and range up to 10⁹ CFU or more, depending on the immunogenicity of the antigen and the delivery method . Vaccination with a prime-boost strategy can also be applied, in which an initial dose (prime) is followed by one or more subsequent doses (boost) to increase the immune response. The interval and frequency between doses and the total dose are also important factors. The interval and frequency of vaccination can range from a few weeks to several months to build a stronger and more durable immune response [ , , , ]. Moreover, the route of administration plays a role in determining the dosage. Oral administration might require higher doses than nasal administration due to the degradation of bacteria in the gastrointestinal tract . Stabilizers and adjuvants are often included to protect the bacteria and enhance the immune response . Based on , in various experimental animal models other than mice, larger doses are generally observed; for instance, piglets receive doses ranging from 10⁹ to 10¹² CFU and rabbits receive 5 × 10⁹ CFU . In contrast, vaccine doses administered to mice ranged from 1 × 10⁸ CFU to 5 × 10¹⁴ CFU . The route of administration also influences the dosage of an L. lactis -based vaccine, with the oral route requiring higher doses of approximately 5 × 10⁹ CFU compared to the nasal route, which utilizes doses around 1 × 10⁹ CFU . The dose of an L. lactis -based vaccine administered via the oral route tends to be higher than that given nasally due to several factors related to the body's immune system and physical barriers in the gastrointestinal (GI) tract. The first factor is exposure to digestive enzymes and the harsh environment of the GI tract. The oral route exposes the vaccine to acidic pH and digestive enzymes like pepsin and proteases, which can degrade the L. lactis -based vaccine and reduce its efficacy. Therefore, a higher dose is needed to ensure that enough of the L. lactis -based vaccine survives to stimulate an immune response . The second factor is the immune system's complexity in the gut-associated lymphoid tissue (GALT), which includes structures like Peyer's patches that efficiently sample and respond to antigens. Additionally, normal intestinal flora competes with and may neutralize the L. lactis -based vaccine. These two main factors necessitate higher doses for oral administration [ - ]. An L. lactis -based vaccine administered nasally is directly exposed to the nasal-associated lymphoid tissue (NALT). The nasal mucosa, especially NALT, is more efficient at antigen uptake, resulting in a strong immune response. This allows for a lower dose of the L. lactis -based vaccine than oral administration . Experimental Animals Experimental animals are essential in preclinical studies for developing L. lactis -based vaccines. Commonly used experimental animals include rats and mice, chosen for their physiological and immunological similarities to humans. These animals are relatively inexpensive and easy to obtain. Preclinical studies on these animals are critical for assessing vaccine safety, immunogenicity, and efficacy . The study in shows that 92% of the studies used mice (specifically BALB/c, C57BL/6, and FVB/n strains), while the remaining studies employed piglets and rabbits. Specific inbred strains such as BALB/c and C57BL/6 mice are preferred within these species. Inbred strains ensure genetic uniformity, reducing variability in immune responses and improving reproducibility of results. Vaccine Evaluation The efficacy of L. 
lactis -based vaccines can be measured by assessing the immune response they produce. The primary immune response to an L. lactis -based vaccine is the mucosal immune response, which can be assessed by measuring IgA antibodies or systemically circulating IgG antibodies. The levels of these antibodies indicate the strength of the humoral immune response triggered by the L. lactis -based vaccine [ , , ]. shows that all studies related to L. lactis -based vaccination reported a significant increase in humoral immune responses. The parameters for assessing humoral immune responses include IgG and IgA antibodies, with IgG evaluation generally performed on serum and IgA assessment on feces, tissues (intestine, colon, nose), and nasal fluid. In addition to antibodies, the assessment of cellular immune responses is crucial for evaluating the efficacy of L. lactis -based vaccines. This can be done by measuring the activation of T cells, especially CD4 T cells and CD8 T cells, which are essential in coordinating the immune response and directly targeting infected cells . In addition to humoral and cellular immune responses, the efficacy of L. lactis -based vaccines can be assessed by measuring the cytokine profile. One commonly measured cytokine is interferon-gamma (IFN-γ), which can provide an overview of the polarization of CD4 T cell responses, including the Th1 pathway . shows that several researchers have reported a significant increase in IFN-γ in experimental animals immunized with an L. lactis -based vaccine. As previously mentioned, IFN-γ analysis in these studies was used to assess Th1 cell activation. Activated Th1 cells release IFN-γ to help activate cytotoxic T cells, NK cells, and macrophages. As a first step in initiating the immune response, the L. lactis -based vaccine directly interacts with the mucosal surface of the gastrointestinal tract or nose, depending on the route of administration. The mucosal surface is rich in microfold (M) cells and dendritic cells, which serve as antigen-presenting cells (APCs). Microfold (M) cells are important in the mucosal immune system of the intestine. Unlike other epithelial cells, M cells do not have a mucus layer on their apical side, thus facilitating antigen uptake in the intestine . M cells also facilitate the transport of L. lactis and its associated antigens to underlying immune cells, especially dendritic cells . The L. lactis -based vaccine and antigens captured by M cells and dendritic cells (APCs) in the intestine are transported through transcytosis to the basal side, such as the Peyer's patches (PPs). The APCs phagocytize and internalize the antigen and the L. lactis -based vaccine along with its protein components. Antigens are presented by MHC-I and MHC-II molecules to T and B cells, thereby triggering an adaptive immune response [ , , , ]. Induced B cells differentiate into plasma cells, specifically prepared to produce antibodies. This immunological interaction involves CD4 T cells (Th cells), which send the necessary signals through cytokines to promote class-switch recombination in B cells, leading to IgA production . These IgA-producing plasma cells then migrate to the lamina propria, where they continue to secrete dimeric IgA. Subsequently, IgA binds to the polymeric immunoglobulin receptor (pIgR) on epithelial cells, facilitating its translocation across the cell and its eventual release into the lumen as secretory IgA (sIgA). 
The region between the follicles around the PPs, called the interfollicular region (IFR), is rich in T cells and dendritic cells and regulates the adaptive immune response. Through this mechanism, vaccine antigens carried by L. lactis are presented and generate both innate and adaptive immune responses [ - ]. The IgA antibodies produced by L. lactis -based vaccinations significantly impact mucosal immunity. Mucosal IgA antibodies are crucial in the initial defense against infections that enter the body via mucosal surfaces. Fecal IgA antibodies are used as markers of secretory IgA in the gastrointestinal tract, providing valuable insights into the immune response associated with the gut. Moreover, immunoglobulin A (IgA) antibodies found in nasal tissue and nasal fluid can also indicate immunity in the respiratory mucosa. Tissue IgA measurement can yield insights into the localization of specific IgA within tissues . IgA levels can be assessed by analyzing samples of feces, tissue (such as the gut, colon, and nose), and nasal fluid, as shown in . For example, oral vaccination with L. lactis expressing antigens from pathogens such as H. pylori or influenza has increased IgA antibodies, contributing to protective immunity [ - ]. This also shows that IgA can be formed to defend against pathogens such as bacteria or viruses. L. lactis -based vaccines can stimulate humoral immune responses in the form of IgG antibodies. Studies in have shown that vaccination based on L. lactis expressing antigens can elicit significant IgG antibody responses [ , , ]. In , oral vaccination based on L. lactis expressing antigens from pathogens such as H. pylori [ , , , ], influenza [ , ], Bordetella pertussis , and HIV-1 has been shown to induce IgG antibodies contributing to humoral immunity. In producing IgG or IgA antibodies, L. lactis -based vaccines are recognized by APCs in the mucosa. Dendritic cells present the antigen to CD4 T cells via MHC-II. CD4 T cells release cytokines such as IL-4 (interleukin-4) and IL-6 (interleukin-6), to which B cells then respond . This interaction allows B cells to mature into plasma cells. With the help of IL-4 and IL-21 from Th2 cells, plasma cells are prepared to produce IgG antibodies [ - ]. Additionally, class switching for IgA production occurs under the influence of TGF-β, IL-21, and IL-17 [ - ]. IgG antibodies can specifically bind to the pathogen antigen that triggered their production, thus forming an antigen-antibody complex. This facilitates pathogen recognition and elimination. IgG antibodies also neutralize pathogens, thereby enhancing the phagocytic response. Phagocytes, such as macrophages, have Fc receptors that bind to the Fc portion of IgG antibodies, triggering phagocytosis of pathogens opsonized by IgG antibodies. Additionally, IgG antibodies can activate the complement system, leading to the formation of a complex that damages pathogen membranes and triggers cell lysis . IgA antibodies can prevent pathogen adhesion to the mucosal surface through neutralization. The IgA antigen-antibody complex can be captured by immune cells such as macrophages and dendritic cells in the mucosa, which can then destroy and remove the complex from the body. In the digestive tract, the IgA antigen-antibody complex is excreted through feces . L. lactis -based vaccines can also induce cellular immune responses. shows that not all studies report an increased cellular immune response; however, significant increases were observed in the studies by [ , , ]. 
Vaccination using L. lactis -based vaccines increases dendritic cell activation (measured by MHC-II expression), CD4 T cells, CD8 T cells, and plasma cells (measured by CD138 expression). Dendritic cells play a critical role in initiating and modulating the cellular immune response. These cells function as antigen-presenting cells (APCs) by processing antigens and presenting them on major histocompatibility complex (MHC) molecules to T cells, activating the immune response. When vaccines utilizing L. lactis are administered, they deliver antigens directly to the host’s immune system. Upon administration, these bacteria or their components are internalized by dendritic cells. The antigens from the bacteria are then processed and presented via MHC-II molecules on the surface of the dendritic cells. Vaccines based on L. lactis enhance the expression of MHC-II molecules on dendritic cells. This augmented expression improves the capacity of dendritic cells to present antigens to CD4 T cells, thereby potentiating the immune response. As MHC-II molecules present antigens, CD4 T cells are more effectively activated. These activated CD4 T cells subsequently aid in activating B cells, resulting in antibody production, and cytotoxic T cells, which can destroy infected cells . The antigens expressed extracellularly by L. lactis are broken down into smaller peptide fragments. Dendritic cells play an active role in this process. After capturing antigens, dendritic cells migrate to the lymph nodes, where they further process the antigens into smaller fragments and load them onto MHC-I molecules, essential for CD8 T cell activation. MHC-I molecules on the surface of dendritic cells interact with TCRs on naive CD8 T cells, leading to CD8 T cell activation. CD8 T cells then differentiate into cytotoxic T lymphocytes (CTLs). CTLs leave the lymph nodes and patrol the body, seeking cells that express the same antigen presented by L. lactis . Upon finding infected cells, CTLs release perforin and granzymes, which destroy the target cells . Evaluation of vaccine efficacy in animal models is critical. This is typically done through a challenge test, in which vaccinated experimental animals are exposed to pathogens. This test can determine the vaccine's ability to prevent infection or reduce the severity of the disease caused by the infection . Vaccine efficacy evaluation parameters include measuring the number of pathogens, clinical symptoms, immune responses, and survival rates in experimental animals, providing valuable information about the vaccine's effectiveness . A common challenge test assessment is histopathological evaluation, which determines the number of bacteria or viruses in order to evaluate vaccine efficacy . lists several methods of evaluating vaccine efficacy other than immunological ones. These evaluation methods include measurement of viral titer (viral load) [ , , ] and measurement of virulence factors [ , , ]. Challenge tests can provide an in-depth understanding of the effects of vaccines on the immune system and disease development. Studies of challenge tests on mucosal vaccines administered orally or intranasally have shown that the immune response plays a vital role in protecting against infection by pathogenic microorganisms . 
L. lactis is suitable as a vector carrier for oral or nasal mucosal vaccines against bacterial and viral infections. L. lactis -based vaccines can induce cellular and humoral immune responses that protect against these infections. Research on L. lactis as a mucosal vaccine carrier has great potential to be continued and further developed.
Data-driven AI platform for dens evaginatus detection on orthodontic intraoral photographs
006f004b-fc90-4662-9255-abf03b74c455
11872327
Dentistry[mh]
Dens evaginatus (DE) is a developmental dental anomaly characterized by the appearance of an accessory cusp or “tubercle” that protrudes from the occlusal or palatal surfaces of teeth, primarily affecting premolars . Morphologically, DE in premolars presents as a tubercle averaging 2 mm in width and 3.5 mm in height . This feature often causes occlusal interference and is prone to fracture, abrasion, and subsequent non-carious pulp exposure , resulting in a painful therapeutic process and unfavorable prognosis. However, because patients are asymptomatic until severe abrasion or pulp-involved fracture of the tubercle occurs, the condition is easily overlooked. Early detection of DE premolars is crucial. Conservative treatment methods, such as tubercle preservation, tubercle reduction, and pulp capping, are beneficial for removing occlusal interference and preserving the vitality of dental pulps, significantly decreasing the risk of pulp exposure . Severe complications can also arise from the exposure of pulp tissue: pulp inflammation, pulp necrosis, periapical abscess, and even maxillofacial cellulitis and osteomyelitis of the jaws , resulting in long treatment cycles and tortuous therapeutic procedures. Notably, when DE-caused pulp exposure involves immature permanent teeth, their development can be severely disturbed, specifically resulting in thinner root canal walls, shorter root lengths, and open root apexes . The timely detection of DE premolars is also vital in orthodontics. Malformed central cusps can lead to malocclusion, uneven distribution of occlusal forces, and other issues that negatively impact a patient’s chewing function and oral health . During orthodontic treatment, these anomalies may alter the interproximal contact point and occlusal vertical dimension, necessitating careful consideration by orthodontists when designing treatment plans . Adjustments to the design of orthodontic appliances or selective reduction of the malformed cusps may be necessary to facilitate effective treatment progress. Additionally, premolars are often the first choice for extraction in orthodontic treatment plans, with approximately 34.4% of orthodontic patients opting to have premolars removed . Premolars affected by DE are typically prioritized in extraction plans. However, the excessive number of patients relative to the small number of dentists in developing countries like China has resulted in a high workload and a heightened risk of misdiagnosis . Therefore, an artificial intelligence (AI)-assisted diagnostic platform is essential to identify patients with DE premolars efficiently. Nowadays, with the development of AI and its widespread utilization in dentistry, convenience and accuracy have been demonstrated in the field of automatic oral disease detection . Among AI algorithms, the convolutional neural network (CNN) has strong capabilities in medical image feature processing , has come to dominate the application of AI in medical imaging, and has won broad academic consensus . However, as AI-based studies have proliferated, certain limitations of prior work have also emerged. In terms of data selection, poor data quality, selection bias, and limited generalizability due to small sample sizes can significantly impact the performance of AI algorithms . 
Additionally, regarding the analysis of experimental results, some studies focus solely on the impressive capabilities of AI algorithms, often neglecting comparisons between AI performance and that of dentists. This oversight restricts AI’s clinical applications, as it prevents the quantification of clinical effectiveness. Furthermore, result explainability remains contentious since AI’s learning outcomes may not fully align with human conceptual understanding. However, completely unexplainable AI algorithms can introduce significant biases . At the application level, while many studies have introduced their algorithms and some have made them publicly available, clinicians still face challenges in understanding and utilizing these AI tools effectively . To minimize data selection bias, a standardized data collection protocol and a well-balanced dataset are essential. Researchers should also include human-algorithm comparison tests to confirm clinical acceptability. Additionally, the internal logic of AI algorithms can be partially explained through visual analyses, such as attention-based visualization in image processing . To enhance the clinical accessibility of AI, there is a pressing need for research focused on developing fully functional applications, rather than solely presenting algorithmic code. Moreover, a deeper understanding of human-algorithm interactions should be actively pursued. Currently, the majority of AI-based dental research relies on radiological images , with only a few studies focusing on radiation-free intraoral data . Since intraoral photography only requires relatively inexpensive equipment like a smartphone or camera and has been more widely used even in underdeveloped and remote areas lacking X-ray equipment, an AI-assisted platform based on intraoral photographs has greater potential for widespread adoption. In this study, we aimed to generate a two-stage CNN algorithm model for detecting DE in orthodontic intraoral images. Furthermore, we aimed to develop an interactive and user-friendly platform for DE detection. We evaluated the performance of our platform and conducted a comparative analysis involving three dental interns to further validate its efficacy. We hope that our platform will enable grassroots dentists and nonspecialists to easily diagnose DE with just a click. Dataset The complete workflow of our research process is shown in Fig. . All intraoral images used in this study were collected from the Department of Orthodontics, West China Hospital of Stomatology, Sichuan University. Our investigation was approved by the Research Ethics Committee of West China Hospital of Stomatology (project number: WCHSIRB-D-2021-370). The inclusion criteria of our research are as follows: (1) patients in the permanent dentition stage who initially visited the Department of Orthodontics from 2018 to 2022, (2) both patients with and without DE premolars. Images were taken for documentation or educational purposes using professional single-lens reflex cameras (Nikon D300 with a Nikon Micro 105-mm lens) and a macro flash after the teeth were wiped and dried. To ensure image quality, any duplicate images from the same dentition and images with insufficient information (e.g., hazy, distorted, or saliva-contaminated images) were excluded. The original resolution of the included intraoral images, which was 3,008 × 2,008 pixels, was reduced to 450 × 300 pixels in JPEG format (RGB image) to save storage space and improve processing efficiency. 
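As a rough illustration of the downsizing step described above, the sketch below (not the authors' code; it assumes the Pillow library and hypothetical folder names) reduces intraoral photographs from 3,008 × 2,008 to 450 × 300 pixels and saves them as RGB JPEG files.

```python
# Illustrative preprocessing sketch: downsample original intraoral photographs
# to 450 x 300 JPEG (RGB), as described above. Paths are hypothetical.
from pathlib import Path
from PIL import Image

SRC_DIR = Path("raw_intraoral")      # hypothetical input folder
DST_DIR = Path("resized_intraoral")  # hypothetical output folder
DST_DIR.mkdir(exist_ok=True)

for src in SRC_DIR.glob("*.jpg"):
    with Image.open(src) as im:
        # Convert to RGB and resize to the target resolution used in the study.
        im = im.convert("RGB").resize((450, 300), Image.LANCZOS)
        im.save(DST_DIR / src.name, format="JPEG")
```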
To minimize potential bias, images with the following conditions were also excluded: (1) unrecognizable premolar morphology (such as severe abrasion or attrition, and severe dental crowding that obscures occlusal surfaces of premolars), (2) other developmental disorders or defects in premolar areas, including amelogenesis imperfecta, dentinogenesis imperfecta, dens invaginatus, gemination, and tooth fusion, as defined in authoritative articles and differentiated by morphological criteria, and (3) any restorations, resin fillings, or appliances on premolars. A total of 1,400 high-quality intraoral images were included in our study. The initial phase was dedicated to the object detection task, where our objective was to identify regions of interest (ROIs) within the intraoral images. To facilitate this, we divided the 1,400 intraoral images into a training set and a testing set, adhering to a 6:1 ratio. Consequently, 1,200 images were allocated to the training set, and 200 images were designated for the testing set. Subsequently, the study progressed to the image classification task, focusing on the differentiation of premolar regions based on the presence or absence of DE. For this purpose, premolar regions were meticulously extracted from the available images, resulting in a curated dataset specifically prepared for this classification challenge. The dataset was then bifurcated into a training subset, consisting of 1,011 premolars diagnosed with DE and 1,017 premolars without DE, and a testing subset, comprising 50 premolars with DE and 50 premolars without DE. Image annotation In the first stage, two dentists with more than ten years of clinical experience provided the gold standard of the premolar positions. The tool “labelImg” ( https://github.com/HumanSignal/labelImg/tree/master , version 1.8.1) was utilized to mark up annotations of premolar positions and numbers: experts were required to create a rectangular box around each premolar, and for each box, a tooth number referring to the FDI system was labeled. Finally, a total of 6,312 premolars were labeled. In the second stage, the same two dentists were asked to independently determine whether each labeled premolar was healthy or affected by DE. We first conducted image matting according to the labeled rectangular boxes to obtain the images of single labeled premolars. The single premolar images were categorized into two groups, the healthy group and the DE group. All the premolars with inconsistent classification results were discussed and re-evaluated together by a third expert with more than 20 years of clinical experience and these two dentists. If agreement was reached after discussion, the agreed result was used; if agreement could not be reached, the premolar images were excluded. The manual evaluation results, both the tooth detection and the DE determination, served as the gold standard for training our AI models. BiStageNet model In this study, we proposed a two-stage deep learning model, namely BiStageNet, for the detection of DE on intraoral photographs. The first stage was an object detection model that used the convolutional layers of VGG16 for feature extraction, followed by a fully connected layer that outputs the coordinates of the center points of four bounding boxes, totaling eight values. Each bounding box measures 90 × 90 pixels. 
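The following minimal PyTorch sketch illustrates the first-stage idea described above: VGG16 convolutional features followed by a fully connected layer that regresses the eight center coordinates of four fixed-size boxes. It is a simplified illustration under stated assumptions (torchvision's VGG16, an added adaptive pooling layer, and untrained weights), not the authors' implementation.

```python
# Simplified sketch of the first-stage premolar locator (not the authors' code):
# VGG16 convolutional backbone + a fully connected head that regresses 8 values
# (x, y centers of four 90 x 90 boxes).
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PremolarLocator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = vgg16(weights=None).features     # VGG16 conv layers only
        self.pool = nn.AdaptiveAvgPool2d((7, 7))          # assumption: fixed-size pooling
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 8),                     # 4 boxes x (center_x, center_y)
        )

    def forward(self, x):
        # x: batch of 450 x 300 RGB photographs, shape (N, 3, 300, 450)
        return self.regressor(self.pool(self.features(x)))

model = PremolarLocator()
centers = model(torch.randn(1, 3, 300, 450))  # -> tensor of shape (1, 8)
print(centers.shape)
```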
BiStageNet model In this study, we proposed a two-stage deep learning model, namely BiStageNet, for the detection of DE on intraoral photographs. The first stage was an object detection model that used the convolutional layers of VGG16 for feature extraction, followed by a fully connected layer that outputs the coordinates of the center points of four bounding boxes, totaling eight values. Each bounding box measures 90 × 90 pixels. The four bounding boxes output in the first stage are displayed on a graphical user interface, where dentists can adjust the position of the bounding boxes and modify their width and height to completely cover the premolar area. The second stage deployed a CNN model (VGG-Lite) for binary classification. The classification model used the 90 × 90 RGB images cropped from the four bounding boxes output in the first stage for binary classification prediction. The specific workflow and architecture of our model are shown in Fig. . VGG16 is a deep convolutional neural network architecture known for its effectiveness in image recognition tasks. In the context of our study, we utilized the convolutional layers of VGG16 in the first stage of the BiStageNet model for feature extraction in object detection. These convolutional layers process intraoral photographs with a resolution of 450 × 300 pixels to extract meaningful features that help in identifying the location of the premolars. Following the convolutional layers, a fully connected layer outputs the coordinates of the center points of four bounding boxes, yielding a total of eight values. Each bounding box is sized at 90 × 90 pixels and is designed to encompass the premolar regions accurately. VGG-Lite is a simplified version of the VGG16 model, optimized for computational efficiency without significantly compromising classification performance. In the second stage of the BiStageNet model, VGG-Lite serves as a binary classification model to detect the presence of dens evaginatus in the premolars. The architecture of VGG-Lite consists of one 5 × 5 convolutional layer and five 3 × 3 convolutional layers. Each convolutional layer is followed by a 2 × 2 max pooling layer to reduce spatial dimensions while retaining important features. Unlike the standard VGG16, VGG-Lite concludes with only one fully connected layer, followed by a softmax layer that outputs the probability of each of the two classes (positive or negative for dens evaginatus). The experiments for the BiStageNet model were conducted on a Windows 10 workstation equipped with an Intel Core i9-13900K CPU, 128 GB of DDR4 RAM, and an NVIDIA RTX 4090 GPU to accelerate deep learning computations. The software environment used Python 3.9 and PyTorch 1.13 for implementing and training the convolutional neural network models.
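The following PyTorch sketch restates the two stages described above. Only the details given in the text (the VGG16 convolutional backbone, a single fully connected layer regressing eight box-center coordinates, the 90 × 90 crop size, and VGG-Lite's one 5 × 5 plus five 3 × 3 convolutions with 2 × 2 pooling) are taken from the paper; the channel widths, activations, and use of LazyLinear are our assumptions, so this should be read as an illustrative approximation rather than the authors' released implementation.

```python
# Illustrative PyTorch sketch of the two BiStageNet stages described above.
# Channel widths, ReLU activations, and LazyLinear are assumptions; the paper
# specifies only the backbone, kernel sizes, pooling, and single FC layers.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class Stage1BoxRegressor(nn.Module):
    """VGG16 convolutional features + one FC layer -> 8 values (four box centers)."""

    def __init__(self):
        super().__init__()
        self.features = vgg16(weights=None).features       # VGG16 conv layers only
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(8))

    def forward(self, x):                                   # x: (N, 3, 300, 450)
        return self.head(self.features(x))                  # (N, 8) center coordinates


class VGGLiteClassifier(nn.Module):
    """One 5x5 conv and five 3x3 convs, each followed by 2x2 max pooling,
    then a single FC layer whose softmax gives the DE / non-DE probabilities."""

    def __init__(self, widths=(32, 64, 64, 128, 128, 128)):  # assumed channel widths
        super().__init__()
        layers, in_ch = [], 3
        for i, out_ch in enumerate(widths):
            k = 5 if i == 0 else 3
            layers += [nn.Conv2d(in_ch, out_ch, k, padding=k // 2),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)              # 90x90 input -> 1x1 after 6 pools
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(in_ch, 2))

    def forward(self, crop):                                 # crop: (N, 3, 90, 90)
        # For training, CrossEntropyLoss would normally be applied to the logits instead.
        return torch.softmax(self.classifier(self.features(crop)), dim=1)
```

With these definitions, stage 1 runs on the full 450 × 300 photograph while stage 2 runs on each 90 × 90 crop, mirroring the workflow shown in Fig. .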
Statistical analysis To evaluate the recognition of the premolars in intraoral images, we used the Dice coefficient, a widely used indicator of the segmentation performance of an AI model, which is calculated as follows: $$\mathrm{Dice}=\frac{2|X\cap Y|}{|X|+|Y|}$$ where X represents the manually labeled premolar area (the ground truth) and Y represents the area detected automatically by our model; $|X\cap Y|$ is the area of their intersection, and $|X|+|Y|$ is the sum of their areas. Given that the input intraoral photographs are in RGB format, the input data are first converted into binary format, and area size is measured as the total number of pixels. The Dice coefficient ranges between 0 and 1, with higher values indicating a greater proportion of overlap between X and Y; a Dice coefficient closer to 1 signifies more accurate automatic premolar recognition and segmentation. To assess the DE detection capacity, we considered the following evaluation indicators: accuracy (ACC), sensitivity (SE), specificity (SP), positive predictive value (PPV), negative predictive value (NPV), and F1-score. The calculation formulae are as follows: $$\mathrm{ACC}=\frac{TP+TN}{TP+TN+FP+FN}$$ $$\mathrm{SE}=\frac{TP}{TP+FN}$$ $$\mathrm{SP}=\frac{TN}{TN+FP}$$ $$\mathrm{PPV}=\frac{TP}{TP+FP}$$ $$\mathrm{NPV}=\frac{TN}{TN+FN}$$ $$\mathrm{F1}=\frac{2\times\mathrm{PPV}\times\mathrm{SE}}{\mathrm{PPV}+\mathrm{SE}}$$ where TP, TN, FP, and FN represent true positives, true negatives, false positives, and false negatives, respectively. Additionally, we produced receiver operating characteristic (ROC) curves and calculated the area under the ROC curve (AUC) to evaluate the performance of our model. Subjective assessment based on BiStageNet To evaluate the efficacy of our DE detection tool, a comparative analysis was conducted involving the tool and three dental interns, each with one year of clinical training experience. This assessment focused on the same testing set of 100 regions, derived from 30 intraoral images encompassing 50 premolars diagnosed with DE and 50 premolars without DE. The interns, who did not receive any additional training specific to this study, initially rendered their diagnostic decisions based solely on their subjective judgment, without the assistance of the DE detection tool. Two weeks after the initial assessment, a follow-up evaluation was undertaken in which the same three dental interns re-examined the identical 100 regions. During this session, the DE detection tool’s predictive results were made available to them. The interns were tasked with determining the concordance between the tool’s predictions and their own subjective visual assessments for each region. When the tool’s prediction and an intern’s judgment disagreed, the intern made the final determination regarding the presence of DE, leveraging both clinical insight and the tool’s output. Additionally, during the comparative analysis, we randomly selected 100 single premolar images (50 with DE and 50 without DE) and used Cohen’s Kappa value to assess the level of agreement between the diagnostic outcomes of our DE detection tool and those of the three dental interns. Cohen’s Kappa value was calculated as follows: $$\kappa=\frac{p_{0}-p_{e}}{1-p_{e}}$$ where $p_{0}$ denotes the observed agreement proportion and $p_{e}$ denotes the expected agreement proportion.
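Since all of these quantities are simple functions of the confusion matrix and of the overlapping pixel areas, they can be restated compactly in code. The sketch below is illustrative (plain Python/NumPy, not the authors' evaluation script); the confusion-matrix counts in the example are inferred from the sensitivity and specificity reported later for the 50/50 testing subset and are shown only to make the formulae concrete.

```python
# Illustrative implementations of the evaluation measures defined above.
import numpy as np


def dice(mask_true, mask_pred):
    """Dice = 2|X ∩ Y| / (|X| + |Y|), computed on binary pixel masks."""
    x = np.asarray(mask_true, dtype=bool)
    y = np.asarray(mask_pred, dtype=bool)
    return 2 * np.logical_and(x, y).sum() / (x.sum() + y.sum())


def classification_metrics(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn)                  # sensitivity (recall)
    sp = tn / (tn + fp)                  # specificity
    ppv = tp / (tp + fp)                 # positive predictive value (precision)
    npv = tn / (tn + fn)                 # negative predictive value
    f1 = 2 * ppv * se / (ppv + se)
    return dict(ACC=acc, SE=se, SP=sp, PPV=ppv, NPV=npv, F1=f1)


def cohens_kappa(p_observed, p_expected):
    """kappa = (p0 - pe) / (1 - pe)."""
    return (p_observed - p_expected) / (1 - p_expected)


# Example: counts consistent with the reported 88.0% sensitivity and 82.0%
# specificity on the 50 DE / 50 healthy testing subset (TP=44, FN=6, TN=41, FP=9).
print(classification_metrics(tp=44, tn=41, fp=9, fn=6))
```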
Premolar recognition and DE detection results In the premolar recognition and segmentation stage, a mean Dice coefficient of 0.961 was achieved on the testing set. Furthermore, after CNN training in the second stage, our algorithm detected DE in different premolars with desirable outcomes: the overall accuracy of DE detection reached 85.0%, with a sensitivity and specificity of 88.0% and 82.0%, respectively. The positive predictive value and negative predictive value were 83.0% and 87.2%, respectively, and the F1-score was 0.854. The AUC of the overall DE detection was 0.930.
Visualizing our two-stage convolutional neural networks with Grad-CAM heatmaps in Fig. , we can see that during the automatic detection process the algorithm’s attention was directed mainly to the tubercle signatures on the occlusal surfaces of the premolars. The test of DE detection tool Based on our two-stage CNN algorithm and training results, we constructed an application tool in PyTorch that can automatically recognize premolars and detect the existence of DE on them (Fig. ). To start, clicking the “Open Image” button imports one intraoral image of any size and resolution; clicking the “Gen Boxes” button then makes four rectangular boxes appear that automatically select the four premolars. These boxes can be adjusted manually by dragging them with the mouse (changing their positions) or by clicking the “height+”, “height-”, “width+”, and “width-” buttons (changing their sizes). After the premolars have been selected properly, clicking the “Central Cusp Deformity” button generates the automatic decision for each area, where “P” denotes a positive result and “N” a negative one. Our DE detection tool exhibited high consistency with the dental interns during the comparative analysis: the Kappa values obtained were 0.859 for the model versus Dental Intern 1, 0.839 for the model versus Dental Intern 2, and 0.818 for the model versus Dental Intern 3. Table presents the outcomes of the subjective assessments conducted by the three dental interns, both with and without the assistance of the DE detection tool, alongside the performance metrics of the tool operating autonomously. The data reveal that, when operating manually (without tool assistance), the dental interns generally exhibited high sensitivity, indicating a strong ability to correctly identify true cases of DE. Upon integration of the DE detection tool, a noticeable enhancement in specificity was observed for all three interns, without a significant compromise in sensitivity. This improvement in specificity underscores the tool’s utility in reducing false positive rates, thereby refining the accuracy of DE detection. Notably, the tool’s influence also led to improvements in the PPV, NPV, and F1-scores, further evidencing its positive impact on diagnostic precision.
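To make the button sequence above concrete, the sketch below chains the two stages at inference time in the way the interface describes: the stage-1 regressor proposes four box centers, fixed-size 90 × 90 crops are taken around them (the point at which a user could adjust the boxes), and the stage-2 classifier labels each crop “P” or “N”. It reuses the hypothetical model classes from the earlier sketch, and details such as preprocessing and the meaning of the class indices are assumptions rather than the tool's actual code.

```python
# Hypothetical inference chain mirroring the "Gen Boxes" -> "Central Cusp Deformity" workflow.
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor


def detect_de(image_path, stage1, stage2, box_size=90):
    img = Image.open(image_path).convert("RGB").resize((450, 300))
    x = to_tensor(img).unsqueeze(0)                        # (1, 3, 300, 450)
    with torch.no_grad():
        centers = stage1(x).view(4, 2)                     # four (cx, cy) box centers
        labels = []
        for cx, cy in centers.tolist():                    # a GUI would let the user adjust these
            half = box_size // 2
            left, top = int(cx) - half, int(cy) - half
            crop = img.crop((left, top, left + box_size, top + box_size))
            probs = stage2(to_tensor(crop).unsqueeze(0))[0]
            labels.append("P" if probs[1] > probs[0] else "N")  # index 1 assumed to be the DE class
    return labels                                          # e.g. ["N", "P", "N", "N"]
```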
Early-stage DE causes no symptoms and can therefore easily be overlooked during dental visits. However, its high likelihood of abrasion and fracture can lead to severe underlying problems. Abrasion or fracture that does not affect the pulp can be restored with satisfactory results, but irreversible pulp lesions usually occur early, which largely prevents young permanent premolars from completing their development and leads to an unfavorable prognosis. It is also clinically imperative for the orthodontist to plan premolar extraction cases so that the DE premolars, rather than the normal ones, are extracted. Therefore, it is both necessary and important for dentists to detect DE early in order to prevent its progression in time and to design optimal treatment schemes. To the best of our knowledge, our study is the first to present an effective AI-based approach to detect DE on orthodontic intraoral images. The results suggest that our AI model can be a useful tool for assisting DE detection with high accuracy, and that inexperienced dental interns can conveniently screen for DE in premolars using our tool. Our findings demonstrate that, with the assistance of our AI model, grassroots dentists or nonspecialists could achieve an average accuracy of 93.5% in detecting DE premolars. Our model demonstrated an overall diagnostic accuracy of 85.0%, a sensitivity of 0.880, an F1-score of 0.854, and an AUC of 0.930 when recognizing the premolars with DE in the testing dataset, indicating superior diagnostic outcomes.
However, a shortcoming of our tool is that its specificity (0.820) is lower than its sensitivity, indicating that while the tool effectively screens for potential DE cases, it may mistakenly diagnose healthy premolars as having DE. The tool’s assistance to the dental interns significantly improved their specificity, suggesting that DE misdiagnosis rates may be reduced in clinical applications. PPV and NPV are also critical indicators in clinical diagnosis. Given that DE is a dental deformity with relatively low prevalence, NPV is expected to be high while PPV remains relatively lower, consistent with our findings. With the aid of our tool, the PPV values for the dental interns also improved markedly, suggesting that our tool offers substantial clinical application value. In addition, to visualize how our tool’s decisions were made, we provided heatmaps of the attention of our BiStageNet in Fig. . The heatmap examples of automatic DE detection demonstrated that, without any cusp-level supervision, our algorithm successfully directed its attention to the tubercle sites on the occlusal surfaces of premolars, the characteristic structure of the DE deformity, supporting the reliability of our detection results. Finally, we built a DE detection application program on the basis of our CNN model, which can automatically select premolars and judge the existence of DE. In orthodontic clinics, DE is a relatively uncommon dental anomaly, making it crucial to prioritize diagnostic sensitivity to capture as many DE cases as possible. While maintaining high sensitivity, specificity must also be carefully considered to minimize misdiagnosis. The three dental interns in our analysis, whose performance mimicked the diagnostic behavior of less experienced dentists, exhibited high sensitivity but clinically unacceptable specificity, with intern 3’s specificity dropping below 80%. With the assistance of our DE diagnostic program, they maintained high sensitivity while improving specificity by 6–12%, markedly reducing the risk of clinical misdiagnosis and subsequent incorrect orthodontic decisions. The human-algorithm comparison test showed that our detection program is a valuable aid for preliminary DE screening in clinical practice. In addition to clinical reliability, our DE diagnostic tool is also highly user-friendly. As previously mentioned, dentists and patients only need to load intraoral images into the program and fine-tune the positions and sizes of the automatically generated premolar boxes, and our decision-making algorithm then evaluates the selected regions for DE, which greatly facilitates clinical diagnosis. Moreover, such a simple-to-use tool also lowers the barrier for nonspecialists to screen for premolar DE, which helps improve public oral health. Our model demonstrated a satisfactory premolar detection outcome, with a mean Dice coefficient of 0.961, indicating that the automatically detected area and the manually labeled premolar area overlapped almost completely. AI-based tooth recognition has previously been implemented in multiple studies, most of which were conducted using radiographs. Tuzoff et al. utilized Faster R-CNN to detect permanent tooth positions; trained on 1,352 panoramic radiographs, the model achieved a sensitivity of 0.9941 and a precision of 0.9945.
Another study, using the same type of algorithm for deciduous tooth recognition, reported a sensitivity and precision of 0.9804 and 0.9571, respectively. Only one study conducted landmark detection on intraoral occlusal images, where the VGG19 model demonstrated the best landmark detection capacity, with a mean error of 0.84 mm in the maxilla and 1.06 mm in the mandible. Our results also verified the outstanding tooth recognition capacity of VGGNet. Therefore, Faster R-CNN and VGGNet are promising automatic tooth recognition tools that can be utilized in future research. Several studies have developed AI models based on intraoral data, most of which aimed to distinguish various conditions of the teeth, including the presence of dental caries, of various restoration materials, and of malocclusion. Only one study focused on the status of soft tissue (automatic gingivitis detection). The limited number of studies suggests that intraoral images still need further exploration with AI approaches. As reported in other studies, the detection accuracy of dental diseases on photographs by AI-based models ranged widely from 64% (Angle malocclusion classification on intraoral dentition photographs) to 99.4% (gold restoration detection on intraoral single-tooth photographs), depending on the detection difficulty and the complexity of the information provided by the photographs. The morphology and color of accessory dental cusps are similar to those of normal cusps, since their histological composition is identical. Clinically, the accessory cusps are usually fractured or worn, at a reported rate of approximately 75%, so that teeth with DE are hard to differentiate in photographs from those with deep occlusal pits and fissures or with occlusal caries. Considering these difficulties, our diagnostic tool achieves very high accuracy in distinguishing DE teeth from patients’ image data, and our independently developed DE diagnostic tool can help dentists diagnose DE more precisely and plan treatment more effectively. Existing methods for disease detection using intraoral data often rely on segmentation tasks driven by deep learning, as demonstrated by a previous study that developed a segmentation-based approach to identify dental calculus, gingivitis, and dental caries from intraoral photographic images. While effective, these methods require pixel-level annotation, which is labor-intensive, time-consuming, and resource-intensive, posing challenges for scalability in practical applications. In contrast, our proposed method addresses these limitations by using bounding box annotations combined with classification-based approaches, eliminating the need for pixel-level detail. This reduces the annotation workload and associated costs, providing a more efficient and cost-effective solution for disease detection in intraoral images while maintaining high levels of accuracy and applicability in large-scale screening scenarios. However, our study also has some limitations. Firstly, the diagnostic accuracy for DE teeth needs to be further improved. More intraoral images are needed for model training, and transfer learning may also be an effective approach. Secondly, our model focused not only on the accessory dental cusps of DE premolars but, in some cases, also took the normal cusps into consideration, as shown in Fig. (e.g., tooth 44 and tooth 45).
To further optimize our model, we could apply supervised learning with annotations marked on the specific accessory cusps to narrow its range of attention. The detection of DE teeth is the starting point of a novel research direction in AI-based disease diagnosis, and we envisage that future studies can dig deeper into the automatic detection of DE by involving more tooth types rather than only premolars, and by classifying DE into several categories according to the position of the accessory cusps or their degree of fracture and abrasion, to better guide clinical decision making. Our DE diagnostic tool is the first step toward the automated diagnosis of multiple dental diseases; further research is needed to integrate algorithms for detecting various dental diseases from intraoral data into a single high-accuracy model for clinical use, simultaneously and comprehensively reflecting the status of both soft and hard tissues in one tool, which would greatly improve the diagnostic efficiency of dentists. AI is a powerful and promising tool for advancing public health in the future. When applied to public health, information security becomes essential. Clinical data contain significant amounts of private patient information; therefore, developers of future diagnostic tools must focus not only on algorithm optimization but also on approaches to legal, ethical, and cybersecurity challenges. To make AI algorithms truly beneficial for public health, robust laws against AI-based privacy invasion must be established, along with informed consent protocols for users and measures to prevent data leakage and misuse. It is hoped that, by simply pressing the camera shutter and uploading intraoral photographs, one can obtain a preliminary view of one’s own oral health status, which would be of great use in promoting public oral hygiene. In this study, we constructed a BiStageNet model and demonstrated that deep learning methods are capable of automatic premolar recognition and DE detection with high accuracy in intraoral photographs. Based on our custom-made CNN algorithms, we also developed an automatic DE detection platform that is applicable to both dentists and nonspecialists, with promising diagnostic results. CNNs are powerful tools for improving the early diagnosis rate of DE, diagnostic accuracy, and clinical work efficiency, and for further improving public oral health.
Dipterans Associated with a Decomposing Animal Carcass in a Rainforest Fragment in Brazil: Notes on the Early Arrival and Colonization by Necrophagous Species
c0c5f9be-9d21-48eb-ad75-e4a4700967b5
4015403
Pathology[mh]
The temporal pattern of arrival of necrophagous insects at a cadaver is a key feature in the estimation of the minimum post-mortem interval, which is the most widespread contribution of forensic entomology. Information on abiotic factors combined with the time interval taken by the larvae to reach each developmental stage can provide reliable estimates of the time elapsed between cadaver colonization by insects and the discovery of the body ( ; ; ; ). In forensic studies, decomposition is divided into stages, the number and duration of which vary according to the region, climate, and other environmental factors. The changes in a cadaver that occur immediately following death are often more rapid than those that take place later during the decomposition ( ). Therefore, in order to validate entomological evidence related to the period of insect activity, shorter time scales in field surveys of necrophagous insects are likely to increase the reliability of the estimates. Additionally, it is crucial to understand the dynamics of cadaver detection and colonization as soon as death occurs. It is a widely accepted assumption that dipteran species of the families Sarcophagidae (flesh flies) and Calliphoridae (blow flies) are able to reach cadavers within a few hours of death and are the first colonizers of a corpse ( ; ; ; ; , ). This ability has led to the more frequent use of sarcophagids and calliphorids as evidence in medico-criminal investigations ( ). However, references to the early arrival of necrophagous dipterans on a cadaver frequently seem to overlook species of other families, such as Piophilidae, Anthomyiidae, and Fanniidae. Moreover, field surveys based solely on the collection of adults may fail to detect whether the species actually colonize the corpse as a resource for larval development ( ). The development of forensic entomology in Brazil has been sustained by an increasing number of field surveys of necrophagous species, comprising ecosystems located mainly in the Amazon and in central and southern states of the country. Areas with high rates of homicides, such as cities located in the Northeastern region, have been neglected ( ). In this context, this study aimed at providing a preliminary checklist of forensically important dipteran species in a rainforest fragment in Northeastern Brazil. Two hypotheses were tested: 1) species of Calliphoridae and Sarcophagidae would be the first insects to locate a recently killed animal, and 2) larval competition during colonization would favor a limited number of species that would be able to complete their cycle on the carcass. To test these hypotheses, a pig carcass was used as a model to investigate which species would actually colonize the ephemeral resource, as compared to species that would be mostly limited to visiting the resource as adults. The study was carried out in Recife, one of Brazil's largest cities (population 3.7 million), located on the Northeastern coast. It ranks among the most violent cities in the country, with a rate of 57.9 homicides/100,000 inhabitants, and many of the homicides are unsolved ( ). The field study took place in a preserved rainforest fragment (Dois Irmaos State Park) in Recife (08° 07′ S; 34° 52′ W). The park has a total area of 388 ha, with an altitude ranging from 30 to 80 m a.s.l. The local climate is hot and humid, with mean rainfall ca. 2,500 mm/year, an average annual temperature ca. 25.6° C, and two well-defined seasons, namely dry (October-February) and rainy (March-September). 
Vegetation is classified as dense ombrophilous forest composed mainly of Fabaceae, Lauraceae, Moraceae, Sapotaceae, and Euphorbiaceae species ( ). The area was chosen because it has been used as a repository for the clandestine dumping of cadavers from homicides. A pig, Sus scrofa L. (Artiodactyla: Suidae) (ca. 15 kg), was used as the model. The pig was killed in loco with a gunshot to the occipital region, a procedure performed by experts in accordance with the Ethics Committee of the Federal University of Pernambuco. Immediately after death, the carcass was placed in a metal cage (0.9 m × 0.6 m × 0.5 m) to prevent disturbance by large scavengers. Around the cage, a metal frame (2 m high × 1 m long × 1 m wide) covered with a fine white mesh fabric was placed in order to trap insects that visited the carcass. A 30 cm gap was left between the bottom of the net and the soil, through which insects could gain access to the carcass. The field experiment took place in July 2007, in the rainy season. The average temperature throughout the experiment was 25.2°C, and the mean relative humidity was 84%. Death occurred on day 1 at 13:00. For the collection of early species, samples were taken at seven timepoints, referred to collectively hereafter as "immediately post-death": 5, 30, 60, 90, 120, 150, and 180 min post-mortem. At each of these timepoints, the adult flies trapped in the mesh structure were collected using an entomological net (20 cm diameter), sweeping for 5 min each time. To determine which species would continue to visit the resource, additional collections of dipteran adults on the carcass were performed at 24, 48, and 72 hr post-mortem using the same procedure. Collected insects were killed using ethyl acetate, mounted, and identified using taxonomic keys ( ; , ; ; ; ; ). All specimens were deposited in the Entomological Collection of the Universidade Federal de Pernambuco, Brazil. In order to collect larvae at the post-feeding stage, i.e., insects that had completed the larval stage on the carcass but were yet to pupate, a 60 cm × 30 cm × 15 cm plastic tray containing sawdust was placed under the cage, onto which the insects would fall, as they typically pupate in the soil. Starting on the fourth day post-mortem, the tray was removed daily until the 11th day, after which the tray was retrieved every 48 hr until the 17th day post-mortem. This schedule was based on previous observational studies indicating that the majority of pupation occurred in that time interval. All immature insects recovered from the tray on each day were placed in plastic containers (31 cm × 18 cm × 10 cm) covered with fine nylon mesh and containing a Petri dish with ca. 20 g of minced beef to guarantee that the larvae completed their development cycle. Rearing conditions in the glasshouse emulated field conditions (mean temperature: 27.8 ± 1.6°C; RH: 61.6 ± 9.8%; 12:12 L:D photoperiod). Insects were observed daily, and emerged adults were identified to the lowest taxonomic level. The frequency of occurrence of each species at each decomposition stage was calculated. Chi-square tests at the 5% significance level were performed to check for differences in the abundance of necrophagous species according to the stage of decomposition.
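As a minimal illustration of this chi-square comparison, the sketch below runs a goodness-of-fit test on counts of necrophagous specimens per decomposition stage. The counts are placeholders rather than the study's data, and the use of SciPy's chisquare with equal expected frequencies across the four stages (df = 3) is an assumption consistent with the degrees of freedom reported in the Results.

```python
# Placeholder example of the chi-square goodness-of-fit comparison described above.
from scipy.stats import chisquare

# Hypothetical counts of necrophagous specimens per decomposition stage
# (fresh, bloated, decay, dry) -- NOT the study's data.
counts = [40, 55, 170, 10]
chi2, p = chisquare(counts)   # expected: equal abundance across the 4 stages -> df = 3
print(f"chi-square = {chi2:.1f}, p = {p:.3g}")
```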
Insect species as early visitors In total, 153 insects from 14 families were collected in the first three hours after death ( ). This included species of Phoridae (24.2% of all adults), Sarcophagidae (18.3%), Piophilidae (10.5%), Calliphoridae (10.5%), Fanniidae (8.5%), Chloropidae (6.5%), Muscidae (4.6%), Dixidae (4.6%), and, in smaller proportions, Milichiidae, Drosophilidae, Anthomyiidae, Micropezidae, Ropalomeridae, and Neriidae. Sarcophagidae was the richest family, with eleven species, most of which belonged to the genus Oxysarcodexia ( ). Other necrophagous families were represented by fewer species: Anthomyiidae had three species, and Calliphoridae, Muscidae, Fanniidae, Phoridae, and Piophilidae each had two species. Megaselia scalaris Loew (Phoridae) was the most abundant species (19.6% of all specimens) in the period immediately after death. Only nine species with no previous record of necrophagy were registered in this time interval ( ). Twenty-five species from 12 families were collected within the first 30 minutes after death ( ). In fact, 16 species were collected as early as five minutes post-death: Hystricocnema plinthopyga (Wiedemann), Oxysarcodexia modesta Lopes, O. fluminensis Lopes, O. riograndensis Lopes, O. intona (Curran and Walley), O. avuncula (Lopes), O. excise (Lopes), and Peckia (Squamatodes) ingens (Walker) (Sarcophagidae); Hemilucilia segmentaria (F.) and H. semidiaphana (Rondani) (Calliphoridae); Morellia humeralis (Stein) (Muscidae); Piophila casei L. and Piophilidae sp. (Piophilidae); Fannia obscurinervis (Stein) and Fannia sp. 1 (Fanniidae); and Anthomyia punctipennis (Wiedemann) (Anthomyiidae). From that moment on, necrophagous species continued to visit the carcass for at least 72 hr post-death, while the diversity of non-necrophagous (predatory, accidental, and omnivorous) species diminished over time ( ). To illustrate this, at three days post-death the numbers of families and species associated with the carcass were reduced by 42.9% and 15.2%, respectively, compared with immediately post-death ( ). The frequency of necrophagy among the insect species registered on the carcass increased throughout decomposition, as the percentage of necrophagous species rose from 72.7% immediately post-death to 89.3% at 72 hr post-death ( ). Insect species as colonizers A total of 18,469 adults emerged from the samples collected at the post-feeding stage. Adults began to emerge from the fourth day post-death (when the carcass was at the bloated stage) until skeletonization of the carcass, which occurred on the 17th day. Adults from 11 species belonging to six families emerged; the majority of individuals (61.6% of all adults) corresponded to Calliphoridae, followed by specimens of Phoridae (25.6%) and Muscidae (11.6% of the emerged adults) ( ). Two Calliphoridae species were dominant in terms of abundance: H. segmentaria (34.4% of emerged adults) and H. semidiaphana (27.2%). Ophyra chalcogaster (Wiedemann) (Muscidae) and M. scalaris (Phoridae) also composed a significant proportion of the emerged adults. Decomposition occurred quickly due to a combination of biotic and abiotic factors, which included the action of maggots, whose population reached thousands of individuals, and environmental factors such as rainfall and elevated temperature. The stages of decomposition were characterized as follows: fresh stage (0–48 hr post-death), bloated (48–96 hr), decay (96–120 hr), and dry stage (120–ca. 410 hr post-death). After that period, virtually no insects were found on the carcass.
The number of emerged adults differed according to the decomposition stage at which the larvae were recovered: 15.5% of the adults emerged from larvae collected at the bloated stage, 18.4% from larvae collected at the decay stage, and 66.1% from larvae collected at the dry stage, and this difference was statistically significant (χ² = 5,757; p < 0.0001; df = 3). The diversity of emerged adults differed little according to the stage at which the larvae were retrieved, with the exception of Piophila sp., whose larvae were collected only at the dry stage. The temporal pattern of emergence varied. While the majority of H. semidiaphana and M. scalaris adults emerged from larvae collected at the dry stage, the numbers of O. chalcogaster larvae retrieved decreased as decomposition progressed ( ). Regarding the larvae reared in the laboratory, the minimum time to adult emergence was as short as four days after collection for H. semidiaphana and H. segmentaria and as long as 14 days for M. scalaris.
When conducting field surveys of necrophagous species on animal carcasses, the first hours post-death are critically important for the establishment of dipteran populations, as not all species continue to exploit the cadaver throughout its decomposition. The presence of several non-necrophagous species at early stages post-death confirms the notion that a corpse is exploited not only by necrophagous species but also by herbivorous, predatory, and omnivorous species that are attracted by the necrophagous fauna or use the resource as a complementary source of food or as a temporary habitat ( ; ). The diversity of feeding habits in insect assemblages has been consistently found in other field studies performed in several countries, such as Brazil ( ), the United States ( ), South Africa ( ), Spain ( ), and Colombia ( ). The amount of time after death affects the structure of the assemblage of insects attracted to a carcass ( ), a feature that has direct implications for the accuracy of the biological information available to the forensic entomologist. In this study, species from seven forensically important families with varying degrees of specialization in necrophagy were recorded minutes after death: Calliphoridae, Muscidae, Sarcophagidae, Phoridae, Piophilidae, Anthomyiidae, and Fanniidae. Numerous references endorse Calliphoridae and Sarcophagidae as the first arthropods to locate and colonize a cadaver ( ; ; ; , ).
For example, Reibe and Madea ( ) reported that egg batches of Lucilia caesar (Calliphoridae) were detected on the carcass just two hours after its exposure in the field. The data presented here confirm the ability of calliphorids and sarcophagids to quickly locate dead animal matter, but reveal that M. scalaris (Phoridae), P. casei (Piophilidae), F. obscurinervis (Fanniidae), and M. humeralis (Muscidae), among others, can reach the carcass as quickly as five minutes after death. This is, to our knowledge, the earliest documented arrival of these and other species ( ) on a carcass in a field experimental setting. Piophilids, for example, have been largely associated with the late stages post-death and are commonly found in both urban and rural environments ( ). Although the forensic relevance of insect species has been largely related to the recovery of larvae from the corpse, the presence of adult phorids and piophilids as forensic evidence at the fresh and bloated stages of decomposition should not be dismissed. While the forensic relevance of some of the early species registered here has been corroborated, namely Chrysomya species ( ), M. scalaris ( ), P. casei ( ), and Fannia species ( ), species of the genus Oxysarcodexia are comparatively less studied. The genus is characteristic of the Neotropical region, and the greatest number of species is found in Brazil, where they develop preferentially in feces ( ). Recently, O. riograndensis was found colonizing cadavers at the Institute of Legal Medicine in Recife ( ), which encourages further studies to assess their forensic importance. The arrival of the black soldier fly, Hermetia illucens (Stratiomyidae), at later stages of decomposition was previously demonstrated by Pujol-Luz et al. ( ), who calculated the developmental time of the larvae to estimate the time of death in a criminal case in Brazil. Perhaps the best way to validate the forensic relevance of an insect species is to assess whether it can effectively complete its larval cycle using the corpse as a substrate. In the animal model in our study, an initial assemblage composed of species with diverse feeding habits changed into a more necrophagy-oriented community. This was evident from the first days post-death and, naturally, reached maximum specialization when the species collected at the post-feeding stage were taken into consideration. Only a third of the necrophagous species collected as adults effectively completed the cycle to the adult stage. It is likely that several fly species began their development on the carcass, but the direct effects of interspecific competition resulted in a lower number of species being able to successfully complete their development on the resource. Physiological, morphological, and behavioral characteristics of the larvae of different species determine their strategies for resource exploitation, which in turn generate different patterns in the emerged populations ( ). Even considering that the collections were based on a single carcass, the high number of colonizing species (11) in a forest fragment located in an urban area may be useful for extrapolation to human cadavers, as pigs have been systematically considered the best animal models to mimic human decomposition in forensic entomology studies ( ). Three families stood out in terms of constancy and abundance: Calliphoridae, Muscidae, and Phoridae. Two Calliphoridae species, H. segmentaria and H.
semidiaphana, besides having been recorded immediately post-death, were the dominant species among the emerged adults. The genus Hemilucilia comprises six species distributed in several countries in Central and South America, four of which are found in Brazil, especially in forested areas ( ). Surveys performed in southern Brazil ( ) demonstrated that their abundance and intimate association with human cadavers encourage forensic entomologists to consider them as candidates for use in medico-legal investigations. Surprisingly, no adults of Chrysomya species emerged from the carcass, despite numerous references to their expanding distribution in Northeast Brazil ( ) and the recent record of C. megacephala on human cadavers in Recife ( ). This could be a result of direct competition with native Hemilucilia species, which still seem to be more successful in locating and colonizing carcasses in forested environments. Ophyra chalcogaster was a dominant species found mostly at early stages of decomposition. Ophyra species (Muscidae) have been associated with both cadavers ( ) and carcasses ( ), especially during active decay stages. Phoridae species were also among the most abundant emerged adults, although a recent study performed in Malaysia concluded that species of this family tend to be dominant when corpses are located indoors ( ). Despite the richness of species reported throughout the first days post-death, Sarcophagidae was classified as an accessory group, as its species represented only 0.18% of all emerged adults. The high diversity associated with low abundance of Sarcophagidae observed in the rainforest fragment of Dois Irmaos State Park has also been found in other field studies ( ). The other dominant families among the emerged adults, Muscidae and Phoridae, are also commonly reported at larval stages in field experiments on forensic entomology ( ). In tropical regions with high rates of unsolved homicide, such as Northeast Brazil, forensic scientists should be aware that decomposition-related processes occur at a fast rate, increasing the difficulty of establishing definite chronological stages and, consequently, the insect community associated with them. This reinforces the need for immediate involvement of the forensic scientist in the search for entomological evidence, preferably at the larval stage, because a shorter window for data gathering is available. Despite the limitations of using a single carcass as a model, due to logistical and ethical constraints, this study provides the first evidence of at least 10 species completing their larval cycle on carrion in rainforest fragments in Northeastern Brazil, including H. segmentaria, H. semidiaphana, O. chalcogaster, F. obscurinervis, and M. scalaris. The Neotropical Hemilucilia species in particular deserve further study as useful forensic indicators, especially considering their recent use in the estimation of the minimum post-mortem interval in Brazil ( ). Because of the overlap in the temporal occupation by some Diptera species, only detailed bionomic studies can lend support to their use as reliable indicators of the period of insect activity on the corpse. Finally, the common assumption that Sarcophagidae and Calliphoridae are the sole visitors at early stages post-death should be regarded with caution.
Unlocking breast cancer in Brazilian public health system: Using tissue microarray for accurate immunohistochemical evaluation with limitations in subtyping
c4a748a7-ce8f-4908-8a2f-0bb1be246689
11694303
Anatomy[mh]
Breast cancer (BC) is the most common cancer in women worldwide, with 70% of deaths from the disease occurring in low- and middle-income countries, such as Brazil. The National Cancer Institute estimates that approximately 18,000 Brazilian women die of BC every year. Around 75% of the population has no private health insurance and relies exclusively on the Unified Health System (SUS), the largest public health system in the world, which provides free healthcare to all Brazilians, regardless of their socioeconomic status. BC is more frequently diagnosed in its symptomatic form and at more advanced stages in SUS than in private health systems or high-income countries. Brazilian public hospitals face enormous pressure to optimize healthcare services and reduce costs. The Hospital de Clínicas de Porto Alegre, a tertiary public hospital in the South of Brazil, processes approximately 540 immunohistochemical (IHC) tests of BC biomarkers per year at a cost of around 31,000 USD (154,400.00 BRL—Brazilian reais). This is the most common and expensive individual test offered in our laboratory and includes analysis of the expression of estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2), and the proliferation marker Ki-67. These biomarkers are combined for BC subtyping into luminal (A and B), HER2-positive, and triple-negative tumors and guide systemic therapy. Proposing strategies to increase access to BC diagnosis and treatment is a priority in the Brazilian public health context. The IHC tests of BC biomarkers are traditionally done on surgical specimens or biopsies on whole individual glass slides. The tissue microarray (TMA) approach, which combines multiple cylindrical fragments of tumor tissue from different patients on the same glass slide, has been extensively used in pathology research. TMA saves working time, standardizes reactions, allows for comparative interpretation of cases, and reduces the total cost of tissue analyses. However, the use of TMA in clinical practice remains controversial worldwide, and its feasibility and cost-benefit have never been evaluated in the Brazilian public health system before. BC was chosen as the prototype for this type of study due to its high prevalence both regionally and throughout the country. This study aimed to assess the diagnostic accuracy of TMA as a cost-effective alternative for evaluating the IHC status of ER, PR, HER2, and Ki-67 and for BC subtyping, and to maximize its potential use in clinical practice. Patients The study is a retrospective cohort analysis that evaluates the diagnostic accuracy of TMA in BC IHC evaluation. Two hundred forty-two women diagnosed with invasive BC at Hospital de Clínicas de Porto Alegre between 2010 and 2015 were consecutively included in the study. The patient eligibility criteria were a BC diagnosis and a previous IHC evaluation for ER, PR, HER2, and Ki-67 available in the medical records. Formalin-fixed tissue blocks from all patients were retrieved from the Laboratory of Pathology archive in accordance with ethical guidelines. We consistently followed the established preanalytical handling guidelines of the College of American Pathologists. The clinical data and the original IHC scores of ER, PR, HER2, and Ki-67 were obtained from the anatomopathological reports through analyses of the whole slide and medical records. The average age was 58.2 years (range 24–92 years), and invasive carcinoma of the non-special type was the most frequent histopathological type of tumor.
Pathological staging was determined using the AJCC TNM system and was distributed as follows: 131 patients in stage I, 57 in stage II, 42 in stage III, and 12 in stage IV. In 237 of the 242 cases, the IHC scores were fully available in the pathology report, making it possible to define the IHC subtype (BC subtype): 101 tumors were classified as luminal A (ER + and/or PR + , HER2 − and Ki-67 ⩽20%), 87 as luminal B (ER + and/or PR + , and HER2 + or Ki-67 >20%), 19 as HER2 positive (ER − , PR − and HER2 + ) and 30 as triple negative (ER − , PR − , and HER2 − ). Cases with tumor areas smaller than 2 cm, cases treated with neoadjuvant chemotherapy prior to surgical resection, and cases without IHC evaluation for the four markers were excluded. Only excisional samples were used; core biopsies were not employed, in order to preserve the patients' archived tissue and mitigate the risk of material depletion during TMA assembly. TMA assembly and immunohistochemistry The most representative area of the tumor was carefully circled by an expert breast pathologist (MSG) on the hematoxylin-eosin-stained slide in areas with high tumor cellularity. For TMA assembly, we used the manual TMA T-Sue system (Simport ® Scientific, Beloeil, Canada) to extract two 2.0 mm cores from each tumor using the principles first described by Kononen et al. Briefly, the procedures began with the preparation of the TMA grid to correctly identify the position of each sample and the organization of the donor blocks. Then, two cylindrical tissue cores were extracted from the donor block with a 2.00 mm punch needle, no more than 3 mm deep, and precisely placed in the recipient block, which was previously prepared using the M473-60 mold. This mold has a capacity for 60 cores, distributed over 6 rows and 10 columns, allowing the inclusion of 24 duplicate tumors/cases per TMA. For guidance in reading the TMA, a core containing placental tissue was included in each TMA block. The cores were fixed with light pressure followed by brief heating, cooled overnight, and sectioned (4 µm). The sections were then mounted on slides for H&E staining and analysis. The total tumor area analyzed is 3.14 mm² for each 2.0 mm core. The minimum number of tumor cells sufficient for scoring was ⩾100 per core. For immunohistochemistry, the TMA blocks were cut into 3 µm sections and placed on glass slides with positive and negative controls. The sections were processed on Ventana automation equipment (BenchMark AutoStainer; Ventana Medical Systems, Tucson, AZ, USA) using the following antibodies: ER (clone SP1; Ventana, Tucson, AZ, USA), PR (clone 1E2; Ventana Medical Systems), HER2 (clone 4B5; Ventana Medical Systems) and Ki-67 (clone 30-9; Ventana Medical Systems). This immunostaining method is the same one used in the laboratory's routine work, with quality attested by the Joint Commission International Accreditation Seal in 2017. Microscopic analysis of TMA TMA consolidates multiple tissue samples into a single slide for simultaneous analysis. In contrast, in traditional IHC, a whole section of the tumor tissue is analyzed on an individual slide. The reliability of TMA microscopic analysis depended on the quality of the TMA, the alignment of the cores, and the pathologist's ability to orient themselves and identify precise samples according to the grid.
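To make the subtyping rules quoted above concrete, the following minimal Python sketch encodes the stated decision logic (hormone-receptor positivity, HER2 status, and the 20% Ki-67 cut-off). The function name, inputs, and example values are illustrative only and are not part of the study's workflow.

```python
def ihc_subtype(er_pos: bool, pr_pos: bool, her2_pos: bool, ki67_pct: float) -> str:
    """Map the four IHC results to a breast-cancer subtype.

    Decision rules as stated in the text:
      luminal A:       ER+ and/or PR+, HER2-, Ki-67 <= 20%
      luminal B:       ER+ and/or PR+, and HER2+ or Ki-67 > 20%
      HER2-positive:   ER-, PR-, HER2+
      triple negative: ER-, PR-, HER2-
    """
    hormone_pos = er_pos or pr_pos
    if hormone_pos:
        if her2_pos or ki67_pct > 20:
            return "luminal B"
        return "luminal A"
    return "HER2-positive" if her2_pos else "triple negative"

# Example: ER+, PR-, HER2-, Ki-67 = 15% -> luminal A
print(ihc_subtype(True, False, False, 15.0))
```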
Then, following an initial overall evaluation of TMA quality and positioning, the pathologist assigned a core-specific IHC score to each tissue core by traversing the slide in an up-and-down motion. The same immunostaining evaluation criteria used for the whole section were applied to the TMA. The evaluation of ER, PR, and HER2 expression was carried out in accordance with the guidelines of the American Society of Clinical Oncology. Nuclear staining was considered positive for ER and/or PR when detected in at least 1% of tumor cells at any intensity. For HER2, staining in the membranes of tumor cells was classified as follows: 0, when no tumor cells showed HER2-positive staining or incomplete and weakly perceptible membrane staining was seen in ⩽10% of tumor cells; 1+, incomplete and weakly perceptible staining in ⩾10% of tumor cells; 2+, weak to moderate complete staining observed in ⩾10% of tumor cells; and 3+, circumferential and strong complete staining in ⩾10% of tumor cells. Cases with a score of 3+ were considered HER2-positive, cases with a score of 2+ were considered indeterminate, and all other cases (0 or 1+) were considered HER2-negative. For Ki-67, the IHC score was determined using the St. Gallen International Expert Consensus. Tumor cells were evaluated for Ki-67 and scored as the percentage of positively stained nuclei. A cut-off point of >20% was considered high (“positive”) for Ki-67, while values ⩽20% were considered low (“negative”). The TMA slides were read by a breast specialist pathologist (MSG), who read the first core of each case. If it was impossible to read the first core due to selection errors or loss of material during the procedure, the second core was analyzed. Informative cores were those that allowed the pathologist to interpret and determine the IHC score successfully in the TMA. When cores were missing or no tumor was present, they were considered non-informative. A second breast specialist pathologist (DMU) evaluated the TMA slides independently to assess agreement between observers. To assess intratumoral heterogeneity, the two cores from the same case introduced into the TMA were both evaluated in a randomly selected subset of cases ( n = 12). The combined analysis of the four biomarker readings on the TMA described above was used to determine the BC subtype in each case. The IHC scores for ER, PR, HER2, and Ki-67 and the BC subtype resulting from the TMA reading were compared to those obtained in the original pathology report for the respective case by consulting the medical records. In cases of disagreement, the original slide of the case was re-analyzed by the leading pathologist (MSG) to determine the final IHC score. Statistical analysis The sample size was calculated using data from Hospital de Clínicas de Porto Alegre, considering a proportion ( P ) of positivity of 60% for ER/PR, 20% for HER2-enriched, and 20% for triple-negative BC. The estimation precision ( D ) was based on a 10% confidence-interval width, with a semi-amplitude of 0.05 (0.05 above or 0.05 below) as the maximum acceptable error. The confidence level used was 95% ( Z = 1.96, for α = 0.05). By applying the formula N = Z² × P(1 − P)/D², a minimum sample size of 96 was obtained. All statistical analyses were carried out using SPSS version 18 (SPSS IBM, New York, NY, USA). The agreement between the IHC scores in the TMA versus the medical records, and between the different observers, was determined by calculating Cohen's kappa.
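The sample-size formula given above can be expressed as a small helper function; the calls below are illustrative only, since the exact combination of P and D used by the authors to arrive at N = 96 is not fully spelled out in the text.

```python
import math

def sample_size(p: float, d: float, z: float = 1.96) -> int:
    """N = Z^2 * P * (1 - P) / D^2, rounded up to the next whole subject."""
    return math.ceil(z * z * p * (1.0 - p) / (d * d))

# Illustrative calls with proportions mentioned in the text and two precisions
for p in (0.6, 0.2):
    for d in (0.05, 0.10):
        print(f"P={p}, D={d}: N={sample_size(p, d)}")
```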
Sensitivity, specificity, disease prevalence, positive and negative predictive values, and accuracy are expressed as percentages with Clopper-Pearson confidence intervals. p -values of less than 0.05 were considered statistically significant. We followed the STARD 2015 reporting guidelines when preparing our manuscript and submitted the completed checklist as Supplemental Material.
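As a hedged sketch of the two statistics named above, the snippet below computes Cohen's kappa from a square agreement table and an exact Clopper-Pearson interval for a proportion (using SciPy's beta quantiles); all counts shown are hypothetical, not the study's data.

```python
from scipy.stats import beta

def cohens_kappa(table):
    """Cohen's kappa for a square agreement table (rows: rater 1, columns: rater 2)."""
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / n
    expected = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (observed - expected) / (1.0 - expected)

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for k successes in n trials."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

# Hypothetical agreement table and proportion, for illustration only
print(round(cohens_kappa([[80, 5], [7, 60]]), 2))  # two-rater positive/negative calls
print(clopper_pearson(93, 100))                    # e.g. 93 concordant scores out of 100
```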
TMA performance In order to incorporate the 242 BC cases in duplicate, we constructed 10 TMA blocks, each containing 2 cores of 2.0 mm per case. Each BC case contributes 4 core readings (one for each antibody: ER, PR, HER2, Ki-67), resulting in a total of 968 potential IHC scores. These 968 scores represent the total number of potential cores to be scored across the 10 TMA slides (242 cases × 4 markers). Regarding the overall quality of the TMA, the immunostaining on the TMA slides showed consistent results, with no discrepancies between central and peripheral cores. The proper alignment of the cores, the inclusion of positive and negative controls, and the inclusion of an orientation core (placental tissue) ensured an effective and safe reading by the pathologists. In the IHC evaluation of the TMA slides, out of the total 968 cores, 97% (940) provided informative results, showing high immunostaining quality and sufficient tumor cellularity (>100 tumor nuclei per core) for adequate scoring. In 91% of cases, the reading of the first core of the duplicate was sufficient to determine the IHC score; however, in 79 cases, the second core had to be assessed to complete the analysis, highlighting the importance of including each tumor in duplicate in the TMAs. Uninformative cores were minimal at 2.9%, primarily due to errors in tumor area selection where both cores lacked tumor tissue. Loss of both cores during processing occurred in only 1% of cases. Importantly, there were no differences observed in the quality of TMA slides stained with different antibodies. Inter-examiner variability and intratumoral heterogeneity For all the antibodies evaluated, there was almost perfect and statistically significant agreement in the IHC scores determined from the TMAs by two different pathologists. Kappa values ranged from 0.85 for HER2 to 0.91 for ER, classified as “almost perfect” by Cohen's criteria.
With regard to intratumoral heterogeneity, the agreement between the IHC scores assigned to the two cores from the same case included in the TMA varied by antibody. For ER and HER2, agreement was almost perfect (100%), with kappa values of 1.0 for both markers. For PR and Ki-67, there was less agreement, classified as moderate for PR ( k = 0.47) and substantial for Ki-67 ( k = 0.68). Among the discordant cases, two PR-positive cases in the first core were assessed as negative in the second, and two Ki-67-high cases in the first core were classified as low in the second. Comparison of TMA results versus original report Overall, there was high agreement between the IHC scores obtained in TMA cores and those in the original report, based on the evaluation of the whole section. In the first analysis, 828 of the 940 (88%) IHC scores were concordant, and 112 were discordant. For the discordant cases, the original slide containing the whole section was reviewed by the study's leading pathologist (MSG), who then reissued the final IHC score. Forty-three IHC scores with initially discordant results were considered concordant after the whole-section review using the same immunostaining interpretation criteria. Thus, final agreement between the TMA and the original report was observed in 871 of the 940 IHC scores (93%) evaluated, classified as almost perfect and statistically significant ( k = 0.81, p < 0.001). Some differences could be observed when the concordance rates were compared among the antibodies. There was an almost perfect agreement for ER and PR, while for HER2 and Ki-67 it was slightly lower and classified as substantial. The final comparative analysis of the 69 discordant IHC scores showed that, in the evaluation of ER and PR, there was a lower and similar frequency of false-positive and false-negative cases in the TMA. For HER2 and Ki-67, there were more discordant cases, with a higher frequency of false negatives (4.5% and 6.7%, respectively) than false positives in the TMA. The most significant discrepancy in results was observed for Ki-67, where 24 of the 235 IHC scores were discordant in the TMA compared to the original report. Working time and cost analysis A comparative analysis of the time and cost spent on the technical procedures and evaluation of results using the TMA versus the traditional procedure was performed. The TMA approach reduced the time of IHC evaluation (for the four markers) from 8.5 to 0.5 h per case. This estimated time included glass slide preparation, TMA assembly, IHC staining, and the pathologist's IHC scoring of individual or TMA glass slides. Considering current values, the cost of the IHC panel with four biomarkers (including labor and materials) is $53.61 per case, compared to $4.58 per case with the TMA approach, a reduction of approximately 11 times. In a TMA of 24 cases, the apparent saving is $1146.52 in total, or $47.77 per case. BC subtyping Defining the BC subtype is of great clinical relevance for therapeutic management and disease outcomes. Overall, BC subtyping was possible in 97% (237/242) of the cases using the traditional method compared with 89% (217/242) using TMA. In 20 cases (8.4%), the IHC subtype could not be determined due to failure to read the IHC score on the TMA for one or more of the biomarkers analyzed. Among the 217 remaining cases, there was agreement in the BC subtype in 162 (75%) by the two methods. A detailed analysis of the sensitivity, specificity, and overall accuracy of TMA in BC subtyping was performed.
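A hedged illustration of how such per-subtype sensitivity, specificity, and overall accuracy can be derived from paired whole-section (reference) and TMA (test) subtype calls is sketched below; the labels are toy data, not the study's cases.

```python
def binary_metrics(pred, truth, positive):
    """One-vs-rest sensitivity, specificity and accuracy for one subtype label."""
    tp = sum(p == positive and t == positive for p, t in zip(pred, truth))
    tn = sum(p != positive and t != positive for p, t in zip(pred, truth))
    fp = sum(p == positive and t != positive for p, t in zip(pred, truth))
    fn = sum(p != positive and t == positive for p, t in zip(pred, truth))
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "accuracy": (tp + tn) / len(truth),
    }

# Toy labels only -- not the study's data
truth = ["luminal A", "luminal B", "HER2", "triple negative", "luminal A"]
pred  = ["luminal A", "luminal A", "HER2", "triple negative", "luminal A"]
print(binary_metrics(pred, truth, "luminal B"))
```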
Among the 55 discordant cases, 41 (74%) were luminal tumors incorrectly classified as luminal A or B, 5 were HER2 tumors incorrectly classified as luminal A, and 4 were triple-negative tumors incorrectly classified as luminal A, luminal B, or HER2 subtypes using TMA.
In the present study, we propose using a TMA constructed with two 2.0 mm diameter cores of tumor tissue as an alternative to the traditional procedure for the IHC evaluation of ER, PR, HER2, and Ki-67 in BC. The study results suggest that TMA is a fast, highly accurate, and cost-effective method for testing individual BC biomarkers. However, based on the combined analysis of the four antibodies, we do not recommend using TMA with two cores for BC subtyping unless a reduction in costs is necessary to continue testing patients and providing them with treatment. Regarding TMA feasibility and overall performance, we observed that, after an initial and time-consuming period of training for the technical staff, high-quality TMAs were constructed in our laboratory in a satisfactory way. There was a high retention rate of informative cores in the TMAs, with only 2.9% lost, similar to that reported in a previous study. The absence of a tumor was the main cause of non-informative cores, probably due to an error in selecting the area to be punctured in the original block, with the capture of more peripheral cores where the tumor may not be represented. This finding reflects the need for attention and adequate training in the tumor selection stage.
The loss of both cores during the IHC process occurred in only 1% of cases, a frequency similar to that observed in previous studies, and lower than the 10% loss reported by Visser et al., who used 0.6 mm cores. Thus, our practice of using 2.0 mm cores in duplicate seems ideal to avoid the need to restain the entire slide when non-informative IHC scores cannot be analyzed in the TMA. In the microscopic TMA analyses, when the 4 antibodies were analyzed individually (core-by-core), the comparison of 940 IHC scores in the 242 cases showed a high overall agreement (93%, almost perfect) between the TMA results and the original report. In general, an accuracy rate of 90% or above is often deemed high and acceptable for clinical implementation. Many widely accepted diagnostic tests, such as mammography for BC screening, often have sensitivity and specificity rates in the range of 80%–90%. In summary, a 93% accuracy rate is considered high and reliable for clinical practice, meeting or exceeding the standards of well-established diagnostic methods. However, as described by other authors, our study confirms that TMA performance is not the same for all antibodies. The concordance of IHC scores was higher for ER and PR and lower for HER2 and Ki-67. Our hypothesis to explain these differences is mainly based on intratumoral heterogeneity, which has also been reported as a limitation of the use of TMA in routine IHC evaluation. To investigate this hypothesis, we performed an intratumoral heterogeneity analysis and detected a high agreement between the IHC scores obtained by comparing the two cores of the same case for the ER and HER2 markers, and a slightly lower agreement for PR and Ki-67. We therefore suggest that intratumoral heterogeneity partially explains the occurrence of false-negative and false-positive IHC scores observed in our study (4%, 5%, 10%, and 10% for ER, PR, HER2, and Ki-67, respectively). These results align with previous studies, where discordant results in 2%, 7%, and 8% of cases for ER, PR, and HER2, respectively, and 18% for Ki-67 were also associated with intratumoral heterogeneity. It is well known that increasing the number of cores for each case in the TMA to cover a larger area of the tumor may reduce its impact. Taken together, our results indicate that, especially for HER2 and Ki-67, the addition of more than two cores (2.0 mm each) per case in the TMA or the use of whole-slide staining for IHC analyses should be considered to decrease the chance of discrepant results. It was expected that the TMA approach would drastically reduce the working time and the cost of evaluating BC markers in Brazilian women's healthcare. Indeed, this is the first study to detail potential savings related to implementing TMA technology in Brazil, specifically inside the public health system (SUS). Importantly, we showed a reduction of 17-fold in time and 11-fold in cost of an individual BC IHC scoring, considering labor time and direct and indirect costs. Given our current demand of 540 requests per year, TMA would allow us to save 2,500,000 USD per year in BC diagnosis, a reduction of 91% in the amount originally spent on this test in our hospital. It is important to remember that cost-effectiveness is directly linked to the volume of tests performed in each laboratory. Thus, the time spent gathering sufficient cases to fill the TMA should be considered to avoid delays in the release of results.
An applicable alternative would be to use TMAs with a smaller number of cases (12, 24, or 36 cores per TMA), which could be produced weekly. “Urgent” cases would be processed immediately using the traditional method and reported in less than 48 h. Even so, TMA may not be a viable method for laboratories with a low volume of tests. However, to properly decide on implementing a new technology as an alternative method in clinical practice, it was crucial to know the accuracy of TMA in predicting the BC subtype through the combined analyses of the four BC markers. Based on our results, two arguments demonstrate that TMA and whole-section IHC analyses are not equivalent in their potential for predicting the BC subtype: (1) Accuracy and reliability concerns: the results indicate that TMA yielded a lower rate of successful BC subtyping compared to the traditional method. While the traditional method achieved BC subtyping in 97% of cases, TMA only achieved it in 89% of cases. This discrepancy in success rates suggests that TMA may be less accurate or reliable in determining BC subtypes. Our findings highlight the variability in TMA accuracy across BC subtypes, with notable differences in sensitivity, specificity, and overall accuracy. Among luminal A cases, TMA demonstrated the highest agreement rate, but it showed relatively lower accuracy in identifying luminal B cases. In the same way, TMA performance was better in identifying triple-negative cases than HER2-positive ones. These results highlight the limitations of TMA in accurately capturing the heterogeneity of BC in each individual marker, which can potentially be amplified when the markers are combined to predict the BC subtype, with serious implications for treatment decisions and patient outcomes. (2) Technical challenges and limitations: the inability to determine the BC subtype in 8.4% of cases due to failure to read the IHC score on the TMA indicates technical challenges associated with this method. Despite the high number of informative cores in our TMAs, issues such as inadequate tissue sampling or technical errors during slide preparation, staining, or interpretation significantly impacted BC subtyping in those cases. The entire IHC process may need to be redone, potentially resulting in significant financial and time losses. When we focused on luminal tumors, the predominant subtype in the Brazilian population, we observed that they were frequently misclassified using TMA, with 74% of the discordant cases incorrectly classified as luminal A or B. Correct Ki-67 IHC scoring is crucial for distinguishing between luminal A and B tumors, and the misclassification of Ki-67 as “low” (⩽20%) or “high” (>20%) can be the cause of the higher rate of false-positive or false-negative results for the Ki-67 marker and, consequently, of the elevated rate of luminal B cases incorrectly subtyped in TMA (~30% in our study). Our study aligns with a previous one showing a high discordance rate of 38% in Ki-67 scoring in TMAs using the same cut-off of ⩾20%. Our data reinforce the existence of possible reproducibility flaws in the Ki-67 evaluation in TMAs depending on the Ki-67 cut-off point applied in the analyses. Among the HER2 BC group, despite the high specificity, TMA failed to detect 5 of 13 HER2-positive cases, representing the lowest sensitivity (58%) compared to the other subtypes. This result differs from the 98% sensitivity detected previously in a similar study that recommends TMA for HER2 subtyping, a rate that we cannot confirm in our study.
This could be due to differences in the HER2 scoring methods used in the two studies and to the composition of our patient population, which included all BC subtypes, and needs further investigation. Detecting HER2-positive BC accurately is crucial because it significantly impacts treatment decisions and patient outcomes. These tumors tend to be more aggressive than HER2-negative BC, and they require targeted therapy with drugs like trastuzumab or other HER2-targeted agents. Our data therefore support previous findings that TMA-based IHC results should be used with caution in BC subtype classification, especially when distinguishing luminal A from luminal B and when interpreting findings for HER2-enriched cancers. Finally, this study has some limitations that should be acknowledged. Technical challenges, such as misclassification of Ki-67 scores and core selection errors, may have contributed to false-negative and false-positive results. Additionally, our findings are based on a single institution's patient population, which restricts their applicability to broader contexts. Future research should involve larger, multi-center cohorts to enhance the reliability of TMA in BC subtyping in clinical practice. While TMA offers a fast and cost-effective method for testing the individual ER, PR, HER2, and Ki-67 biomarkers in BC, caution is needed when using it for BC subtyping. Challenges include tissue loss during construction and varying performance across markers due to tumor heterogeneity. TMAs perform well in identifying certain BC subtypes, like luminal A and triple-negative, but show less reliability in classifying luminal B tumors. Of concern is their lower sensitivity in detecting HER2-positive BC, which impacts treatment decisions. Despite the benefits in efficiency and cost, careful consideration of these limitations is crucial in clinical practice, and further research is required to optimize TMA use in BC diagnosis and subtyping. Supplemental material for this article (sj-docx-1-whe-10.1177_17455057241304654 and sj-docx-2-whe-10.1177_17455057241304654) is available online.
A cross-sectional study on occupational hygiene in biowaste plants
3b42cd63-d8b0-4bb4-8408-93569a4a3b8f
11586275
Microbiology[mh]
An objective set forth by the “European Green Deal”, aimed at bolstering material recycling , underscores the growing significance of sustainable waste management practices. In tandem with this shift, European Union citizens recycled 49% of domestic waste in 2021 . Forecasts indicate a further surge in waste recycling activities, necessitating an exploration of the associated occupational hazards. Source separation of biowaste has advantages from a climate perspective , and in Denmark, several biowaste pretreatment plants have been built. Residents are provided with bags and separate bins for biowaste, which is collected by workers who transport it to these facilities for pretreatment and later composting or anaerobic digestion. Previous studies in the waste industry have shown a higher prevalence of early signs of health symptoms, mainly of the airways, but also gastrointestinal problems and systemic health risks linked to bioaerosol exposure . In the Danish context, waste collectors and workers involved in recycling biowaste have been shown to be exposed to pathogenic, allergenic, and inflammogenic microorganisms, and to endotoxin ( b; ; ), emphasizing the need for a meticulous assessment of occupational exposure and identification of factors affecting exposure. Based on reports of airway symptoms, exposure via the air (inhalation) is important to consider. The aerodynamic diameter of airborne particles carrying microorganisms determines where they deposit in the respiratory tract and may affect host tissue , and whether they can be captured in the upper and mid-respiratory tract and transported to the gastrointestinal tract . The Andersen 6-stage cascade impactor (ACI) has been used for decades to measure the distribution of airborne microorganisms in 6 size fractions , and it is therefore relevant to use in relation to potential airway deposition. Given the gastrointestinal issues and recent measurements of exposure to airborne risk class 2 bacteria in the waste industry ( , ; ; b), hand hygiene should also be considered in an occupational hygiene study in this environment. Few studies have examined occupational hand hygiene in nonclinical settings. During the COVID-19 pandemic, awareness of the use of hand sanitizer increased . It remains unclear whether continuous emphasis on hand hygiene in the waste industry is necessary. The microbial species richness of workers' exposure seems to vary considerably between working environments , but whether it has an impact on occupational health is not yet clear. Endotoxin is highly inflammogenic, and an occupational exposure limit of 50 EU/m³ has previously been proposed . In the waste industry, exposures exceeding this limit have been found . Production of reactive oxygen species (ROS) by exposed cells is considered a marker of inflammation . The total inflammatory potential (TIP) of a bioaerosol sample consisting of endotoxin, different microorganisms, and dust has previously been measured using an in vitro assay based on the production of ROS by granulocyte-like cells ( a). The aim of this study is to obtain knowledge about occupational hygiene in biowaste plants to inform prevention strategies. Personal exposure measurements, hand hygiene assessments, and a questionnaire on the use of protective equipment were conducted in 6 biowaste plants. Bioaerosol concentrations were measured in various work areas, and the aerodynamic diameter of airborne particles containing microorganisms was analyzed to understand potential respiratory deposition.
The association between TIP and bioaerosol components was also studied. Biowaste pretreatment plants and work tasks Sampling was conducted at 6 biowaste pretreatment plants (P1–P6) in Denmark between May 2021 and June 2022. All plants were visited twice during different seasons, except plant P3, which was visited twice during the summer 1 year apart. This was due to planned reconstruction at the plant; however, due to COVID-19 delays, the reconstruction was not implemented during the sampling campaign. See also for more details on the sampling and waste plants. At the biowaste pretreatment plants, presorted organic waste, i.e. biowaste, from homes and businesses was delivered by waste collection trucks. In plant P1, the biowaste was deposited in a silo by waste collection trucks using an outdoor hatch, after which the waste was automatically transferred through closed pipes and conveyor belts to the processing machinery. In plants P2–P6, the waste was deposited on the receiving hall floor, either by waste collection trucks driving into the hall to unload the waste or by unloading the waste through an outdoor hatch leading to the receiving hall floor. After this, the waste was transferred to the processing machinery manually by use of wheel loaders. The tasks of production workers typically consisted of maintenance and cleaning inside the waste-receiving and processing halls, manual sorting and transferring of waste by use of wheel loaders, and work in control rooms. Some of the production workers worked in the control room most of the time, and partly inside the waste hall. Nonproduction workers consisted of administrative staff working in the same buildings who did not work inside the production area itself. Sampling consisted of personal exposure measurements from 31 different staff members, 14 of whom participated twice and 17 once, resulting in a total of 45 personal exposure measurements. Of the 45 exposure measurements, 36 were from production workers and 9 from nonproduction workers, and these are considered as 2 job groups. Stationary measurements were also conducted in the waste-receiving and waste-processing areas of the plants. Associations between bioaerosol exposure and health symptoms and inflammation for the same group of workers are published in . Questionnaire An online questionnaire was sent to 33 staff members, including 2 persons who did not contribute exposure measurements (28 production and 5 nonproduction workers). The questionnaire contained questions about occupational hygiene and health, and only the questions about hygiene (protective equipment and use of hand sanitizer) are part of this study. Twenty individuals completed the questionnaire, 16 of the production workers and 4 of the nonproduction workers. Ten participants did not provide answers to the questionnaire and 3 gave partial answers, which were excluded from the results. Personal exposure measurements and hand hygiene To determine personal exposure, participants were fitted with backpacks containing pumps connected to 2 personal air samplers (Gesamtstaubprobenahme sampler [GSP], BIG Inc., USA) mounted on the front of the backpack, i.e. in the inhalation zone of the participants. A 37 mm polycarbonate filter (pore size 0.8 μm, SKC) was mounted in 1 sampler and a 37 mm Teflon filter (pore size 1.0 μm, Merck) was mounted in the other. The sample flow was adjusted throughout the sampling period (average 420 min) and kept at a flow rate of 3.5 L/min.
Hand hygiene was measured on 23 production workers (7 of these twice) and 6 nonproduction workers (4 of these twice). A moistened swab (eSwabs pre-moistened in modified Amies medium; Copan) was taken from both palms of the workers' hands at the end of the working day by rotating the swab while moving it over the palms in a zig-zag motion. The swabs were stored in 1 mL Amies transport medium and kept cool until return to the laboratory. Concentrations in work areas and outdoor references To determine exposures within the production areas of the waste plants, and to get information about the sizes of particles with airborne microorganisms, stationary samples were collected using an Andersen 6-stage cascade impactor (ACI; Thermo Fisher Scientific Inc., Waltham, MA, USA) with a flow rate of 28.3 L/min. The ACI samples the bioaerosols into 6 size fractions, thereby covering the aerosols that are likely to deposit in the upper airways (nasopharyngeal region, stages 1 and 2), in the tracheobronchial region, which includes the trachea and the larger bronchi (stage 3), as well as the respirable fraction including the bronchioles and alveoli (stages 4–6). Stage 1: 7.0–12 µm, stage 2: 4.7–7.0 µm, stage 3: 3.3–4.7 µm, stage 4: 2.1–3.3 µm, stage 5: 1.1–2.1 µm, and stage 6: 0.65–1.1 µm. Stationary samplers were placed in 2 locations at each plant: for plant P1, samplers were placed in the waste-processing area and in a cellar where pulp samples were taken, and for plants P2–P6, samplers were placed in the waste-receiving and waste-processing areas. Plant P1 did not have a waste-receiving area matching the other plants. The ACI was placed on a 1 m stand, and sample times varied from 15 s to 10 min, depending on agar type and an assessment of the dust levels at the plants on the day of sampling (i.e. too long a sampling time would overload the agar plates). The agar plates from the ACI were incubated upon return to the laboratory. Seventeen stationary samples within the waste-receiving and waste-processing areas were taken using the GSP sampler. This was only included later in the sampling campaign (i.e. 1 visit at plants P1–P3 and 2 visits at plants P4–P6). The stationary GSP samples were mounted on stands at a height of 1.5 m. The average sampling time was 384 min. Outdoor reference samples were taken using GSP samplers mounted with a polycarbonate and a Teflon filter. These samples were taken outside the National Research Centre for the Working Environment at a height of 1.5 m on each sampling date. The average sampling time was 353 min. Temperature and relative humidity were measured every 5 min inside the waste plant during the site visit (Tinytag Plus Data Loggers; Gemini Data Loggers, Chichester, UK). Particle concentrations were measured as a function of time in the waste-receiving area in plant P4 using a GRIMM Optical Particle Counter 1.108 in the particle size range 0.25 µm to >32.0 µm in 31 size ranges. Particle concentrations are presented for different size ranges as a function of time of day . Extraction, quantification, and identification of bacteria and fungi Polycarbonate filters from both personal and stationary GSP samplers were extracted the morning following the sampling day due to the long transport time from the sampling location to our laboratory. Filters were extracted at room temperature in 5 mL sterile extraction solution (MilliQ water with 0.05% Tween 80 and 0.85% NaCl) by orbital shaking at 500 rpm for 15 min.
Suspensions from the filters were plated in 4 serial dilutions for quantification and identification of bacteria and fungi. For enumeration of bacteria, samples were plated on Nutrient agar (NA; Thermo Fisher Scientific Oxoid, Basingstoke, UK) plates with actidione (cycloheximide; 50 mg/L; Serva, Germany) and incubated at 25 °C for 7 days. For enumeration of bacteria able to grow under anaerobic conditions, samples were plated on Fastidious Anaerobe Agar with 5% blood (FAA; SSI Diagnostica, Hillerød, Denmark) and incubated anaerobically (AnaeroJar with an AnaeroGen sachet, Thermo Fisher Scientific Oxoid, Basingstoke, UK) at 37 °C for 2 days—these bacteria are referred to below as anaerobic. For enumeration of fungi, samples were plated on Dichloran Glycerol agar (DG18; Thermo Fisher Scientific Oxoid, Basingstoke, UK) and incubated both at 25 °C for 7 days and at 37 °C for 4 days. Hand swabs were vortexed for 5 min and plated in 5 serial dilutions on NA and DG18 agar, followed by incubation at 25 °C, and counted after 4 and 7 days. The detection limit was 4 cfu/mL for bacteria and 6 cfu/mL for fungi. Stationary ACI samples collected on NA agar were incubated at 25 °C for 4 days, those on FAA agar were incubated anaerobically at 37 °C for 2 days, and those on DG18 agar at 25 °C for 4 days. All visible bacterial and fungal colonies were counted after the specified incubation period. For GSP and ACI samples, concentrations were calculated as time-weighted averages of cfu/m³, taking into account the cfu count, the volume of extraction solution and the amount plated, the sampling time, and the flow rate. Hand swab results are presented as cfu per 2 hand palms. Sample concentrations were then calculated as geometric mean (GM) values of appropriate sample dilutions (plates with a cfu count between 1 and 200). A representative dilution, i.e. with optimal coverage and separation of individual colonies, was used for species identification using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). A Microflex LT mass spectrometer (Bruker Daltonics) was used with the Bruker Biotyper 3.1 software and the BDAL library for bacteria (V11) and the Filamentous Fungi library (V4) for fungi. Bacteria were identified using the extended direct transfer method and fungi were identified using a modified ethanol extraction protocol. The instrument was calibrated weekly using a bacterial test standard (Bruker Daltonics). Isolates were analyzed in duplicate on the MALDI-TOF MS, and the following cutoffs were used for species identification: isolates with scores of 1.80 or higher were identified to species level, isolates with scores between 1.70 and 1.79 were identified to genus level, and isolates with scores lower than 1.70 were unidentified. The number of species was counted and used in this study. Dust, endotoxin, and the TIP Dust masses were determined using the Teflon filters from personal and stationary GSP samplers. These, along with 3 blanks on each measuring day, were preweighed in a climate-controlled weighing room. After sampling, filters were placed in the weighing room and weighed after a minimum 16-h acclimatization period. Dust mass is presented as time-weighted averages (mg/m³). The detection limit with a sampling time of 420 min was 0.004 mg/m³.
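The time-weighted airborne concentration calculation described above can be sketched as follows; the plated volume and colony counts are assumptions chosen for illustration, since the exact plating volumes are not stated in the text.

```python
def cfu_per_m3(colonies, dilution, plated_ml, extract_ml, flow_l_min, minutes):
    """Time-weighted airborne concentration (cfu/m3) from one plate count.

    colonies   : cfu counted on the plate
    dilution   : dilution factor of the plated suspension (e.g. 10 for 10^-1)
    plated_ml  : volume plated (mL) -- assumed value for illustration
    extract_ml : total extraction volume of the filter (mL)
    flow_l_min : sampler flow rate (L/min)
    minutes    : sampling time (min)
    """
    cfu_on_filter = colonies * dilution * extract_ml / plated_ml
    air_m3 = flow_l_min * minutes / 1000.0   # litres of air sampled -> m3
    return cfu_on_filter / air_m3

# Example with the GSP settings described above (counts are illustrative)
print(round(cfu_per_m3(colonies=35, dilution=10, plated_ml=0.1,
                       extract_ml=5.0, flow_l_min=3.5, minutes=420)))
```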
Following dust mass determination, Teflon filters were extracted in 5.0 mL 0.05% Tween 20 and 0.85% NaCl by orbital shaking at 300 rpm for 60 min, followed by a 15-min centrifugation step, after which the supernatant was frozen at −80 °C. Most previous studies have used the kinetic Limulus assay (Limulus amebocyte lysate, LAL) to measure endotoxin. However, for this assay, blood is collected from horseshoe crabs and the amebocytes are extracted. To avoid the negative impact on horseshoe crabs, we used the rFC assay in this study. Previously, we have found that the rFC assay measures 10 times less endotoxin than the LAL assay for samples from biowaste workers' exposure . This has to be considered in comparisons with previous studies. The recombinant factor C (rFC) assay (Lonza, Walkersville Inc.) was used following the manufacturer's instructions. Samples were analyzed in duplicate. Plates were read using a PyroWave XM fluorescence microplate reader (Lonza, Walkersville Inc.). Endotoxin concentrations are presented as time-weighted averages (EU/m³). The detection limit with a sampling time of 420 min was 0.017 EU/m³. Suspensions from GSP polycarbonate filters were used to determine the TIP of the bioaerosol samples in duplicate using a granulocyte-like cell assay. Here, HL-60 cells were exposed to the bioaerosol suspensions, after which the production of ROS was measured by a luminol-dependent chemiluminometric assay using a thermostated (37 °C) ORION II Microplate Luminometer (Berthold Detection Systems). The HL-60 cells (ATCC, CCL-240) were provided by E.W. Hansen . Relative light units per second (RLU/s) were measured for 1 s every 120 s for 180 min, and for each sample, the RLU over the full period was summed, thereby expressing the TIP of the sample as the area under the curve (AUC). The AUC was normalized by dividing by the AUC of a within-run reference sample and multiplying by the average AUC of the between-run reference samples . For GSP samples, AUC was expressed as time-weighted averages (AUC/m³). Data analyses To investigate whether personal exposure to bioaerosols differed among waste plants, seasons, and production and nonproduction workers (called job ), linear models with waste plant , season , and job , and the 2-way interactions between waste plant and season and between waste plant and job as fixed effects were used. The exposure measurements were added as response variables. Nonsignificant interactions were removed from the models. As the models on TIP did not meet the model assumption of homogeneous residuals, as inspected visually, nonparametric Kruskal-Wallis rank sum tests were performed for waste plant , season , and job . To investigate whether concentrations of bacteria and fungi on hands differed between production and nonproduction workers, linear models with job as a fixed effect were used. The exposure measurements were added as response variables. To examine the factors influencing the bioaerosol concentrations of the stationary ACI samples, the same analysis as for the personal exposure was done with the sum of all 6 stages. Concentrations of the stationary GSP samples taken on the same day in the waste-receiving versus waste-processing areas were compared using pairwise comparison. All response variables mentioned above (unless stated otherwise) were log10 transformed to meet the model assumption of homogeneous residuals, which was inspected visually. These models were conducted in R v. 4.2.1 using the car package.
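As a brief aside on the TIP assay described above, a minimal sketch of the AUC normalization and the time-weighted TIP exposure is given below; all reference and sample values are purely illustrative.

```python
def normalized_auc(sample_auc, within_run_ref_auc, between_run_ref_aucs):
    """Normalize a sample's ROS area-under-curve as described in the text:
    divide by the within-run reference AUC and multiply by the mean AUC of
    the between-run references."""
    between_mean = sum(between_run_ref_aucs) / len(between_run_ref_aucs)
    return sample_auc / within_run_ref_auc * between_mean

def auc_per_m3(norm_auc, flow_l_min, minutes):
    """Express the normalized AUC as a time-weighted exposure (AUC/m3)."""
    return norm_auc / (flow_l_min * minutes / 1000.0)

# Illustrative values only
tip = normalized_auc(sample_auc=2.4e6, within_run_ref_auc=2.0e6,
                     between_run_ref_aucs=[1.8e6, 2.1e6, 2.2e6])
print(round(auc_per_m3(tip, flow_l_min=3.5, minutes=420)))
```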
To investigate whether concentrations of bacteria and fungi on hands differed between production and nonproduction workers, linear models with job as a fixed effect were used, with the exposure measurements added as response variables. To examine the factors influencing the bioaerosol concentrations of the stationary ACI samples, the same analysis as for the personal exposure was done with the sum of all 6 stages. Concentrations of the stationary GSP samples taken on the same day in the waste-receiving versus waste-processing areas were compared using pairwise comparisons. All response variables mentioned above (unless stated otherwise) were log10 transformed to meet the model assumption of homogeneous residuals, which was inspected visually. These models were conducted in R v. 4.2.1 using the car package. The association between TIP and each exposure variable and species richness was analyzed using general linear regression models (GLM), and subsequently all response variables were analyzed together with stepwise backward regression. The same analyses were done for log10-transformed dust exposure. These analyses were done in SAS 9.4. The geometric mean diameter (Dg) of the airborne fungi and bacteria sampled using the ACI was calculated using the following formula: Dg = (D1^n1 × D2^n2 × D3^n3 × … × D6^n6)^(1/N), where Dg is the geometric mean diameter of the aerosols, D1 is the geometric midpoint of the first size interval, n1 is the number of particles measured in that interval, and N is the total number of particles summed over all intervals. The Dg of microorganisms in different areas and plants was compared using GLM in SAS 9.4. Differences in microbial community composition were explored using redundancy analysis (RDA) with a Hellinger pretransformation, modeling both the concentrations and the presence/absence of the microbial community as a function of waste plant, season, and their interaction, as well as job for personal samples, and as a function of waste plant, season, and area for stationary samples. The RDA models were conducted in R v. 4.2.1 using the vegan package.
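The RDA step can be sketched in R with the vegan package as below; the species table and metadata are synthetic and the object names are hypothetical, so this only illustrates the Hellinger pretransformation and constrained ordination described above.

```r
library(vegan)

# Synthetic samples-by-species concentration table and matching metadata
set.seed(2)
species <- matrix(rpois(45 * 12, lambda = 3), nrow = 45)
meta <- data.frame(
  plant  = factor(sample(paste0("P", 1:6), 45, replace = TRUE)),
  season = factor(sample(c("summer", "winter"), 45, replace = TRUE)),
  job    = factor(sample(c("production", "nonproduction"), 45, replace = TRUE))
)

# Hellinger pretransformation, then RDA constrained by plant, season,
# their interaction, and job (the model form used for the personal samples)
species_hel <- decostand(species, method = "hellinger")
rda_fit <- rda(species_hel ~ plant * season + job, data = meta)
anova(rda_fit, permutations = 999)  # permutation test of the constraints
```

For presence/absence data, the same call can be run on a 0/1 version of the species table.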
Sampling was conducted at 6 biowaste pretreatment plants (P1–P6) in Denmark between May 2021 and June 2022. All plants were visited twice during different seasons, except plant P3, which was visited twice during the summer 1 year apart. This was due to planned reconstruction at the plant; however, owing to COVID-19 delays, the reconstruction was not implemented during the sampling campaign. See also for more details on the sampling and waste plants. At the biowaste pretreatment plants, presorted organic waste, i.e. biowaste, from homes and businesses was delivered by waste collection trucks. In plant P1, the biowaste was deposited in a silo by waste collection trucks using an outdoor hatch, after which the waste was automatically transferred through closed pipes and conveyor belts to the processing machinery. In plants P2–P6, the waste was deposited on the receiving hall floor, either by waste collection trucks driving into the hall to unload the waste or by unloading the waste through an outdoor hatch leading to the receiving hall floor. After this, the waste was transferred to the processing machinery manually by use of wheel loaders. The tasks of production workers typically consist of maintenance and cleaning inside the waste-receiving and processing halls, manual sorting and transferring of waste by use of wheel loaders, and work in control rooms. Some of the production workers worked in the control room most of the time and only partly inside the waste hall. Nonproduction workers consisted of administrative staff working in the same buildings who did not work inside the production area itself. Sampling consisted of personal exposure measurements from 31 different staff members, 14 of whom participated twice and 17 once, resulting in a total of 45 personal exposure measurements. Of the 45 exposure measurements, 36 were from production workers and 9 from nonproduction workers, and these are considered as 2 job groups. Stationary measurements were also conducted in the waste-receiving and waste-processing areas of the plants. Associations between bioaerosol exposure and health symptoms and inflammation for the same group of workers are published in . An online questionnaire was sent to 33 staff members, including 2 persons who did not contribute with exposure measurements (28 production and 5 nonproduction workers). The questionnaire contained questions about occupational hygiene and health; only the questions about hygiene (protective equipment and use of hand sanitizer) are part of this study. Twenty individuals completed the questionnaire: 16 production workers and 4 nonproduction workers. Ten participants did not provide answers to the questionnaire and 3 gave partial answers, which were excluded from the results. To determine personal exposure, participants were fitted with backpacks containing pumps attached to 2 personal air samplers (Gesamtstaubprobenahme sampler [GSP], BIG Inc., USA) attached to the front of the backpack, i.e. in the inhalation zone of the participants. A 37 mm polycarbonate filter (pore size 0.8 μm, SKC) was mounted in 1 sampler and a 37 mm Teflon filter (pore size 1.0 μm, Merck) was mounted in the other. The sample flow was adjusted throughout the sampling period (average 420 min) and kept at a flow rate of 3.5 L/min. Hand hygiene was measured on 23 production workers (7 of these twice) and 6 nonproduction workers (4 of these twice). A moistened swab (eSwabs pre-moistened in modified Amies medium; Copan) was taken from both palms of workers' hands at the end of the working day by rotating the swab while moving it over the palms in a zig-zag motion. The swabs were stored in 1 mL Amies transport medium and kept cool until return to the laboratory. To determine exposures within the production areas of the waste plants, and to obtain information about the sizes of particles carrying airborne microorganisms, stationary samples were collected using an Andersen 6-stage cascade impactor (ACI; Thermo Fisher Scientific Inc., Waltham, MA, USA) with a flow rate of 28.3 L/min. The ACI separates the bioaerosols into 6 size fractions, thereby covering the aerosols that are likely to deposit in the upper airways (nasopharyngeal region, stages 1 and 2), in the tracheobronchial region, which includes the trachea and the larger bronchi (stage 3), as well as the respirable fraction including the bronchioles and alveoli (stages 4–6). Stage 1: 7.0–12 µm, stage 2: 4.7–7.0 µm, stage 3: 3.3–4.7 µm, stage 4: 2.1–3.3 µm, stage 5: 1.1–2.1 µm, and stage 6: 0.65–1.1 µm. Stationary samplers were placed in 2 locations at each plant: for plant P1, samplers were placed in the waste-processing area and in a cellar where pulp samples were taken, and for plants P2–P6, samplers were placed in the waste-receiving and waste-processing areas. Plant P1 did not have a waste-receiving area matching the other plants. The ACI was placed on a 1 m stand, and sampling times varied from 15 s to 10 min, depending on agar type and an assessment of the dust levels at the plants on the day of sampling (i.e. too long a sampling time would overload the agar plates). The agar plates from the ACI were incubated upon return to the laboratory. Seventeen stationary samples within the waste-receiving and waste-processing areas were taken using the GSP sampler. These were only included later in the sampling campaign (i.e. 1 visit at plants P1–P3 and 2 visits at plants P4–P6). The stationary GSP samples were mounted on stands at a height of 1.5 m. The average sampling time was 384 min. Outdoor reference samples were taken using GSP samplers mounted with a polycarbonate and a Teflon filter.
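Using the stage cut-offs just listed, the geometric mean diameter defined in the data analyses above and, for example, the respirable share of a sample can be computed as sketched below; the colony counts are hypothetical and purely illustrative.

```r
# Andersen impactor stages: size intervals (µm) and the airway region each
# stage is taken to represent, as listed above
aci <- data.frame(
  stage  = 1:6,
  lower  = c(7.0, 4.7, 3.3, 2.1, 1.1, 0.65),
  upper  = c(12,  7.0, 4.7, 3.3, 2.1, 1.1),
  region = c("nasopharyngeal", "nasopharyngeal", "tracheobronchial",
             "respirable", "respirable", "respirable")
)

counts <- c(12, 30, 45, 60, 38, 15)  # hypothetical colony counts per stage

# Geometric mean diameter: count-weighted geometric mean of the geometric
# midpoints of the stage intervals, i.e. Dg = (D1^n1 * ... * D6^n6)^(1/N)
midpoints <- sqrt(aci$lower * aci$upper)
Dg <- exp(sum(counts * log(midpoints)) / sum(counts))
Dg  # in µm

# Share of the counts in the respirable fraction (stages 4-6)
sum(counts[aci$region == "respirable"]) / sum(counts)
```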
These samples were taken outside the National Research Centre for the Working Environment at a height of 1.5 m on each sampling date. The average sampling time was 353 min. Temperature and relative humidity were measured every 5 min inside the waste plant during the site visit (Tinytag Plus Data Loggers; Gemini Data Loggers, Chichester, UK). Particle concentrations were measured as a function of time in the waste-receiving area in plant P4 using a GRIMM Optical Particle Counter 1.108 covering the particle size range 0.25 µm to >32.0 µm in 31 size channels. Particle concentrations are presented for different size ranges as a function of time of day. Polycarbonate filters from both personal and stationary GSP samplers were extracted the morning following the sampling day due to the long transport time from the sampling location to our laboratory. Filters were extracted at room temperature in 5 mL sterile extraction solution (MilliQ water with 0.05% Tween 80 and 0.85% NaCl) by orbital shaking at 500 rpm for 15 min.
Protective equipment, personal exposure, and hand concentrations
The questionnaire showed that of the 16 production workers, 3 sometimes use coveralls, and 11 use work gloves. Ten of the 20 workers use hand sanitizer at least 6 times per day, while 10 use it less than 5 times a day. Significantly different exposure levels were found between plants for bacteria, anaerobic bacteria, fungi(37 °C), and endotoxin, as well as for the TIP of exposure. Fungal, fungal(37 °C), dust, and endotoxin exposures and TIP differed between seasons and were lowest in winter. The GM concentrations of microorganisms on workers' hands were 7,035 cfu of bacteria/hands and 41 cfu of fungi/hands, with higher concentrations for production workers than for nonproduction workers. Concentrations of bacteria and fungi correlated significantly (r = 0.71, P < 0.0001).
Concentrations in work areas
An overview of concentrations in the work areas, as measured using the ACI and GSP samplers, and of particle counts can be found in the supplementary files. Concentrations of fungi, but not of bacteria and anaerobic bacteria (6 stages of the ACI summed), differed between the 3 areas, with the highest concentrations in the receiving area. The concentrations of airborne particles increased upon unloading of waste. Individual comparisons showed that all concentrations were lowest in the pulp areas. Concentrations of bacteria and anaerobic bacteria were highest in plants P2 and P3.
Fungal concentrations in the plants interacted with seasons and areas, and high concentrations were found in P6 and low concentrations in P1. Bacterial and anaerobic bacterial concentrations were highest in summer. The GM aerodynamic diameters (Dg) differed significantly between bacteria, anaerobic bacteria, and fungi (P < 0.0001). The Dg of bacteria differed between work areas (P = 0.0023) but not between plants (P = 0.11), though bacteria in plant P1 had a smaller Dg than bacteria in most other plants. On 7 out of 10 measurement days in plants where waste was transported to the floor of the receiving areas, concentrations of dust and microorganisms were measured in both the receiving area and the waste-processing area using GSP samplers. Pairwise comparison showed higher concentrations of fungi (P = 0.039) and fungi(37 °C) (P = 0.011) in the waste-receiving area; for other microorganisms, the differences were not significant (all P > 0.05).
Community composition—personal samples
From personal exposure samples, 219 taxa were identified, with 204 taxa identified to species level (172 bacterial and 32 different fungal species). All identified species can be found in , and the most frequently found species in . Several species were found across seasons and plants. However, the microbial community composition of personal exposure, analyzed as concentrations of species, differed significantly between waste plants and jobs. For fungi, it also differed between seasons. Similar results were found for personal exposure analyzed as the presence/absence of species, though with some smaller differences. Especially plants P1 and P6 seemed to have a species composition different from that of the other plants.
Community composition—stationary samples
For ACI samples, in total, 364 different taxa were identified, with 346 identified to species level (305 bacterial and 41 fungal species). Several species were found repeatedly across seasons and areas. However, microbial concentrations and presence/absence data for stationary ACI samples both differed significantly between waste plants, seasons, and areas in the plant.
TIP and dust and contributing factors
The TIP of workers' exposure was associated significantly and positively with bacteria, anaerobic bacteria, fungi, fungi(37 °C), endotoxin, and dust. In a model with all factors and backward stepwise regression, TIP was associated positively with fungi and endotoxin exposure. In a similar model for dust exposure, dust was associated positively with fungal richness and endotoxin.
In this study, 6 biowaste plants participated. At each plant, only a few people were employed, even though the 6 plants together receive more than 900 tons of biowaste each day. Concentrations of microorganisms on workers' hands at the end of the workday were higher for production than for nonproduction workers. The concentrations of bacteria on production workers' hands were higher than those measured on waste collection workers' hands. However, for fungal concentrations, similar levels were found for both groups. Most production workers used work gloves every day. In a hospital environment, staff using gloves during work had lower concentrations of bacteria on their fingertips than staff not using gloves.
The present study was performed from May 2021 to May 2022, and thus after the start of the COVID-19 pandemic, and most workers washed their hands more than 6 times per workday—in spite of that, the hand concentration of bacteria was around 15 times higher for production than for nonproduction workers. Therefore, hand hygiene should continuously be in focus, and opportunities for maintaining good hand hygiene should be provided. Indeed, a review study concluded that providing more opportunities for hand hygiene is effective at improving it. The production workers were exposed to significantly higher levels of all measured components than nonproduction workers, except for bacteria. Even though the exposure level to bacteria did not differ between the 2 groups, the bacterial species compositions differed, indicating an impact of the production area on workers' exposure. The maximum measured exposure levels to airborne bacteria, fungi, fungi(37 °C), and endotoxin were lower than the maximum exposures of workers in Norwegian waste sorting plants. The GM exposures were in the lower end of what has been measured in French, Norwegian, Polish, and Portuguese waste sorting plants. The GM exposures of production workers to bacteria, fungi, and dust, but not to fungi(37 °C) and endotoxin, were lower than those of Danish production workers handling different types of waste (including outdoor work). The exposure to fungi and bacteria was at the level previously found for waste collection workers, while the endotoxin exposure was lower for the biowaste workers. The exposure of nonproduction workers was at the levels found in indoor air of homes, indicating that the transport of microorganisms to nonproduction areas is limited. The limited transport is supported by the different community compositions of production versus nonproduction workers' exposure. The personal exposure to bacteria, anaerobic bacteria, fungi(37 °C), and dust differed significantly for workers in different plants. In general, the exposure was highest at plant P6, where Bacillus licheniformis and Aspergillus fumigatus were often found. These microorganisms can degrade organic material and are heat tolerant, and their presence may be related to the fact that P6 is a plant with composting of garden waste in addition to biowaste pretreatment. Collection and composting of garden waste have previously been associated with particularly high exposure and self-reported health symptoms. Furthermore, plant P6 had no mechanical ventilation in the receiving hall and received more waste than the other plants. Ventilation has previously been described to reduce exposure in waste-receiving and processing halls. Workers in plant P1 in general had a lower exposure, and this may be related to the transport of the waste in closed pipes and the absence of waste on the floor area, which was also associated with fewer work hours in the production areas. The higher concentrations of bacteria and anaerobic bacteria in plants P2 and P3 may be related to the lack of separation between the waste-receiving and processing halls. Fungal concentrations differed between plants and between areas within the plants. Fungi were present as smaller airborne particles than bacteria and anaerobic bacteria, and therefore fungi have a longer settling time. At plant P1, measurements were done in the cellar where pulp samples were taken.
In this area, the concentration of microorganisms was lower, and the bacteria and anaerobic bacteria were present as smaller particles, than in the other areas. This suggests that the smallest particles had been transported from the waste-receiving or processing area to the cellar. Despite the presence of several lactic acid bacteria, including Leuconostoc mesenteroides and Lactococcus lactis, in all areas, the community composition differed between areas. This indicates that different microbial species were released during the unloading and processing of waste, or that only some bacteria were transported between work areas. Exposure to fungi, fungi(37 °C), dust, and endotoxin, as well as the TIP of exposure and the work-area concentrations of microorganisms, were highest in the warmest months. Elevated exposure in warm months is in accordance with what has previously been found for some microbial exposures in waste composting facilities and for waste collection workers. The lower exposure and concentrations of fungi in the winter are expected to be due to less growth of fungi in the biowaste in the cold months. Bacterial exposure showed no seasonality even though bacterial concentrations in the work areas were lowest in the winter; this indicates that bacteria, too, grow less in the biowaste in the wintertime but, owing to their larger particle sizes, are transported from the biowaste to where the workers are present to a lesser degree than fungi. The lower concentrations of bacteria and fungi in the production areas during winter suggest that faster processing of biowaste may reduce the growth of microorganisms in the waste, thereby lowering the associated exposure of production workers. Microorganisms were found in sizes depositing in the upper, mid, and lower airways. Previous studies have reported both upper and lower airway symptoms in workers handling waste. It is anticipated that a significant portion of the particles deposited in the upper and mid-respiratory tract will be swallowed, reaching the digestive system. A recent study in the waste industry indicated a nonsignificant trend towards a higher incidence of diarrhea among production workers compared to nonproduction workers. Several bacterial species classified in Risk Class 2 (those that can cause infections but are unlikely to be a serious hazard to workers) have been found in biowaste workers' exposure. Of these, a large fraction of the Enterobacter cloacae, Enterococcus casseliflavus, Enterococcus faecalis, and Escherichia coli were present as particles larger than 7 µm, which may be swallowed. Although these bacteria are classified in Risk Class 2 and are commonly found in the human gastrointestinal tract, it is not known whether inhalation and swallowing of these bacteria affect workers' health. Furthermore, Aspergillus fumigatus, 1 of the species most frequently found in this environment, was found on all 6 stages of the ACI, indicating that it may also deposit in the deeper airways. This species has previously been associated with bronchopulmonary aspergillosis among composting workers. The total inflammatory potential (TIP) of workers' exposure spanned a broad range, with the GM falling within the higher range observed in previous measurements for waste collection workers. Increased microbial and dust exposures were associated with increased TIP of the samples. This underscores the importance of reducing exposure to these components.
In 10 out of 36 measurements on production workers, endotoxin exposure exceeded the suggested Occupational Exposure Limit (OEL) of 50 EU/m3. While there are no OELs for bacteria and fungi, reducing dust exposure may concurrently decrease bacterial and fungal exposures or species richness due to their correlations. Notably, none of the workers' exposures or work-area concentrations exceeded the Danish OEL of 3 mg organic dust/m3. In this study, both the GSP and the ACI were used for sampling of airborne microorganisms. The ACI samples microorganisms directly onto agar, which supports the survival of bacteria. However, the sampling efficiency of the ACI can be affected by the wind direction. Additionally, if airborne particles with microorganisms larger than 10 µm are present, their concentration may have been underestimated. On the other hand, the particle data indicate that almost all particles were smaller than 10 µm. This study focused on bioaerosols; however, future research should also consider studying chemical compounds (e.g. pesticides, antibiotics, heavy metals, dioxins) that may be present in biowaste and may have the potential to become aerosolised. The exposures to endotoxin, dust, bacteria, and fungi seem collectively to contribute to the TIP of workers' exposure. This underscores the importance of considering all these components in the risk assessment of waste workers' exposure. Within the different production areas, the workers faced different exposure levels, and production workers inhaled a species composition of microorganisms distinct from that of their nonproduction colleagues. Nonproduction workers appear to be protected from exposure in production areas, emphasizing the need for future exposure reduction efforts to concentrate on production workers. The study suggests that maintaining a focus on hand hygiene for production workers is crucial. Outdoor reception of waste in a silo connected to closed pipes is recommended for new biowaste plants, rather than systems where waste is unloaded onto the receiving hall floor. Supplementary material is available at Annals of Work Exposures and Health online.
Small molecule modulation of protein corona for deep plasma proteome profiling
438f9880-f855-454a-a657-714c04f4c55f
11544298
Biochemistry[mh]
The quest to comprehensively analyze the plasma proteome has become crucial for advancing disease diagnosis and monitoring, as well as biomarker discovery , . Yet, obstacles like identifying low-abundance proteins remain owing to the prevalence of high-abundance proteins in plasma where the seven most abundant proteins collectively represent 85% of the total protein mass , . Peptides from these high-abundance proteins, especially those of albumin, tend to dominate mass spectra impeding the detection of proteins with lower abundance. To address this challenge, techniques such as affinity depletion, protein equalizer, and electrolyte fractionation have been developed to reduce the concentration of these abundant proteins, thereby facilitating the detection of proteins with lower-abundance – . Additionally, a range of techniques has been developed to enhance the throughput and depth of protein detection and identification, from advanced acquisition modes to methods that concentrate low-abundance proteins or peptides for liquid chromatography-mass spectrometry (LC-MS/MS) analysis , – . For instance, in the affinity depletion strategy , affinity chromatography columns are used with specific ligands that bind to high-abundance proteins such as albumin, immunoglobulins, and haptoglobin. However, the cost and labor associated with such depletion strategies hamper their application for large cohorts. As another example, the salting-out technique is used to add reagents (e.g., ammonium sulfate) to selectively precipitate high-abundance proteins, leaving the lower-abundance proteins in the supernatant. However, these methods can introduce biases in precipitating lower-abundance proteins as well, therefore, additional robust strategies are needed to ensure low-abundance proteins with high diagnostic potential are not missed in biomarker discovery studies. More details on the limitations of these strategies are presented elsewhere . Recently, nanoparticles (NPs) have gained attention for their ability to support biomarker discovery through analysis of the spontaneously-forming protein/biomolecular corona (i.e., a layer of biomolecules, primarily proteins, that forms on NPs when exposed to plasma or other biological fluids) , – . The protein corona can contain a unique ability to concentrate proteins with lower abundance, easily reducing the proteome complexity for LC-MS/MS analysis , , . While the physicochemical properties of NPs do indeed influence the structure of their protein corona, it is generally observed that nanoscale materials exhibit different protein abundances compared to the original plasma protein composition . In essence, most NPs have the potential to form a protein corona with distinct protein composition and abundance, differing from the native plasma proteins . The application of single NPs for biomarker discovery has limitations in achieving deep proteome coverage, typically enabling the detection of only hundreds of proteins . To enhance proteome coverage and quantify a higher number of plasma proteins, the use of a protein corona sensor array or multiple NPs with distinct physicochemical properties can be implemented. This approach leverages the unique protein corona that forms on each NP to increase proteome coverage, but carries the drawback of having to analyze multiple NP samples and needing to test many NP types to reach the desired depth , , . 
In addition, the use of single NPs offers several advantages over multiple NPs, particularly in terms of commercialization and the regulatory complexities associated with multi-NP systems . Additionally, utilizing a single type of NP can streamline the MS analysis process, reducing the time required to analyze large cohorts in plasma proteomics studies. Small molecules native to human biofluids play a significant role in regulating human physiology, often through interactions with proteins. Therefore, we hypothesize that small molecules might influence the formation of the NP protein corona and serve to enrich specific proteins including biomarkers or low-abundance proteins. Recent findings have reported that high levels of cholesterol result in a protein corona with enriched apolipoproteins and reduced complement proteins, which is due to the changes in the binding affinity of the proteins to the NPs in the presence of cholesterol . Accordingly, we hypothesized that small molecules endogenous to human plasma may affect the composition of the NP protein corona differently depending on whether these molecules act individually or collectively . Our work presents an efficient methodology that harnesses the influence of various small molecules in creating diverse protein coronas on otherwise identical polystyrene NPs. Our primary hypothesis, corroborated by our findings, posits that introducing small molecules into plasma alters the manner in which the plasma proteins engage with NPs. This alteration, in turn, modulates the protein corona profile of the NPs. As a result, when NPs are incubated with plasma pre-treated with an array of small molecules at diverse concentrations, these small molecules significantly enhance the detection of a broad spectrum of low-abundance proteins through LC-MS/MS analyses. The selected small molecules include essential biological metabolites, lipids, vitamins, and nutrients consisting of glucose, triglyceride, diglycerol, phosphatidylcholine (PtdChos), phosphatidylethanolamine (PE), l -α-phosphatidylinositol (PtdIns), inosine 5′-monophosphate (IMP), and B complex and their combinations. The selection of these molecules was based on their ability to interact with a broad spectrum of proteins, which significantly influences the composition of the protein corona surrounding NPs. For example, B complex components can interact with a wide range of proteins including albumin , , hemoglobin , myoglobin , pantothenate permease , acyl carrier protein , lactoferrin , prion , β-amyloid precursor , and niacin-responsive repressor . Additionally, to assess the potential collective effects of these molecules, we analyzed two representative “molecular sauces.” Molecular sauce 1 contained a blend of glucose, triglyceride, diglycerol, and PtdChos, and molecular sauce 2 consisted of PE, PtdIns, IMP, and vitamin B complex. Why did we choose polystyrene NPs for this study? Our team has extensive experience in analyzing the composition and profiles of the protein corona on various types of NPs, including gold – , superparamagnetic iron oxide – , graphene oxide – , iron-platinum , zeolite , , silica , , polystyrene , – , silver , and lipids , , . In this study, we specifically selected highly uniform polystyrene NPs for two primary reasons: (i) polystyrene NPs have a protein corona that encompasses a broad spectrum of protein categories, including immunoglobulins, lipoproteins, tissue leakage proteins, acute phase proteins, complement proteins, and coagulation factors. 
This diversity is crucial for achieving wide proteome identification, which is essential for our research objectives; and (ii) these particles are tested widely for numerous applications in nanobiomedicine: we – and other groups – have conducted extensive optimization, employing a wide range of characterizations, including MS, to analyze the protein corona of polystyrene NPs. This rigorous optimization ensures highly accurate and reproducible results. Our findings confirm that the addition of these small molecules to plasma generates distinct protein corona profiles on otherwise identical NPs, significantly expanding the range of the plasma proteome that can be captured and detected by simple LC-MS/MS analysis. Notably, we discover that the addition of specific small molecules, such as PtdChos, leads to a substantial increase in proteome coverage, which is attributed to the unique ability of PtdChos to bind albumin and reduce its participation in protein corona formation. Therefore, PtdChos coupled with NP protein corona analysis can replace the expensive albumin depletion kits and accelerate the plasma analysis workflow by reducing processing steps. Furthermore, our single small molecule-single NP platform reduces the necessity for employing multiple NP workflows in plasma proteome profiling. This approach can seamlessly integrate with existing LC-MS/MS workflows to further enhance the depth of plasma proteome analysis for biomarker discovery.
Protein corona and small molecules enable deep profiling of the plasma proteome
We assessed the effect of eight distinct small molecules, namely glucose, triglyceride, diglycerol, PtdChos, PE, PtdIns, IMP, and vitamin B complex, on the protein corona formed around polystyrene NPs. The workflow of the study is outlined in Supplementary Fig. . Commercially available plain polystyrene NPs, averaging 80 nm in size, were purchased. Each small molecule, at varying concentrations (10 µg/ml, 100 µg/ml, and 1000 µg/ml; we selected a broad range of small molecule concentrations to determine the optimal levels for maximizing proteome coverage), was first incubated with commercial pooled healthy human plasma at 37 °C for 1 h, allowing the small molecules to interact with the biological matrix. The concentration of each small molecule was carefully adjusted to ensure that the final concentration in the combined molecular solutions was 10 µg/ml, 100 µg/ml, or 1000 µg/ml for each component, consistent with the concentrations used for individual small molecules. Subsequently, NPs at a concentration of 0.2 mg/ml were introduced into the plasma containing small molecules or sauces and incubated for an additional hour at 37 °C with agitation. It is noteworthy that the NP concentration was chosen to avoid any protein contamination in the protein corona composition (contamination was detected at concentrations of 0.5 mg/ml and higher), which may cause errors in the proteomics data. These methodological parameters were refined from previous studies to guarantee the formation of a distinct protein corona around the NPs. Supplementary Fig. offers further details on our methodologies, showcasing dynamic light scattering (DLS), zeta potential, and transmission electron microscopy (TEM) analyses for both the untreated NPs and those covered by a protein corona. The untreated polystyrene NPs exhibited excellent monodispersity, with an average size of 78.8 nm, a polydispersity index of 0.026, and a surface charge of −30.1 ± 0.6 mV.
Upon the formation of the protein corona, the average size of the NPs expanded to 113 nm, and the surface charge shifted to −10 ± 0.4 mV. TEM analysis further corroborated the size and morphology alterations of the NPs before and after protein corona formation (Supplementary Fig. ). To investigate how spiking different concentrations of small molecules can influence the molecular composition of the protein corona, samples were subjected to LC-MS/MS for high-resolution proteomic analysis. While the analysis of plasma alone led to the quantification of 218 unique proteins, analysis of the protein corona formed on the polystyrene NPs significantly enhanced the depth of plasma proteome sampling, enabling the quantification of 681 unique proteins. Furthermore, the inclusion of small molecules further deepened plasma proteome sampling, enabling quantification of between 397 and 897 unique proteins, depending on the small molecules added to plasma prior to corona formation. When comparing the use of protein coronas, both with and without the inclusion of small molecules, to the analysis of plasma alone (Fig. and Supplementary Data ), there is a notable increase—approximately a threefold rise—in the number of proteins that can be quantified. The CVs of the number of quantified proteins between three technical replicates were generally less than 1.54% for all sample types (Supplementary Table ). Interestingly, the concentration of the small molecules did not significantly affect the number of quantified proteins; only a small stepwise reduction in the number of quantified proteins was noted with increasing concentrations of glucose and diglycerol. Cumulatively, the incorporation of small molecules and molecular sauces into the protein corona of NPs led to a significant increase in protein quantification, with a total of 1793 proteins identified, marking an 8.25-fold increase compared to plasma proteins alone. Specifically, the addition of small molecules resulted in the quantification of 1573 additional proteins compared to plasma alone, and 1037 more proteins than the untreated protein corona. Strikingly, spiking 1000 µg/ml of PtdChos singlehandedly increased the number of quantified proteins to 897 (1.3-fold the number quantified with the untreated protein corona). This observation prompted a detailed investigation into the influence of PtdChos on plasma proteome coverage, which is elaborated in the following sections. It is noteworthy that the superior performance of PtdChos alone compared to molecular sauce 1 could be attributed to interactions between the small molecules in the mixture, which may have lowered the effective concentration of PtdChos (for example, the interactions between PtdChos and triglycerides). The mass spectrometry workflow and the type of data analysis have a critical influence on proteomics outcomes in general, as well as in the specific field of protein corona research. For instance, our recent study demonstrated that identical corona-coated polystyrene NPs analyzed by different mass spectrometry centers resulted in a wide range of quantified proteins, varying from 235 to 1430 (a 5.1-fold increase as compared to plasma alone). To mitigate the impact of these variables on the interpretation of how small molecules can enhance proteome coverage, we chose to report our data as fold changes in the number of quantified proteins relative to control plasma and untreated corona samples.
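The fold-change and CV figures used in this comparison reduce to simple ratios; a minimal illustration using the protein counts quoted above (the replicate counts used for the CV are hypothetical) is:

```r
# Proteins quantified (values quoted above)
n_quantified <- c(plasma = 218, corona = 681, corona_PtdChos = 897)

# Fold changes relative to plasma alone and to the untreated corona
round(n_quantified / n_quantified["plasma"], 2)   # 897/218 ~ 4.1
round(n_quantified / n_quantified["corona"], 2)   # 897/681 ~ 1.3

# Percent CV of protein counts across technical replicates (hypothetical counts)
replicate_counts <- c(894, 901, 896)
100 * sd(replicate_counts) / mean(replicate_counts)
```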
This approach offers a more objective assessment of the role of small molecules in enhancing proteome analysis, minimizing the confounding effects of the different workflows and data analysis techniques that may be employed by various researchers. The distribution of normalized protein intensities for the samples is shown in Fig. . The median value in the plasma group was notably higher than in the other samples, although the overall distribution did not differ significantly. In general, the proteomes obtained from protein corona profiles in the presence of small molecules showed a good correlation (generally a Pearson correlation above 0.6 for most small-molecule comparisons), demonstrating a faithful relative representation of proteins after treatment with different small molecules (Supplementary Fig. ).
Small molecules diversify the protein corona composition
We next investigated whether the addition of small molecules would change the type and number of proteins detected by LC-MS/MS. Indeed, each small molecule and each molecular sauce generated a proteomic fingerprint distinct from that of the untreated protein corona and from those of the other small molecules (Fig. ). Spiking small molecules led to the detection of a diverse set of proteins in the plasma. Interestingly, even different concentrations of the same small molecules or molecular sauces produced unique fingerprints. A similar analysis was performed for the 117 shared proteins across the samples (Fig. ). The Venn diagrams in Supplementary Fig. show the number of unique proteins that were quantified in the respective group across all concentrations but were not quantified in the plasma or in the untreated protein corona. These results suggest that spiking small molecules into human biofluids can diversify the range of proteins that are identifiable in protein corona profiles, effectively extending proteomic coverage to lower-abundance proteins. Such an enrichment or depletion of a specific subset of proteins can be instrumental in biomarker discovery focused on a disease area. This feature can also be used for designing assays where the enrichment of a known biomarker is facilitated by using a given small molecule. As representative examples, comparisons of enriched and depleted proteins for molecular sauces 1 and 2 against the untreated protein corona are shown in Supplementary Fig. , , respectively (Supplementary Data ). In certain cases, the enrichment or depletion was drastic, spanning several orders of magnitude. The enriched and depleted proteins for molecular sauces 1 and 2 were mapped to KEGG pathways and biological processes in StringDB (Supplementary Fig. ). While most of the enriched pathways were shared, some pathways were specifically enriched for a given molecular sauce. For example, systemic lupus erythematosus (SLE) was only enriched among the top pathways for molecular sauce 2. Therefore, the small molecules can potentially be used to facilitate the discovery of biomarkers for specific diseases, or to assay the abundance of a known biomarker in disease detection. Similar analyses were performed for all the small molecules, and the volcano plots for the highest concentration of each molecule (i.e., 1000 µg/ml) are shown in Supplementary Fig. (Supplementary Data ). A pathway analysis was also performed for all the significantly changing proteins for each small molecule at all concentrations (Supplementary Fig. ).
To facilitate comparison, we have combined the enrichment analysis for all the samples vs the untreated protein corona in Supplementary Fig. . To demonstrate how small molecules affect the composition and functional categories of proteins in the protein corona, potentially aiding in the early diagnosis of diseases (since proteins enriched in the corona are pivotal in conditions like cardiovascular and neurodegenerative diseases), we utilized bioanalytical methods to categorize the identified proteins based on their blood-related functions, namely complement activation, immune response, coagulation, acute phase response, and lipid metabolism (Supplementary Fig. ). In our analysis, apolipoproteins were major protein types found in the small molecule-treated protein coronas, and their types and abundance were heavily dependent on the type and concentration of the employed small molecules (Supplementary Fig. ). Similarly, the enrichment of other specific protein categories on NP surfaces was influenced by the type and concentration of small molecules used (Supplementary Fig. ). For example, antithrombin-III, among the coagulation factors, plays a significant role in the protein corona composition of all tested small molecules, but this effect is observed only at their highest concentration. At lower concentrations, or in the untreated protein corona, this considerable participation is not evident (Supplementary Fig. ). This ability of small molecules to modify the protein composition on NPs highlights their potential for early disease diagnosis (e.g., apolipoproteins in cardiovascular and neurodegenerative disorders), where these protein categories are crucial in disease onset and progression.
PtdChos reduces the plasma proteome dynamic range and increases proteome coverage by depleting the abundant plasma proteins
To understand whether the quantification of a higher number of proteins in protein corona profiles was due to a lower dynamic range of proteins available in human plasma for NP binding, we plotted the maximum protein abundance vs the minimum protein abundance for plasma alone and for plasma treated with small molecules in Supplementary Fig. . The plasma alone showed the highest dynamic range, suggesting that identification of low-abundance proteins would be most difficult from plasma alone. Conversely, the addition of small molecules was shown to reduce the plasma protein dynamic range, thereby allowing for the detection of more peptides and the quantification of proteins with lower abundance through the NP protein corona. Notably, while albumin accounted for over 81% of our plasma sample, its representation was significantly lowered to an average of 29% in the protein coronas, both with and without small molecule modifications. This reduction was most pronounced with PtdChos treatment at 1000 µg/ml, where albumin levels dropped to around 17% of plasma proteins (Fig. ). Despite these changes, albumin remained the most abundant protein in all samples. A similar diminishing trend was observed for the second and third most abundant proteins, serotransferrin (TF) and haptoglobin (HB), which make up about 3.9% and 3.6% of plasma protein abundance, respectively. The rankings of these proteins' abundance in each sample are depicted above the panels in Fig. . From this analysis, it is evident that the protein corona, both in its native form and when altered by small molecules, can drastically reduce the combined representation of the top three proteins from about 90% to roughly 29%.
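As a simple illustration of the two quantities discussed here—the cumulative share of the most abundant proteins and the dynamic range—the following sketch uses the shares quoted above together with hypothetical intensities:

```r
# Cumulative share of the three most abundant plasma proteins
# (albumin, serotransferrin, haptoglobin), using the shares quoted above:
sum(c(ALB = 0.81, TF = 0.039, HB = 0.036))    # ~0.89, i.e. about 90%

# Dynamic range between the most and least abundant quantified protein
# (hypothetical intensities, arbitrary units):
intensity <- c(most_abundant = 8.1e9, least_abundant = 1.0e3)
log10(intensity["most_abundant"] / intensity["least_abundant"])  # ~6.9 orders of magnitude
```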
The most substantial reduction was observed with PtdChos at 1000 µg/ml, reducing the top three proteins' cumulative representation from 90% to under 17%. PtdChos treatment also effectively reduced the levels of the fourth most abundant plasma protein, IGHA1. This significant decrease in the abundance of highly prevalent plasma proteins explains the marked increase in the number of unique proteins detected from NP corona samples treated with PtdChos (897 proteins identified in the PtdChos-treated protein corona vs 681 proteins identified in the untreated corona vs 218 proteins identified in the untreated plasma, as shown in Fig. ). These results indicate that high concentrations of PtdChos can be strategically employed to enable more comprehensive plasma protein sampling by specifically targeting and depleting the most abundant plasma proteins, especially albumin. The stream (or alluvial) diagram in Fig. shows the overall changes in the representation of proteins found in plasma upon incubation of the protein corona with different concentrations of PtdChos. To validate this discovery, we prepared fresh samples treated with a series of PtdChos concentrations ranging from 100 µg/ml to 10,000 µg/ml (Supplementary Data ). As shown in Fig. , 957 proteins could be quantified in the protein corona treated with PtdChos at 1000 µg/ml, while neither lower concentrations nor further addition of PtdChos enhanced the number of quantified proteins. The CVs of the number of quantified proteins between three technical replicates were generally less than 2% for all sample types (Supplementary Table ). The stream diagram in Fig. shows the specific depletion of albumin and a number of other abundant proteins in plasma upon the addition of PtdChos, allowing for more robust detection of other proteins with lower abundance. To confirm that the improved proteome coverage achieved with PtdChos treatment is independent of the LC-MS platform or the data acquisition mode used, we prepared new samples of plasma, untreated protein corona, and protein corona treated with 1000 µg/ml PtdChos, and analyzed them using LC-MS in the DIA mode. We identified 322 proteins in the plasma alone, 1011 proteins in the untreated protein corona samples, and 1436 proteins in the protein corona treated with PtdChos (a 1.4-fold increase over the untreated corona) (Supplementary Data ). These findings not only validate the enhancement of plasma proteome coverage by PtdChos but also illustrate the capability of PtdChos to facilitate in-depth profiling of the plasma proteome associated with the protein corona formed on the surface of a single type of NP. Since PtdChos spiking generally yields around 1.4-fold more quantified proteins than the NP corona alone, PtdChos can be incorporated into any LC-MS workflow aiming to boost plasma proteome profiling. More optimized plasma proteomics pipelines, TMT multiplexing coupled to fractionation, or high-end mass spectrometers such as the Orbitrap Astral are envisioned to quantify an even higher number of proteins than those reported in the current study. To confirm the role of PtdChos in enhancing the proteome depth of the protein corona, we expanded our analysis by using additional NPs and four plasma samples from individual donors.
Specifically, we tested seven additional commercially available and highly uniform NPs with distinct physicochemical properties: polystyrene NPs of varying sizes (mean diameters of 50 nm, 100 nm, and 200 nm) and surface charges (carboxylated and aminated polystyrene NPs, both with the mean diameter of 100 nm), as well as silica NPs with the mean sizes of 50 nm and 100 nm. These NPs have been extensively characterized and widely utilized for protein corona analysis by numerous research groups including our own , , – . The protein corona samples from different NPs were analyzed in the DIA mode with the 30 samples per day (SPD) setting with 44 min acquisition time. Our analysis revealed two key findings: (i) the physicochemical properties of NPs significantly influence the effectiveness of PtdChos in enhancing the number of quantified proteins in plasma, and (ii) incorporating additional plasma samples can markedly increase the overall number of identified proteins (Supplementary Data and Supplementary Fig. ). Polystyrene NPs, in general, and due to their hydrophobic nature, showed higher protein detection capacity than silica NPs ( p value = 0.012; Student’s t -test, two-sided with unequal variance). The average number of quantified proteins using polystyrene NPs was 823.4 vs 633 with silica NPs, while cumulatively there were 1241 unique quantified proteins in polystyrene NPs compared to 1024 in silica. Polystyrene NPs with 200 nm size provided the highest proteome coverage, although the difference in the number of quantified proteins was comparable to the same type of NPs with other sizes. Plain and positively charged polystyrene NPs had a better performance than carboxylated NPs. Our analysis also revealed the inter-individual variabilities between patients. The percentage CVs of the number of proteins quantified across four donors were generally lower for polystyrene NPs than silica NPs (14.4 vs 21.6%) (Supplementary Table ). PtdChos increases the number of detected plasma proteoforms We then asked if PtdChos could enhance the number of detected proteoforms in top-down proteomics as well. Proteoforms represent distinct structural variants of a protein product from a single gene, including variations in amino acid sequences and post-translational modifications . Proteoforms originating from the same gene can exhibit divergent biological functions and are crucial for modulating disease progression – . Therefore, proteoform-specific measurement of the protein corona, along with their improved detection depth through the use of small molecules, will undoubtedly provide a more accurate characterization of the protein molecules within the corona. We compared the chromatogram, the number of proteoform identifications, proteoform mass distribution, and differentially represented proteins between the untreated corona and PtdChos-treated samples. The LC-MS/MS data showed consistent base peak chromatograms, the number of proteoform identifications, and the number of proteoform-spectrum matches (PrSMs) across the technical triplicates of both the control and PtdChos-treated samples (Fig. , respectively). However, the treated sample exhibited a significant signal corresponding to the small molecule after 60 min of separation time (Fig. ), validating our hypothesis that small molecules interact with plasma proteins, causing the observed variation in the protein corona on the NPs’ surface. 
Furthermore, the process of recovering intact proteins from the surfaces of NPs primarily collects proteins from the outer layer of the protein corona , as the inner layer is tightly bound to the NP surfaces through various physical and chemical forces . This observation further confirms that PtdChos interacts with plasma proteins rather than directly with the NP surfaces, leading to the formation of its unique protein corona composition. In total, 637 proteoforms were identified across the two samples (with technical triplicates for each sample) (Fig. ). Data analysis using Perseus software (Version 2.0.10.0) revealed that only 110 proteoforms overlapped between the two samples (the minimum number of valid values for filtering data was set to 1). The proteoform mass distribution differed between the two samples (Fig. ). Although the average proteoform masses were similar, the box plot indicated a greater number of larger proteoform identifications in the control sample (over 20 kDa). We hypothesize that PtdChos can bind to large proteins, and due to the high concentration of PtdChos relative to the proteoforms, the signals of these large proteoforms may be obscured. Additional data analyses identified differential proteins in this study (Fig. ). The top-down proteomics approach identified specific gene products that bind to the NP surface in the presence of PtdChos. PtdChos interacts with human serum albumin via hydrophobic interactions, H-bonds, and water bridges To determine the types of interactions between albumin and PtdChos, we conducted all-atoms molecular dynamics (MD) simulations with various numbers of PtdChos molecules (Supplementary Fig. ). First, we performed blind and site-specific molecular docking simulations to find the most favorable binding sites for PtdChos on albumin. We then used the top ten most favorable non-overlapping binding poses, as quantified by binding affinity, for our MD simulations (Supplementary Fig. ). Four types of systems with the top 1, 3, 5, and 10 PtdChos molecules, respectively, were investigated via 100 ns simulations. As evidenced by the sum of Lennard-Jones and Coulombic interaction energies shown in Fig. , PtdChos strongly interacts with albumin. A nearly additive effect occurs from 1 to 3 ligands added. However, the five ligands system has a similar total energy as the three ligands one. This may indicate that some PtdChos molecules do not strongly interact with albumin. When the number of ligands increased to 10, we noticed an almost 2-fold increase in energy as compared to the 5 ligands system. To further quantify the strength of interactions between albumin and PtdChos, we calculated the effective free energy of the four types of systems, obtaining a similar trend (Fig. ). The average root mean square fluctuations of albumin residues reveal consistent peaks with the increase in fluctuations as the number of ligands increases (Fig. ). This may suggest that the protein conformation does not change drastically based on the number of ligands added. The average root mean square deviations of the PtdChos heavy atoms show similar values for the 1 and 3 ligands systems but higher values for 5 and 10 ligands systems (Fig. ). This confirms that the first few poses form a more stable interaction with albumin. Finally, Fig. shows that albumin and PtdChos interact primarily via hydrophobic interactions, hydrogen bonding, and water bridges. 
We assessed the effect of eight distinct small molecules, namely glucose, triglyceride, diglycerol, PtdChos, PE, PtdIns, IMP, and vitamin B complex, on the protein corona formed around polystyrene NPs. The workflow of the study is outlined in Supplementary Fig. . Commercially available plain polystyrene NPs, averaging 80 nm in size, were purchased. Each small molecule, at varying concentrations (10 µg/ml, 100 µg/ml, and 1000 µg/ml; we selected a broad range of small molecule concentrations to determine the optimal levels for maximizing proteome coverage), was first incubated with commercial pooled healthy human plasma at 37 °C for 1 h, allowing the small molecules to interact with the biological matrix. The concentration of each small molecule was carefully adjusted to ensure that the final concentration in the combined molecular solutions was 10 µg/ml, 100 µg/ml, or 1000 µg/ml for each component, consistent with the concentrations used for the individual small molecules. Subsequently, NPs at a concentration of 0.2 mg/ml were introduced into the plasma containing small molecules or molecular sauces and incubated for an additional hour at 37 °C with agitation. It is noteworthy that the NP concentration was chosen to avoid any protein contamination (which was detected at concentrations of 0.5 mg/ml and higher) in the protein corona composition, which may cause errors in the proteomics data. These methodological parameters were refined from previous studies to guarantee the formation of a distinct protein corona around the NPs. Supplementary Fig. offers further details on our methodologies, showcasing dynamic light scattering (DLS), zeta potential, and transmission electron microscopy (TEM) analyses for both the untreated NPs and those covered by a protein corona. The untreated polystyrene NPs exhibited excellent monodispersity, with an average size of 78.8 nm, a polydispersity index of 0.026, and a surface charge of −30.1 ± 0.6 mV. Upon the formation of the protein corona, the average size of the NPs expanded to 113 nm, and the surface charge shifted to −10 ± 0.4 mV. TEM analysis further corroborated the size and morphology alterations of the NPs before and after protein corona formation (Supplementary Fig. ). To investigate how spiking different concentrations of small molecules influences the molecular composition of the protein corona, samples were subjected to LC-MS/MS for high-resolution proteomic analysis. While the analysis of plasma alone led to the quantification of 218 unique proteins, analysis of the protein corona formed on the polystyrene NPs significantly enhanced the depth of plasma proteome sampling, enabling the quantification of 681 unique proteins. Furthermore, the inclusion of small molecules deepened plasma proteome sampling further, enabling the quantification of between 397 and 897 unique proteins, depending on the small molecules added to plasma prior to corona formation.
When comparing the use of protein coronas, both with and without the inclusion of small molecules, to the analysis of plasma alone (Fig. and Supplementary Data ), there is a notable increase of approximately threefold in the number of proteins that can be quantified. The CVs of the number of quantified proteins between three technical replicates were generally less than 1.54% for all sample types (Supplementary Table ). Interestingly, the number of quantified proteins showed little dependence on the concentration of the small molecules; only a small stepwise reduction in the number of quantified proteins was noted with increasing concentrations of glucose and diglycerol. Cumulatively, the incorporation of small molecules and molecular sauces into the protein corona of NPs led to a significant increase in protein quantification, with a total of 1793 proteins identified, marking an 8.25-fold increase compared to plasma proteins alone. Specifically, the addition of small molecules resulted in the quantification of 1573 additional proteins compared to plasma alone, and 1037 more proteins than the untreated protein corona. Strikingly, spiking 1000 µg/ml of PtdChos single-handedly increased the number of quantified proteins to 897 (1.3-fold the number quantified in the untreated protein corona). This observation prompted a detailed investigation into the influence of PtdChos on plasma proteome coverage, which is elaborated in the following sections. It is noteworthy that the superior performance of PtdChos alone compared to Molecular Sauce 1 could be attributed to interactions between the small molecules in the mixture, which may have lowered the effective concentration of PtdChos (for example, the interactions between PtdChos and triglycerides). Mass spectrometry workflow and the type of data analysis have a critical influence on proteomics outcomes in general, as well as in the specific field of protein corona research. For instance, our recent study demonstrated that identical corona-coated polystyrene NPs analyzed by different mass spectrometry centers resulted in a wide range of quantified proteins, varying from 235 to 1430 (a 5.1-fold increase as compared to plasma alone). To mitigate the impact of these variables on the interpretation of how small molecules can enhance proteome coverage, we chose to report our data as fold changes in the number of quantified proteins relative to control plasma and untreated corona samples. This approach offers a more objective assessment of the role of small molecules in enhancing proteome analysis, minimizing the confounding effects of the different workflows and data analysis techniques that may be employed by various researchers. The distribution of normalized protein intensities for the samples is shown in Fig. . The median value in the plasma group was notably higher than in the other samples, although the overall distribution did not differ significantly. In general, the proteomes obtained from protein corona profiles in the presence of small molecules showed a good correlation (generally a Pearson correlation above 0.6 for most small-molecule comparisons), demonstrating the faithful relative representation of proteins after treatment with different small molecules (Supplementary Fig. ). We next investigated whether the addition of small molecules would change the type and number of proteins detected by LC-MS/MS. Indeed, each small molecule and each molecular sauce generated a proteomic fingerprint that was distinct from the untreated protein corona and from those of the other small molecules (Fig. ).
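To make the summary statistics used above concrete, the following R sketch shows one way to compute, per sample, the mean number of quantified proteins, the CV across technical replicates, and the fold change relative to the plasma-only and untreated-corona controls. The input file and its column names (sample, replicate, n_proteins) are hypothetical placeholders, not files distributed with this study.

# Minimal sketch, assuming a long-format table with one row per technical replicate:
# columns: sample, replicate, n_proteins (number of proteins quantified in that run)
library(dplyr)
library(readr)

counts <- read_csv("protein_counts.csv")   # hypothetical input file

summary_tbl <- counts %>%
  group_by(sample) %>%
  summarise(
    mean_n = mean(n_proteins),
    cv_pct = 100 * sd(n_proteins) / mean(n_proteins),  # CV across technical replicates
    .groups = "drop"
  )

# Fold changes relative to the two controls (plasma alone and untreated protein corona)
plasma_mean <- summary_tbl$mean_n[summary_tbl$sample == "plasma"]
corona_mean <- summary_tbl$mean_n[summary_tbl$sample == "untreated_corona"]

summary_tbl <- summary_tbl %>%
  mutate(
    fold_vs_plasma = mean_n / plasma_mean,
    fold_vs_corona = mean_n / corona_mean
  )

print(summary_tbl)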
Spiking small molecules led to the detection of a diverse set of proteins in the plasma. Interestingly, even different concentrations of the same small molecules or molecular sauces produced unique fingerprints. A similar analysis was performed for the 117 shared proteins across the samples (Fig. ). The Venn diagrams in Supplementary Fig. show the number of unique proteins that were quantified in the respective group across all concentrations and that were not quantified in the plasma or in the untreated protein corona. These results suggest that spiking small molecules into human biofluids can diversify the range of proteins that are identifiable in protein corona profiles, effectively extending proteomic coverage to lower-abundance proteins. Such an enrichment or depletion of a specific subset of proteins can be instrumental in biomarker discovery focused on a disease area. This feature can also be used for designing assays in which the enrichment of a known biomarker is facilitated by a given small molecule. As representative examples, comparisons of enriched and depleted proteins for molecular sauces 1 and 2 against the untreated protein corona are shown in Supplementary Fig. , respectively (Supplementary Data ). In certain cases, the enrichment or depletion was drastic, spanning several orders of magnitude. The enriched and depleted proteins for molecular sauces 1 and 2 were mapped to KEGG pathways and biological processes in StringDB (Supplementary Fig. ). While most of the enriched pathways were shared, some pathways were specifically enriched for a given molecular sauce. For example, systemic lupus erythematosus (SLE) was only enriched among the top pathways for molecular sauce 2. Therefore, the small molecules can potentially be used to facilitate the discovery of biomarkers for specific diseases, or to assay the abundance of a known biomarker in disease detection. Similar analyses were performed for all the small molecules, and the volcano plots for the highest concentration of each molecule (i.e., 1000 µg/ml) are shown in Supplementary Fig. (Supplementary Data ). A pathway analysis was also performed for all the significantly changing proteins for each small molecule at all concentrations (Supplementary Fig. ). To facilitate comparison, we have combined the enrichment analysis for all the samples vs the untreated protein corona in Supplementary Fig. . To demonstrate how small molecules affect the composition and functional categories of proteins in the protein corona, potentially aiding in the early diagnosis of diseases (since proteins enriched in the corona are pivotal in conditions like cardiovascular and neurodegenerative diseases), we utilized bioanalytical methods to categorize the identified proteins based on their blood-related functions, namely complement activation, immune response, coagulation, acute phase response, and lipid metabolism (Supplementary Fig. ). In our analysis, apolipoproteins were major protein types found in the small molecule-treated protein coronas, and their types and abundance were heavily dependent on the type and concentrations of the employed small molecules (Supplementary Fig. ). Similarly, the enrichment of other specific protein categories on NP surfaces was influenced by the type and concentration of small molecules used (Supplementary Fig. ).
For example, antithrombin-III among the coagulation factors plays a significant role in the protein corona composition for all tested small molecules, but this effect is observed only at their highest concentration. At lower concentrations, or in the untreated protein corona, this considerable participation is not evident (Supplementary Fig. ). This ability of small molecules to modify the protein composition on NPs highlights their potential for early disease diagnosis (e.g., apolipoproteins in cardiovascular and neurodegenerative disorders), where these protein categories are crucial in disease onset and progression.
PtdChos reduces the plasma proteome dynamic range and increases proteome coverage by depleting the abundant plasma proteins
To understand whether the quantification of a higher number of proteins in protein corona profiles was due to a lower dynamic range of proteins available in human plasma for NP binding, we plotted the maximum protein abundance vs the minimum protein abundance for plasma alone and for plasma treated with small molecules in Supplementary Fig. . The plasma alone showed the highest dynamic range, suggesting that identification of low-abundance proteins would be most difficult from plasma alone. Conversely, the addition of small molecules reduced the plasma protein dynamic range, thereby allowing for the detection of more peptides and the quantification of lower-abundance proteins through the NP protein corona. Notably, while albumin accounted for over 81% of our plasma sample, its representation was significantly lowered to an average of 29% in the protein coronas, both with and without small molecule modifications. This reduction was most pronounced with PtdChos treatment at 1000 µg/ml, where albumin levels dropped to around 17% of plasma proteins (Fig. ). Despite these changes, albumin remained the most abundant protein in all samples. A similar diminishing trend was observed for the second and third most abundant proteins, serotransferrin (TF) and haptoglobin (HP), which make up about 3.9% and 3.6% of plasma protein abundance, respectively. The rankings of these proteins' abundance in each sample are depicted above the panels in Fig. . From this analysis, it is evident that the protein corona, both in its native form and when altered by small molecules, can drastically reduce the combined representation of the top three proteins from about 90% to roughly 29%. The most substantial reduction was observed with PtdChos at 1000 µg/ml, which reduced the top three proteins' cumulative representation from 90% to under 17%. PtdChos treatment also effectively reduced the levels of the fourth most abundant plasma protein, IGHA1. This significant decrease in the abundance of highly prevalent plasma proteins explains the marked increase in the number of unique proteins detected from NP corona samples treated with PtdChos (897 proteins identified in the PtdChos-treated protein corona vs 681 proteins in the untreated corona vs 218 proteins in the untreated plasma, as shown in Fig. ). These results indicate that high concentrations of PtdChos can be strategically employed to enable more comprehensive plasma protein sampling by specifically targeting and depleting the most abundant plasma proteins, especially albumin. The stream (or alluvial) diagram in Fig. shows the overall changes in the representation of proteins found in plasma upon incubation of the protein corona with different concentrations of PtdChos.
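For readers who wish to derive comparable summary figures from a quantification table, the R sketch below computes, for each sample, the share of total intensity contributed by albumin and by the top three proteins, together with a simple dynamic-range estimate. The input file, its layout, and the gene symbol "ALB" are assumptions for illustration, not outputs of this study.

# Minimal sketch, assuming a matrix of protein intensities
# (rows = proteins named by gene symbol, columns = samples).
intensities <- as.matrix(read.csv("protein_intensities.csv", row.names = 1))  # hypothetical file

totals      <- colSums(intensities, na.rm = TRUE)
albumin_pct <- 100 * intensities["ALB", ] / totals                             # albumin share of total intensity
top3_pct    <- 100 * apply(intensities, 2,
                           function(x) sum(sort(x, decreasing = TRUE)[1:3])) / totals  # cumulative share of top 3
dyn_range   <- apply(intensities, 2, function(x) {
  x <- x[!is.na(x) & x > 0]
  log10(max(x) / min(x))                                                       # orders of magnitude spanned
})

round(data.frame(albumin_pct, top3_pct, dyn_range), 2)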
To validate this discovery, we prepared fresh samples treated with a series of PtdChos concentrations ranging from 100 µg/ml to 10,000 µg/ml (Supplementary Data ). As shown in Fig. , 957 proteins could be quantified in the protein corona treated with PtdChos at 1000 µg/ml, while neither lower nor higher PtdChos concentrations enhanced the number of quantified proteins. The CVs of the number of quantified proteins between three technical replicates were generally less than 2% for all sample types (Supplementary Table ). The stream diagram in Fig. shows the specific depletion of albumin and a number of other abundant proteins in plasma upon the addition of PtdChos, allowing for more robust detection of other proteins with lower abundance. To confirm that the improved proteome coverage achieved with PtdChos treatment is independent of the LC-MS platform and the data acquisition mode used, we prepared new samples of plasma, untreated protein corona, and protein corona treated with 1000 µg/ml PtdChos, and analyzed them using LC-MS in DIA mode. We identified 322 proteins in the plasma alone, 1011 proteins in the untreated protein corona samples, and 1436 proteins in the protein corona treated with PtdChos (a 1.4-fold increase over the untreated corona) (Supplementary Data ). These findings not only validate the enhancement of plasma proteome coverage by PtdChos but also illustrate the capability of PtdChos to facilitate in-depth profiling of the plasma proteome associated with the protein corona formed on the surface of a single type of NP. Since PtdChos spiking generally yields around 1.4-fold more quantified proteins than the NP corona alone, PtdChos can be incorporated into any LC-MS workflow aiming to boost plasma proteome profiling. More optimized plasma proteomics pipelines, TMT multiplexing coupled to fractionation, or high-end mass spectrometers such as the Orbitrap Astral are envisioned to quantify an even higher number of proteins than those reported in the current study. To confirm the role of PtdChos in enhancing the proteome depth of the protein corona, we expanded our analysis by using additional NPs and four plasma samples from individual donors. Specifically, we tested seven additional commercially available and highly uniform NPs with distinct physicochemical properties: polystyrene NPs of varying sizes (mean diameters of 50 nm, 100 nm, and 200 nm) and surface charges (carboxylated and aminated polystyrene NPs, both with a mean diameter of 100 nm), as well as silica NPs with mean sizes of 50 nm and 100 nm. These NPs have been extensively characterized and widely utilized for protein corona analysis by numerous research groups, including our own. The protein corona samples from the different NPs were analyzed in DIA mode with the 30 samples per day (SPD) setting and a 44-min acquisition time. Our analysis revealed two key findings: (i) the physicochemical properties of NPs significantly influence the effectiveness of PtdChos in enhancing the number of quantified proteins in plasma, and (ii) incorporating additional plasma samples can markedly increase the overall number of identified proteins (Supplementary Data and Supplementary Fig. ). Polystyrene NPs in general, owing to their hydrophobic nature, showed a higher protein detection capacity than silica NPs ( p value = 0.012; Student's t-test, two-sided with unequal variance). The average number of quantified proteins was 823.4 with polystyrene NPs vs 633 with silica NPs, while cumulatively there were 1241 unique quantified proteins with polystyrene NPs compared to 1024 with silica.
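The group comparison reported above uses a two-sided Student's t-test with unequal variance (a Welch test). A minimal R sketch of this kind of comparison on a table of per-sample protein counts follows; the numbers shown are arbitrary placeholders, not the values measured in this study.

# Hypothetical numbers of quantified proteins per sample for each NP class (placeholders)
polystyrene <- c(812, 840, 799, 843)
silica      <- c(655, 601, 640, 636)

# Two-sided t-test with unequal variance (Welch's test), as used in the main text
t.test(polystyrene, silica, alternative = "two.sided", var.equal = FALSE)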
Polystyrene NPs with a size of 200 nm provided the highest proteome coverage, although the number of quantified proteins was comparable to that obtained with the same type of NPs at other sizes. Plain and positively charged polystyrene NPs performed better than carboxylated NPs. Our analysis also revealed inter-individual variability among the donors. The percentage CVs of the number of proteins quantified across the four donors were generally lower for polystyrene NPs than for silica NPs (14.4 vs 21.6%) (Supplementary Table ).
PtdChos increases the number of detected plasma proteoforms
We then asked if PtdChos could enhance the number of detected proteoforms in top-down proteomics as well. Proteoforms represent distinct structural variants of a protein product from a single gene, including variations in amino acid sequences and post-translational modifications. Proteoforms originating from the same gene can exhibit divergent biological functions and are crucial for modulating disease progression. Therefore, proteoform-specific measurement of the protein corona, along with the improved detection depth achieved through the use of small molecules, will undoubtedly provide a more accurate characterization of the protein molecules within the corona. We compared the chromatograms, the number of proteoform identifications, the proteoform mass distribution, and the differentially represented proteins between the untreated corona and the PtdChos-treated samples. The LC-MS/MS data showed consistent base peak chromatograms, numbers of proteoform identifications, and numbers of proteoform-spectrum matches (PrSMs) across the technical triplicates of both the control and PtdChos-treated samples (Fig. ). However, the treated sample exhibited a significant signal corresponding to the small molecule after 60 min of separation time (Fig. ), validating our hypothesis that small molecules interact with plasma proteins, causing the observed variation in the protein corona on the NPs' surface. Furthermore, the process of recovering intact proteins from the surfaces of NPs primarily collects proteins from the outer layer of the protein corona, as the inner layer is tightly bound to the NP surfaces through various physical and chemical forces. This observation further confirms that PtdChos interacts with plasma proteins rather than directly with the NP surfaces, leading to the formation of its unique protein corona composition. In total, 637 proteoforms were identified across the two samples (with technical triplicates for each sample) (Fig. ). Data analysis using Perseus software (Version 2.0.10.0) revealed that only 110 proteoforms overlapped between the two samples (the minimum number of valid values for filtering data was set to 1). The proteoform mass distribution differed between the two samples (Fig. ). Although the average proteoform masses were similar, the box plot indicated a greater number of larger proteoform identifications in the control sample (over 20 kDa). We hypothesize that PtdChos can bind to large proteins, and due to the high concentration of PtdChos relative to the proteoforms, the signals of these large proteoforms may be obscured. Additional data analyses identified differential proteins in this study (Fig. ). The top-down proteomics approach identified specific gene products that bind to the NP surface in the presence of PtdChos.
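The overlap and mass-distribution comparison described above was performed in Perseus; the R sketch below is a minimal, independent illustration of the same kind of analysis on two proteoform lists. The file names and column names are hypothetical stand-ins for TopPIC-style output tables.

# Minimal sketch, assuming two tables with columns: proteoform_id, mass_da
ctrl    <- read.csv("proteoforms_control.csv")   # hypothetical file names
treated <- read.csv("proteoforms_ptdchos.csv")

shared <- intersect(ctrl$proteoform_id, treated$proteoform_id)
total  <- union(ctrl$proteoform_id, treated$proteoform_id)
cat(length(total), "proteoforms in total;", length(shared), "shared between the two samples\n")

# Compare the mass distributions (e.g., whether larger species are enriched in one sample)
summary(ctrl$mass_da)
summary(treated$mass_da)
wilcox.test(ctrl$mass_da, treated$mass_da)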
PtdChos interacts with human serum albumin via hydrophobic interactions, H-bonds, and water bridges
To determine the types of interactions between albumin and PtdChos, we conducted all-atom molecular dynamics (MD) simulations with various numbers of PtdChos molecules (Supplementary Fig. ). First, we performed blind and site-specific molecular docking simulations to find the most favorable binding sites for PtdChos on albumin. We then used the top ten most favorable non-overlapping binding poses, as quantified by binding affinity, for our MD simulations (Supplementary Fig. ). Four types of systems, containing the top 1, 3, 5, and 10 PtdChos molecules, respectively, were investigated via 100 ns simulations. As evidenced by the sum of Lennard-Jones and Coulombic interaction energies shown in Fig. , PtdChos strongly interacts with albumin. A nearly additive effect occurs from 1 to 3 ligands added. However, the five-ligand system has a total interaction energy similar to that of the three-ligand system, which may indicate that some PtdChos molecules do not strongly interact with albumin. When the number of ligands increased to 10, we noticed an almost 2-fold increase in energy compared to the five-ligand system. To further quantify the strength of interactions between albumin and PtdChos, we calculated the effective free energy of the four types of systems, obtaining a similar trend (Fig. ). The average root mean square fluctuations of albumin residues reveal consistent peaks, with an increase in fluctuations as the number of ligands increases (Fig. ). This may suggest that the protein conformation does not change drastically with the number of ligands added. The average root mean square deviations of the PtdChos heavy atoms show similar values for the one- and three-ligand systems but higher values for the five- and ten-ligand systems (Fig. ). This confirms that the first few poses form a more stable interaction with albumin. Finally, Fig. shows that albumin and PtdChos interact primarily via hydrophobic interactions, hydrogen bonding, and water bridges. The hydrophobic interactions formed between albumin and the long fatty acid chains are present throughout every simulation. On the other hand, the number of hydrogen bonds and water bridges increases significantly from the one-ligand systems to the 3-, 5-, and 10-ligand systems. These interactions are mainly due to the phosphate group oxygen atoms.
Discussion
The protein corona is a layer of proteins that spontaneously adsorbs on the surface of nanomaterials when exposed to biological fluids. The composition and dynamic evolution of the protein corona are critically important, as they can impact the interactions of NPs with biological systems (e.g., activate the immune system), can cause either positive or adverse biocompatibility outcomes, and can greatly affect NP biodistribution in vivo. The specific proteins that adsorb on the surface of the NPs depend on various factors, including the physicochemical properties of the NPs and the composition of the surrounding biological fluid. Metabolites, lipids, vitamins, nutrients, and other types of small biomolecules present in the biological fluid can interact with proteins in these fluids and influence their behavior, including their adsorption onto NPs. For example, it was shown that the addition of glucose and cholesterol to plasma can alter the composition of the protein corona on the surface of otherwise identical NPs. Small molecules can alter the protein corona of NPs, after interaction with plasma proteins, through various mechanisms such as (i) competition with proteins for binding to the surface of NPs; (ii) altering proteins' binding affinities to NPs; and (iii) changing protein conformation.
For example, previous studies revealed that triglyceride, PtdChos, PE, and PtdIns can interact with lipovitellin, C-reactive protein, protein Z, and myelin basic protein (MBP), respectively. Each individual small molecule, as well as their combinations, interrogates tens to hundreds of additional proteins across a broad dynamic range in an unbiased and untargeted manner. Our results also suggest that endogenous small molecule function may help guide which small molecule(s) can enrich protein biomarkers of a specific disease class. Therefore, any changes in the level of the small molecules in the body can alter the overall composition of the protein corona, leading to variations in the types and number of proteins that bind to NPs and consequently in their corresponding interactions with biosystems. Among the various employed small molecules, we discovered that PtdChos alone demonstrates a remarkably high ability to reduce the participation of the most highly abundant proteins in the protein corona composition. PtdChos is the most common phospholipid class in the majority of eukaryotic cell membranes. It has long been established that PtdChos can engage in specific interactions with serum albumin through hydrophobic processes, forming distinct protein–lipid complexes. The results of our molecular dynamics evaluations of the interactions between PtdChos and albumin were in line with the literature. As a result, we found that the simple addition of PtdChos to plasma can significantly reduce albumin adsorption onto the surface of polystyrene NPs, thereby creating unique opportunities for the involvement of a broader range of lower-abundance proteins in the protein corona layer. We also observed the same effect of PtdChos on enhancing proteome coverage using different types of NPs. Not only is PtdChos an economical and simple alternative to conventional albumin depletion strategies, but it can also deplete several other highly abundant proteins as an added advantage. This approach reduces the need for NP arrays in plasma proteome profiling, as well as the cost and biases that can accompany albumin depletion. Additionally, PtdChos can help accelerate plasma analysis workflows by reducing the number of sample preparation steps. Our top-down proteomics analysis of both untreated and PtdChos-treated protein coronas demonstrated that incorporating small molecules such as PtdChos can significantly enhance the quantification of proteoforms within protein corona profiles. Proteoforms, which are distinct structural variants of proteins arising from genetic variations, alternative splicing, and post-translational modifications, play a crucial role in determining protein functionality and are often closely linked to disease occurrence and progression. By enriching the diversity and depth of proteoforms within the protein corona, the use of small molecules like PtdChos can substantially improve the level of information obtained from plasma proteomics. This enhancement is particularly valuable for biomarker discovery, as the increased detection of proteoforms allows for a more nuanced understanding of disease mechanisms. The ability to capture a broader spectrum of proteoforms in the protein corona could lead to the identification of novel biomarkers that may otherwise be overlooked using traditional bottom-up proteomics approaches that mainly consider protein abundance.
Our study highlights the tremendous potential of leveraging small molecules to enhance the capabilities of protein corona profiles for broader plasma proteome analysis. By introducing individual small molecules and their combinations into plasma, we have successfully created distinct protein corona patterns on a single type of identical NPs, thereby expanding the repertoire of attached proteins. Using our approach, we quantified an additional 1573 unique proteins that would otherwise remain undetected in plasma. This enhanced depth in protein coverage can be attributed, in part, to the unique interactions of each small molecule, allowing for the representation of a diverse set of proteins in the corona. Moreover, our findings underscore the influence of small molecules on the types and categories of proteins in the protein corona shell. This feature opens exciting possibilities for early disease diagnosis, particularly in conditions such as cardiovascular and neurodegenerative disorders, where enriched proteins, such as apolipoproteins, play pivotal roles. Importantly, our study demonstrated that PtdChos preferentially interacts with highly abundant plasma proteins, thereby reducing their binding to NP surfaces. This reduction allows low-abundance proteins to contribute more significantly to the protein corona profile. To further confirm the critical role of PtdChos in enhancing the depth of the plasma proteome, we employed the concept of actual causality, as outlined by Halpern and Pearl, rather than relying solely on correlation. This mathematical framework allowed us to substantiate how small molecules spiked into plasma can induce diverse protein corona patterns based on our proteomics results. Our findings revealed that, among the small molecules tested, PtdChos was the actual cause of the observed increase in the proteomic depth of the plasma sample. This effect was achieved by reducing the binding of highly abundant proteins and enhancing the representation of low-abundance proteins on the NP surfaces. We acknowledge that the number of human plasma samples used in this study was limited, primarily due to our specific focus on improving proteome coverage through the use of a single pooled plasma sample. This approach effectively allows us to test and validate our hypothesis, given that the most abundant plasma proteins exhibit minimal variability between individuals. However, for future biomarker discovery applications, it is essential to expand the sample size to a more diverse cohort. This will ensure the platform fully accounts for biological variability and provides a more comprehensive and generalizable assessment of the proteome across different individuals. One critical challenge that must be addressed is the standardization of proteomics analysis of the protein corona. Ensuring consistent and reproducible results across laboratories and core facilities is essential for the rapid development and successful translation of this platform into clinical applications. Addressing this challenge will require coordinated efforts from the scientific community to establish robust, universally accepted protocols. There are a few additional foreseeable limitations with the application of PtdChos. In certain scenarios, any depletion strategy could lead to distortion of the abundance of proteins in plasma, which can be mitigated by enforcing proper controls. Moreover, upon discovery of a biomarker, it can be validated in the cohort using orthogonal techniques such as Western blotting.
Furthermore, similar to other albumin depletion strategies, certain proteins bound to albumin might be co-depleted (the albuminome). In summary, our platform is capable of quantifying up to 1793 proteins when using a single NP with an array of small molecules, while only 218 and 681 proteins could be quantified using the plasma or the NP protein corona alone, respectively. We also showed the possibility of quantifying up to 1436 proteins using a single NP and PtdChos alone with a single plasma sample. Similarly, in top-down proteomics, the addition of PtdChos to plasma prior to its interaction with NPs can increase the number of quantified proteoforms in the protein corona. The cumulative number of detected proteins will therefore increase dramatically if this platform is applied to a cohort of patient samples with individual variability. With the progressive development of both top-down and bottom-up platforms, the depth of analysis is expected to increase further toward the ultimate goal of comprehensive human proteome coverage. Another alternative would be to combine our strategy with tandem mass tag (TMT) multiplexing and fractionation to achieve an even higher plasma proteome depth. We anticipate that this platform will find extensive applications in plasma proteome profiling, providing an unprecedented opportunity in disease diagnostics and monitoring.
Methods
Materials
Pooled healthy human plasma proteins, along with plasma from four individual healthy donors, were obtained from Innovative Research ( www.innov-research.com ) and diluted to a final concentration of 55% using phosphate buffer solution (PBS, 1×). Seven commercial NPs of various types (silica and polystyrene), sizes (50 nm, 100 nm, and 200 nm), and functional groups (plain, amino, and carboxylated) were sourced from Polysciences ( www.polysciences.com ). Small molecules were purchased from Sigma, Abcam, Fisher Scientific, VWR, and Beantown, and diluted to the desired concentration with 55% human plasma. Reagents for protein digestion, including guanidinium-HCl, DL-dithiothreitol (DTT), iodoacetamide (IAA), and trifluoroacetic acid (TFA), were obtained from Sigma Aldrich. Mass spectrometry-grade lysyl endopeptidase (Lys-C) was sourced from Fujifilm Wako Pure Chemical Corporation, and trypsin was obtained from Promega. Formic acid and C18 StageTips were purchased from Thermo Fisher Scientific.
Protein corona formation on the surface of NPs in the presence of small molecules
For protein corona formation in the presence of small molecules, individual or pooled human plasma (55%) was first incubated with individual small molecules, or with their combinations prepared as two molecular sauces, at different concentrations (i.e., 10 µg/ml, 100 µg/ml, and 1000 µg/ml) for 1 h at 37 °C. Then, each type of polystyrene NP was added to the mixture of plasma and small molecule solution so that the final concentration of the NPs was 0.2 mg/ml, and the mixture was incubated for another 1 h at 37 °C. It is noteworthy that all experiments were designed so that the concentrations of NPs, human plasma, and small molecules were 0.2 mg/ml, 55%, and 10 µg/ml, 100 µg/ml, or 1000 µg/ml, respectively. To remove unbound plasma proteins and proteins only loosely attached to the surface of the NPs, protein–NP complexes were then centrifuged at 14,000× g for 20 min, the collected NP pellets were washed three times with cold PBS under the same conditions, and the final pellet was collected for further analysis.
For the PtdChos concentration study, we used various concentrations of PtdChos (i.e., 250 µg/ml, 750 µg/ml, 1000 µg/ml, and 10,000 µg/ml) and followed the same protein corona protocol to prepare the samples for mass spectrometry analysis.
NP characterization
DLS and zeta potential analyses were performed to measure the size distribution and surface charge of the NPs before and after protein corona formation using a Zetasizer nano series DLS instrument (Malvern). A helium-neon laser with a wavelength of 632 nm was used for size distribution measurement at room temperature. TEM was carried out using a JEM-2200FS (JEOL Ltd) operated at 200 kV. The instrument was equipped with an in-column energy filter and an Oxford X-ray energy dispersive spectroscopy (EDS) system. Twenty microliters of the bare NPs were deposited onto a copper grid and used for imaging. For protein corona–coated NPs, 20 μl of sample was negatively stained using 20 μl of 1% uranyl acetate, washed with DI water, deposited onto a copper grid, and used for imaging. Protein corona composition was also determined using LC-MS/MS.
Bottom-up LC-MS/MS sample preparation for the screening and concentration series experiments
The collected protein corona-coated NP pellets were resuspended in 20 µl of PBS containing 0.5 M guanidinium-HCl. The proteins were reduced with 2 mM DTT at 37 °C for 45 min and then alkylated with 8 mM IAA for 45 min at room temperature in the dark. Subsequently, 5 µl of LysC at 0.02 µg/µl in PBS was added and incubated for 4 h, followed by the addition of the same concentration and volume of trypsin for overnight digestion. The next day, the samples were centrifuged at 16,000× g for 20 min at room temperature to remove the NPs. The supernatant was acidified with TFA to a pH of 2–3 and cleaned using C18 StageTips. The samples were then heated at 95 °C for 10 min, vacuum-dried, and submitted to the core facility for LC-MS analysis.
LC-MS/MS analysis
Dried samples were reconstituted to 1 μg of peptides in 25 μl of LC loading buffer (3% ACN, 0.1% TFA) and analyzed using LC-MS/MS. A 60-min gradient was applied in LFQ mode, with 5 μl aliquots injected in triplicate. Control samples (55% human plasma) were prepared with 8 μg of peptides in 200 μl of loading buffer and analyzed similarly. An Ultimate 3000 RSLCnano (Thermo Fisher) HPLC system was used with predefined columns, solvents, and gradient settings. Data-dependent acquisition (DDA) was performed with specific MS and MS2 scan settings, followed by data analysis using Proteome Discoverer 2.4 (Thermo Fisher), applying the protocols detailed in our earlier publication (center #9). The PtdChos concentration series experiment was performed using the same protocol, and the samples were analyzed over a 120-min gradient.
Sample preparation for top-down proteomics
Protein elution from the surface of NPs and purification were conducted based on procedures illustrated in our recent publications. The protein corona-coated NPs (with/without PtdChos) were separately treated in a 0.4% (w/v) SDS solution at 60 °C for 1.5 h with continuous agitation to release the protein corona from the NP surface. Subsequently, the supernatant containing the protein corona in 0.4% SDS was separated from the NPs by centrifugation at 19,000× g for 20 min at 4 °C. To ensure thorough separation, the supernatant underwent an additional centrifugation step under the same conditions.
The final protein corona sample was then subjected to buffer exchange using an Amicon Ultra Centrifugal Filter with a 10 kDa molecular weight cut-off, effectively removing sodium dodecyl sulfate (SDS) from the protein samples. The buffer exchange process began by wetting the filter with 20 µl of 100 mM ammonium bicarbonate (ABC, pH 8.0), followed by centrifugation at 14,000× g for 10 min. Next, 200 µg of proteins were added to the filter, and centrifugation was conducted for 20 min at 14,000× g. This step was repeated with the addition of 200 µl of 8 M urea in 100 mM ABC, followed by centrifugation for 20 min at 14,000× g, and repeated twice to ensure complete removal of SDS and other small molecules. To eliminate urea from the purified protein, the filter underwent three additional rounds of buffer exchange. Specifically, 100 mM ABC was added to the filter, adjusting the final volume to 200 µl. All procedures were carried out at 4 °C to effectively eliminate urea from the protein corona. Following buffer exchange, the total protein concentration was determined using a bicinchoninic acid (BCA) assay kit from Fisher Scientific (Hampton, NH), following the manufacturer's instructions. The samples were then stored overnight at 4 °C. The final protein solutions, consisting of 40 µl (without PtdChos initially) and 44 µl (with PtdChos initially) of 100 mM ABC with a protein concentration of 2.8 mg/ml, were prepared for LC-MS/MS analysis.
Top-down proteomics LC-MS/MS
The RPLC separation was performed using an EASY-nLC™ 1200 system from Thermo Fisher Scientific. A 1-µl aliquot of the protein corona sample (0.3 mg/ml) was loaded onto a home-packed C4 capillary column (75 µm i.d. × 360 µm o.d., 20 cm in length, 3 µm particles, 300 Å, Bio-C4, Sepax) and separated at a flow rate of 400 nl/min. A gradient composed of mobile phase A (2% ACN in water containing 0.1% FA) and mobile phase B (80% ACN with 0.1% FA) was used for separation. The gradient profile consisted of a 105-min program: 0–85 min, 8–70% B; 85–90 min, 70–100% B; 90–105 min, 100% B. The LC system required an additional 30 min for column equilibration between analyses, resulting in approximately 135 min per LC-MS analysis. The experiments utilized a Q-Exactive HF mass spectrometer, employing a data-dependent acquisition (DDA) method. MS settings included 120,000 mass resolution (at m/z 200), 3 microscans, a 3E6 AGC target value, a maximum injection time of 100 ms, and a scan range of 600–2000 m/z. For MS/MS analysis, parameters included 120,000 mass resolution (at m/z 200), 3 microscans, a 1E5 AGC target, a 200 ms injection time, a 4 m/z isolation window, and 20% normalized collision energy (NCE). During MS/MS, the top five most intense precursor ions from each MS spectrum were selected in the quadrupole and fragmented using higher-energy collisional dissociation (HCD). Fragmentation occurred exclusively for ions with intensities exceeding 5E4 and charge states of 4 or higher. Dynamic exclusion was enabled with a 30-s duration, and the "Exclude isotopes" feature was activated.
Top-down proteomics data analysis
Complex sample data were analyzed using Xcalibur software (Thermo Fisher Scientific) to obtain proteoform intensities and retention times. Chromatograms were exported from Xcalibur and formatted using Adobe Illustrator for the final figure presentation.
Proteoform identification and quantification were conducted using the TopPIC Suite (Top-down mass spectrometry-based Proteoform Identification and Characterization, version 1.7.4) pipeline. Initially, RAW files were converted to mzML format using the MSConvert tool. Spectral deconvolution, which converted precursor and fragment isotope clusters to monoisotopic masses, and proteoform feature detection were performed using TopFD (Top-down mass spectrometry Feature Detection, version 1.7.4). The resulting mass spectra were stored in msalign files, while proteoform feature information was stored in text files. Database searches were carried out using the TopPIC Suite against a custom-built protein database (~2780 protein sequences), which included proteins identified in the bottom-up proteomics (BUP) data. The search allowed for a maximum of one unexpected mass shift, with mass error tolerances of 10 ppm for precursors and fragments. Unknown mass shifts up to 500 Da were considered. False discovery rates (FDRs) for proteoform identifications were estimated using a target-decoy approach, filtering identifications at 1% and 5% FDR at the PrSM and proteoform levels, respectively. Lists of identified proteoforms from all RPLC-MS/MS runs are provided in Supplementary Data . Label-free quantification of identified proteoforms was performed using TopDiff (Top-down mass spectrometry-based identification of Differentially expressed proteoforms, version 1.7.4) with default settings.
LC-MS analysis by DIA
The samples were centrifuged at 14,000× g for 20 min to remove the unbound proteins. The collected NP pellets were washed three times with cold PBS under the same conditions. The samples were resuspended in 20 µl of PBS, and the proteins were reduced with 2 mM DTT (final concentration) for 45 min and then alkylated using 8 mM IAA (final concentration) for 45 min in the dark. Subsequently, 5 µl of LysC at 0.02 µg/µl was added for 4 h, followed by the same concentration and volume of trypsin overnight. The samples were then centrifuged at 16,000× g for 20 min at room temperature to remove the NPs, then cleaned using C18 cartridges and vacuum-dried. Dried peptides were resuspended in 0.1% aqueous formic acid and subjected to LC-MS/MS analysis using an Exploris 480 mass spectrometer fitted with a Vanquish Neo (both Thermo Fisher Scientific) and a custom-made column heater set to 60 °C. Peptides were resolved using an RP-HPLC column (75 μm × 30 cm) packed in-house with C18 resin (ReproSil-Pur C18-AQ, 1.9 μm resin; Dr. Maisch GmbH) at a flow rate of 0.2 μl/min. The following gradient was used for peptide separation: from 4% B to 10% B over 7.5 min, to 35% B over 67.5 min, to 50% B over 15 min, to 95% B over 1 min, followed by 10 min at 95% B, to 5% B over 1 min, followed by 4 min at 5% B. Buffer A was 0.1% formic acid in water and buffer B was 80% acetonitrile, 0.1% formic acid in water. The mass spectrometer was operated in DIA mode with a cycle time of 3 s. MS1 scans were acquired in the Orbitrap in centroid mode at a resolution of 120,000 FWHM (at 200 m/z), a scan range from 390 m/z to 910 m/z, normalized AGC target set to 300%, and maximum ion injection time mode set to Auto. MS2 scans were acquired in the Orbitrap in centroid mode at a resolution of 15,000 FWHM (at 200 m/z), a precursor mass range of 400 to 900 m/z, a quadrupole isolation window of 7 m/z with 1 m/z window overlap, a defined first mass of 120 m/z, normalized AGC target set to 3000%, and a maximum injection time of 22 ms.
Peptides were fragmented by HCD with the collision energy set to 28%, and one microscan was acquired for each spectrum. The acquired RAW files were searched individually using the Spectronaut (Biognosys v18.6) directDIA workflow against a Homo sapiens database (consisting of 20,360 protein sequences downloaded from Uniprot on 2022/02/22) and 392 commonly observed contaminants. Default settings were used. For analysis of the impact of PtdChos-treated plasma and different NPs, we chose a quicker LC-MS setup (30 SPD) consisting of an Exploris 480 fitted with an Evosep One, using the following settings. Dried peptides were resuspended in 0.1% aqueous formic acid, loaded onto Evotip Pure tips (Evosep Biosystems), and subjected to LC-MS/MS analysis using an Exploris 480 mass spectrometer (Thermo Fisher Scientific) fitted with an Evosep One (EV 1000, Evosep Biosystems). Peptides were resolved using a 30 SPD performance column (150 μm × 15 cm, 1.5 µm particles; EV1137, Evosep Biosystems) kept at 40 °C and fitted with a stainless-steel emitter (30 µm, EV1086, Evosep Biosystems), using the 30 SPD method. Buffer A was 0.1% formic acid in water and buffer B was acetonitrile with 0.1% formic acid. The mass spectrometer was operated in DIA mode. MS1 scans were acquired in centroid mode at a resolution of 120,000 FWHM (at 200 m/z), a scan range from 350 m/z to 1500 m/z, AGC target set to standard, and maximum ion injection time mode set to Auto. MS2 scans were acquired in centroid mode at a resolution of 15,000 FWHM (at 200 m/z), a precursor mass range of 400–900 m/z, a quadrupole isolation window of 12 m/z without window overlap, a defined first mass of 120 m/z, normalized AGC target set to 3000%, and maximum injection time mode set to Auto. Peptides were fragmented by HCD with the collision energy set to 28%, and one microscan was acquired for each spectrum. The acquired RAW files were searched using the Spectronaut (Biognosys v19.0) directDIA workflow against a Homo sapiens database (consisting of 20,360 protein sequences downloaded from Uniprot on 2022/02/22) and 392 commonly observed contaminants. Default settings were applied, except that method evaluation was set to TRUE.
In silico experiments
The crystal structure of human serum albumin (PDB code: 1AO6) was obtained from the Protein Data Bank and used for all simulation setups (Supplementary Fig. ). The structure of the PtdChos ligand was obtained from the CHARMM36 force field files (name: PLPC).
Molecular docking
Two blind docking methods and one site-specific docking were performed with AutoDock Vina software. The first blind docking used the whole albumin structure for the binding search, and the second method consisted of multiple search boxes covering the entire albumin surface. The site-specific docking was performed based on crystallographic analysis of the binding sites on albumin for palmitic acid. The top ten unique non-overlapping binding poses were kept for the subsequent molecular dynamics simulations.
MD simulations
All-atom MD simulations were performed with the GROMACS free software and the CHARMM36 force field. Four types of protein–ligand systems were investigated (Supplementary Fig. ). Three replicate one-ligand systems, and one system each with 3, 5, and 10 ligands, were used for the simulations. The protein–ligand systems, along with the TIP3P water model and a neutralizing salt concentration of 0.15 M NaCl, were energy minimized using 5000 steps with an energy tolerance of 1000 kJ/mol/nm.
The systems were subsequently equilibrated in 1 ns NVT and 4 ns NPT steps with a 1 fs timestep. The temperature was kept constant at 310 K for all runs, and Berendsen pressure coupling was used. Production steps were then run for 100 ns with a 2 fs timestep with the Parrinello–Rahman barostat.
Post-processing analysis
Interaction energy
The short-range nonbonded Coulombic and Lennard-Jones interaction energies between albumin and the ligands were calculated using GROMACS.
Free energy calculation
The free energy for the entire 100 ns simulations was calculated using the gmx_MMPBSA package with the generalized Born method. The entropic term was not considered. The residues and ligand atoms within 6 Å were selected for the calculation. The gas phase and solvation terms, as well as their sum, were averaged over 1000 frames and plotted for each simulation.
RMSF
Root mean square fluctuation per residue was calculated using GROMACS after fitting to the first frame of the simulation. For the one-ligand systems, all three simulation results were averaged.
RMSD
The root mean square deviation of the ligand(s) with respect to the energy-minimized structure was calculated using GROMACS. The results of the three one-ligand systems were averaged.
Bond types
The types of bonds formed between albumin and PtdChos were determined using the MD-Ligand-Receptor tool.
Visualization
All visualizations were made using the Visual Molecular Dynamics (VMD) software.
Data analysis
First, data were normalized by total protein intensity in each technical replicate. All abundances were then log10-transformed, and NA values were imputed with a constant value of −10 (in the heatmap figure). Except for the PtdChos sample at 100 µg/ml, all samples were analyzed with three technical replicates. In the case of the DIA analysis of different NPs, there were four individual samples per group with no technical replicates. Student's t-tests with unequal variance were used to compare differences between groups. Data analysis was performed using R (version 4.1.0) with the ggplot2, dplyr, tidyr, ComplexHeatmap, and PerformanceAnalytics packages.
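As a minimal illustration of the steps just described (total-intensity normalization per technical replicate, log10 transformation, constant imputation of missing values, and a two-sided t-test with unequal variance), the following R sketch shows how such a pipeline can be written with the packages named above. The input file, its column names, and the example protein are hypothetical placeholders and do not reproduce the actual analysis scripts.

# Minimal sketch, assuming a long-format table with columns:
# protein, sample, replicate, intensity (one row per measurement)
library(dplyr)
library(tidyr)

raw <- read.csv("protein_intensities_long.csv")   # hypothetical input file

normalized <- raw %>%
  group_by(sample, replicate) %>%
  mutate(intensity = intensity / sum(intensity, na.rm = TRUE)) %>%  # total-intensity normalization
  ungroup() %>%
  mutate(log_int = log10(intensity)) %>%
  mutate(log_int = replace_na(log_int, -10))                        # constant imputation of missing values

# Two-sided t-test with unequal variance for one example protein between two groups
one_protein <- normalized %>%
  filter(protein == "APOA1") %>%                                    # hypothetical example protein
  mutate(group = ifelse(grepl("PtdChos", sample), "PtdChos", "control"))

t.test(log_int ~ group, data = one_protein, var.equal = FALSE)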
Formic acid and C18 StageTips were purchased from Thermo Fisher Scientific. For protein corona formation in the presence of small molecules, individual or pooled human plasma (55%) was first incubated with individual small molecules, or with combinations prepared as two molecular sauces of the individual small molecules, at different concentrations (i.e., 10 µg/ml, 100 µg/ml, and 1000 µg/ml) for 1 h at 37 °C. Then, each type of polystyrene NP was added to the plasma-small molecule mixture so that the final concentration of the NPs was 0.2 mg/ml, and the mixture was incubated for another 1 h at 37 °C. It is noteworthy that all experiments were designed so that the concentrations of NPs, human plasma, and small molecules were 0.2 mg/ml, 55%, and 10 µg/ml, 100 µg/ml, or 1000 µg/ml, respectively. To remove unbound plasma proteins and proteins only loosely attached to the surface of the NPs, protein–NP complexes were then centrifuged at 14,000× g for 20 min, the collected NP pellets were washed three times with cold PBS under the same conditions, and the final pellet was collected for further analysis. For the PtdChos concentration study, we used various concentrations of PtdChos (i.e., 250 µg/ml, 750 µg/ml, 1000 µg/ml, and 10,000 µg/ml) and used the same protein corona method for the preparation of the samples for mass spectrometry analysis. DLS and zeta potential analyses were performed to measure the size distribution and surface charge of the NPs before and after protein corona formation using a Zetasizer Nano series DLS instrument (Malvern). A helium-neon laser with a wavelength of 632 nm was used for size distribution measurement at room temperature. TEM was carried out using a JEM-2200FS (JEOL Ltd) operated at 200 kV. The instrument was equipped with an in-column energy filter and an Oxford X-ray energy dispersive spectroscopy (EDS) system. Twenty microliters of the bare NPs were deposited onto a copper grid and used for imaging. For protein corona–coated NPs, 20 μl of sample was negatively stained using 20 μl of 1% uranyl acetate, washed with DI water, deposited onto a copper grid, and used for imaging. PC composition was also determined using LC-MS/MS. The collected protein corona-coated NP pellets were resuspended in 20 µl of PBS containing 0.5 M guanidinium-HCl. The proteins were reduced with 2 mM DTT at 37 °C for 45 min and then alkylated with 8 mM IAA for 45 min at room temperature in the dark. Subsequently, 5 µl of LysC at 0.02 µg/µl in PBS was added and incubated for 4 h, followed by the addition of the same concentration and volume of trypsin for overnight digestion. The next day, the samples were centrifuged at 16,000× g for 20 min at room temperature to remove the NPs. The supernatant was acidified with TFA to a pH of 2–3 and cleaned using C18 StageTips. The samples were then heated at 95 °C for 10 min, vacuum-dried, and submitted to the core facility for LC-MS analysis. LC-MS/MS Analysis: Dried samples were reconstituted with 1 μg of peptides in 25 μl of LC loading buffer (3% ACN, 0.1% TFA) and analyzed using LC-MS/MS. A 60-min gradient was applied in LFQ mode, with 5 μl aliquots injected in triplicate. Control samples (55% human plasma) were prepared with 8 μg of peptides in 200 μl of loading buffer and analyzed similarly. An Ultimate 3000RSLCnano (Thermo Fisher) HPLC system was used with predefined columns, solvents, and gradient settings.
Data Dependent Analysis (DDA) was performed with specific MS and MS2 scan settings, followed by data analysis using Proteome Discoverer 2.4 (Thermo Fisher), applying the protocols detailed in our earlier publication (center #9) . The PtdChos concentration series experiment was performed using the same protocol, and the samples were analyzed over a 120 min gradient. Protein elution from the surface of NPs and purification were conducted based on procedures illustrated in our recent publications , . The protein corona-coated NPs (with/without PtdChos) were separately treated in a 0.4% ( w / v ) SDS solution at 60 °C for 1.5 h with continuous agitation to release the protein corona from the NP surface. Subsequently, the supernatant containing the protein corona in 0.4% SDS was separated from the NPs by centrifugation at 19,000× g for 20 min at 4 °C. To ensure thorough separation, the supernatant underwent an additional centrifugation step under the same conditions. The final protein corona sample was then subjected to buffer exchange using an Amicon Ultra Centrifugal Filter with a 10 kDa molecular weight cut-off, effectively removing sodium dodecyl sulfate (SDS) from the protein samples. The buffer exchange process began by wetting the filter with 20 µl of 100 mM ABC (pH 8.0), followed by centrifugation at 14,000× g for 10 min. Next, 200 µg of proteins were added to the filter, and centrifugation was conducted for 20 min at 14,000× g . This step was repeated with the addition of 200 µl of 8 M urea in 100 mM ammonium bicarbonate, followed by centrifugation for 20 min at 14,000× g , and repeated twice to ensure complete removal of SDS and other small molecules. To eliminate urea from the purified protein, the filter underwent three additional rounds of buffer exchange. Specifically, 100 mM ABC was added to the filter, adjusting the final volume to 200 µl. All procedures were carried out at 4 °C to effectively eliminate urea from the protein corona. Following buffer exchange, the total protein concentration was determined using a bicinchoninic acid (BCA) assay kit from Fisher Scientific (Hampton, NH), following the manufacturer’s instructions. The samples were then stored overnight at 4 °C. The final protein solutions, consisting of 40 µl (without PtdChos initially) and 44 µl (with PtdChos initially) of 100 mM ABC with a protein concentration of 2.8 mg/ml, were prepared for LC-MS/MS analysis. The RPLC separation was performed using an EASY-nLC™ 1200 system from Thermo Fisher Scientific. A 1-µL aliquot of the protein corona sample (0.3 mg/mL) was loaded onto a home-packed C4 capillary column (75 µm i.d. × 360 µm o.d., 20 cm in length, 3 µm particles, 300 Å, Bio-C4, Sepax) and separated at a flow rate of 400 nL/min. A gradient composed of mobile phase A (2% ACN in water containing 0.1% FA) and mobile phase B (80% ACN with 0.1% FA) was used for separation. The gradient profile consisted of a 105-min program: 0–85 min, 8–70% B; 85–90 min, 70–100% B; 90–105 min, 100% B. The LC system required an additional 30 min for column equilibration between the analyses, resulting in approximately 135 min per LC-MS analysis. The experiments utilized a Q-Exactive HF mass spectrometer, employing a data-dependent acquisition (DDA) method. MS settings included 120,000 mass resolution (at m / z 200), 3 micro scans, a 3E6 AGC target value, a maximum injection time of 100 ms, and a scan range of 600–2000 m / z . 
For MS/MS analysis, parameters included 120,000 mass resolution (at 200 m / z ), 3 micro scans, a 1E5 AGC target, 200 ms injection time, 4 m / z isolation window, and 20% normalized collision energy (NCE). During MS/MS, the top five most intense precursor ions from each MS spectrum were selected in the quadrupole and fragmented using higher-energy collision dissociation (HCD). Fragmentation occurred exclusively for ions with intensities exceeding 5E4 and charge states of 4 or higher. Dynamic exclusion was enabled with a 30-s duration, and the “Exclude isotopes” feature was activated. Complex sample data were analyzed using Xcalibur software (Thermo Fisher Scientific) to obtain proteoform intensities and retention times. Chromatograms were exported from Xcalibur and formatted using Adobe Illustrator for the final figure presentation. Proteoform identification and quantification were conducted using the TopPIC Suite (Top-down mass spectrometry-based Proteoform Identification and Characterization, version 1.7.4) pipeline . Initially, RAW files were converted to mzML format using the MSConvert tool. Spectral deconvolution, which converted precursor and fragment isotope clusters to monoisotopic masses, and proteoform feature detection were performed using TopFD (Top-down mass spectrometry Feature Detection, version 1.7.4) . The resulting mass spectra were stored in msalign files, while proteoform feature information was stored in text files. Database searches were carried out using TopPIC Suite against a custom-built protein database (~2780 protein sequences), which included proteins identified in the BUP data. The search allowed for a maximum of one unexpected mass shift, with mass error tolerances of 10 ppm for precursors and fragments. Unknown mass shifts up to 500 Da were considered. False discovery rates (FDRs) for proteoform identifications were estimated using a target-decoy approach, filtering proteoform identifications at 1% and 5% FDR at the PrSM and proteoform levels, respectively. Lists of identified proteoforms from all RPLC-MS/MS runs are provided in Supplementary Data . Label-free quantification of identified proteoforms was performed using TopDiff (Top-down mass spectrometry-based identification of Differentially expressed proteoforms, version 1.7.4) with default settings . The samples were centrifuged at 14,000× g for 20 min to remove the unbound proteins. The collected NP pellets were washed three times with cold PBS under the same conditions. The samples were resuspended in 20 µl of PBS, and the proteins were reduced with 2 mM DTT (final concentration) for 45 min and then alkylated using 8 mM IAA (final concentration) for 45 min in the dark. Subsequently, 5 µl of LysC at 0.02 µg/µl was added for 4 h, followed by the same concentration and volume of trypsin overnight. The samples were then centrifuged at 16,000× g for 20 min at room temperature to remove the NPs then cleaned using C18 cartridges and vacuum dried. Dried peptides were resuspended in 0.1% aqueous formic acid and subjected to LC-MS/MS analysis using an Exploris 480 mass spectrometer fitted with a Vanquish Neo (both Thermo Fisher Scientific) and a custom-made column heater set to 60 °C. Peptides were resolved using an RP-HPLC column (75 μm × 30 cm) packed in-house with C18 resin (ReproSil-Pur C18–AQ, 1.9 μm resin; Dr. Maisch GmbH) at a flow rate of 0.2 μl/min. 
The following gradient was used for peptide separation: from 4% B to 10% B over 7.5 min to 35% B over 67.5 min to 50% B over 15 min to 95% B over 1 min followed by 10 min at 95% B to 5% B over 1 min followed by 4 min at 5% B. Buffer A was 0.1% formic acid in water and buffer B was 80% acetonitrile, 0.1% formic acid in water. The mass spectrometer was operated in DIA mode with a cycle time of 3 s. MS1 scans were acquired in the Orbitrap in centroid mode at a resolution of 120,000 FWHM (at 200 m / z ), a scan range from 390 m / z to 910 m / z , normalized AGC target set to 300%, and maximum ion injection time mode set to Auto. MS2 scans were acquired in the Orbitrap in centroid mode at a resolution of 15,000 FWHM (at 200 m / z ), precursor mass range of 400 to 900, quadrupole isolation window of 7 m / z with 1 m / z window overlap, a defined first mass of 120 m / z , normalized AGC target set to 3000% and a maximum injection time of 22 ms. Peptides were fragmented by HCD with collision energy set to 28% and one microscan was acquired for each spectrum. The acquired RAW files were searched individually using the Spectronaut (Biognosys v18.6) directDIA workflow against a Homo sapiens database (consisting of 20,360 protein sequences downloaded from Uniprot on 2022/02/22) and 392 commonly observed contaminants. Default settings were used.
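For readers who want to reproduce the downstream statistics described in the Data analysis subsection above (normalization by total protein intensity per technical replicate, log10 transformation, imputation of missing values with a constant of −10, and unequal-variance t-tests between groups), the following is a minimal Python sketch of that workflow. The example matrix, protein names, and column labels are hypothetical placeholders rather than actual Spectronaut output from this study, and the published analysis was carried out in R rather than Python.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical protein-by-replicate intensity matrix (rows: proteins, columns: replicates).
raw = pd.DataFrame(
    {"ctrl_1": [2.1e6, 5.0e4, np.nan], "ctrl_2": [1.9e6, 4.5e4, np.nan],
     "ptdchos_1": [1.0e6, 3.0e5, 8.0e4], "ptdchos_2": [1.1e6, 2.8e5, 9.0e4]},
    index=["ALB", "APOA1", "LOW_ABUNDANT_X"],
)

# 1) Normalize each technical replicate by its total protein intensity.
norm = raw / raw.sum(axis=0)

# 2) Log10-transform; impute missing values with a constant of -10 (as done for the heatmap).
log10 = np.log10(norm)
imputed = log10.fillna(-10)  # imputed matrix would feed the heatmap visualization

# 3) Welch's t-test (unequal variance) between the two groups for each protein.
#    Proteins not quantified in one group give a NaN p-value here and would be handled separately.
ctrl_cols, treat_cols = ["ctrl_1", "ctrl_2"], ["ptdchos_1", "ptdchos_2"]
pvals = {
    protein: stats.ttest_ind(row[ctrl_cols], row[treat_cols], equal_var=False).pvalue
    for protein, row in log10.iterrows()
}
print(pvals)
```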
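Similarly, the RMSD and per-residue RMSF quantities referenced in the MD post-processing analysis above reduce to simple averages over trajectory frames once the frames have been fitted to a reference. The NumPy sketch below illustrates the underlying arithmetic on a synthetic trajectory (per-atom fluctuations; grouping atoms by residue gives per-residue values). It is an illustration under the assumption of pre-aligned coordinates, not the GROMACS implementation used in this work.

```python
import numpy as np

def rmsd_per_frame(traj: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """RMSD of each frame from a reference structure.

    traj: (n_frames, n_atoms, 3) coordinates already fitted to the reference.
    ref:  (n_atoms, 3) reference coordinates (e.g., the energy-minimized ligand).
    """
    diff = traj - ref                                    # (n_frames, n_atoms, 3)
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=2), axis=1))

def rmsf_per_atom(traj: np.ndarray) -> np.ndarray:
    """Per-atom RMSF: fluctuation of each atom around its time-averaged position."""
    mean_pos = traj.mean(axis=0)                         # (n_atoms, 3)
    diff = traj - mean_pos
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=2), axis=0))

# Tiny synthetic example: 100 frames, 5 atoms, small random fluctuations around a reference.
rng = np.random.default_rng(0)
ref = rng.normal(size=(5, 3))
traj = ref + 0.1 * rng.normal(size=(100, 5, 3))
print(rmsd_per_frame(traj, ref).mean(), rmsf_per_atom(traj))
```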
Supplementary Information
Description of Additional Supplementary Files
Supplementary Data 1
Supplementary Data 2
Supplementary Data 3
Supplementary Data 4
Supplementary Data 5
Supplementary Data 6
Reporting Summary
Transparent Peer Review file
Historical Profile of Kurt Karl Stephan Semm, Born March 23, 1927 in Munich, Germany, Resident of Tucson, Arizona, USA Since 1996
5b26b4f8-160e-4c9e-85eb-5fc22f5b35b7
3113196
Gynaecology[mh]
The Society of Laparoendoscopic Surgeons wishes to express great sadness at the passing of our friend and colleague Kurt Karl Stephan Semm. Dr. Semm was an innovator and pioneer who cared for patients and friends with generosity, passion, and grace. He served the Society as an active member of the International Advisory Board and as a contributor to our journal, JSLS. Dr. Semm was an SLS Excel Award winner for his lifelong commitment to laparoscopic surgery and education. We will miss him as a colleague, a physician, and a friend.
Novel optimized drug delivery systems for enhancing spinal cord injury repair in rats
658b8152-0f13-49fb-b303-2f880984ecd4
8648032
Pharmacology[mh]
Spinal cord injury (SCI), one of the most serious injuries of the central nervous system, has a high rate of disability, serious complications, and a significant negative impact on daily life (McDonald & Sadowsky, ). Effective treatment at the early stages of SCI can greatly influence its prognosis. However, because of short drug action cycles and unclear targets, current treatment strategies still have limitations. The pathological processes of SCI include primary and secondary injury (Ambrozaitis et al., ). Secondary injury is a long-term regulatory process at the cellular and molecular levels. In addition, its consequences are more serious than those of primary injury, and it is the main focus of current research on treatment strategies. The pathological mechanisms of secondary SCI are complex (Ahuja et al., ; Karsy & Hawryluk, ), and include oxidative stress, mitochondrial dysfunction, nerve cell apoptosis, inflammatory response, lipid peroxidation and glutamate receptor overactivation. Among these, inflammation (Orr & Gensel, ) and oxidative stress (Jia et al., ) have important roles in the progression of SCI. In addition, because blood vessels in SCI tissue are ruptured and sheared (Yao et al., ), it is quite difficult to achieve a high drug concentration at the local injury site and to maintain continuous, effective drug action. Following spinal cord injury, melatonin (Mel) exerts neuroprotective effects by attenuating the inflammatory response (Yang et al., ) and the oxidative stress response (Yuan et al., ). However, Mel has several disadvantages, including poor water solubility and easy decomposition. Thus, the purpose of this study is to establish a highly targeted drug delivery method that delivers drugs with definite efficacy (i.e. Mel) to intervene prior to secondary SCI and that can meet the needs of complex clinical applications. Sustained-release microspheres (MS) are spherical entities formed by the dissolution or dispersion of drugs in a polymer matrix. MS have several advantages, including improved drug solubility, permeability and bioavailability (Hu et al., ). MS are often biodegradable and harmless to organisms, and are commonly composed of chitosan, methacrylate, gelatin, or poly(lactic-co-glycolic acid) (PLGA); these materials can help achieve efficacy in disease treatment. Studies have reported that methacrylate spheres loaded with diclofenac sodium have excellent biocompatibility and can attenuate osteoarthritis (Yang et al., ). IL4-loaded gelatin MS switched macrophages from a proinflammatory M1 phenotype to a pro-healing M2 phenotype, which efficiently resolved inflammation and ultimately enhanced osteoblastic differentiation and bone regeneration (Hu et al., ). Furthermore, a PLGA-MS-based growth factor sustained-release system has been reported to promote sciatic nerve repair in rats (Zhang et al., ). Therefore, the application of MS in SCI has been considered a potential treatment. Hydrogels are hydrophilic polymers with a three-dimensional network structure, which can absorb and retain a large amount of water or biological fluids. Based on their biocompatibility, biodegradability and good tolerance, hydrogels have been widely used in the field of targeted drug delivery and controlled release (Oliva et al., ).
Laponite XLG (Na+0.7[(Si8Mg5.5Li0.3)O20(OH)4]−0.7) is a type of biocompatible hydrogel nanomaterial with a special structure, which includes a positively charged edge and a negatively charged surface (Das et al., ). This nanomaterial generates a stable nanoscale platelet dispersion with a large surface area, and forms a “House of Cards” structure when dispersed in solution owing to electrostatic adsorption (Dávila & d’Ávila, ). Furthermore, this structure can be degraded into nontoxic products (Na+, Mg2+, Si(OH)4 and Li+); Na+ and Mg2+ are beneficial to nerve cells (Tomás et al., ; Zhai et al., ). Based on these features, the nanomaterial is often used in the field of central nervous system repair and regenerative medicine. The Brimonidine-LAPONITE® intravitreal formulation has been reported to have an ocular hypotensive and neuroprotective effect in a glaucoma animal model (Rodrigo et al., ). In addition, a Laponite hydrogel bridge loaded with FGF4 has been shown to treat SCI (Wang et al., ). However, few studies have evaluated whether Laponite hydrogel can help stabilize additional drug sustained-release biomaterials in models of nerve damage. Biological membrane coating is an effective tool for improving the biological properties of nanoparticle drug carriers (Zou et al., ). The membrane is extracted nondestructively by a variety of physical and chemical methods and then wrapped onto the surface of an inorganic or organic nanocarrier, which thereby acquires biological functions similar to those of the source cells. Cell membrane-coated nanodrugs mimic their source cells: the natural cell membrane has good biocompatibility, can interact with the in vivo microenvironment, can recognize and target sites associated with the source cells, extends the blood half-life, enhances accumulation in the target area, reduces immunogenicity, and minimizes side effects (Luk & Zhang, ; Kroll et al., ; Qin et al., ). The sources of biomimetic materials include red blood cells, white blood cells, stem cells, tumor cells and platelets. Among these, platelets are an important cell type involved in coagulation and hemostasis, the innate immune response, and bacterial infection. Studies using platelet membranes as nanocoating carriers in the treatment of cardiovascular atherosclerosis (Wei et al., ), rheumatoid arthritis (Jin et al., ), and cancer (Jiang et al., ) have led to desirable outcomes. Furthermore, platelet membranes naturally target hemorrhagic and inflammatory sites, and do not need to rely on passive targeting or on active targeting by ligands and external stimulation. Thus, in view of the pathological characteristics of secondary SCI, the application of platelet membranes is highly promising. Herein, given the clinical complexity of patients with SCI and the need for different routes of administration, we designed two novel injectable microsphere drug delivery systems. In the Lap/MS@Mel drug delivery system, Mel-loaded MS (drug-loading efficiency, DL%, 7.2–9.1%) are mixed with Laponite hydrogel; the Laponite hydrogel maintains the bioactivity of Mel and prolongs and stabilizes the release of Mel from the MS into the SCI tissue, thus synergistically repairing damaged nerves. PM/MS@Mel is another delivery system that we designed, based on nanospheres, for the clinical treatment of SCI.
The nanoscale MS can pass through the various narrow barriers in the blood system to reach the site of injury. Importantly, coating the MS with platelet membranes can increase the stability, biocompatibility and targeted release of the microsphere sustained-release system in the blood. Multiple comprehensive assessments, including functional, histological and morphological evaluations, were performed to evaluate the biological effects of the Lap/MS@Mel gel and PM/MS@Mel. In addition, the novel sustained-release system helps Mel shift the balance of macrophage subsets from the pro-inflammatory M1 phenotype to the anti-inflammatory M2 phenotype, which reduces the loss of the biomaterial. Overall, the novel MS-based sustained-release system provides more precise and efficient delivery of melatonin and promotes recovery from SCI. Lap/MS@Mel hydrogels and PM/MS@Mel nanoparticle fabrication In order to prepare Lap/MS@Mel hydrogels, 10 mg of melatonin (Aladdin, Shanghai, China) and 100 mg of PLGA (molecular weight 40,000; LA:GA = 50:50, Aladdin, Shanghai, China) were dissolved in 3 mL of dichloromethane solution (National Pharmaceutical Group, Shanghai, China) as the oil phase. Then, they were mixed with 10 mL of deionized aqueous solution containing (2–4%, w/v) polyvinyl acetate. The emulsion was then prepared by phacoemulsification (400 ms/time, 8 times) under ice bath conditions. The emulsion was quickly added to 30 mL of aqueous solution, stirred overnight at 200–300 rpm, and the dichloromethane was volatilized. Afterwards, the MS loaded with Mel were collected via centrifugation at 10,000 rpm, washed with distilled water three times, and freeze-dried for 24 h. Blank MS without drugs were prepared using the same method. Then, 1.5 g of Laponite powder (Bick Chemical Co., Ltd, Germany) loaded with MS@Mel was dissolved in 50 mL of double-distilled water by stirring for 2 h in order to form Lap/MS@Mel hydrogels. For PM/MS@Mel nanoparticles, platelet membrane (PM) extraction was performed as follows: 10% Acid Citrate Dextrose (ACD, Solarbio, Beijing, China) anticoagulant was added to rat blood, and platelet-rich plasma (PRP) was collected after centrifugation at approximately 10,000 rpm for 20 min. The platelet precipitate was then collected after the PRP was centrifuged at approximately 2000 rpm for 20 min.
The obtained platelet membranes and MS@Mel nanoparticles were then ultrasonicated for 30 min in an ultrasonic cleaning apparatus, and the obtained mixture was PM/MS@Mel. Characterizations For encapsulation efficiency (EE), the sample (MS@Mel) was added to an Eppendorf (EP) tube after weighing the tube. Next, the total weight of the EP tube was measured after lyophilization. The weight of the sample (MS@Mel) (WT) is equal to the total weight minus the weight of the EP tube. Next, the sample was washed with distilled water, the supernatant was collected after centrifugation at 12,000 rpm, and the amount of free Mel (WM) was detected by UV absorbance at 270 nm. The sediment (MS) was dissolved in dichloromethane containing deionized aqueous solution and stirred for 4 h, and the amount of encapsulated Mel (EM) was calculated from the UV absorbance at 270 nm. The encapsulation efficiency (EE) and drug loading efficiency (DL) were calculated according to the following formulas: EE% = EM/(WM + EM) × 100%; DL% = (EM/WT) × 100% (a short worked example is given below). For Lap/MS@Mel hydrogels, EE% = 69.3–71.5% and DL% = 7.2–9.1%. For PM/MS@Mel nanoparticles, EE% = 29.8–31.1% and DL% = 2.7–3.5%. For the drug release experiment, the sample was placed in 10 mL of simulated body fluid (pH = 7.4), and drug release was carried out at a constant temperature (37 ± 1 °C). For the micro-gel compound, all of the release medium was changed at 1, 3, 5, 7, 14, 21, and 28 d, respectively, and the absorbance of the release medium was measured at the same time points. For the nano-PM compound, part of the release medium was exchanged through a dialysis bag (Sbjbio Biotechnology Co., LTD, Nanjing, China) at 6, 12, 24, 48, 72, 96, 120, 144, and 168 h, respectively. The released amount was calculated according to the standard curve equation. The zeta potentials of the samples were investigated on a Zetasizer (Malvern Instruments, UK), and the absorption spectra were evaluated on a TU-1810 UV-Vis spectrophotometer and by Fourier transform infrared (FTIR) spectroscopy. TEM images were obtained using a transmission electron microscope (Tecnai F20, FEI). Animals and spinal cord injury rat model construction Sprague-Dawley (SD) rats were purchased from the Shanghai Laboratory Animal Center, Chinese Academy of Science (Shanghai, China). All animal experiments were conducted in compliance with guidelines and followed a protocol approved by the Research Ethics Committee of Zhejiang University, China. In order to develop the spinal cord injury (SCI) model, animals were anesthetized with 2% (w/v) pentobarbital sodium (intraperitoneal injection; 40 mg/kg). The T9 lamina was removed, and then the T9 spinal cord was clamped with a 30 g vascular clamp (Oscar, China) for 1 min. Immediately, the Lap/MS hydrogel, a free Mel solution (20 mg), MS containing 20 mg Mel, or Lap/MS hydrogel containing 20 mg Mel was orthotopically injected to cover the injured site. In addition, the PM/MS solution, a free Mel solution (5 mg/kg), MS containing Mel (5 mg/kg) or PM/MS containing Mel (5 mg/kg) was injected via the caudal vein at an interval of seven days. Locomotion recovery assessment The motor function of both hind limbs was analyzed on days 1, 3, 7, 14 and 28 after the operation using the Basso Beattie Bresnahan (BBB) scale. The physiological changes, including the range and number of joint movements, body balance, weight-bearing and coordination of the hind and forelimbs, were scored on a scale ranging from 0 (complete paralysis) to 21 (normal locomotion).
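As a worked illustration of the EE% and DL% formulas given in the Characterizations subsection above, the short Python sketch below carries out the calculation; the input weights are hypothetical numbers chosen only to show the arithmetic and are not measurements from this study.

```python
def encapsulation_efficiency(em_mg: float, wm_mg: float) -> float:
    """EE% = encapsulated Mel / (free Mel + encapsulated Mel) x 100."""
    return em_mg / (wm_mg + em_mg) * 100.0

def drug_loading(em_mg: float, wt_mg: float) -> float:
    """DL% = encapsulated Mel / total weight of the MS@Mel sample x 100."""
    return em_mg / wt_mg * 100.0

# Hypothetical example: 7.0 mg Mel encapsulated (EM), 3.0 mg free Mel in the wash (WM),
# 90.0 mg total lyophilized MS@Mel (WT).
em, wm, wt = 7.0, 3.0, 90.0
print(f"EE% = {encapsulation_efficiency(em, wm):.1f}")  # 70.0, within the 69.3-71.5% range reported for Lap/MS@Mel
print(f"DL% = {drug_loading(em, wt):.1f}")              # 7.8, within the 7.2-9.1% range reported
```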
ELISA The samples were measured for the expression of tumor necrosis factor-alpha (TNF-α), interleukin-6 (IL-6), glutathione peroxidase (GPx), malondialdehyde (MDA) and superoxide dismutase (SOD) using ELISA kits (Boster Biological Technology Co., Ltd, China). Western blotting Proteins from tissues were extracted using Radio Immunoprecipitation Assay (RIPA) lysis buffer. Then, 60 μg of protein was added per well and separated on 4–20% gels. The protein was then transferred to polyvinylidene fluoride (PVDF) membranes. Next, the PVDF membranes were blocked (5% fat-free milk) for one hour and incubated overnight at 4 °C with the following primary antibodies: c-caspase3 (1:1000, CST), Integrin α6 (1:1000, CST), CD41 (1:1000, CST), CD47 (1:1000, CST), CD62p (1:1000, CST), Histone H3 (1:1000, CST) and GAPDH (Cat: RT1210-1, Huabio). Next, the PVDF membranes were incubated with secondary antibodies at room temperature for one hour and visualized with a ChemiDoc XRS+ Imaging System (Bio-Rad Laboratories, Hercules, CA, USA). Immunofluorescence The samples were subjected to dewaxing, rehydration, dehydration and antigen retrieval, and then blocked with 5% bovine serum albumin for 30 min. These samples were incubated with primary antibodies, anti-cleaved caspase 3 (1:400), anti-Iba-1 (1:200), anti-Arginase-1 (1:200) or anti-iNOS (1:100), at 4 °C overnight. This was followed by incubation with secondary antibodies for one hour. All images were captured by a confocal laser microscope (Nikon, A1PLUS, Tokyo, Japan). Hematoxylin and eosin (H&E) staining The tissue sections were stained with H&E and crystal violet according to the manufacturer's instructions. The images were captured by a Nikon ECLIPSE 80i (Nikon, Tokyo, Japan). Statistical analysis Data are presented as means ± SEM. One-way ANOVA, followed by Tukey's post hoc test, was used to determine differences among multiple groups. Repeated-measures two-way mixed ANOVA, followed by Tukey's test, was utilized to detect differences between groups in BBB scores. A p-value < .05 was considered significant.
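As a concrete illustration of the group comparisons described in the Statistical analysis subsection (one-way ANOVA followed by Tukey's post hoc test), the following Python sketch performs the same two steps on a toy data set; the group labels and score values are hypothetical placeholders rather than data from this study, and the original analysis was not necessarily performed in Python.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical BBB-like scores for four treatment groups (placeholders only).
groups = {
    "SCI":        [3, 4, 3, 5, 4],
    "free Mel":   [6, 7, 6, 8, 7],
    "MS@Mel":     [9, 8, 10, 9, 9],
    "Lap/MS@Mel": [12, 11, 13, 12, 12],
}

# One-way ANOVA across all groups.
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey's HSD post hoc test on the pooled observations.
values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```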
Characteristics of Laponite and Lap/MS@Mel gels An MS-based Laponite hydrogel sustained-release system was synthesized. The morphology of the Laponite and Lap/MS@Mel hydrogels was observed using a scanning electron microscope (SEM); MS with a diameter of approximately 40 μm were attached to the Lap hydrogel.
In order to examine whether Mel and PLGA bind successfully, we measured the zeta potential and found that the zeta potential values of MS@Mel, the Laponite hydrogel and Lap/MS@Mel were −22, −29 and −63 mV, respectively, which supports the hypothesis that PLGA MS have an affinity for Lap hydrogels via electrostatic interaction. Drug release tests demonstrated that Lap/MS@Mel continuously released Mel for at least 28 days in vitro, whereas a negligible amount of Mel was released from MS@Mel after day 7. The characteristics of PM/MS@Mel As shown in , a platelet membrane (PM) was coated onto the outside of the MS@Mel nanoparticles. Transmission electron microscopy (TEM) demonstrated a thin biofilm wrapped around the surface of the MS. The characteristic absorption peaks of MS@Mel and PM/MS@Mel appear in the 250–300 nm wavelength range, which is consistent with the absorption of melatonin. The FT-IR results showed that, for free Mel, the characteristic absorption peak of N-H stretching vibration was visible at 3280 cm−1. Furthermore, characteristic absorption peaks of the benzene ring C–H stretching vibration were observed at 1550 and 1210 cm−1. For PLGA, there is an absorption peak generated by C=O stretching vibration at 1750 cm−1, and the absorption peak at 2950 cm−1 corresponds to saturated C–H stretching vibration. All of the above peaks are present in the encapsulated nanoparticles (MS@Mel, PM/MS@Mel), indicating that encapsulation was successful. The diameters of MS, MS@Mel and PM/MS@Mel were 196.8, 222.1 and 280.7 nm, respectively. Hence, this result indicates that melatonin was favorably wrapped by the PLGA MS. In order to further test whether the MS@Mel nanoparticles had been favorably wrapped by PM, the protein markers on PM were investigated. Coomassie blue staining and western blotting results demonstrated that the unique platelet membrane protein markers (Integrin α6, CD41, CD47, CD62p) and the nuclear protein marker Histone H3 were detected in the PM and PM/MS@Mel groups, whereas these indicators were not found in the MS group. Compared with the MS@Mel nanoparticles, the size and the absolute value of the zeta potential of the PM/MS@Mel nanoparticles were slightly increased. The drug release test results demonstrated that PM/MS@Mel continuously released Mel for at least 7 d in vitro. The drug delivery system improved pathology and motor function after SCI In order to detect neurological deficits in rats, the Basso Beattie Bresnahan (BBB) score was assessed within four weeks after surgery. The BBB scores of the Lap/MS@Mel group were found to be significantly higher than those of the Mel and MS@Mel groups at all time points from two weeks post-surgery. The H&E staining and photographs of spinal cord tissue demonstrated that severe injury was observed at the injured site of the spinal cord at day 28 after SCI. The hierarchy of the relative lesion area was as follows: SCI > Lap/MS > free Mel > MS@Mel > Lap/MS@Mel. The footprint test demonstrated that the rats in the Lap/MS@Mel group presented a coordinated and consistent posterior limb footprint at day 21 after SCI. Meanwhile, compared with the SCI group, the width of the blue ink streaks was still increased in the Lap/MS, MS@Mel and free Mel groups.
Encouraged by the nerve repair efficacy of the micro-gel compound delivered via in situ injection in the SCI rat model, we next evaluated the efficacy of another common clinical route of administration, intravenous administration (the nano-PM compound), for nerve repair. As presented in , the BBB score in the PM/MS@Mel group was higher than those of the free Mel, MS@Mel and PM/MS groups from 7 to 28 days post-surgery. Having demonstrated that Mel MS have a neurorepair function in the rat model of SCI, we then evaluated the ability of the nanoparticles to target spinal cord injury tissue in the intravenous microsphere experiment. As illustrated in , using MS conjugated with Cy7.5 (MS-Cy7.5), the fluorescence intensity in the PM/MS@Mel group was much stronger than that in the MS@Mel group, and the fluorescence intensity in the spinal cord gradually decreased over time. The H&E staining, footprint test and photographs of spinal cord tissue showed that the platelet membrane-coated drug delivery MS had a better ability to repair nerve function than the uncoated drug delivery MS. The drug delivery system restrained SCI-induced apoptosis, oxidative stress and inflammatory response To further verify that the drug delivery system enhances the inhibitory effect of Mel on apoptosis in spinal cord tissue after SCI, we examined the expression of active-caspase3 protein in the spinal cord tissue. For the micro-gel compound, we found that the hierarchy of active-caspase3 was as follows: SCI > Lap/MS > free Mel > MS@Mel > Lap/MS@Mel. The effects of Mel on inflammation and oxidative stress have previously been demonstrated by numerous studies (Arioz et al., ; Rehman et al., ; Jauhari et al., ). In order to examine the effectiveness of the drug delivery systems, we utilized the corresponding kits to measure the spinal cord tissue levels of malondialdehyde (MDA), glutathione peroxidase (GPx) and superoxide dismutase (SOD). As shown in , the Lap/MS@Mel gels were able to inhibit the oxidative stress response in the rat model of SCI. Furthermore, the hierarchy of MDA was as follows: SCI > Lap/MS > free Mel > MS@Mel > Lap/MS@Mel, whereas the hierarchy of GPx and SOD was as follows: SCI < Lap/MS < free Mel < MS@Mel < Lap/MS@Mel. For the nano-PM compound, the expression of active-caspase3 protein and the levels of MDA, GPx and SOD in the spinal cord tissue showed results similar to those of the micro-gel compound. The drug delivery system influenced the macrophage/microglia M1-M2 polarization balance after SCI As shown in , the Lap/MS@Mel gels were able to reduce the secretion of inflammatory factors in the rat model of SCI. Furthermore, the hierarchy of IL-6 and TNF-α was as follows: SCI > Lap/MS > free Mel > MS@Mel > Lap/MS@Mel. We demonstrated that the optimized MS promote the anti-inflammatory effects of melatonin in SCI model rats. Because the phenotypic transition of macrophages/microglia plays an important role in the inflammatory response to SCI, we speculated that the optimized MS act by enabling melatonin to inhibit macrophage/microglia polarization toward the M1 phenotype. In order to determine the effect of the optimized MS on M1 polarization, we examined a marker of M1 macrophages/microglia (iNOS) in spinal cord tissue at seven days after injury. Our results showed that iNOS levels were decreased in the Lap/MS@Mel group, and the hierarchy of iNOS was as follows: SCI ≥ Lap/MS > free Mel > MS@Mel > Lap/MS@Mel. Furthermore, we examined the effect of the optimized MS on M2 polarization.
The data showed that the optimized melatonin MS increased the expression of Arginase1 in spinal cord tissue at seven days after injury. Furthermore, the hierarchy of Arginase1 was as follows: SCI ≤ Lap/MS < free Mel < MS@Mel < Lap/MS@Mel. These results showed that the microsphere drug delivery system promotes the function of melatonin by regulating macrophage/microglia M1-M2 polarization after SCI.
Our results showed that iNOS levels were decreased in the Lap/MS@Mel group, and the hierarchy of iNOS was as follows: SCI ≥ Lap/MS > free Mel > MS@Mel > Lap/MS@Mel. Furthermore, we examined the effect of the optimized MS on M2 polarization. The data showed that the optimized melatonin MS increased expression of Arginase1 in spinal cord tissue at seven days after injury. Furthermore, the hierarchy of Arginase1 was as follows: SCI ≤ Lap/MS < free Mel < MS@Mel < Lap/MS@Mel. These results showed that the microsphere drug delivery system promotes the function of melatonin by regulating macrophage/microglia M1-M2 polarization after SCI. To date, the clinical treatment of SCI includes drug therapy combined with physical therapy (Rehman et al., ), such as calcium channel antagonists, hormones, and naloxone combined with hyperbaric oxygen therapy, which are used to reduce or eliminate secondary pathological reactions in the acute phase of injury and to protect the remaining axons and neurons from secondary injury. Furthermore, surgical treatment eliminates physical damage and promotes regeneration and repair of nerve tissue during the chronic period of injury. Although some of the damaged neurons are able to regenerate through the above treatment strategies, the sensory and motor abilities of patients with SCI continue to deteriorate because of the persistence of secondary injury, and no fundamental breakthrough has yet been achieved for the functional deficits and obstacles caused by trauma. Melatonin plays a vital role in human organs, as it regulates many essential pathological and physiological functions. Studies have confirmed that melatonin accelerates the recovery of sciatic and hippocampal (Li et al., ) nerve injury (Rateb et al., ) in rats by inhibiting inflammatory cytokine secretion and oxidative stress. However, owing to its limited water solubility, melatonin lacks specificity and stability in its distribution in the body. After melatonin enters the body, it is vulnerable to protease hydrolysis, which results in low bioavailability after oral, intravenous or in situ administration. In addition, the blood-spinal cord barrier can also compromise the treatment effect. To overcome these disadvantages, higher or repeated doses are often selected during treatment, which can consequently result in unexpected damage to the human body. Direct or indirect infusion methods, such as intrathecal infusion of melatonin, can effectively increase drug concentration in the spinal cord. However, the duration of drug action is short, the procedure is invasive and complicated, and it can easily cause infection or aggravate the spinal cord injury. For these reasons, the clinical application of melatonin in the treatment of SCI remains limited. As an effective method of drug delivery, drug-loaded sustained-release MS have been reported to improve the efficacy of drug-based repair of peripheral nerve injury (Zhuang et al., ; Rao et al., ). In addition, MS have the potential to prolong the retention time of melatonin in the nasal mucosa and to improve bioavailability as well as the therapeutic effect (Nižić et al., ). Furthermore, studies have shown that melatonin-loaded PLGA MS can help improve the efficacy of glaucoma treatment (Arranz-Romera et al., ). However, the burst release associated with MS reduces their efficacy and increases the risk of unpredictable side effects in nerve repair and tissue regeneration engineering (Rambhia & Ma, ; Dong et al., ).
Earlier work has utilized composite hydrogel-microsphere delivery systems to reduce the burst release and improve regeneration of various tissues (Elisseeff et al., ; Dyondi et al., ; Karam et al., ). Therefore, we speculate that hydrogel-bound MS can help achieve stable microsphere degradation and drug release, which can help avoid the peak-and-valley fluctuations in blood drug concentration caused by microsphere burst release, reduce the administered dose over the treatment cycle, and improve drug bioavailability and patient compliance. Laponite hydrogel is often utilized in tissue regeneration engineering due to its unique properties, and one study highlighted the great potential of Laponite-enhanced hydrogel MS in vascularized dental pulp regeneration (Zhang et al., ). Our results show that Lap hydrogels reduce the initial burst release of MS and that, compared with MS@Mel, Lap/MS@Mel presents a more stable sustained release and a better ability to promote neural recovery after SCI via in situ injection. After SCI, macrophages rapidly polarize into the M1 phenotype and release inflammatory cytokines that aggravate inflammation, inhibit axon regeneration, promote lipid oxidation and degradation, and affect cell membrane fluidity and permeability, resulting in re-injury of neurons and glial cells (David & Kroner, ; Milich et al., ). In addition, macrophages are recruited to the surface of the biomaterial after implantation and become polarized; a series of inflammatory cytokines is then secreted, which may lead to degradation and failure of the biomaterial. Regulation of this type of cell has become an urgent problem. Our data show that the MS-based sustained-release system enhances the ability of Mel to regulate the transformation of macrophages/microglia from the M1 phenotype to the M2 phenotype in SCI tissue. Furthermore, we demonstrated the effectiveness of the in situ Lap/MS@Mel gel system for SCI repair. However, owing to injury from the in situ injection device and the post-injection volume effect, improper operation can cause additional spinal cord injury. Given the clinical complexity of patients with SCI, some patients are not suitable for in situ injection; therefore, other options for administration are needed. Moreover, intravenously injected MS are consumed by macrophages of the reticuloendothelial system and blocked by the blood-spinal cord barrier; they have difficulty passing through the various narrow spaces of the circulatory system and cannot effectively reach the injured spinal cord tissue. It is a common strategy to modify the surface and alter the charge of MS, as this can increase the affinity of the MS for a specific target and prolong the retention time of the drug in vivo, leading to enhanced drug efficacy. Previous studies have confirmed that intravenous injection of drug-delivering nanospheres can promote optic nerve regeneration (Robinson et al., ). Furthermore, nanoparticles modified by platelet membrane cloaking show reduced cellular uptake by macrophage-like cells as well as improved disease-targeted delivery (Hu et al., ). Based on these studies, we designed the PM/MS@Mel nano sustained-release system and applied it to the treatment of SCI for the first time. Our experimental results demonstrate that the nanoscale MS can smoothly reach the injured site through the various barriers and gaps of the blood system and exert a nerve repair effect.
Meanwhile, compared with MS@Mel, PM/MS@Mel increases the biocompatibility of the MS and the precision of delivery by taking advantage of the targeting characteristics of the platelet membrane. In conclusion, this study addresses the clinical complexity of patients with SCI, as we designed and synthesized two novel optimized drug delivery MS. The Lap/MS@Mel system, in which high-loading-efficiency PLGA MS are mixed with the Laponite hydrogel, was found to facilitate and prolong melatonin delivery to the damaged spinal cord via in situ injection in vivo. For another common drug delivery approach in the clinical treatment of SCI, we synthesized the PM/MS@Mel nanoscale sustained-release system, which uses nanoscale MS loaded with melatonin to avoid being blocked by the various barriers and membrane gaps in the blood system. Meanwhile, the biomimetic platelet membrane coating increases the biocompatibility of the MS, preventing them from being engulfed by macrophages in the blood, and improves the precision of delivery. The Lap/MS@Mel gel and PM/MS@Mel exerted neuroprotective effects and restrained oxidative stress and inflammatory reactions. Importantly, the novel optimized drug delivery MS enhanced Mel-mediated inhibition of macrophage/microglia polarization to the M1 phenotype and thus prevented the biomaterial from being destroyed. The neuroprotective benefits of the two delivery systems are in line with clinical treatment strategies; they have enormous potential and represent a clinically feasible therapeutic approach for patients suffering from SCI.
Integrating Retrieval-Augmented Generation with Large Language Models in Nephrology: Advancing Practical Applications
922ea876-6c16-4eee-b4f2-ec692222d388
10972059
Internal Medicine[mh]
Large language models (LLMs) are a sophisticated type of artificial intelligence (AI), specifically designed for understanding and generating human-like language, thereby earning the alternative designation of chatbots. These models, like the generative pre-trained transformer (GPT) series, are trained on vast datasets of text and can generate text that is often indistinguishable from that written by humans. They can answer questions, write essays, generate creative content, and even write code, based on the patterns they have learned from the data they were trained on. The capabilities of LLMs continue to expand, making them a crucial and transformative technology in the field of AI . ChatGPT, a prominent generative LLM developed by OpenAI, was released towards the close of 2022 . The most recent version, GPT-4, is equipped with advanced capabilities in both text and image analysis and is collectively referred to as GPT-4 Vision . Alongside ChatGPT, the current landscape of widely utilized LLMs encompasses Google’s Bard AI and Microsoft’s Bing Chat . These technological advancements have enabled their adoption and implementation across various domains, including business, academic institutions, and healthcare fields . Within the healthcare sector, the evolving influence of AI is reshaping traditional practices . Tools like AI-driven LLMs possess a significant potential to enhance multiple aspects of healthcare, encompassing patient management, medical research, and educational methodologies . For example, some investigators have shown that they can offer tailored medical guidance , distribute educational resources , and improve the quality of medical training . These tools can also support clinical decision making , help identify urgent medical situations , and respond to patient inquiries with understanding and empathy . Extensive research has shown that ChatGPT, particularly its most recent version GPT-4, excels across various standardized tests. This includes the United States Medical Licensing Examination ; medical licensing tests from different countries ; and exams related to specific fields such as psychiatry , nursing , dentistry , pathology , pharmacy , urology , gastroenterology , parasitology , and ophthalmology . Additionally, there is evidence of ChatGPT’s ability to create discharge summaries and operative reports , record patient histories of present illness , and enhance the documentation process for informed consent , although its effectiveness requires further improvement. Within the specific scope of our research in nephrology, we have explored the use of chatbots in various areas such as innovating personalized patient care, critical care in nephrology, and kidney transplant management , as well as dietary guidance for renal patients and addressing nephrology-related questions . Despite these advancements, LLMs face notable challenges. A primary concern is their tendency to generate hallucinations—outputs that are either factually incorrect or not relevant to the context . For instance, the references or citations generated by these chatbots are unreliable . Our analysis of 610 nephrology-related references showed that only 62% of ChatGPT’s references existed. Meanwhile, 31% were completely fabricated, and 7% were partial or incomplete . We also compared the relevance of ChatGPT, Bing Chat, and Bard AI in nephrology literature searches, with accuracy rates of only 38%, 30%, and 3%, respectively . 
The occurrence of hallucinations during the literature searches, combined with the suboptimal accuracy in responding to nephrology inquiries and correctly identifying oxalate, potassium, and phosphorus in diets , compromises the reliability or dependability of LLM outputs, raising significant concerns about their practical application. In critical areas like healthcare decision making, the impact of such inaccuracies is considerably heightened, highlighting the need for models that are more reliable and precise. To address these challenges, various strategies have been developed. One such strategy is prompt engineering, like the multiple-shot or chain-of-thought prompting techniques . This approach involves structuring the input prompt to encourage the model to break down the problem into intermediate steps or reasoning sequences before arriving at a final answer. By explicitly asking the model to generate a step-by-step explanation or “thought process”, chain-of-thought prompting helps the model tackle multistep reasoning problems more effectively, potentially leading to more accurate and interpretable answers . Although this approach has proven beneficial in several contexts, it is not without its limitations. Concerns like scalability and the risk of embedding biases present significant challenges, necessitating meticulous prompt engineering to maintain the model’s adaptability while safeguarding its efficiency. Another strategy to enhance LLMs’ ability is the retrieval-augmented generation (RAG) technique . The primary advantage of the RAG approach is that it allows the LLM to access a vast external database of information, effectively extending its knowledge beyond what was available in its training data. This can significantly improve the model’s performance, especially in generating responses that require specific factual information or up-to-date knowledge. This review aims to explore the potential application of LLMs integrated with RAG in nephrology. This review also provides an analysis of the strengths and weaknesses of RAG. These observations are essential for appraising the potential of sophisticated AI models to drive notable advancements in healthcare sectors, where both precision and contemporary knowledge are of utmost importance, thus redefining the benchmarks for AI deployment in key domains. The RAG approach is a method used in natural language processing and machine learning that combines the strengths of retrieval-based and generative models to improve the quality of generated text . This approach is particularly useful in tasks such as question answering, document summarization, and conversational agents. In the dynamic field of medicine, the unique capability of the RAG system to access external medical databases in real time allows the LLM to base its responses on the latest research, clinical guidelines, and drug information . To generate more accurate and contextually relevant responses, the RAG approach combines the strengths of two components including the retrieval and generation components. The former component is responsible for fetching relevant information or documents from a large database or knowledge source provided to the LLMs. The retrieval is typically based on the input query or context, aiming to find content that is most likely to contain the information needed to generate an accurate response. The latter component takes the input prompt along with the retrieved documents or information from the retrieval component and generates a response. 
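To make the retrieval and generation components concrete, the sketch below pairs a toy TF-IDF retriever with a step that folds the retrieved passages into a grounded prompt. It is an illustrative example only, not an implementation described in this review; the corpus snippets, function names, and the choice to print the prompt instead of calling an LLM API are assumptions made so the example stays self-contained.

# Minimal retrieval-augmented generation (RAG) sketch.
# The corpus snippets and helper names are hypothetical placeholders,
# not content from any guideline or from this review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# 1) Retrieval component: index a small external knowledge source.
corpus = [
    "Guideline snippet A: example text about blood pressure targets in CKD.",
    "Guideline snippet B: example text about proteinuria monitoring.",
    "Guideline snippet C: example text about referral criteria for nephrology.",
]
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus snippets most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

# 2) Generation component: ground the prompt in the retrieved snippets.
def build_grounded_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # In a real system this prompt would be sent to an LLM API;
    # here we only print it to show how retrieval grounds generation.
    print(build_grounded_prompt("How should proteinuria be monitored?"))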
The generation component uses the context provided by the retrieved documents to inform its responses, making them more accurate, informative, and contextually relevant. The RAG approach is particularly beneficial in scenarios where the model needs to provide information that may not have been present in its training set or when the information is continually updated. By grounding the responses in factual data, the RAG approach effectively reduces the occurrence of inaccuracy or hallucinations. However, the success of RAG depends on the quality and timeliness of the external data sources, and integrating these sources introduces additional technical complexities. Complementing these approaches is the process of fine-tuning, which involves adapting a pre-trained model to specific tasks or domains. This enhances the model’s capacity to process certain types of queries or content, thereby improving its efficiency and specificity for certain domains. While this method improves the model’s performance in specific areas, it also poses the risk of over-fitting in certain datasets, potentially limiting its broader applicability and increasing the demands on training resources. A recent study experimentally developed a liver disease-focused LLM model named LiVersa, incorporating the RAG approach with 30 guidelines from the American Association for the Study of Liver Diseases. This integration was intended to enhance LiVersa’s functionality. In the study, LiVersa accurately answered all 10 questions related to hepatitis B virus treatment and hepatocellular carcinoma surveillance. However, the explanations provided for three of these cases were not entirely accurate . Another study introduced Almanac, an LLM framework enhanced with RAG functions, which was specifically integrated with medical guidelines and treatment recommendations . This framework’s effectiveness was evaluated using a new dataset comprising 130 clinical scenarios. In terms of accuracy, Almanac outperformed ChatGPT by an average of 18% across various medical specialties. The most notable improvement was seen in cardiology, where Almanac achieved 91% accuracy compared to ChatGPT’s 69% . Moreover, they evaluated the performance of Almanac against conventional LLMs (ChatGPT-4 [May 24, 2023 version], BingChat [June 28, 2023], and Bard AI [June 28, 2023]) by testing the LLMs with a new dataset comprising 314 clinical questions across nine medical specialties. Almanac demonstrated notable enhancements in accuracy, comprehensiveness, user satisfaction, and resilience to adversarial inputs when compared to the standard LLMs . A recent investigation introduced a RAG system named RECTIFIER (RAG-Enabled Clinical Trial Infrastructure for Inclusion Exclusion Review), assessing its efficacy against that of expert clinicians in a clinical trial screening . The comparison revealed a high concordance between the responses from RECTIFIER and those from expert clinicians, with RECTIFIER’s accuracy spanning from 98% to 100% and the study staff’s accuracy from 92% to 100%. Notably, RECTIFIER outperformed the study staff in identifying the inclusion criterion of “symptomatic heart failure”, achieving an accuracy of 98% compared to 92%. In terms of eligibility determination, RECTIFIER exhibited a sensitivity of 92% and a specificity of 94%, whereas the study staff recorded a sensitivity of 90% and a specificity of 84%. 
These findings indicate that integrating a RAG system into GPT-4-based solutions could significantly enhance the efficiency and cost effectiveness of clinical trial screenings . The RAG’s strengths lie in its access to current information and its ability to tailor relevance. By utilizing the most recent data, the likelihood of offering outdated or incorrect information is greatly reduced. However, this approach also presents several challenges. The effectiveness of RAG’s responses is heavily dependent on the quality and currency of the data sources it uses. Adding RAG to LLMs also introduces an extra layer of complexity, which can complicate implementation and ongoing management. Moreover, there is a risk of retrieval errors. Should the retrieval system malfunction or fetch incorrect information, it could result in inaccuracies in the output it generates. The RAG integration is also valuable in nephrology, where staying abreast of the latest developments is crucial. This integration of current, validated data from external sources significantly reduces the likelihood of the LLMs providing outdated or incorrect information. 4.1. Integrating Latest Research and Guidelines The RAG approach has the unique capability to dynamically integrate the most recent findings from nephrology-related sources into the model’s outputs. This includes new research from nephrology journals, results from the latest clinical trials, or any updates in treatment guidelines. By doing so, the RAG approach ensures that LLMs are not only up-to-date but also highly relevant and accurate in the field of nephrology. For instance, consider a scenario where a nephrology specialist or an internist is seeking information about the latest management strategies for polycystic kidney disease (PKD). In such cases, the RAG can actively search for, retrieve, and incorporate information from the most recent guidelines and treatment protocols, such as the KDIGO 2023 clinical practice guideline for autosomal dominant polycystic kidney disease (ADPKD), and studies published in the PubMed database. This process involves not just accessing this information but also synthesizing it in a way that is coherent and directly applicable to the query at hand. By utilizing RAG, the physician is thus provided with information that is not only current but is also directly relevant to their specific inquiry. This approach is especially valuable in a field like nephrology, where advancements in research and changes in treatment protocols can have a significant impact on patient care. The ability of RAG to provide the latest knowledge helps healthcare professionals stay informed and make well-founded decisions in their practice. 4.2. Case-Based Learning and Discussion Employing RAG in educational settings can significantly enhance the learning process by incorporating detailed and real-life case studies into lectures, discussions, or interactive learning modules. This application of RAG is particularly useful in complex and dynamic fields like medicine. Take, for example, the education of medical students on the topic of complex electrolyte imbalances in chronic kidney disease (CKD). The RAG approach can be utilized to access and reference specific, real-world case reports or clinical scenarios relevant to this topic. By doing so, it can provide students with practical, tangible examples that illustrate the theoretical concepts they are learning. 
This not only aids in a deeper understanding of the subject matter but also helps students appreciate the real-world implications and applications of their knowledge. Moreover, RAG’s ability to retrieve the latest studies and reports ensures that the educational content is not only rich in practical examples but also current. This is especially vital in medical education, where staying abreast of the latest research and clinical practices is crucial. By integrating up-to-date case studies and scenarios, RAG can help create a more engaging and informative educational experience, preparing students for the challenges they will face in their medical careers. This approach can be extended to other complex medical topics, making learning more interactive, relevant, and evidence-based. 4.3. Multidisciplinary Approach In situations where a multidisciplinary perspective is essential, RAG proves to be particularly valuable as it can draw upon a wide array of medical disciplines to offer a more comprehensive understanding. This capability is critical in treating conditions that intersect multiple areas of healthcare. Consider the case of a patient suffering from diabetic nephropathy, for instance. This condition, being at the crossroads of diabetes and kidney health, requires a nuanced understanding from several medical specialties. The RAG system can effectively consolidate relevant information from endocrinology, focusing on diabetes management strategies; from cardiology, addressing the cardiovascular risks associated with the condition; and from nephrology, providing insights into preserving renal function. By integrating this diverse information, the RAG system can greatly assist healthcare professionals in developing a holistic and multifaceted treatment plan. This approach ensures that all aspects of the patient’s condition are considered, leading to more effective and comprehensive patient care. Such an integrated approach is beneficial not just in diabetic nephropathy but in any complex medical condition where multiple body systems are affected or where various specialties need to collaborate for optimal patient management. The ability of RAG to seamlessly merge insights from different medical fields into a cohesive whole enhances its utility in planning and implementing effective treatment strategies.
To illustrate the process of creating a customized ChatGPT model with a RAG strategy, we will use the field of nephrology as a reference, specifically focusing on CKD due to its prevalence in nephrology encounters.
This example will serve to demonstrate the steps and considerations involved in tailoring a ChatGPT model to a specific medical specialty, incorporating a specialized knowledge base. The aim is to enhance the model’s responses with precise, specialized knowledge, in this case, centered around CKD, guided by insights from the KDIGO 2023 Clinical Practice Guideline. Below is a detailed breakdown of the steps involved in this process. 5.1. Creation of a CKD-Focused Retrieval System This process involves the careful selection of knowledge sources, integration of guidelines, and regular updates to ensure accuracy and relevancy. The first step is to meticulously select a comprehensive database rich in information about CKD. This database should draw from a range of reliable sources, such as peer-reviewed academic journals, reports from clinical trials, and authoritative nephrology textbooks. A key focus is placed on incorporating the KDIGO 2023 CKD guidelines, which are recognized for their currency and authority in the field. Next, it is vital to directly integrate these KDIGO 2023 guidelines into the chosen database by creating a customized ChatGPT model. This process involves navigating to “My GPTs” and selecting “Create a GPT”. Following this, we have the opportunity to customize/configure our GPT by entering a name, description, and instructions, and by uploading the knowledge base(s) we wish to embed within the model. We can choose to restrict access to the model by selecting one of the following options: “Only me”, “Anyone with a link”, or “Everyone”. Once customized, the GPT will be accessible under “My GPTs”, where it will produce responses utilizing the incorporated database(s). This integration covers the detailed aspects of CKD, including diagnosis, staging, management, and treatment protocols. Such incorporation ensures that the model’s responses are in line with the most recent and accepted clinical practices. While ChatGPT operates based on its internal knowledge gained during training, RAG takes this a step further by dynamically incorporating external information into the generation process. The integration of a retrieval component in RAG could theoretically enhance ChatGPT by providing it access to a wider range of current information and specific data not covered during its training. 5.2. Development of a CKD-Focused Retrieval System The RAG system, specialized for CKD, is specifically configured to identify and respond to CKD-related queries accurately. It is adept at grasping the intricacies of CKD, including its various stages, the comorbid conditions often accompanying it, and the diverse methods of treatment available. Additionally, the system is fine-tuned for both speed and relevance, ensuring rapid and efficient access to relevant information from the comprehensive CKD database when processing queries. This optimization guarantees prompt and pertinent responses tailored to the specifics of CKD. Moreover, establishing a system for continuous updates to the database is crucial. This involves regularly reviewing and including new research findings, updated medical guidelines, and emerging treatment methods in nephrology. Keeping the database up to date guarantees that the information remains both current and authoritative, making it a reliable foundation for the model’s knowledge base. 5.3.
Integration with the Customized GPT-4 Model Integrating the customized GPT-4 model with the CKD retrieval system involves establishing strong and secure API (Application Programming Interface) connections. Firstly, it focuses on creating a robust connection that allows for the seamless flow of data between the customized ChatGPT model and the CKD retrieval system. This connection must be secure to protect sensitive medical information and ensure data integrity. Secondly, the customized ChatGPT model undergoes fine-tuning to harmonize the in-depth CKD information with its innate natural language processing abilities. This fine-tuning is critical to ensure that the model not only provides responses that are accurate and rich in CKD-specific information but also maintains clarity and appropriateness in the context of the user’s query. Through this integration, the model becomes capable of delivering responses that are not just factually correct but also tailored to the specific context of the query, whether it is a patient’s inquiry, a healthcare professional’s detailed question, or an educational scenario. This ensures that the model’s outputs are highly relevant, understandable, and useful for various users, ranging from medical practitioners and students to patients seeking information about CKD. 5.4. Customized Response for CKD Inquiries The integration of a customized GPT-4 model with a CKD-specialized RAG system brings a significant advancement in handling CKD-related inquiries. This integration leverages sophisticated algorithms to ensure that the ChatGPT model precisely recognizes the context and specific details of queries related to CKD, leading to highly relevant and tailored responses. This process operates on multiple levels, including contextual understanding, relevance of responses, access to updated information, and dynamic information integration. Through this integrated approach, the ChatGPT model becomes a powerful tool for providing accurate, up-to-date, and highly specific responses to a wide range of CKD-related inquiries. This capability is particularly valuable for healthcare professionals seeking quick and reliable information, patients looking for understandable explanations of their condition, and researchers needing the latest data in the field of nephrology. 5.5. Rigorous Testing with CKD Scenarios The system undergoes comprehensive testing in a variety of CKD situations. This testing encompasses a spectrum of patient histories, various stages of CKD, and the intricacies involved in treatment plans. Such extensive testing is crucial for confirming the model’s reproducibility and its ability to adapt to diverse clinical conditions. The feedback obtained from these rigorous tests is instrumental to the ongoing enhancement of the system. It aids in refining the precision of information retrieval and boosting the effectiveness of how the ChatGPT model works in conjunction with the CKD database. This process of continuous improvement ensures the system remains reliable and effective in addressing the complex needs of CKD management. 5.6. Regular System Monitoring and Updating The system’s performance in providing accurate and relevant CKD information is consistently monitored. This includes assessing the accuracy of responses, the relevance of information provided, and the speed of retrieval. Moreover, the CKD database is regularly updated with the latest research, guidelines, and treatment protocols, ensuring the model’s responses remain current and authoritative. 5.7.
Healthcare Professional Engagement and Feedback Healthcare professionals are trained on how to effectively use the customized ChatGPT model for CKD queries. This includes understanding its capabilities, limitations, and the best ways to phrase queries for optimal results. A feedback loop is established to continuously improve the system based on real-world user experiences and suggestions from healthcare professionals.
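As a simplified illustration of how steps 5.1 through 5.3 might fit together in code, the sketch below chunks a locally stored guideline document, selects the most relevant passages with a crude word-overlap score, and assembles the grounded prompt that would be handed to the chat model. The file name, chunking parameters, scoring rule, and helper names are hypothetical assumptions, not the configuration of any actual product or deployment.

# Illustrative sketch of steps 5.1-5.3: ingest a guideline document, split it
# into passages, and assemble a grounded prompt for a chat model.
# File name, chunk sizes, and the prompt wording are hypothetical.
from pathlib import Path

def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split a long document into overlapping character windows."""
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
    return chunks

def score(chunk: str, query: str) -> int:
    """Crude relevance score: count of shared lowercase words."""
    return len(set(chunk.lower().split()) & set(query.lower().split()))

def answer_query(query: str, guideline_path: str) -> str:
    text = Path(guideline_path).read_text(encoding="utf-8")
    best = sorted(chunk_text(text), key=lambda c: score(c, query), reverse=True)[:3]
    prompt = (
        "You are a nephrology assistant. Use only the excerpts below; "
        "if they do not answer the question, say so.\n\n"
        + "\n---\n".join(best)
        + f"\n\nQuestion: {query}"
    )
    # In practice, `prompt` would be sent to a chat-completion API;
    # here it is returned so the grounding can be inspected.
    return prompt

# Example (assumes a local plain-text copy of a guideline exists):
# print(answer_query("Which patients should be referred to nephrology?",
#                    "ckd_guideline.txt"))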
The effectiveness of the responses generated by GPT-4, both with and without the RAG approach, is evaluated using a straightforward query: “List medication treatment to help slow progression of CKD and end-stage kidney disease (ESKD)”. This test aims to compare the quality and accuracy of the information provided by GPT-4 under both methodologies. When using the general GPT-4 to address treatment approaches for slowing the progression of CKD to ESKD, the responses tend to offer a broad overview, lacking in-depth adherence to the latest KDIGO guidelines. However, the customized GPT-4 model enhanced with a RAG system provides responses that are more specific, detailed, and nuanced. Upon verification, these responses are found to be in close alignment with the KDIGO 2023 CKD guidelines, accurately reflecting the current research and clinical practices within nephrology. ChatGPT’s recommendations included SGLT-2 inhibitors and GLP-1 receptor agonists for patients with CKD and type 2 diabetes. However, ChatGPT failed to mention some targeted pharmaceutical interventions that may offer a way to slow CKD progression in individuals with specific causes, such as tolvaptan for ADPKD patients. To enhance its precision, it is necessary to incorporate additional resources, such as the ADPKD guidelines, into its reference database. This will enable ChatGPT to access a broader array of documents, facilitating the generation of more precise advice for CKD patients with specific conditions. Significantly, utilizing a series of prompts or exploring varied prompting techniques in standard ChatGPT, such as the chain-of-thought method and specifying a particular CKD guideline to use, could also lead to responses more consistent with those of the RAG system. This review seeks to present an alternative strategy, the RAG system, for enhancing the effectiveness of LLMs and their applications, including in the context of CKD, to illustrate its utility. This method proves to be advantageous, efficient, and expedient when responses must rely on specific documents. Therefore, creating a customized ChatGPT model specifically for nephrology, with a focus on CKD and based on the KDIGO 2023 CKD guidelines, is an extensive and meticulous process. It involves building a specialized knowledge base, developing a dedicated retrieval system tailored to nephrology, and integrating this with the ChatGPT model. The process also includes fine-tuning the model to generate precise responses, conducting thorough testing to ensure reliability, continuously updating the system with the latest information, and maintaining engagement with healthcare professionals for feedback and validation. This development results in a model that stands out in offering specialized and accurate medical guidance for managing CKD. As such, it becomes an invaluable resource for healthcare providers, enhancing their ability to deliver informed and up-to-date care to patients with CKD. Notably, the ChatGPT model presented here is merely one instance used to illustrate the utility of the RAG approach. Additional research is required to confirm its dependability and enhance its efficacy in nephrology applications. Future studies in the context of LLMs with RAG systems in nephrology are suggested to address several promising avenues.
These could significantly enhance both the depth and breadth of nephrology research, clinical decision support, patient education, and personalized medicine. Prospective studies would likely involve deploying RAG-enhanced LLMs in clinical settings as decision-support tools. Their effectiveness in assisting with real-time patient care decisions could be evaluated against traditional decision-making processes. Key metrics could include improvements in treatment time efficiency, accuracy in diagnosis, and patient satisfaction levels. Research could explore the seamless integration of LLMs with RAG systems into electronic health record (EHR) platforms, which is essential for enabling real-time, context-aware decision support for clinicians treating patients with kidney diseases. For instance, by leveraging the latest research findings, current guidelines, and patient-specific data, these models could assist in identifying subtle patterns or rare conditions that are difficult for humans to discern, thus improving the diagnostic accuracy for complex kidney diseases and tailoring treatment plans for individual patients with kidney diseases such as CKD or AKI. Future research might also explore automating the process of conducting systematic reviews and meta-analyses using LLMs with RAG systems. This could significantly speed up the synthesis of new research findings, ensuring that the nephrology practice remains at the cutting edge. Moreover, the integration of nephrology-focused RAG systems with other medical domains could provide a more comprehensive patient care model. For instance, combining nephrology with cardiovascular data might better predict renal patients’ risk of heart disease. Studies could examine the outcomes of such integrations in improving the management of comorbid conditions. Combining insights from genomics, proteomics, and other omics technologies with LLMs and RAG systems also could lead to a more comprehensive understanding of kidney diseases and breakthroughs in precision medicine and novel therapeutic targets. The development of adaptive learning modules using RAG-enhanced LLMs could offer personalized educational pathways for medical professionals. These modules could use real-time data to simulate patient scenarios, adapting to the learner’s responses and providing immediate feedback grounded in the latest clinical guidelines. To mitigate the risk of misinformation, future research might develop advanced fact-checking algorithms tailored to medical data nuances. These algorithms could cross-reference multiple authoritative databases before generating patient advice, ensuring a higher degree of accuracy in the information provided. The LLMs with a RAG system can also be utilized to provide personalized, easy-to-understand educational materials and support for patients with kidney diseases. Furthermore, studies may explore the establishment of international consortia for the standardization of AI applications in nephrology. These networks could facilitate the sharing of best practices, the creation of diverse and comprehensive datasets, and the development of AI models that are generalizable across different populations and healthcare systems. This includes customizing models to account for genetic, environmental, and socioeconomic factors affecting kidney disease prevalence and treatment outcomes across different populations. 
As LLMs with RAG systems rely on extensive data, future studies must address ethical and privacy concerns, ensuring patient data are used responsibly and securely. Therefore, research into the ethical implications of AI in nephrology will need to address consent processes for patient data, biases in AI training, and the transparency of AI decision-making processes. Regulatory studies might focus on developing frameworks for AI accountability and compliance with healthcare regulations like the Health Insurance Portability and Accountability Act (HIPAA). Combining LLMs with RAG systems in nephrology is a big step forward. It has the potential to change how we care for and educate patients in this specialized area. However, one of the main challenges is making sure the information they provide is accurate and reliable. To make these models better for their use in nephrology, strategies like using detailed prompting techniques, carefully applying RAG, and fine-tuning the models are important. As we move into this new phase, it is essential to have teams that include AI experts, kidney specialists, and ethicists. The goal is to improve AI so that it not only matches the skills of healthcare professionals but also adds to them. Achieving this is complex and very important. It requires a constant commitment to accuracy, innovation, and ethical practice. Through ongoing research, improvement, and a focus on patient welfare, we are getting closer to a future where AI plays a transformative role in healthcare, leading to better patient outcomes and more effective, knowledgeable healthcare systems.
Exercise and Psychosexual Education to Improve Sexual Function in Men With Prostate Cancer
50c27cfe-0890-4519-a2ef-591f4ff23ab0
11904736
Patient Education as Topic[mh]
Sexual function is adversely affected following prostate cancer treatment. The decline in erectile function (the most common factor impacting sexual function) is progressive even 15 years after prostatectomy and radiotherapy (although age is a potential contributing factor), with other aspects such as sexual desire, altered ejaculatory and/or orgasmic function, and modifications in partner relationships also contributing to sexual dysfunction. , Current management of sexual dysfunction in men with prostate cancer predominantly involves pharmacological intervention to address the direct physiological effects of prostate cancer treatment on erectile function. However, sexual dysfunction is complex and there are physical, psychological, and relationship effects of prostate cancer treatment that contribute to such impairment. Importantly, most men report that they are not offered helpful interventions to support sexual function after prostate cancer treatments. Exercise is a potential therapy in the management of sexual function for men with prostate cancer as it can counteract physical (eg, body feminization, loss of muscle mass and strength, and declining physical function as a result of androgen deprivation therapy [ADT]) and psychological adverse effects of treatment implicated with sexual dysfunction. Exercise can also promote improved feelings of masculinity and preserve libido. Further, multimodal psychosocial and psychosexual interventions have been shown as acceptable to men with prostate cancer and to improve mental health outcomes and quality of life, as well as increase sexual satisfaction and decrease sexual bother. However, there is limited research on the effects of exercise and the potential combination of exercise and psychosexual education for sexual function in men with prostate cancer. Herein we report the efficacy of a supervised exercise intervention on sexual function in men with prostate cancer concerned about sexual dysfunction and whether exercise combined with a brief psychosexual education and self-management intervention (PESM) results in more pronounced effects on sexual function compared with supervised exercise alone. Changes in sexual function assessed by the International Index of Erectile Function (IIEF) over 6 months served as the primary study end point. Secondary outcomes included physical factors (ie, body composition, functional capacity, and muscle strength) associated with sexual dysfunction. We hypothesized that exercise would improve sexual function in men with prostate cancer concerned about sexual dysfunction compared with standard medical care. Moreover, we hypothesized that exercise combined with PESM would result in improvements in sexual function that exceed those observed with exercise alone. Study Design, Participants, and Procedures This was a 3-arm, single-blinded (investigators blinded), parallel-group, single-center randomized clinical trial. The final trial protocol and statistical analysis plan are included in , and the study adhered to the Consolidated Standards of Reporting Trials ( CONSORT ) reporting guideline. Patients with prostate cancer were recruited in Perth, Australia, between July 24, 2014, and December 20, 2018, by invitation from their urologist or oncologist and referred to the study coordinator for eligibility screening. Three hundred and ninety-four men were referred and screened. Their progress through the study is shown in . 
Inclusion criteria were: (1) concern about sexual function as assessed by an IIEF overall satisfaction score of less than 8, indicating moderately to very dissatisfied (scores range from 2-10) and/or an Expanded Prostate Cancer Index Composite (EPIC) sexual bother score of greater than 8 (ie, a small to big problem) indicating symptomatic dysfunction (scores range from 1-17, calculated by summing raw scores) ; (2) prior or current treatment for prostate cancer, including prostatectomy, radiotherapy, or ADT; and (3) physician consent. Exclusion criteria consisted of (1) non–nerve-sparing prostatectomy; (2) more than 12 months since prostatectomy or completion of radiotherapy or ADT (initially >6 months and amended to facilitate recruitment); (3) incontinence defined as requiring the use more than 1 pad in a 24-hour period; (4) already performing regular exercise defined as undertaking structured aerobic or resistance training at least 2 times per week within the past 3 months; (5) acute illness or any musculoskeletal, cardiovascular, and/or neurological disorder that could inhibit exercise or put participants at risk from exercising; and (6) inability to read and speak English. Following a familiarization session and baseline assessments, participants were randomized to an exercise group, exercise and PESM group, or usual care group in a 1:1:1 ratio. Participants were stratified by (1) age (<60 or ≥60 years), (2) current sexual activity (yes or no) as assessed by the sexual activity score in the prostate cancer module of the European Organisation for Research and Treatment of Cancer (EORTC) quality of life questionnaire (QLQ-PR25), (3) previous prostatectomy (yes or no), (4) previous radiotherapy (yes or no), and (5) previous or current ADT (yes or no). Randomization was performed independently by the National Health and Medical Research Council Clinical Trials Center, Sydney, Australia. The study was approved by the Human Research Ethics Committee at Edith Cowan University and associated hospitals in Perth, Australia, and all participants provided written informed consent. The detailed methods of the study protocol have been published elsewhere. Interventions Exercise consisted of aerobic and resistance training undertaken 3 days per week for 6 months. All exercise sessions were supervised by an accredited exercise physiologist and conducted in small groups of as many as 10 to 12 participants at various university-affiliated exercise clinics in Perth. The aerobic component of the program involved 20 to 30 minutes of cardiovascular exercise performed at moderate to vigorous intensity (approximately 60%-85% of estimated maximal heart rate) on a treadmill, cycling or rowing ergometer, or elliptical or cross trainer. In addition, participants were encouraged to undertake further home-based aerobic exercise and accumulate a total of at least 150 minutes of moderate-intensity aerobic exercise per week. Resistance training consisted of 6 to 8 exercises targeting the major upper and lower body muscle groups with intensity ranging from 6 to 12 repetitions maximum using 1 to 4 sets per exercise. The exercise program was progressive in nature and periodized, altering emphasis on exercise intensity and volume. Sessions commenced with a 10-minute warm-up consisting of low-intensity aerobic exercise and stretching and concluded with a 5-minute cool-down consisting of stretching. 
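For readers reproducing the screening logic in an analysis pipeline, the sexual-function concern criterion above reduces to a simple rule, sketched below. The dataclass and field names are hypothetical; only the thresholds (IIEF overall satisfaction below 8, and/or EPIC sexual bother above 8) come from the criteria stated here.

# Restating the questionnaire-based inclusion rule as a small helper.
# Field names are hypothetical; thresholds follow the criteria described above.
from dataclasses import dataclass

@dataclass
class ScreeningScores:
    iief_overall_satisfaction: int  # scores range from 2 to 10
    epic_sexual_bother: int         # scores range from 1 to 17 (sum of raw scores)

def concerned_about_sexual_function(s: ScreeningScores) -> bool:
    """True if the participant meets the sexual-function concern criterion."""
    return s.iief_overall_satisfaction < 8 or s.epic_sexual_bother > 8

# Example: a man scoring 6 on IIEF overall satisfaction meets the criterion
# regardless of his EPIC sexual bother score.
print(concerned_about_sexual_function(ScreeningScores(6, 5)))  # True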
Participants in the exercise plus PESM group completed the same exercise intervention described above as well as a brief intervention that addressed psychological and sexual well-being. A low-intensity psychological care approach was used to maximize uptake and facilitate translation. At baseline, participants attended a brief face-to-face PESM session with their exercise physiologist, who received training in how to deliver the intervention. Session content included stress management, problem-solving coping for treatment challenges, and goal setting for sexual rehabilitation. The intervention used a cognitive behavioral and adult learning approach where men self-selected rehabilitation goals. To support self-management, participants received a published self-help book for men with prostate cancer and their partners, a study-specific tip sheet about treatments for erectile dysfunction and goal setting for sexual rehabilitation, a progress journal, and audio resources for stress management. Participants in the usual care group received standard medical care and were asked to maintain their current physical activity level for 6 months.
Outcome Measures
The primary outcome was sexual function across multiple domains assessed at baseline and 6 months using the IIEF-15 (erectile function, orgasmic function, sexual desire, intercourse satisfaction, and overall satisfaction), EPIC (sexual function), and EORTC QLQ-PR25 (sexual activity). Secondary outcomes were body composition, physical function, and muscle strength. Lean mass and fat mass were assessed by dual-energy x-ray absorptiometry (Discovery A; Hologic). Physical function was assessed by the 400-m walk (aerobic capacity and walking endurance) and repeated chair rise (lower body muscle function), and upper and lower body muscle strength was assessed using 1-repetition maximum assessment for the chest press and leg press, respectively. Self-reported physical activity was assessed by the leisure score index from the Godin Leisure-Time Exercise Questionnaire. In addition, blood samples for prostate-specific antigen, testosterone, and C-reactive protein levels were collected and analyzed commercially by National Association of Testing Authorities–accredited laboratories in Australia.
Statistical Analysis
Statistical analysis was performed from October 8 to December 23, 2024. The initial sample size calculation was based on detecting a moderate standardized effect (Cohen d = 0.5) in our primary as well as secondary outcomes of interest. To achieve 80% power at a 2-tailed α level of 0.05 and to account for an attrition rate of 20% or less, 80 patients per study arm were required for a total of 240 patients. However, due to slow recruitment and approval by the funding body to extend the research for an additional year, the trial management group closed recruitment at 112 patients on December 20, 2018, before reaching target accrual. Analyses were conducted using SPSS Statistics, version 29 (IBM Corporation). Normality of distribution was assessed using the Kolmogorov-Smirnov test. Analysis of covariance (ANCOVA), adjusted for baseline values, age, current sexual activity, previous prostatectomy, previous radiotherapy, and previous or current ADT, was used for primary and secondary outcomes. Data not normally distributed were log-transformed (ln) for analysis, with ln(x + 2) used for specific scales, as scores included zero.
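As a worked illustration of the reported sample-size logic (Cohen d = 0.5, 80% power, 2-tailed α = .05, inflated for up to 20% attrition), the short sketch below reproduces the stated target of 80 patients per arm. It assumes the statsmodels package is available and is not the calculation tool the investigators used.

```python
# Worked illustration of the reported sample-size logic (Cohen d = 0.5, 80% power,
# 2-tailed alpha = .05, inflated for up to 20% attrition). Assumes statsmodels is
# installed; this is not the tool the investigators used.
from math import ceil
from statsmodels.stats.power import TTestIndPower

n_completers = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_completers))               # about 64 completers per arm

n_recruited = ceil(n_completers / 0.80)  # allow for up to 20% dropout
print(n_recruited, n_recruited * 3)      # 80 per arm, 240 in total across 3 arms
```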
If exercise and exercise plus PESM were effective for improving sexual function in the specific domains assessed, then we tested for additional effects of exercise and PESM compared with exercise alone by using ANCOVA. Subgroup analyses were undertaken for patients treated with prostatectomy, previous or current radiotherapy, and previous or current ADT using ANCOVA adjusting for covariates used in the primary analyses. Trend analysis was performed using linear regression and entering tertiles of IIEF domains as an ordinal variable. Intention-to-treat analysis was used for maximum likelihood imputation of missing values (expectation maximization). Tests were 2 tailed with statistical significance set at P < .05.
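A hedged sketch of the kind of ANCOVA described above is shown below: the 6-month score is regressed on group, the baseline score, and the stratification covariates, so the group coefficients can be read as adjusted mean differences versus usual care with 95% CIs. All variable names and the synthetic data are hypothetical; they illustrate the model form only, not the trial dataset or the SPSS procedure actually used.

```python
# Hedged sketch of the ANCOVA described above: the 6-month score is regressed on group,
# the baseline score, and the stratification covariates, so the group coefficients are
# adjusted mean differences versus usual care. All variable names and the synthetic data
# are hypothetical; they only illustrate the model form, not the trial dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "group": rng.choice(["usual_care", "exercise", "exercise_pesm"], n),
    "iief_base": rng.integers(5, 26, n),
    "age": rng.integers(50, 80, n),
    "sexually_active": rng.integers(0, 2, n),
    "prior_prostatectomy": rng.integers(0, 2, n),
    "prior_radiotherapy": rng.integers(0, 2, n),
    "adt": rng.integers(0, 2, n),
})
df["iief_6mo"] = df["iief_base"] + rng.normal(0, 3, n)

model = smf.ols(
    "iief_6mo ~ C(group, Treatment(reference='usual_care')) + iief_base"
    " + age + sexually_active + prior_prostatectomy + prior_radiotherapy + adt",
    data=df,
).fit()

# Adjusted mean differences vs usual care and their 95% CIs:
print(model.params.filter(like="group"))
print(model.conf_int().filter(like="group", axis=0))
```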
Between July 24, 2014, and December 20, 2018, a total of 112 patients with prostate cancer (mean [SD] age, 66.3 [7.1] years) were randomized to exercise plus PESM (n = 36 [34.8%]), exercise only (n = 39 [32.1%]), or usual care (n = 37 [33.0%]) . Participant characteristics are presented in . Patients in the exercise plus PESM group attended 81% of scheduled exercise sessions and those in the exercise-only group attended 82% of the scheduled sessions. There were no major adverse events related to the exercise program, with only nonserious musculoskeletal-related adverse events reported (eTable 1 in ).
Sexual Function Outcomes
Changes in sexual function outcomes are provided in . The adjusted difference in IIEF erectile function scores at 6 months was in favor of exercise (5.1 points) compared with usual care (1.0 points; adjusted mean difference, 3.5; 95% CI, 0.3-6.6; P = .04). Change in intercourse satisfaction scores was not significant (adjusted mean difference, 1.7; 95% CI, 0.1-3.2; P = .05). When the intervention modalities were compared, PESM did not result in additional improvements in erectile function (adjusted mean difference, 1.1; 95% CI, −2.7 to 4.8; P = .89) or intercourse satisfaction (adjusted mean difference, −0.2; 95% CI, −2.1 to 1.6; P = .64). In subgroup analyses, the effects of exercise for erectile function were larger for the subgroups who received radiotherapy (adjusted mean difference, 4.2; 95% CI, 0.4-8.0; P = .11) and ADT (adjusted mean difference, 4.4; 95% CI, 0.2-8.7; P = .08) compared with the prostatectomy subgroup (adjusted mean difference, 1.6; 95% CI, −2.5 to 5.7; P = .36) (eTables 2-4 in ).
There was no statistically significant difference between exercise and usual care for sexual function assessed with the EPIC (adjusted mean difference, 7.9; 95% CI, 0.2-15.6; P = .09) or sexual activity assessed with the EORTC QLQ-PR25 (adjusted mean difference, 2.9; 95% CI, −4.1 to 9.9; P = .70) , although based on the confidence intervals, some men would have experienced clinically relevant improvements. When the IIEF domains were examined by tertiles , those with the lowest tertile values prior to the initiation of exercise benefited the most following supervised exercise for sexual desire, intercourse satisfaction, and overall satisfaction.
Body Composition, Physical Function and Strength, and Serum Markers
Changes in body composition, physical function and strength, and blood markers are shown in . The adjusted mean difference for fat mass was −0.9 kg (95% CI, −1.8 to −0.1 kg; P = .02) at 6 months, favoring exercise compared with usual care, with no difference between groups for lean mass. Compared with usual care, exercise also significantly improved chair rise performance (adjusted mean difference, −1.8 seconds; 95% CI, −3.2 to −0.5 seconds; P = .002) and upper (adjusted mean difference, 9.4 kg; 95% CI, 6.9-11.9 kg; P < .001) and lower (adjusted mean difference, 17.9 kg; 95% CI, 7.6-28.2 kg; P < .001) body muscle strength. There was no significant difference between groups for prostate-specific antigen, testosterone, or C-reactive protein levels.
This randomized clinical trial is, to our knowledge, the first exercise intervention study including brief psychosexual education with self-management for men with prostate cancer to examine sexual function as the primary outcome. As hypothesized, the supervised exercise program improved erectile function compared with usual care; however, there was no additional benefit of PESM. Moreover, when the IIEF domains were examined by tertiles, those with the lowest values prior to the initiation of exercise benefited the most following supervised exercise for sexual desire, intercourse satisfaction, and overall satisfaction. Exercise also had a significant effect on preventing gains in fat mass and resulted in significant improvements in physical function as well as upper and lower body muscle strength. These observations support the use of exercise as an effective intervention in the management of sexual dysfunction for men with prostate cancer. Sexual dysfunction is a critical adverse effect of prostate cancer treatment and a major survivorship issue for patients and their partners. Exercise has been shown to improve patient-reported outcomes and reduce treatment toxicities, and is recommended in national and international cancer survivorship guidelines. However, the evidence is less clear for sexual function in prostate cancer. In an unplanned post hoc analysis of a study examining short-term aerobic and resistance exercise on lean mass changes in patients undergoing ADT, patients in the exercise group preserved sexual activity, whereas it declined in the usual care group. This finding is supported by observational data showing that higher physical activity levels are associated with better sexual function in men with prostate cancer prior to radical prostatectomy and after external beam radiation therapy. Moreover, men with prostate cancer in the Health Professionals Follow-up Study who reported walking at a brisk pace (compared with an easy pace) had better sexual function, independent of walking duration. In the present study, exercise significantly improved erectile function (mean, 5.1 points), indicating a potentially clinically relevant improvement (minimal clinically important difference, 4.0 points), and resulted in higher intercourse satisfaction compared with usual care. A similarly positive effect on erectile function was observed in a study comparing yoga with usual care in men with prostate cancer during radiotherapy, where yoga prevented a decline in erectile function at 4 (but not 8) weeks of treatment. Further, in middle-aged and older men, aerobic exercise has been shown to improve erectile function with a mean change in the IIEF domain of 2.8 points. In contrast, equivalent benefits of exercise on erectile function were not observed in a study of aerobic training after radical prostatectomy or in men with advanced prostate cancer. It may be that timing of exercise implementation, exercise mode, or stage of disease and accumulated effects of treatments account for these differences.
However, an important observation from our trial was that patients with the lowest values in sexual desire, intercourse satisfaction, and overall satisfaction prior to the initiation of exercise benefited the most following supervised exercise. Regaining sexual function is not a rehabilitation goal for all men with prostate cancer and varies according to relationship status, comorbid health conditions for both the man and their partner, and personal priorities and values. Screening patients for sexual dysfunction and rehabilitation goals following treatment could assist in directing patients to exercise as a countermeasure, forming part of an accessible evidence-based survivorship intervention. Our low-intensity psychoeducation intervention had no additional effect on sexual function outcomes in the present study. We hypothesized that a brief PESM intervention would further enhance improvements in sexual function by increasing the participants’ ability to better self-manage their well-being and sexual function through, for example, increased uptake of pharmacological management for erectile dysfunction. Chambers et al previously reported that multimodal psychosocial and/or psychosexual interventions have been shown to improve mental health and quality of life, as well as increase sexual satisfaction and decrease sexual bother in men with prostate cancer. However, a recent online psychosexual support intervention for couples after prostate cancer treatment did not improve global sexual satisfaction, although couples who received the intervention did engage in more sexual activity. Our psychosexual support delivered by the exercise physiologist as part of a PESM adjunctive component may not have been powerful enough to improve outcomes above the exercise intervention effect. Given the impact treatments have on erectile function, a more intense intervention that targets adherence to medical management of erectile dysfunction as well as the couple relationship might be indicated. As expected, the exercise intervention resulted in significant improvements in physical function and muscle strength and prevented an increase in fat mass, which occurred in the usual care group. Exercise for men with prostate cancer is an established intervention to address treatment-related deterioration in these outcomes, which can negatively impact sexual function and quality of life. Meeting physical activity recommendations for aerobic exercise has been associated with significantly better masculine self-esteem in men with prostate cancer, which was strongly correlated with perception of body image in this group of men, both important factors contributing to sexual function. Exercise, specifically resistance training, can therefore play an active part in sexual function by counteracting treatment-related changes in body composition, physical function, and muscle strength. Further, despite well-established reductions in treatment adverse effects and improvements in quality of life , and an association of exercise and prostate cancer survival, many men with prostate cancer remain insufficiently active. Our present study potentially provides an additional rationale for taking up exercise for men who are concerned about their sexual function.
Strengths and Limitations
Our study has several features that are worthy of comment. First, we investigated a highly significant outcome of sexual function in men with prostate cancer who were concerned about their sexual function and were within 12 months of treatment.
Second, we compared usual care and exercise with and without PESM support. Furthermore, participants had high adherence to the exercise intervention, reflecting the program’s overall feasibility and effectiveness in producing favorable outcomes in body composition, physical function, and muscle strength. Nevertheless, this study also has some limitations. The trial was originally designed as a multisite study (in 3 states in Australia); however, due to logistical issues it was modified to be a single-center study (in Western Australia). In addition, recruitment difficulties resulted in the study management group closing recruitment at 112 patients before reaching target accrual and, as such, the study was likely underpowered. Our patients were well-functioning individuals who were motivated to undertake the intervention program and the supervised exercise sessions and may not be representative of all men with prostate cancer.
In this randomized clinical trial, we found that supervised resistance and aerobic exercise improved erectile function and intercourse satisfaction in men with prostate cancer previously or currently undergoing treatment, although the addition of psychosexual education resulted in no additional improvements. Based on the findings of this study, exercise should be considered as an integral part of treatment to improve sexual function in men with prostate cancer.
Strengthening the WHO Emergency Care Systems Framework: insights from an integrated, patient-centered approach in the Copenhagen Emergency Medical Services system—a qualitative system analysis
The demand for emergency medical care is increasing globally [ – ]. As disease patterns evolve and demographic and socio-cultural structures shift [ , – ], Emergency Medical Services (EMS) systems are challenged to meet increasingly diverse and complex patient needs with timely, appropriate care. However, often-encountered patchworks of definitions, legislation, and health system structures [ – ] continue to cause fragmented and siloed structures across multiple regions or countries, complicating efforts to direct patients to the most suitable care pathway . Troubled by high numbers of Emergency Department (ED) visits, long waiting times, and financial losses due to mismanagement of healthcare allocation, some countries in Europe have started to reconfigure urgent and emergency EMS systems towards a more integrated approach [ – ]. Similarly, the 76th World Health Assembly emphasizes the need for seamless coordination between emergency, critical, and primary care through effective communication, transport, and referral systems. As interdependent parts of the wider health system, failures in emergency care capacity disrupt primary care, while gaps in primary and social services increase emergency demand, potentially delaying life-saving care . The WHO Emergency Care System Framework (ECSF) was developed to provide a structured approach for organizing emergency care from initial contact through inpatient treatment (Fig. ), aiming to support policymakers to evaluate and strengthen emergency care systems . Despite this guidance, the ECSF largely emphasizes traditional hospital-based care pathways, which may not fully address the needs of patients with non-urgent or complex health issues. Thus, evidence-based insights are needed on how these systems can evolve to address shifting demands effectively. To our knowledge, an assessment or analysis of the completeness, currency, or adaptability of the WHO ECSF has not yet been published, highlighting an important gap that this study seeks to address by reviewing the Emergency Medical Services in the Capital Region of Denmark “ Hovedstadens Akutberedskab” (CPH EMS) against the WHO ECSF. Over the past two decades, CPH EMS has undergone a profound transformation, evolving from a fragmented and complex system into an integrated model within the broader healthcare framework. This integration now enables seamless coordination with primary care, out-of-hours services (OOH), and other emergency response entities, such as police and fire services [ – ]. By 2020, six years after its transformation, CPH EMS demonstrated significant performance improvements while reducing overall costs, showcasing the efficiency and sustainability of the restructured EMS framework . ED waiting times reached historically low levels, while ED visits decreased by 10% within the first years following the system overhaul. The number of home visits by General Practitioners (GPs) also declined. Call response times were short, with emergency calls answered within 4–5 s and non-emergency calls within less than three minutes. Patient satisfaction reached 90%, with complaints averaging just 15 per month per 100,000 calls. Patient safety incidents were rare, with a thorough follow-up conducted on every case daily . Enabled by Denmark's unique civil registration number system and comprehensive health data collection and linkage across registers [ – ], CPH EMS exemplifies the potential of research-driven innovation in EMS systems across Europe .
Additionally, through international initiatives such as co-founding the European EMS Leadership Network, which addresses EMS challenges and innovations across Europe , and the Global Resuscitation Alliance (GRA), dedicated to advancing resuscitation practices, CPH EMS has played a key role in shaping the field. CPH EMS has also initiated, hosted, and co-organized European EMS Congresses , further strengthening international collaboration. Thus, the CPH EMS, with its rapid transition towards an integrated, patient-centered system, its emphasis on research-driven innovation, and its international network and outreach, is believed to serve as a good case in point for further advancing the WHO ECSF.
Aim
This study aims to analyze key components of the CPH EMS to inform potential enhancements to the WHO ECSF using a scoping review and expert interviews. Specifically, this study sought to (i) identify CPH EMS key components exemplifying its integrated and patient-centered approach, and (ii) highlight elements within CPH EMS that could inform potential enhancement of the WHO ECSF. By focusing on integrative, patient-centered practices and evolving evidence-driven EMS approaches, this qualitative analysis supports a practical review of the WHO ECSF. Rather than comprehensively reviewing the CPH EMS or fully updating the WHO ECSF, this study highlights the importance of continuous evaluation of the WHO ECSF.
Study design
This exploratory study conducted a partial system analysis of the CPH EMS, using a qualitative approach combining expert interviews and a scoping review. The initial step involved a partial system analysis, comparing the CPH EMS system with the WHO ECSF . Given resource limitations, the analysis centered on selected examples of components based on the WHO Health System Building Blocks, rather than a full system analysis as proposed in the PEMS assessment tool by Mehmood et al. . Data collection and analysis were performed in a concurrent manner, with analytic steps guiding additional data collection, and data initiating new analytic processes . To ensure transparent and comprehensive reporting, the PRISMA-ScR checklist and COREQ checklist (COnsolidated criteria for REporting Qualitative research) were followed in the reporting of this study.
Setting: pre-hospital EMS of the capital region of Denmark
Established in 2011, CPH EMS serves 1.9 million people (approximately one-third of Denmark's population) across rural and urban areas spanning 2,563 km 2 , coordinating emergency medical communications, dispatch, and mobile care units, and managing the region's overall interdisciplinary healthcare response and Major Incident Medical Management, including outbreak and preparedness planning [ , , ]. Per year, the CPH EMS responds to 130,000 emergency medical calls (112) and 1.2 million medical helpline calls (1813), and carries out 300,000 emergency ambulance missions and 500,000 non-urgent patient transports . In Denmark's tax-funded health system, the EMS operates as an independent Public Health organization responsible for acute and prehospital EMS within a national structure of five administrative and health care regions, each responsible for health and psychiatric services provided by General Practitioners (GP) and specialists as well as prehospital emergency services and hospital care. In the capital region of Denmark there is one hospital trust, six University Hospitals, and 40,000 health care employees. The CPH EMS works together with 29 municipalities, four police regions, and seven fire and rescue services .
The CPH EMS ensures free, equal, and 24/7 access to EMS [ , , ]. The organizational structure of the CPH EMS is illustrated in Fig. . The responsibilities for emergency preparedness and pre-hospital care planning are defined by the national Executive Order on Planning of Emergency Preparedness: “§ 4. The purpose of the pre-hospital effort is to save lives, improve health prospects, reduce pain and other symptoms, shorten the overall course of the disease, provide care and create security.” (translated via Google Translate)
Data collection
Qualitative data was collected from April to June 2021 through a scoping literature review and expert interviews.
Scoping review
A scoping literature review was conducted in May 2021. PubMed and Google Scholar were searched and supplemented by web-based Google searches, snowball sampling for grey literature, contacting professionals in this field, and reference tracking. The obtained literature was organized using the reference manager Mendeley© and screened for eligibility in three stages: title, abstract, and full-text screening. Details on the search strategy, including search terms and eligibility criteria, are provided in Additional file .
Expert interviews
Twenty experts on CPH EMS were identified through recommendations and authorship of relevant publications and invited via email (cf. Additional file ). For each WHO ECSF Matrix Domain, at least two experts were assigned, with some covering multiple domains. To ensure completeness, the list of potential interviewees was independently verified by a senior-level executive and a senior-level researcher of the CPH EMS. Individual interviews were conducted in person or via MSTeams and lasted about 30 min each. Semi-structured, member-checked, and pilot-tested questionnaires guided the interviews, and all but one (which was conducted during ongoing operations) were audiotaped and transcribed non-verbatim by SB. The Comparative Method for Themes Saturation (CoMeTS) determined the number of interviews, continuing until no new information emerged or no further experts were available in the field of interest within the timeframe . Each interviewee was asked to suggest specific topics and best practice components of the CPH EMS, and since no further recommendations were made, thematic saturation was assumed. No repeat interviews were conducted; clarifications were made in person or via e-mail. Transcripts were returned to the interviewees for verification or comments, with one clarifying comment provided for the non-audiotaped interview. The scoping review and interviews were conducted concurrently, enabling an iterative process where literature findings informed more detailed interview questions, while expert insights helped contextualize study results, particularly when interviewees were also study co-authors . Based on The Merriam-Webster Dictionary definition of ‘best practice’, “a procedure that has been shown by research and experience to produce optimal results and that is established or proposed as a standard suitable for wide-spread adoption” , system components were considered relevant if they met either of two criteria: (i) research-validated improvements to status quo practices published in peer-reviewed journals, or (ii) practices or components deemed ‘good or best practices’ by the interviewed experts.
Data analysis
A partial system analysis of the CPH EMS was conducted using directed (relational) content analysis , a method that builds upon prior research, using concepts or variables as initial coding categories. Interview transcripts were coded deductively on pre-established themes derived from the WHO ECSF-Matrix by two researchers (1st coder: SB, 2nd coder: TK) using Atlas.ti software, which aligns with the WHO Health System Building Blocks: (i) human resources and training, (ii) essential medical products, technologies and infrastructure, (iii) information and research, (iv) financing (represented as a separate core component in the WHO ECSF rather than a cross-cutting building block), and (v) leadership and governance. Document data were analyzed similarly, with translations provided by Deepl.com for non-English documents. This coding approach accords with the first part of the prehospital EMS assessment tool by Mehmood et al. , evaluating inputs, capacity, and performance.
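As a simplified illustration of deductive coding against pre-established themes, the sketch below assigns transcript segments to WHO building-block codes by keyword matching. The codebook keywords and the example segment are invented for illustration only; the actual coding was performed by two researchers in Atlas.ti, not by software rules.

```python
# Simplified illustration of deductive coding against pre-established building-block
# themes. The codebook keywords and the example segment are invented; the actual coding
# was done by two researchers in Atlas.ti, not by keyword rules.
CODEBOOK = {
    "human_resources_training": ["training", "paramedic", "nurse", "competence"],
    "products_technologies_infrastructure": ["AED", "ambulance", "IT-system", "telemedicine"],
    "information_research": ["data", "registry", "research", "quality improvement"],
    "financing": ["cost", "funding", "tax-funded"],
    "leadership_governance": ["legislation", "executive order", "governance"],
}

def code_segment(segment: str) -> list[str]:
    """Return every theme whose keywords occur in a transcript segment."""
    seg = segment.lower()
    return [theme for theme, keywords in CODEBOOK.items()
            if any(k.lower() in seg for k in keywords)]

print(code_segment("Data save lives: the registry supports quality improvement."))
# ['information_research']
```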
Characteristics of sources of evidence
Scoping review
The literature selection process is depicted in the PRISMA 2009 flow diagram (Fig. ). A total of 35 records were identified for document analysis. Of these, twenty-eight records were peer-reviewed journal articles, sourced through database searches ( n = 14), snowballing of references ( n = 6), expert-recommended literature ( n = 5), and targeted unsystematic online searches ( n = 3). Grey literature consisted of webpages/legal texts ( n = 3), annual reports, and internal documents from CPH EMS ( n = 4).
Expert interviews
From April through June 2021, we interviewed thirteen experts (referenced as E1 to E13). Four of the interviewees (30.8%) were part-time researchers and part-time employed within EMS as medical physicians ( n = 3) or paramedics ( n = 1). Seven interviewees (61.5%) were professionals currently working at the level of senior executive and/or operational management at the CPH EMS. One interviewee (7.7%) was a full-time researcher at the CPH EMS and one (7.7%) was employed at the operational level at a hospital in Copenhagen. Five invited individuals did not reply to the invitation and two individuals declined, stating a lack of expertise in the requested area. Experts covered at least one, and in some cases two, domains of the WHO ECSF: Scene ( n = 6), Transport ( n = 3), Facility ( n = 2), and Cross-Cutting Elements ( n = 6), with a focus on financing, quality improvement, patient safety, research, preparedness planning, service delivery, and data management.
Synthesis of results
The identified system components were charted in the four segments below that correspond to the WHO ECSF-Matrix categories: (i) scene, (ii) transport, (iii) facility, and (iv) cross-cutting elements (Tables , , and ). Data from the scoping review and interviews are synthesized under each topic as they address the same object of study and include overlapping perspectives, as some authors also participated as respondents.
(i) Scene
The EMS response begins the moment someone recognizes a need for medical help. As one expert put it, “ as soon as someone realizes that they need an ambulance, or they need help and they call 112 or 1813. The taking care of the patient starts there.” (E2). Citizen involvement is encouraged through regular mandatory Basic Life Support (BLS) courses (E12) and citizen responder programs such as the “HeartRunner”-Project, where app-dispatched citizens can provide resuscitation support (E1, E3, E11), which relies on a national network of accessible Automated External Defibrillators (AEDs) largely procured and funded by the Danish people or institutions [ – ] (E1, E3).
The Emergency Medical Communication Center (EMCC) serves as a single point of access by integrating the emergency number 112 and the medical helpline 1813 and using the same integrated IT-system . The EMCC is available 24/7/365 and staffed with medical calltakers (nurses and paramedics) and technical dispatchers. One expert labels this integration of both numbers as the most important innovation: “the most important innovation in my opinion being implemented in Copenhagen since [20] 14, is that you just make a decision whether to call 1813 or 112 and then, what’s happening behind in the black box is going to be decided by us [the EMS personnel]. […]. An integrated patient care system that can provide help 24/7 independent of what is your need and what time of the day.” (E8) The triage process is supported by computer-based protocols (i.e. the “Danish Index”) and may be supported by artificial intelligence-based (AI) speech-recognition software in case of suspected cardiac arrest. In a recent trial, the AI accurately detected 85% of cardiac arrest cases over the phone, a significantly higher sensitivity than that of human medical dispatchers , highlighting the potential of AI to enhance early recognition and response in critical situations. Communication with the patient or bystander can be extended through video-transmission and video-aided telephone CPR (E4). In case of an emergency, the waiting line of the non-emergency medical helpline 1813 may be cut using an Emergency Access Button (EAB) [ – ]. The dispatching of prehospital resources may include different multilevel responses and is best matched to the patient's needs . GPs are integrated into the EMS system: they supervise and support calltaking and dispatching activities during OOH, conduct home visits, and may order ambulance transports or book ED appointments via the EMCC. Referral or discharge notes are automatically extracted from the patient chart and forwarded to the relevant recipient, ensuring seamless information transfer between healthcare departments . Upon arrival at the scene, prehospital personnel can initiate treatment and, in some cases, perform diagnostic assessments, enabling timely and needs-based care (E2, E6). In Denmark, paramedics can decide against hospital transport based on legal guidelines (Bekendtgørelse om ambulancer og uddannelse af ambulancepersonale. BEK nr 1264 af 09/11/2018). Patients who are not conveyed may receive on-scene treatment, be discharged without follow-up, or be redirected to primary care services . All ambulance services are part of EMS and work under the same Standard Operating Procedures with medical supervision 24/7 .
(ii) Transport
A variety of specialized mobile care units, such as physician-staffed mobile critical care units and helicopter EMS, supplement the traditional ambulances to provide patient-tailored and differentiated care. This includes “Sociolancen” for socially deprived persons , “Babylancen” for young children and their parents , mobile critical care units , also for psychiatric cases , national helicopter EMS, and mobile casualty clearing stations for mass casualty incidents [ , , ]. Units may be dispatched alone or in combination (“Rendezvous Model”) and are staffed by personnel trained to different levels, with varying (delegated) treatment competences.
Additionally, non-emergency transports are coordinated by the CPH EMS, including “Sygetransport” for lying or seated transport of non-critical patients to and from treatments . Technical solutions include telemedicine for medical support, a wireless sensor network for automatic collection and documentation of vital signs, and a prehospital patient journal for information, communication, and documentation of patient transport and care (E6). One expert describes the growing possibilities within the prehospital EMS as follows: “if the patient needs to go to the hospital - there are several opportunities also now, not only to diagnose, but also to treat patients. So, moving from an ambulance just being a transport form to a hospital doing the diagnostic and the treatment, more and more things are now shifted towards the prehospital phase”. (E11).
(iii) Facility
Although the CPH EMS is responsible for prehospital resources, the following examples were included as they illustrate the collaboration of pre-hospital and in-hospital care. This begins with access to the in-hospital environment, which is largely triaged via the dispatchers at the EMCC or the GPs, and with the “Copenhagen Triage Algorithm”, which categorizes incoming patients [ – ] (E13). Early information transfer between ambulance and hospital is “[…] a big improvement in the hospital, [and helps] being prepared of what's the matter with this patient arriving to our emergency room within the next few minutes. ” (E3). The “Acute Admission Database” records the patient pathway to aggregate patient-based data for analysis and quality improvement . Due to the prehospital focus of the CPH EMS, limited data was available for the domain “facility”.
(iv) Cross-Cutting Elements
The optimal coordination and integration of cross-cutting elements of the CPH EMS was summarized by one expert as “It takes a system to save a life” (E8). “Getting the right patient to the right treatment at the right time”, a declared goal of the CPH EMS system, is thought to lower redundant expenses while increasing care quality . Identified components regarding cross-cutting elements predominantly address the collection, monitoring, and evaluation of information, supporting research and quality improvement. Data is used to monitor patient needs and system performance, or as summarized by one expert: “data save lives” (E10). Patient needs can be monitored through comprehensive data collection, linking EMS activity with individual-level health data via the Danish civil registration number , patient satisfaction surveys [ , , ], project-related surveys , and annual benchmarking of prehospital data , allowing the tracking of patient pathways, highlighting gaps, and supporting research-driven improvement. Patient safety is improved through tools like the national incident reporting system, the ‘Danish Patient Safety Database’, which allows patients and professionals to report incidents, fostering a system-wide learning environment for improvement , following the mantra “improve the system, not the person” . A variety of data is collected through performance monitoring of the EMCC (including data on call and mission processing, waiting times, hospitalization rates, patient satisfaction and complaints, home visits by GPs, etc.), and registries and databases contribute to research and quality improvement, while regular data-based quality improvement councils (E7, E12) and annual public reports ensure high transparency and accountability of the CPH EMS.
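To illustrate the kind of individual-level linkage and performance monitoring described above, the following sketch joins hypothetical EMS call records to hospital admissions on a civil registration number and computes two simple indicators. The column names and data are invented for illustration, and access to the real Danish registers requires formal approval.

```python
# Sketch of the kind of individual-level linkage described above: hypothetical EMS call
# records joined to hospital admissions on a civil registration number (CPR), then two
# simple indicators. Column names and data are invented; real registers require approval.
import pandas as pd

calls = pd.DataFrame({
    "cpr": ["A1", "A2", "A3"],
    "answer_delay_s": [4, 6, 3],
    "dispatched_unit": ["ambulance", "none", "mobile_critical_care"],
})
admissions = pd.DataFrame({
    "cpr": ["A1", "A3"],
    "admitted_within_24h": [True, True],
})

linked = calls.merge(admissions, on="cpr", how="left")
linked["admitted_within_24h"] = linked["admitted_within_24h"].fillna(False).astype(bool)

print(linked["answer_delay_s"].median())       # call-answer delay indicator
print(linked["admitted_within_24h"].mean())    # share of calls followed by admission
```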
Scoping review
The literature selection process is depicted in the PRISMA 2009 flow diagram (Fig. ). A total of 35 records were identified for document analysis. Of these, twenty-eight records were peer-reviewed journal articles, sourced through database searches (n = 14), snowballing of references (n = 6), expert-recommended literature (n = 5), and targeted unsystematic online searches (n = 3). Grey literature consisted of webpages/legal texts (n = 3) and annual reports and internal documents from the CPH EMS (n = 4).
Expert interviews
From April through June 2021, we interviewed thirteen experts (referenced as E1 to E13). Four of the interviewees (30.8%) were part-time researchers and part-time employed within EMS as medical physicians (n = 3) or paramedics (n = 1). Seven interviewees (61.5%) were professionals currently working at the level of senior executive and/or operational management at the CPH EMS. One interviewee (7.7%) was a full-time researcher at the CPH EMS and one (7.7%) was employed at the operational level at a hospital in Copenhagen. Five invited individuals did not reply to the invitation and two individuals declined, stating a lack of expertise in the requested area. Experts covered at least one, and in some cases two, domains of the WHO ECSF: Scene (n = 6), Transport (n = 3), Facility (n = 2), and Cross-Cutting Elements (n = 6), with a focus on financing, quality improvement, patient safety, research, preparedness planning, service delivery, and data management.
The identified system components were charted in the four segments below that correspond to the WHO ECSF-Matrix’ categories: (i) scene, (ii) transport, (iii) facility, and (iv) cross-cutting elements (Tables , , and ). Data from the scoping review and interviews are synthesized under each topic as they address the same object of study and include overlapping perspectives, as some authors also participated as respondents. (i) Scene The EMS response begins the moment someone recognizes a need for medical help. As one expert put it, “ as soon as someone realizes that they need an ambulance, or they need help and they call 112 or 1813. The taking care of the patient starts there.” (E2). Citizen involvement is encouraged through regular mandatory Basic Life Support (BLS) courses (E12), and citizen responder programs such as the “HeartRunner”-Project, where app-dispatched citizen can provide resuscitation support (E1, E3, E11), which relies on a national network of accessible Automated External Defibrillators (AEDs) largely procured and funded by the Danish people or institutions [ – ] (E1, E3). The Emergency Medical Communication Center (EMCC) serves as a single point of access by integrating the emergency number 112 and medical helpline 1813 and using the same integrated IT-system . The EMCC is available 24/7/365 and staffed with medical calltakers (nurses and paramedics) and technical dispatchers. One expert labels this integration of both numbers as the most important innovation: “the most important innovation in my opinion being implemented in Copenhagen since [20] 14, is that you just make a decision whether to call 1813 or 112 and then, what’s happening behind in the black box is going to be decided by us [the EMS personnel]. […]. An integrated patient care system that can provide help 24/7 independent of what is your need and what time of the day.” (E8) The triage process is supported by computer-based protocols (i.e. “Danish Index”) and may be supported by an artificial intelligence-based (AI) speech-recognition software in case of suspected cardiac arrest. In a recent trial, the AI demonstrated a remarkable sensitivity, accurately detecting 85% of cardiac arrest cases over the phone, showing a significantly higher sensitivity than human medical dispatchers , highlighting the potential of AI to enhance early recognition and response in critical situations. Communication with the patient or bystander can be extended through video-transmission and video-aided telephone CPR (E4). In case of an emergency, the waiting line of the non-emergency medical helpline 1813 may be cut using an Emergency Access Button (EAB) [ – ]. The dispatching of prehospital resources may include different multilevel responses and is best matched to the patient´s needs . GPs are integrated into the EMS system by supervising and supporting calltaking and dispatching activities during OOH, conducting home visits, and may order ambulance transports or book ED appointments via the EMCC. Referral or discharge notes are automatically extracted from the patient charter and forwarded to the relevant recipient, ensuring seamless information transfer between healthcare departments . Upon arrival at the scene, prehospital personnel can initiate treatment and, in some cases, perform diagnostic assessments, enabling timely and needs-based care (E2, E6). In Denmark, paramedics can decide against hospital transport based on legal guidelines (Bekendtgørelse om ambulancer og uddannelse af ambulancepersonale. 
BEK nr 1264 af 09/11/2018). Patients who are not conveyed may receive on-scene treatment, be discharged without follow-up, or be redirected to primary care services . All ambulance services are part of EMS and work under the same Standard Operating Procedures with medical supervision 24/7 . (ii) Transport A variety of specialized mobile care units supplement the traditional ambulance vehicles such as ambulances, physician-staffed mobile critical care units and helicopter EMS to provide patient-tailored and differentiated care. This includes “Sociolancen” for socially deprived persons , “Babylancen” for young children and their parents , mobile critical care units , also for psychiatric cases , national helicopter EMS and mobile causality clearing stations for mass casualty incidents [ , , ]. Units may be dispatched alone or in combination (“Rendezvous Model”) and manned with different level trained staff with varying (delegated) competences of treatment. Additionally, non-emergency transports are being coordinated by the CPH EMS, including “Sygetransport”, for laying or seated transports of non-critical patients to and from treatments . Technical solutions include telemedicine for medical support, wireless sensors network for automatic collection and documentation of vital signs, and a prehospital patient journal for information, communication and documentation of patient transport and care (E6). One expert describes the growing possibilities within the prehospital EMS as follows: “if the patient needs to go to the hospital - there are several opportunities also now, not only to diagnose, but also to treat patients. So, moving from an ambulance just being a transport form to a hospital doing the diagnostic and the treatment, more and more things are now shifted towards the prehospital phase”. (E11). (iii) Facility Although the CPH EMS is responsible for prehospital resources, the following examples were included as they illustrate the collaboration of pre-hospital and in-hospital care. This begins with the in-hospital environment which is largely triaged via the dispatchers at the EMCC or the GPs, and the “Copenhagen Triage Algorithm” that categorizes incoming patients [ – ] (E13). An early information transfer between ambulance is “[…] a big improvement in the hospital, [and helps] being prepared of what ‘s the matter with this patient arriving to our emergency room within the next few minutes. ” (E3). The “Acute Admission Database” records the patient pathway to aggregate patient-based data for analysis and quality improvement . Due to the prehospital focus of the CPH EMS, limited data was available for the domain “facility”. (iv) Cross-Cutting Elements The optimal coordination and integration of cross-cutting elements of the CPH EMS was summarized by one expert as “It takes a system to save a life” (E8). “Getting the right patient to the right treatment at the right time.”, a declared goal of the CPH EMS system, is thought to lower redundant expenses while increasing care quality . Identified components regarding cross-cutting elements, predominantly address the collection, monitoring, and evaluation of information, supporting research and quality improvement. Data is used to monitor patient needs and systems performance, or as summarized by one expert: “data save lives” (E10). 
Patient needs can be monitored through comprehensive data collection, linking EMS activity with individual-level health data via the Danish civil registration number , patient satisfaction surveys [ , , ], project-related surveys , and annual benchmarking of prehospital data , allowing the tracking of patient pathways, highlighting gaps, and supporting research-driven improvement. Patient safety is improved through tools like the national incident reporting system, the ´Danish Patient Safety Database’ allowing patients and professionals to report incidents, fostering a system-wide learning environment for improvement , following the mantra “ improve the system– not the person ” . A variety of data is being collected through performance monitoring of the EMCC (including data on call and mission processing, waiting times, hospitalization rates, patient satisfaction and complaints, home visits by GPs etc.), registries and databases contribute to research and quality improvement, while regular data-based quality improvement councils (E7, E12), and annual public reports ensure high transparency and accountability of the CPH EMS. An integrated ICT system incorporates the emergency dispatch of prehospital resources, enabling communication and data aggregation of the operational side of the CPH EMS . This highlights the data-driven approach of service delivery at the CPH EMS and its focus on research, facilitated by public–private partnerships for development and maintenance of research projects and innovations . The qualitative assessment of the CPH EMS system highlights several examples of (i) integrated, and (ii) patient-centered emergency care, and supporting (iii)– sometimes “smart”—technology solutions (see Figs. and ). These findings clearly indicate a shift towards prehospital and community-based emergency care, moving away from a mere hospital-centric model. The identified key components of the CPH EMS were validated through member checking with researchers at the CPH EMS and were subsequently mapped onto the WHO ECSF as follows: Blue components represent examples of integrated EMS care, focusing on the triage process for differentiated responses, including dispatching of various specialized response units, self-help advice, and referrals to Emergency Departments (ED), GPs, specialists, or home consultations. Pink components illustrate examples of patient-centered EMS care, including a “single-point of access” via the EMCC emergency line (112) and medical helpline (1813), app-dispatched responders, mandatory community BLS trainings, and needs-based on-site treatment or transport care by specialized EMS personnel and equipment. Green components depict supporting– sometimes smart– technologies, such as urgent call prioritization, video-transmission, AI-based speech recognition for early detection of out-of-hospital cardiac arrest (OHCA), and telemedical supervision. The mapped components of CPH EMS in the WHO ECSF infographic highlight the importance of community-based interventions and a needs-driven, differentiated system response (as shown in the raindrop-shaped light green area). This approach moves beyond traditional hospital-focused care, enabling on-site case resolution or redirection to the most appropriate care pathways, including EMS, primary care, and OOH services. Integrated EMS The CPH EMS system exemplifies an integrated approach at different stations of the patient pathway within EMS. 
Firstly, the EMCC serves as a ‘single point of contact’ for both emergency and non-emergency medical concerns (112 and 1813), optimizing patient navigation [ , , ], early and systematic triage supporting optimal EMS resource allocation, effectively functioning as a gatekeeper, assigning unspecific cases to disease-specific care . Its aim to reduce ED congestion and improve EMS system performance is a priority shared by many healthcare systems worldwide . Secondly, CPH EMS offers a differentiated response through specialized units that address different levels of urgency and cover a broad spectrum of health, including mental and social care aspects . This development can be found in many EMS systems, reflected in the establishment of single-response, multiprofessional, and/or tele-medical supported mobile units [ – ]. Thirdly, EMS personnel may triage patients on-scene, opting against hospital transport . This reflects a shift in many EMS systems from emphasis on hospital transport to providing advanced prehospital care, and redirecting patients to the best point of (available) service or discharging the patient without follow-up . The WHO ECSF could further emphasize an integrated approach, including a single point of access, comprehensive triage, and a differentiated system response and include care options of social, psychological, and sub-acute cases frequently managed by EMS, as well as EMS integration within the broader healthcare system. Aligning with the European Union’s ‘State of Health’ reports from 2017 and 2019 , a stronger emphasis on integrated care systems could support policymakers and health system administrators in develop a holistic health system including the interaction of sub-systems. However, transforming healthcare silos into an integrated system comes with challenges. During the CPH EMS restructuring, obstacles included “traditional thinking in hospital structure”, “facilities and logistics”, “stakeholder power (physician vs nurses, GPs vs other physicians)”, and “money” . Similar to reports from Brazil integration efforts encountered policy, structural and organizational barriers despite improvements in care quality and health systems effectiveness . Overcoming these challenges require sustained policy and organizational support to fully realize the benefits of integrated systems. Patient-centered emergency care CPH EMS takes a patient-centered approach, addressing somatic, social, and psychiatric needs through differentiated care pathways, supported by a single-point-of-access, telemedicine, specialized mobile units, scheduled ED visits or primary care services [ , , ]. Research from Australia has shown that one in ten EMS-attended patients presented with mental health issues, with most (74,4%) being transported to hospitals, despite being more suited for community-based mental health services . This stresses the need for a holistic assessment, considering somatic, mental, and social aspects, while taking into account patient-specific settings and life circumstances. In contrast, the WHO ECSF illustrates patients as individuals with specific emergencies—such as cardiac arrest, car accident, pregnancy, or pediatric illness— seemingly focusing on somatic emergencies with a singular response option: transport to a hospital. While this may not be intentional, it visually underrepresents the reality of complex patients’ abilities and needs. 
Patient and bystander abilities include health literacy and health system literacy among further circumstantial factors, essential for recognizing type and urgency of medical needs and seeking appropriate care . While Copenhagen research projects observed a community with strong health awareness , and active support of initiatives such as the heartrunner project [ , , ], the importance of early recognition and help-seeking behavior becomes even more apparent when reviewing examples with less favorable conditions. In a population-based survey in The Gambia, limited community awareness of common warning signs, especially for non-communicable diseases like stroke, acute coronary symptoms, or diabetic emergencies, was associated with a high proportion of disability especially among the young male population . These reflections on patients abilities, needs and circumstances, have been emphasized by EU and WHO initiatives on integrated, patient-centered healthcare systems that advocate for “services of better quality, financially more sustainable and more responsive to personal preferences and needs ”. The WHO Integrated and People-Centered Health Services Framework (IPCHS) states that “all people have equal access to quality health services that are co-produced in a way that meets their life course needs , are coordinated across the continuum of care, and are comprehensive, safe, effective, timely, efficient and acceptable; and all carers are motivated, skilled and operate in a supportive environment” . Supporting technologies in EMS systems CPH EMS has integrated various supporting and smart technologies and have indicated in various studies that these may (i) improve timely EMS access, triaging, and preliminary diagnosis [ , , – , – ], and (ii) facilitate seamless communication and documentation (E6) , enabling data transfer and communication across providers including GPs or inpatient care facilities. Technology can support “seamless interaction” among care providers across settings and sectors as demanded by the WHO Framework on Integrated People-Centered Health Services , and enable tele-health and telemedicine care that can play a key role in reducing care disparities, enhancing health literacy, promoting healthy behaviors , and improve EMS efficiency . However, availability and quality of data input remains essential for effective care delivery . Beyond operational benefits, technology facilitates digitized and standardized data collection, essential for monitoring, quality improvement, and allocation optimization. The role of standardized data in enhancing emergency care quality has also been emphasized by the 72nd World Health Assembly . Similarly, Mowafi et al. stress the need for data-driven research to build evidence-based EMS services, noting a lack of emergency care surveillance and registries in most low- and middle-income countries, which could significantly improve service quality and Public Health . The WHO ECSF aims to depict the “essential functions” of an EMS system. Thus, it is comprehensible that few digital technologies are included as these depend on resource availability and (digital) infrastructure. Nonetheless, smart information technology and modern biotechnology are believed to offer significant benefits by enhancing efficiency, accessibility and personalization in healthcare , a shift accelerated during the Covid-19 pandemic when digital care expanded rapidly . 
Thus, a balanced approach that acknowledges the benefits of supporting technology while considering infrastructure limitations could make the WHO ECSF more adaptable, while indicating the areas in which technological support could be beneficial where available. While promising, smart technology in EMS also introduces risks, such as fallibility (e.g. false negatives in AI speech recognition) or challenges in infrastructure and cybersecurity. Thus, ongoing improvements in IT infrastructure, skills, security, and data protection standards are essential.
Transferability
EMS systems and patients' needs vary widely due to differences in socio-demographics, culture, healthcare infrastructure, political and environmental contexts, and overall resource availability. While the WHO ECSF is a valuable framework, its strong focus on hospital-centered EMS limits its applicability, particularly in rural and low-resource areas, where integrated community-based and telehealth solutions may be of even greater importance than in a hospital-dense area. Similarly, the effectiveness of CPH EMS components is context-dependent, relying on a high-resource health system with fiscal stability and an effective, reliable approach to standards, guidance, and system organization, which supports a robust infrastructure, resources for specialized mobile units, and comprehensive data collection and linkage. Initiatives such as the community responder program also succeed because of cultural factors like widespread public first-aid training. However, while CPH EMS offers examples of good practice, challenges remain, such as AED access in rural areas, data flow between prehospital and hospital systems, and integration with social and mental health systems, showing the need for continuous evaluation and further development. Nonetheless, the presented examples are largely backed by peer-reviewed research and by practitioners working in CPH EMS, confirming their evidence base and practicality in the CPH EMS setting and making them valuable considerations for the WHO ECSF. Finally, it is important to recognize that good or best practices are generally 'fluid concepts', meaning they constantly evolve with changing needs and new organizational and technological developments. Our considerations on how the WHO ECSF could be strengthened are not intended to represent a final blueprint for the ideal EMS system, but rather highlight areas for improvement and emphasize the need for continuous review and updating of the WHO ECSF, to ensure that it remains a relevant and evidence-informed guide for EMS systems.
Limitations
This study has several potential limitations. First, its qualitative and regional approach may introduce biases in data collection, including selection bias, researcher bias, and interviewee bias in expert interviews. Additionally, the geographical focus of the scoping review on the CPH EMS and, in some cases, the Danish perspective on nationwide structures may have limited insights from other EMS models, and a broader literature review could have yielded additional findings for the WHO. To minimize interviewee bias, expert selection was verified, and multiple data collection methods (interviews and literature) were used for cross-checking. Transcript verification and member-checking were also used to reduce interpretation bias. Publication bias was addressed by including grey literature and internal documents. However, no quality analysis of the included studies was performed.
Second, due to resource constraints (time and personnel), only a partial system analysis was feasible, potentially leading to an incomplete representation of the CPH EMS system. Future research using a multifaceted approach, such as direct inspection/observation and focus group discussions, as suggested by Mehmood et al. (2018), could enhance system assessment. Third, as two sections of the WHO ECSF were not publicly available at the time of the study (e-mail responses of [email protected] on 31 May 2021 and 4 April 2023), they were not considered. Lastly, data collection was conducted by a single researcher (SB); however, results were peer-reviewed and verified by senior-level researchers at the CPH EMS and Maastricht University to mitigate bias. Finally, given that this study focuses solely on the CPH EMS, the components identified are not claimed to be unique or superior, and thus are not considered 'best practices' or 'unique to the CPH EMS', as there might be other EMS systems with similar or even better practices and components that may be worth exploring in a different study.
Implications for practice and research
The findings highlight the need for research-driven EMS systems that continuously measure, review and enhance current EMS practices and frameworks. The CPH EMS exemplifies the benefits of a close research-practice link, encouraging systematic data collection, rapid implementation of research insights, and fostering a culture of innovation. Short implementation pathways from research to practice make the CPH EMS an innovative and adaptive system. The WHO ECSF offers guidance for the effective design of a patient pathway within EMS systems, but could be further developed in three areas: (i) integrating EMS within primary healthcare, public safety, and public health frameworks for a holistic approach, (ii) emphasizing the EMCC's role as the central point of contact for needs-based resource allocation, and (iii) balancing its hospital-centered, resource-intensive model with guidance suited for low-resource or underserved settings. As this study predominantly focused on the WHO Health System Building Blocks and the PEMS Framework from Mehmood et al. (2016), a subsequent step would be to further assess (ii) outputs (access, quality, coverage, and safety) and (iii) goals (improved health, responsiveness, social and financial risk protection, and efficiency) across the EMS, primary health and care, public safety and public health systems. Emerging stressors on the EMS and health system, such as cross-border care needs, extreme weather events [ – ], and the emergence of conflict areas (encompassing physical, political, and digital dimensions), have not been discussed within this paper but indisputably must gain greater significance in the design and resilience strengthening of health systems, including emergency care systems.
Conclusion
This study highlights components of an integrated, patient-centered and technology-supported EMS system in the Capital Region of Denmark, including (i) integration of EMS within public health and primary care, (ii) patient-centered strategies such as a single point of access, effective triaging systems, and diverse care response options, and (iii) the use of supportive technologies to enhance care coordination, operational efficiency, and patient outcomes. These findings advocate for incorporating evidence-based practices from a research-driven, integrated EMS system into the WHO ECSF, emphasizing a shift from a hospital-centric to a more holistic, integrated EMS system framework. With its global recognition and visibility, the WHO ECSF has the potential to guide the evolution of emergency care systems toward these standards, but it must adapt to advancing knowledge, emerging technologies, and diverse contextual needs. Implementation success will depend on the availability of resources, including funding, data availability, infrastructure, and cultural aspects, necessitating adaptations for local contexts.
Future research should focus on evaluating the identified components in terms of process and outcome parameters and on assessing their applicability across varying global settings.
Plant growth promotion and biocontrol properties of a synthetic community in the control of apple disease
f2597196-66fe-4f12-8af9-211ed5767396
11177370
Microbiology[mh]
Apple replant disease (ARD) is a major problem for the apple industry worldwide, including in the Bohai Gulf region of China, where a large number of apple trees are planted. Continuous replanting severely affects the yield and quality of apple trees and causes serious economic losses. The act of replanting is often associated with increased inoculum levels and elevated activity of soil-borne plant pathogens, as well as disturbances in soil microbial communities, leading to reduced yields in apple cultivars over time. ARD, primarily affecting apple roots, is significantly influenced by the complex interactions within the rhizosphere, the soil region near plant roots. The aboveground performance of plants is closely correlated with changes in underground microbial communities. The rhizosphere microbial community differs depending on the planting time and planting cycle, which in turn affect the physical and chemical properties of the subsoil and the growth of aboveground plants. The composition and diversity of microbial communities play a pivotal role in influencing soil structure and biological interactions. Variations in the rhizosphere microbial community may lead to microecological imbalances in the root zone of apple trees, and these imbalances could potentially be one of the factors contributing to the incidence of apple diseases. In a previous study, we used high-throughput sequencing to analyze the microbial communities of the rhizospheres of perennial apple trees around Bohai Gulf. The results revealed that replanting led to an increase in populations of potential pathogenic fungi such as Verticillium and bacteria such as Xanthomonadaceae, alongside a decrease in potentially beneficial bacterial populations like Pseudomonas and Bacillus. Some studies have also found that replanting coincided with a rise in antagonistic bacteria and fungi, e.g., Arthrobacter and Chaetomium, indicating that heightened pathogen levels may induce increased microbial antagonism. Apple replant disease can be controlled by physical, chemical, and biological means. Biocontrol includes bioprospecting for new, active isolates but also an understanding of the mechanisms of pathogen antagonism, to allow their improvement and broader use. Fusarium oxysporum, Rhizoctonia solani, Botryosphaeria ribis, and Physalospora piricola are associated with soil-borne and plant diseases. Rhizoctonia solani is a major fungal pathogen responsible for ARD, particularly noted for its impact on apple production. Fusarium oxysporum has been identified as a pathogen causing crown and root rot in apples. Botryosphaeria ribis is linked with apple stem canker and fruit rot. Meanwhile, Physalospora piricola is known to cause bull's-eye rot in apples, affecting fruit quality and storability. All four are typically present in the soil and in residual roots or crowns, where they cause a reduction in plant productivity. Conversely, many strains of Bacillus and Streptomyces are considered biocontrol agents, because they effectively colonize the rhizosphere of different plant species, including fruit trees, and produce a wide range of antimicrobial agents that can survive harsh environments. Different strains of Streptomyces and Bacillus have thus been investigated for their ability to control fungal and bacterial diseases of plants, such as bacterial leaf blight caused by Xanthomonas. However, most of the current biological control methods are based on individual strains that exhibit limited resistance.
For instance, Bacillus thuringiensis , extensively utilized as a biopesticide in agriculture, targets lepidopteran pests, beetles, and flies . The antagonistic effects of a combination of multiple bacterial strains still need to be explored. A synthetic community (SynCom) is designed by mixing selected strains with the aim of increasing microbial community stability through synergistic interactions between members in a manner that benefits plants. This approach also enables a detailed assessment of host and microbe characteristics under controlled, reproducible conditions . SynCom construction is an essential step in verifying microbiome function and in studying the interactions between the microbiome and host plant. The isolation and culture of microorganisms can link amplicon sequencing data to functional validation and are key to elucidating the interactions between microbiome and host plant . However, little research has examined the use of SynCom in combating apple diseases. The effects of SynCom on the growth of Malus hupehensis Rehd and the beneficial and harmful microorganisms in its rhizosphere are not known. In this study, the responses of a microbial community to the application of different exogenous microorganisms were investigated, including the antagonism between rhizosphere bacteria and pathogens, the role of SynCom in plant growth, and the effect of SynCom on the rhizosphere microbial community and vice versa. These topics were explored by screening bacteria capable of inhibiting typical apple disease pathogens, which resulted in the isolation and characterization of eight isolates that were further assessed for their potential as biological control agents. Then a SynCom constructed using these resistant bacteria was tested in colonization and plant growth promotion experiments, to identify possible beneficial effects on Malus hupehensis Rehd. The structure, species composition, co-occurrence network characteristics, and assembly process of the microbial community were investigated, together with the changes in the responses to different treatments, to analyze how SynCom functions from a microbial perspective. Our results contribute new insights into the prevention and control of ARD and other apple diseases, offering preliminary steps towards developing an ecologically friendly and sustainable apple industry. Rhizosphere soil sampling The rhizosphere soil in this study was collected from perennial apple trees in apple orchards around Bohai Gulf (China) in five sampling sites (Qixia, Muping, Laizhou, Huludao and Changli). The roots were shaken vigorously to remove loose soil and the 1–2 mm thick soil layer surrounding the root was defined as rhizosphere soil. To collect the rhizosphere soil, the root samples were transferred to sterile 50 mL centrifuge tubes containing 20 mL sterile 10 mM PBS and placed in a full-temperature shaker at 120 rpm/min, where they were oscillated for 20 min at room temperature . The root system in each tube was removed with sterile tweezers, and the remaining suspension was centrifuged at high speed (6,000 × g, 4 °C) for 20 min. All soil samples were stored at 4 °C. 
Bacterial isolation and assessment of antimicrobial and plant growth promotion (PGP) activity
Rhizosphere bacteria were isolated, and the antimicrobial activity of the rhizosphere bacterial isolates was tested against the apple pathogens Fusarium oxysporum, Rhizoctonia solani (AG-5), Botryosphaeria ribis, and Physalospora piricola; the pathogens were provided by the microbiology laboratory of Shandong Agricultural University. To prepare a 10⁶-fold dilution of soil for microbial analysis, a 1 g soil sample was suspended in 99 mL of sterile water and mixed thoroughly to ensure homogeneity. This suspension was then serially diluted to achieve the desired dilution factor. From the final dilution, 100 µL was spread onto Luria-Bertani (LB) agar plates and incubated overnight at 30 °C. Representative colonies differing in color, shape, and size were selected and subcultured onto fresh LB plates for an additional 2 days to obtain pure cultures. Strains of Fusarium oxysporum, Rhizoctonia solani, Botryosphaeria ribis, and Physalospora piricola were cultured on potato dextrose agar (PDA) and incubated at 28 °C for 7 days. Dual culture assays on PDA were then conducted by placing a square agar disk (side length of 0.5 mm) containing mycelium of the pathogen at the center of the plate. Two 6 mm wells were created on opposite sides of the plate, and 100 µL of culture medium from each bacterial strain, after 2 days of incubation in LB liquid medium (10 g NaCl, 10 g tryptone, 5 g yeast extract powder, 1 L of distilled water, pH 7.0–7.5), was pipetted into the wells. The plates were incubated at 30 °C for 3 days, and the antagonistic activity was estimated by measuring the growth inhibition zone. Antagonistic bacteria with inhibitory effects on all four pathogens were selected for further study. The antagonistic bacterial strains were then evaluated for inhibition between isolates: different strains were streaked pairwise on LB agar plates without intersecting, and growth was observed for any antagonistic phenomena. The selected bacterial strains were also evaluated for siderophore, indoleacetic acid (IAA), and ACC deaminase production. Siderophore-producing bacteria were screened by inoculating candidate strains onto chrome azurol S medium, incubating the plates at 30 °C in the dark for 2 days, and observing the size of the resulting orange halo. ACC deaminase production was determined based on the ability of each candidate strain to use ACC (1-aminocyclopropane-1-carboxylate) as the sole nitrogen source. IAA and IAA-like molecules were quantitatively determined using a colorimetric Salkowski assay.
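To make the quantification behind the serial-dilution plating described above concrete, the following minimal R sketch back-calculates colony-forming units (CFU) per gram of soil from a plate count, assuming the 10⁶-fold dilution and 100 µL plating volume given above; the colony count is a hypothetical illustration, not data from this study, and this is not the authors' code.

```r
# Hypothetical example: back-calculating CFU per gram of soil from colonies
# counted on a dilution plate. Values are illustrative, not study data.
cfu_per_gram <- function(colonies, dilution_factor, volume_plated_mL) {
  # colonies observed on the plate, corrected for the fold-dilution of the
  # original 1 g soil suspension and for the volume actually spread
  colonies * dilution_factor / volume_plated_mL
}

# e.g. 42 colonies on a plate spread with 0.1 mL (100 uL) of the 1e6-fold dilution
cfu_per_gram(colonies = 42, dilution_factor = 1e6, volume_plated_mL = 0.1)
#> 4.2e+08   # CFU per gram of rhizosphere soil under these assumptions
```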
Taxonomic identification and phylogenetic analysis of the strains
The TIANamp bacterial DNA kit (Tiangen Biotech (Beijing) Co., Ltd.) was used to extract DNA from the antagonistic strains. Target fragments amplified using the 16S rDNA universal primers 27F (5′-CAGAGTTTGATCCTGGCT-3′) and 1492R (5′-AGGAGGTGATCCAGCCGCA-3′) served as the template for PCR amplification. The PCR protocol began with an initial denaturation step at 95 °C for 3 min, followed by 35 cycles of denaturation at 95 °C for 30 s, annealing at 52 °C for 30 s, and extension at 72 °C for 1 min, and concluded with a final extension at 72 °C for 5 min to ensure that any remaining DNA was fully extended. The amplified products were sequenced by Sangon Biotech (Shanghai) Co., Ltd., and the obtained sequences were used in a BLAST search ( http://www.ncbi.nlm.nih.gov/ ) to identify the species of the isolates. The BLAST results were downloaded, and the sequences were used to construct a phylogenetic tree in MEGA 7.0.
Assembly and colonization of the SynCom
Eight bacterial strains with a highly antagonistic effect on Fusarium oxysporum (ZOI > 6 mm), Rhizoctonia solani (ZOI > 6 mm), Botryosphaeria ribis (ZOI > 8 mm), and Physalospora piricola (ZOI > 6 mm) were selected as candidate strains for SynCom construction. Equal volumes of each strain (~10⁸ cells/mL) were mixed to establish the SynCom. The persistence and viability of the isolates in the soil were tested using an antibiotic resistance marking method. The eight labeled strains were inoculated in equal numbers into pots containing Malus hupehensis Rehd, and the rhizosphere-colonizing bacteria were recovered after inoculation. Each sample was diluted in sterile water and plated on medium supplemented with rifampicin and ampicillin. The results were assessed after 4 days of incubation at 30 °C and are reported as CFU/g rhizosphere soil.
Pot experiment
The antagonistic ability of the SynCom in the apple rhizosphere under natural conditions was also investigated in pot experiments using Malus hupehensis Rehd. Malus hupehensis Rehd has deep roots and is commonly used as a rootstock in major apple-producing regions of China. These seedlings are propagated through non-fusion techniques, avoiding crossbreeding, which ensures that the experimental material is genetically consistent. Antagonistic strains obtained from the screening were cultured in LB liquid medium at 28 °C and 170 rpm for 2 days, diluted with sterile water, and mixed in equal proportions to obtain a 10⁸ CFU·mL⁻¹ SynCom suspension. The pathogen suspension was composed of a combination of four fungi: Fusarium oxysporum, Rhizoctonia solani, Botryosphaeria ribis, and Physalospora piricola. Each fungus was individually inoculated into potato dextrose broth (PDB) and incubated in a shaking incubator at 28 °C and 170 rpm for 5–7 days. Following incubation, each fungal culture was filtered to obtain a spore suspension, which was then diluted with sterile water to a concentration of 10⁶ spores/mL. Equal volumes of these diluted spore suspensions were subsequently combined to prepare the pathogen suspension. The root surfaces of healthy Malus hupehensis Rehd seedlings were washed and then immersed for 30 min in the appropriate suspension, depending on the treatment (see below). The seedlings were then transferred to plastic pots (diameter of 29.6 cm) containing 2 kg of mixed growing substrate consisting of soil from the apple orchard and sterilized nutrient soil in a 1:1 ratio. Watering was carried out for 3 months according to conventional cultivation practice. Four treatments were established in triplicate: inactivated SynCom suspension (25 mL) + sterile water (25 mL) (S), pathogen suspension (25 mL) + sterile water (25 mL) (P), pathogen suspension (25 mL) + SynCom suspension (25 mL) (M), and SynCom suspension (25 mL) + sterile water (25 mL) (A). Root inoculation was started 3 months after seedling transplantation. The suspension was applied three times, separated by 1-week intervals (the final concentration of bacteria added was about 3.75 × 10⁷ cells per gram of soil, and that of fungi about 3.75 × 10⁴ spores per gram of soil), and the seedlings were harvested after 75 days.
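As a point of clarification on the equal-volume mixing used to assemble the SynCom suspension above, the short sketch below works through the arithmetic: the total cell density of the mixture remains at the single-strain level, while each member strain contributes one eighth of it. The numbers are illustrative assumptions, not measurements from the study.

```r
# Illustrative arithmetic only (not from the study): mixing eight strains,
# each adjusted to ~1e8 cells/mL, in equal volumes keeps the total density
# at ~1e8 cells/mL, while each individual strain contributes 1/8 of that.
n_strains      <- 8
per_strain_in  <- 1e8                       # cells/mL of each single-strain suspension
total_mix      <- per_strain_in             # equal-volume mixing preserves the total density
per_strain_mix <- per_strain_in / n_strains # density of each strain in the final mix
per_strain_mix
#> 1.25e+07   # cells/mL of each member strain in the SynCom suspension
```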
The biological characteristics (root length, plant height, and number of leaves) were determined 15, 45, and 75 days after application.
Physicochemical measurements of pot soil
The treated Malus hupehensis Rehd seedlings were harvested after 75 days and potting soil was collected. Soil adhering to the roots, approximately 1 mm thick, was defined as rhizosphere soil. The soil pH, moisture content, organic matter content, and concentrations of available N, P, and K were measured.
DNA extraction, amplification, and sequencing
Total DNA was extracted from 0.6–0.8 g of soil using the FastDNA® SPIN kit (MP Biomedicals, Solon, USA) according to the manufacturer's protocol. The primers 338F (5′-ACTCCTACGGGAGGCAGCA-3′) and 806R (5′-GGACTACHVGGGTWTCTAAT-3′) were used to amplify the V3–V4 region of the bacterial 16S rRNA gene, and the primers ITS1F (5′-CTTGGTCATTTAGAGGAAGTAA-3′) and ITS1R (5′-GCTGCGTTCTTCATCGATGC-3′) were used to amplify the fungal ITS region. Sequencing of purified PCR products was performed on an Illumina MiSeq PE300 platform at Major Biomedical Technology Co., Ltd (Shanghai, China). Subsequent to sequencing, raw data were assembled and quality-filtered following the methodology described by Caporaso, and chimeric sequences were removed using the QIIME2 tool. Sequences corresponding to mitochondria and chloroplasts were also removed. The remaining effective sequences were clustered into operational taxonomic units (OTUs) at 97% similarity. The raw sequencing data were deposited in the Sequence Read Archive at NCBI under accession number PRJNA1009678.
Effects of SynCom on the soil microbial community
The species diversity and richness of rhizosphere soil samples following SynCom application were characterized by calculating alpha-diversity indices for the microbial community. Microbial community composition was ordinated by principal coordinates analysis (PCoA) based on Bray-Curtis distances, and differences among rhizosphere soil samples after the application of SynCom and/or pathogens were compared using nonparametric permutational multivariate analysis of variance, based on the Adonis function in R. Linear discriminant analysis and a significance test were used to explore the most discriminating genera between treatments, using linear discriminant analysis effect size (LEfSe). A co-occurrence network was also constructed, and a null model was used to quantify community assembly processes. Environmental factors were combined with microbial community data to identify biomarkers and environmental drivers in the different treatments. Predicted functional annotations were based on PICRUSt and FUNGuild.
Statistical analyses
All statistical analyses were performed in the R environment. Student's t tests (two-sided) were used to compare pairs of samples for significant differences. Analysis of variance (ANOVA) and Tukey's honest significant difference (HSD) test were performed to determine significant differences in multiple comparisons, using the R agricolae package. Normality was assessed using the Shapiro-Wilk test, and homogeneity of variances was evaluated by Bartlett's test. Alpha-diversity indices, including richness and the Shannon, Chao1, and Simpson indices, were calculated using the "vegan" package in R. The results of PCoA based on a Bray–Curtis distance matrix were visualized using the "ggplot2" package, and the coordinates were used to draw 2D graphical outputs.
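For orientation, the sketch below illustrates how the alpha-diversity indices, Bray–Curtis PCoA, and Adonis (PERMANOVA) comparisons described above can be computed with the vegan and ggplot2 packages named in the text. The OTU table is simulated and all object names are hypothetical; this is a minimal sketch, not the authors' analysis code.

```r
# Illustrative sketch only: alpha diversity, Bray-Curtis PCoA and PERMANOVA
# for a toy OTU table, using the packages named in the text (vegan, ggplot2).
library(vegan)
library(ggplot2)

set.seed(1)
# toy OTU table: 12 samples (rows) x 50 OTUs (columns), integer counts
otu <- matrix(rpois(12 * 50, lambda = 20), nrow = 12,
              dimnames = list(paste0("sample", 1:12), paste0("OTU", 1:50)))
treatment <- factor(rep(c("S", "P", "M", "A"), each = 3))   # the four pot treatments

# alpha diversity: richness, Shannon, Simpson, Chao1
richness <- specnumber(otu)
shannon  <- diversity(otu, index = "shannon")
simpson  <- diversity(otu, index = "simpson")
chao1    <- estimateR(otu)["S.chao1", ]

# Bray-Curtis distances, PCoA ordination, and PERMANOVA (Adonis)
bray      <- vegdist(otu, method = "bray")
pcoa      <- cmdscale(bray, k = 2, eig = TRUE)
permanova <- adonis2(bray ~ treatment)

# 2D plot of the first two principal coordinates, coloured by treatment
scores <- data.frame(PCo1 = pcoa$points[, 1], PCo2 = pcoa$points[, 2],
                     treatment = treatment)
ggplot(scores, aes(PCo1, PCo2, colour = treatment)) + geom_point(size = 3)
```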
The R package "psych" was used to generate a species correlation matrix and to calculate Spearman correlations between OTUs. Node and edge files were exported to Gephi, which was used to visualize the network and to calculate the network topological parameters. Microbial community assembly was analyzed using the "NST," "picante," and "ape" packages to calculate βNTI indices, and phylogenetic trees were constructed in MEGA7.0. The "Hmisc" and "picante" packages were used to calculate Blomberg's K values. The soil physicochemical properties driving the microbial communities were analyzed using the "linkET" and "dplyr" packages. The "vegan," "randomForest," and "reshape" packages were used to identify biomarkers and environmental drivers. Functional categorization based on KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways was performed with PICRUSt following the standard analysis workflow. The relative abundances of level 2 pathways were obtained, and the results were visualized in RStudio. Fungal function was predicted using the FUNGuild tool of Majorbio Cloud.

Collection of rhizosphere soil and isolation of antagonistic bacteria

The rhizosphere soil in this study was collected from perennial apple trees in apple orchards around Bohai Gulf (China) at five sampling sites (Qixia, Muping, Laizhou, Huludao, and Changli). The roots were shaken vigorously to remove loose soil, and the 1–2 mm thick soil layer surrounding the roots was defined as rhizosphere soil. To collect the rhizosphere soil, the root samples were transferred to sterile 50 mL centrifuge tubes containing 20 mL of sterile 10 mM PBS and shaken at 120 rpm for 20 min at room temperature. The roots in each tube were then removed with sterile tweezers, and the remaining suspension was centrifuged (6,000 × g, 4 °C) for 20 min. All soil samples were stored at 4 °C. Rhizosphere bacteria were isolated, and the antimicrobial activity of the isolates was tested against the apple pathogens Fusarium oxysporum, Rhizoctonia solani (AG-5), Botryosphaeria ribis, and Physalospora piricola, which were provided by the microbiology laboratory of Shandong Agricultural University. To prepare a 10^6-fold dilution of soil for microbial analysis, a 1 g soil sample was suspended in 99 mL of sterile water and mixed thoroughly to ensure homogeneity; this suspension was then serially diluted to the desired dilution factor. From the final dilution, 100 µL was spread onto Luria-Bertani (LB) agar plates and incubated overnight at 30 °C. Representative colonies differing in color, shape, and size were selected and subcultured onto fresh LB plates for an additional 2 days to obtain pure cultures. Strains of Fusarium oxysporum, Rhizoctonia solani, Botryosphaeria ribis, and Physalospora piricola were cultured on potato dextrose agar (PDA) at 28 °C for 7 days. Dual culture assays on PDA were then conducted by placing a square agar disk (side length of 0.5 mm) containing mycelium of a pathogen at the center of the plate. Two 6 mm wells were created on opposite sides of the plate, and 100 µL of culture from each bacterial strain, grown for 2 days in LB liquid medium (10 g NaCl, 10 g tryptone, 5 g yeast extract, 1 L distilled water, pH 7.0–7.5), was pipetted into the wells. The plates were incubated at 30 °C for 3 days, and antagonistic activity was estimated by measuring the growth inhibition zone.
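As a worked example of the serial-dilution plate counts described above, the arithmetic for converting colony counts back to CFU per gram of soil can be written out as follows; all numbers are hypothetical and serve only to illustrate the calculation.

```r
# Hypothetical example: converting plate counts to CFU per gram of soil
colonies      <- 182    # colonies counted on one plate (made-up value)
plated_volume <- 0.1    # mL spread on the plate (100 uL)
dilution      <- 1e6    # dilution factor of the plated suspension
soil_mass     <- 1      # g of soil in the initial suspension

cfu_per_gram <- colonies / plated_volume * dilution / soil_mass
cfu_per_gram            # 1.82e9 CFU/g in this made-up example
```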
Antagonistic bacteria with inhibitory effects on all four pathogens were selected for further study. The antagonistic strains were then evaluated for inhibition between isolates: different strains were streaked pairwise on LB agar plates without intersecting, and growth was observed for any antagonistic interactions. The selected strains were also evaluated for siderophore, indoleacetic acid (IAA), and ACC deaminase production. Siderophore-producing bacteria were screened by inoculating candidate strains onto chrome azurol S medium, incubating the plates at 30 °C in the dark for 2 days, and observing the size of the resulting orange halo. ACC deaminase production was determined based on the ability of each candidate strain to use ACC (1-aminocyclopropane-1-carboxylate) as the sole nitrogen source. IAA and IAA-like molecules were quantified using the colorimetric Salkowski assay. The TIANamp bacterial DNA kit (TIANGEN Biotech (Beijing) Co., Ltd.) was used to extract DNA from the antagonistic strains, and the 16S rDNA was amplified from this DNA using the universal primers 27F (5′-CAGAGTTTGATCCTGGCT-3′) and 1492R (5′-AGGAGGTGATCCAGCCGCA-3′). The PCR protocol began with an initial denaturation at 95 °C for 3 min, followed by 35 cycles of denaturation at 95 °C for 30 s, annealing at 52 °C for 30 s, and extension at 72 °C for 1 min, and concluded with a final extension at 72 °C for 5 min. The amplified products were sequenced by Sangon Biotech (Shanghai) Co., Ltd., and the obtained sequences were used in a BLAST search ( http://www.ncbi.nlm.nih.gov/ ) to identify the species of the isolates. The BLAST results were downloaded and the sequences were used to construct a phylogenetic tree in MEGA7.0.
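The Salkowski assay is usually quantified against an IAA standard curve; the short R sketch below illustrates that calculation with a simple linear fit. The absorbance readings and concentrations are made-up values, not measurements from this study.

```r
# Hypothetical IAA standard curve for the Salkowski assay (OD530 vs. known IAA concentrations)
standards <- data.frame(
  iaa_ug_ml = c(0, 5, 10, 20, 40, 80),
  od530     = c(0.02, 0.11, 0.21, 0.40, 0.79, 1.55)   # made-up readings
)
fit <- lm(iaa_ug_ml ~ od530, data = standards)         # linear standard curve

# Estimate IAA in culture supernatants from their OD530 readings (hypothetical values)
samples <- data.frame(od530 = c(0.35, 0.62, 0.05))
predict(fit, newdata = samples)                        # estimated IAA concentrations (ug/mL)
```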
Assembly and characterization of simplified bacterial communities in rhizosphere soils of apple trees from the Bohai Gulf area

The fungistatic effects of 353 bacterial strains isolated from the rhizosphere soil of apple trees in the Bohai Gulf area were tested against Fusarium oxysporum, Rhizoctonia solani, Botryosphaeria ribis, and Physalospora piricola (Fig. ). The double antibiotic resistance labeling method was used to determine the colonization ability of the marked strains, and the plant growth-promoting (PGP) ability of the antagonistic strains was verified. Based on these results, eight strains (J-73, J-19, J-310, J-24, J-27, J-28, J-40, and J-41) with stable colonization ability, strong antagonistic effects against the four pathogens, and PGP traits were selected.
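The selection rule described in the Methods (zones of inhibition above 6 mm for three of the pathogens and above 8 mm for Botryosphaeria ribis) can be expressed as a simple filter over the screening table; the R sketch below uses placeholder strain names and invented values purely for illustration.

```r
# zoi: hypothetical screening table, one row per strain, zone of inhibition (mm) per pathogen
zoi <- data.frame(
  strain      = c("strain_1", "strain_2", "strain_3"),
  F_oxysporum = c(8.1, 11.2, 3.0),
  R_solani    = c(9.5, 14.3, 7.2),
  B_ribis     = c(10.4, 12.5, 5.1),
  P_piricola  = c(7.3, 18.1, 2.2)
)

# Keep strains exceeding the thresholds used for SynCom candidates
candidates <- subset(zoi, F_oxysporum > 6 & R_solani > 6 & B_ribis > 8 & P_piricola > 6)
candidates$strain
```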
All eight strains showed remarkably high antifungal activity against the four pathogens (Table ). However, the fungistatic effect, measured as the zone of inhibition, differed among strains, ranging from 6.52–11.22 mm for Fusarium oxysporum, 6.58–12.46 mm for Botryosphaeria ribis, 8.12–14.28 mm for Rhizoctonia solani, and 6.07–18.09 mm for Physalospora piricola. The eight antagonistic strains colonized the rhizosphere soil, with viable counts of 10^6 CFU/g 84 days after inoculation (Figure ). Strains J-19, J-310, J-28, and J-41 produced siderophores and ACC deaminase, and strain J-19 also produced IAA, underscoring their potential to enhance plant resistance to biotic stress and pathogen attack. Thus, these bacterial strains not only resist the four common pathogens but also have potential PGP effects, suggesting that they are promising candidates for the prevention and control of continuous cropping disorder (Fig. D-F). The eight isolates were further classified and identified. The alignment results showed that, for strains J-73, J-19, J-310, J-24, J-27, J-40, and J-41, the genus with the nearest genetic distance was Bacillus, with sequence identities of up to 99% to Bacillus thuringiensis (MG988269.1), Bacillus velezensis (KX129848.1), Bacillus siamensis (MG788345.1), Bacillus toyonensis (KY038747.1), Bacillus cereus (MG561368.1), Bacillus mycoides (MG561363.1), and Bacillus subtilis (GU980963.1). J-28 had 100% sequence identity with Streptomyces venezuelae (CP013129.1). Thus, J-73, J-19, J-310, J-24, J-27, J-40, and J-41 are members of Bacillus, consistent with the high prevalence of Bacillaceae in the rhizosphere soil of apple and other Rosaceae, where they act as antagonists, and J-28 belongs to Streptomyces (Fig. A and B). No antagonistic interactions were observed among these eight highly antagonistic strains (Table S2). Synthetic communities that combine the functions of several microbial strains are generally more stable than a single strain. Thus, in this study, the eight strains with the ability to promote plant growth and inhibit pathogenic fungi were mixed in equal proportions to establish the SynCom.

Evaluation of functional assemblages of microbial consortia

Next, the physicochemical properties of the soils in which Malus hupehensis Rehd was grown under the different treatments were analyzed (Table ). In treatment A, AN was 123.73 mg/kg, AK was 155.08 mg/kg, AP was 57.01 mg/kg, and OM was 3.07%, all higher than in the other treatments, whereas SM (14.24%), AK (151.14 mg/kg), AP (42.80 mg/kg), and OM (2.40%) were lowest in treatment P. The AP and OM contents in treatment A were significantly higher than those in treatment P (P < 0.05). Treatments: S (inactivated SynCom), P (pathogen only), M (pathogens and SynCom), A (SynCom only). Values represent means ± standard deviation (SD) of three replicates, and different letters (a, b, c) indicate statistically significant differences at P < 0.05. Abbreviations: pH (potential of hydrogen), SM (soil moisture), OM (organic matter), AN (available nitrogen), AK (available potassium), AP (available phosphorus). Measurements of the height, root length, and number of leaves of the Malus hupehensis Rehd potted seedlings showed significant differences among the four treatments (P < 0.05) (Table ).
The average plant height (cm) in treatments S, P, M, and A was 44.49, 29.91, 36.55, and 46.82, respectively; the average number of leaves was 21.33, 1.33, 18, and 27.33; and the average root length (cm) was 16.51, 13.14, 14.58, and 19.54. The plant height, root length, and number of leaves in treatment A were higher than in the other three treatments, and the root length and number of leaves in treatment M were significantly higher than those in treatment P (P < 0.05), suggesting a possible growth-promoting effect of the SynCom on seedlings.

Microbial community structure and diversity under the different treatments

According to the 16S rRNA amplicon sequencing results, 72,871, 50,527, 47,639, and 49,350 bacterial clean tags were obtained in treatments A, S, P, and M, respectively. Alpha-diversity indices were used to characterize the diversity of the microbial communities (Fig. ). Application of the SynCom had a significant impact on the species richness of the bacterial community: treatment A had the highest richness and Chao1 indices, followed by treatment M, while treatments S and P had lower richness and Chao1 indices (P < 0.05) (Fig. A). For the fungal ITS data, 28,336, 34,889, 44,159, and 40,490 clean tags were obtained in treatments A, S, P, and M, respectively. The richness and Chao1 indices were highest for treatment P, followed by treatments S, M, and A (Fig. B). PCoA based on Bray–Curtis distances clearly distinguished the four treatments for both bacteria and fungi; the fungal and bacterial compositions of the rhizosphere microbial communities clustered into distinct groups that corresponded well to the different treatments. The rhizosphere bacterial communities of Malus hupehensis Rehd differed significantly among the four treatments (P < 0.05) (Fig. C, D). Treatment A differed noticeably from the other treatments in rhizosphere microbial composition, indicating that the addition of the SynCom may alter and disturb the composition of the rhizosphere soil bacterial community. In addition, the difference in rhizosphere bacterial community composition was greater than that in the fungal community (R = 0.651 and P = 0.003 for bacteria; R = 0.198 and P = 0.062 for fungi). These findings indicate that the SynCom had a greater effect on the bacterial community than on the fungal community and that it increased bacterial richness.

Dominant and differential genera under the different treatments

Changes in the bacterial and fungal communities in response to the exogenous microorganisms were evaluated by comparing the genus-level diversity of Malus hupehensis Rehd rhizosphere soil in the four treatments. Microbial diversity fluctuated among the treatments (Fig. ). The dominant bacterial genera in all four treatments were Pseudarthrobacter, Skermanella, Blastococcus, Haliangium, Sphingomonas, and Chryseolinea. The relative abundances of Pseudarthrobacter (2.05%), Haliangium (1.12%), and Chryseolinea (0.84%) in treatment A were higher than in the other treatments (Fig. B). Pseudarthrobacter efficiently degrades crude oil and multi-benzene compounds.
Members of the genus Chryseolinea form a key group able to suppress disease-causing Fusarium. Although Streptomyces and Bacillus were not among the top 20 dominant genera, we also evaluated their enrichment across the treatments. The relative abundance of Bacillus was highest in treatment A (0.312%), followed by treatment M (0.305%) and treatment P (0.187%). Similarly, the relative abundance of Streptomyces was highest in treatment A (0.571%), followed by treatment M (0.520%) and treatment P (0.455%). These results suggest an increase in the relative abundance of potentially beneficial bacteria in treatment A, which may improve the soil environment, promote Malus hupehensis Rehd growth, and help control disease. Fusarium, Aspergillus, Mortierella, Phoma, and Acremonium were the dominant fungal genera. The relative abundance of the pathogenic fungus Fusarium, which causes multiple soil-borne diseases and reduces crop yields, was highest in treatment P (23.3%), indicating that pathogen application increased the relative abundance of Fusarium in the soil. The relative abundances of Phoma and Phaeomycocentrospora in treatment M (2.02% and 0.71%) were lower than in the other treatments. Phoma species are well-known plant pathogens but can also infect animals and humans. Species of Pseudocercospora include plant pathogens, endophytes, and saprobes, and some have been used as biological control agents of weeds (Fig. C). To further investigate the effects of the different treatments on the composition of the microbial communities, the species composition under the different treatments was compared in a LEfSe analysis to identify taxa whose abundances differed significantly between the four groups (Fig. D). SM1A02, Crenobacter, Citrifermentans, Comamonas, Sphingoaurantiacus, Azospira, Candidatus_Chloroploca, Synechococcus_IR11, Geobacter, Pseudogulbenkiania, Euzebya, and Azoarcus were among the biomarkers identified in treatment M. Comamonas, Sphingoaurantiacus, and Geobacter are renowned for their ability to degrade complex organic compounds and pollutants, making them valuable for bioremediation, and Azospira, Pseudogulbenkiania, and Azoarcus play a vital role in the soil nitrogen cycle, enhancing soil fertility and supporting plant growth. The LEfSe analysis also identified TM7 (tentatively named Saccharibacteria) as a biomarker in treatment A, and detected high abundances of Ramlibacter, Minicystis, Georgfuchsia, Pseudoxanthomonas, Ferruginibacter, Sphingomonas, FFCH7168, Sporacetigenium, Pseudomonas, and Microcoleus Es-Yyy1400 in treatment P. Together, the relative abundances of the dominant genera and the LEfSe results showed that pathogen application increased the presence of potentially pathogenic microorganisms in the rhizosphere soil, which may exacerbate the occurrence of plant diseases, whereas SynCom application enriched beneficial bacteria in the rhizosphere soil, which may help plants resist invasion by pathogenic microorganisms, enhance soil disease resistance, and improve the soil environment.

Co-occurrence network analysis and evaluation of the microbial assembly process under the different treatments

The concept of an "integrative microbiome," which includes protists, fungi, bacteria, archaea, and viruses, has been proposed as a direction for future microbiome studies. Thus, in this study, linkages between the bacterial and fungal communities and their interactions were investigated by constructing bacterial–fungal interkingdom networks. A metacommunity co-occurrence network of the relationships among bacteria and fungi, based on Spearman's correlation coefficients, was established to examine the effects of the exogenous strains on the rhizosphere microbial community and the co-occurrence patterns in treatments S, A, M, and P (Fig. ).
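A compact R sketch of how such a Spearman-correlation co-occurrence network can be assembled with the psych package (as named in the statistical methods) and exported as an edge list for Gephi is shown below; the abundance matrix and the correlation/significance thresholds are illustrative assumptions, not the authors' exact settings.

```r
library(psych)   # corr.test()

set.seed(2)
# Made-up relative-abundance matrix: 12 samples x 30 taxa (bacterial and fungal OTUs combined)
otu <- matrix(runif(12 * 30), nrow = 12,
              dimnames = list(paste0("sample", 1:12), paste0("OTU", 1:30)))

ct <- corr.test(otu, method = "spearman", adjust = "fdr")
r <- ct$r
p <- ct$p                                    # p-values above the diagonal are FDR-adjusted

keep <- abs(r) > 0.6 & p < 0.05              # illustrative thresholds for drawing an edge
keep[lower.tri(keep, diag = TRUE)] <- FALSE  # use each pair of taxa only once

idx   <- which(keep, arr.ind = TRUE)
edges <- data.frame(Source = rownames(r)[idx[, 1]],
                    Target = colnames(r)[idx[, 2]],
                    Weight = r[idx])
write.csv(edges, "edges_for_gephi.csv", row.names = FALSE)   # edge list for import into Gephi
```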
Thus, in this study, linkages between bacterial and fungal communities and their interactions were investigated by constructing bacterial-fungal interkingdom networks. A metacommunity co-occurrence network of the relationships among bacteria and fungi based on Spearman’s correlation coefficients was established to examine the effects of exogenous strains on the rhizosphere microbial community and the co-occurrence patterns in treatments S, A, M, and P (Fig. ). The number of edges, the average degree, the graph density, and the average clustering coefficient were highest in treatment A (SynCom application), suggesting a strongly correlated and complex network with the most links. The modularity of the co-occurrence network was the highest in treatment M (11.916), indicative of the “small-world” properties and nonrandom topology of its network. The bacterial-fungal interkingdom networks in treatments A and M had the same proportions of positive (64%) and negative edges (36%), while the co-occurrence networks in treatment P had remarkably large negative edge proportions (> 47%), which implies an increase in mutual exclusion rather than the coexistence of bacteria and fungi during pathogen treatment. These results further suggest that pathogen application to the rhizosphere soil stimulated strong negative interactions within the microbial community (Fig. A, B, C and D). The influence of the exogenous SynCom on microbial community assembly was further explored by comparing the relative importance of determinism and stochasticity. Bacterial community assembly processes were quantified using the null model, which indicated the dominance of deterministic processes. The weighted microbial community assembly (βNTI) metric provides insights into the potential roles of deterministic and stochastic forces in the phylogenetic community dynamics of microbial communities. Significant deviations of |β NTI| > 2 are defined as indicating the dominance of deterministic processes, and deviations |β NTI| < 2 are considered to indicate the dominance of stochastic processes . The community assembly of bacteria in treatments S, P, and A was mainly shaped by deterministic processes whereas the turnover of bacteria in treatment M was determined by both deterministic (33.33%) and stochastic (66.67%) processes (Fig. E and F). The assembly of fungal communities in treatment P was dominated by nondominant processes and was stochastic, whereas in treatments M and A the fungal communities were mainly influenced by heterogeneous selection and nondominant processes. In treatment S, the fungal community was determined by heterogeneous selection (33.33%), dispersal limitation (33.33%), and undominated processes (33.33%) (Fig. H and I). A comparison of phylogenetic signals resulting from the four treatments further demonstrated the differential responses of aggregate communities to physicochemical variables (Fig. G and J). The bacterial community in control treatment S had the greatest phylogenetic signal for SM, pH, OM, AN, AK, and AP, indicating that it was more conservative in its phylogeny and less susceptible to external factors, while the addition of exogenous microbes led to a perturbation of the soil habitat and a reduction in environmental preferences. Compared to bacteria, the fungal community had a higher Blomberg’s K value and a more conservative phylogeny, indicative of a wider range of environmental adaptations. 
The phylogenetic signals in treatments A and M, both of which included the SynCom, were similar, suggesting that the SynCom-treated soils contained microbial species with similar ecological preferences for specific physicochemical variables. In summary, treatments A and M had similar bacterial–fungal interkingdom networks and phylogenetic signals, and their microbial networks appeared more stable, whereas treatment P displayed a higher level of negative interactions, such as competition and antagonism.

Influence of SynCom on the soil microbial community of Malus hupehensis Rehd

The relationships between the microbial communities and environmental factors under the different treatments were evaluated by redundancy analysis (RDA) of the bacterial and fungal communities and the environmental factors in each treatment (Fig. ). The results indicated differences between treatments following application of the microorganisms. This was particularly evident in treatment A, treated only with the SynCom, which correlated positively with the environmental factors, particularly AP, pH, and OM, and with plant growth; a positive correlation with the environmental factors was also determined for the control treatment S. Treatment A, with the antagonistic bacteria, showed correlations with the environmental factors similar to those of treatment S, indicating similar responses of their microbial communities to environmental factors. By contrast, treatments P and M, which included the pathogens, correlated negatively with the environmental factors (except AK) and with plant growth; the similar correlations of treatments P and M with the environmental factors indicated similar responses of their microbial communities (Fig. A). A model of the relationship between microbial community composition and each environmental factor was developed using the random forest algorithm, regressing the relative abundance of each microbial genus against the environmental factors. Based on the importance values of the microbial genera, the top 20 bacterial and fungal genera were obtained. Among the bacterial genera, most showed positive correlations with AP, pH, and OM (Fig. B); among them, Reyranella, Variibacter, Nocardioides, Devosia, and Altererythrobacter had higher relative abundances in treatment A than in the other treatments. Nocardioides is arsenic- and antimony-resistant and thus of interest for the remediation of heavy-metal-contaminated sites, and Devosia may be effective for controlling plant diseases such as Fusarium head blight. However, Hydrogenophaga, Anaerolinea, and Azoarcus correlated significantly and negatively with AP (P < 0.05). Anaerolinea and Azoarcus had high relative abundances in treatment M, with Azoarcus identified as a biomarker of this treatment in the LEfSe analysis; both genera drive soil C, N, P, and S cycling in forests, help remediate arsenic-contaminated soil, and improve the soil environment. Among the 20 fungal genera, Lophiostoma had a high relative abundance in treatment P and also correlated negatively with AP (Fig. B); this genus includes species such as Lophiostoma carpini that occur on woody plants.
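A minimal sketch of the random forest step described above, using the randomForest package listed in the statistical methods, is given below. The data frame combining available phosphorus (AP) with genus-level relative abundances is a hypothetical construction, and ranking genera by permutation importance for AP is one plausible reading of the analysis rather than the authors' exact procedure.

```r
library(randomForest)

set.seed(4)
# Made-up example: available phosphorus (AP) per sample plus genus-level relative abundances
env_genus <- data.frame(AP = rnorm(30, mean = 50, sd = 10),
                        matrix(runif(30 * 20), nrow = 30,
                               dimnames = list(NULL, paste0("genus", 1:20))))

rf  <- randomForest(AP ~ ., data = env_genus, ntree = 1000, importance = TRUE)
imp <- importance(rf, type = 1)                               # %IncMSE permutation importance

# Genera ranked by their importance for predicting AP (top 20 shown)
head(imp[order(imp[, 1], decreasing = TRUE), , drop = FALSE], 20)
```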
Prediction of bacterial and fungal functional profiles in rhizosphere soil after inoculation

To investigate whether the application of the SynCom changed the functions of the bacterial and fungal communities, bacterial community function was predicted using PICRUSt and fungal community function was predicted using the FUNGuild tool of Majorbio Cloud. The functional spectra of the bacterial and fungal communities in the rhizosphere differed among the four treatments (Fig. ). The potential functions of the bacterial community were further examined by annotating the sequences according to KEGG pathways, which predicted pathways related to metabolism (11 pathways), human diseases (10 pathways), organismal systems (7 pathways), cellular processes (5 pathways), genetic information processing (4 pathways), and environmental information processing (3 pathways). The relative abundances of the energy metabolism, amino acid metabolism, transport and catabolism, substance dependence, and cell growth pathways were significantly higher in treatment A than in the other treatments (P < 0.05). The relative abundances of the infectious disease and immune system pathways were highest in treatment P, suggesting that pathogen application may have stimulated the pathogenic pathways of the fungal community, leading to the development of soil-borne diseases that in turn stimulated functional genes of the immune system in the bacterial community, enhancing soil antagonism (Fig. A). The FUNGuild analysis showed that 620 OTUs could be annotated with a trophic mode, accounting for 52.36% of the total OTUs, with some OTUs annotated with multiple trophic modes (Fig. B). Saprotrophs were represented by 531 OTUs, accounting for 44.85% of the total OTUs; this group is the dominant fungal trophic mode in the composting process. OTUs representing symbiotrophs and pathotrophs accounted for 15.63% and 18.33%, respectively. For 60% of the OTUs, annotation to the family level was possible. The 30 OTUs with an average abundance greater than 0.1% were screened for their predicted functional information and abundances using a heatmap. Most of the OTUs associated with pathogenic pathways, such as OTU27, OTU6, and OTU9, tended to show high relative abundances in treatment P. OTU27 belongs to the Microascaceae, a family that includes plant pathogens such as Microascus cirrosus that may affect plant health and crop yield. OTU6 belongs to the Nectriaceae, a family containing important plant pathogens. OTU9 was assigned to saprotrophic fungi, including Aspergillus subversicolor and other members of the genus Aspergillus; some species of the Aspergillus versicolores group are facultative human and animal pathogens. The abundances of these OTUs in treatments A and M were low, indicating a decrease in the relative abundance of harmful fungi in the rhizosphere after SynCom application.
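To illustrate how FUNGuild-style annotations translate into the trophic-mode percentages reported above, a small R sketch follows; the miniature table and its column names are assumptions for illustration only and do not reproduce the tool's actual output format.

```r
# Hypothetical miniature FUNGuild-style annotation table (real output uses its own column names)
funguild_out <- data.frame(
  otu          = c("OTU1", "OTU2", "OTU3", "OTU4", "OTU5"),
  trophic_mode = c("Saprotroph", "Pathotroph", "", "Saprotroph-Symbiotroph", "Saprotroph")
)

annotated <- subset(funguild_out, trophic_mode != "")
round(100 * nrow(annotated) / nrow(funguild_out), 2)   # share of OTUs with any trophic-mode annotation

# Relative frequency of each annotated trophic mode among all OTUs
sort(round(100 * table(annotated$trophic_mode) / nrow(funguild_out), 2), decreasing = TRUE)
```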
Discussion

Plants have evolved to attract microbes that promote plant growth and development from the soil to their roots. Plant-associated microbiota can influence the disease resistance, nutrient status, growth rate, and stress tolerance of their host plants. In this study, eight strains were isolated that were antagonistic to Fusarium oxysporum, Rhizoctonia solani, Botryosphaeria ribis, and Physalospora piricola, colonized the rhizosphere stably, and, in some cases, produced siderophores, ACC deaminase, and IAA. Siderophore and IAA production are associated with growth promotion in host plants, while ACC deaminase reduces ethylene levels, thereby enhancing plant growth. These eight beneficial microorganisms were used to construct a SynCom, and its effects and mechanisms of action were investigated (Fig. ). The eight strains belonged to the genera Bacillus and Streptomyces, which are among the root-associated bacterial genera identified in previous studies, together with Azospirillum and Pseudomonas, and isolates from these genera have already been deployed as biofertilizers. Both Bacillaceae and Streptomycetaceae are considered plant growth-promoting rhizobacteria (PGPR) and are specifically recruited by plants to suppress disease. Their effects on disease control and host plant performance were the basis for constructing the SynCom used in the pot experiments. The SynCom led to an increase in soil pH.
Previous studies have shown that increased soil acidity inhibits the activity of some soil microorganisms, thereby affecting the conversion and utilization of nitrogen and other nutrients. The SynCom also increased the levels of available nutrients in the soil, including AP and organic matter, most likely by recruiting beneficial microorganisms such as phosphate-solubilizing, potassium-solubilizing, and nitrogen-fixing bacteria. Such microbes can convert insoluble minerals and organic matter into a continuous supply of available nutrients. These microorganisms may also have induced the plants to secrete root exudates that, in combination with soil colloids, formed an aggregate structure that improved the porosity, water and fertilizer retention, and general properties of the soil, providing a suitable environment for microbial metabolism and soil enzymes and thereby enhancing nutrient cycling and utilization. Soil microbes contribute to nutrient enrichment and play a crucial role in modulating primary production by controlling the decomposition and availability of nutrients, as well as root grazing and plant nutrient absorption, to maintain soil productivity. The SynCom promoted the growth of Malus hupehensis Rehd, including root and stem elongation and an increase in the number of leaves, all of which provide plants with advantages in nutrient competition, resource access, and vitality. Plants attract growth-promoting microorganisms to their root systems by releasing exudates such as sugars, organic acids, amino acids, and phenolics, and these microorganisms enhance plant growth through nutrient transformation and translocation and by regulating phytohormones such as auxins, cytokinins, gibberellins, abscisic acid, and ethylene. The composition and structure of the rhizosphere soil microbial community were also affected by the application of the SynCom. Soil biodiversity in general, and microbial diversity in particular, is a driving force underlying the soil processes that are essential to sustain agricultural production. Our results showed an increase in bacterial richness after the application of the SynCom; increasing soil bacteria and decreasing soil fungi after SynCom application is considered an important direction for the control of continuous cropping disorder. The rhizosphere bacterial community structure of treatment A was significantly separated from that of the other treatments, indicating that SynCom application may alter the bacterial community. Studies have shown that soil bacterial communities are more sensitive than fungal communities and are more easily affected by external factors. In treatments A and M, the relative abundances of potentially beneficial microorganisms among the dominant genera, such as Pseudarthrobacter, Haliangium, Chryseolinea, Streptomyces, and Bacillus, were increased. Pseudarthrobacter efficiently degrades crude oil and multi-benzene compounds, Haliangium species are core bacteria in the rhizosphere of ex situ wild rice that play an important role in improving nutrient acquisition for rice growth, and Chryseolinea suppresses disease-causing Fusarium. Bacillus and Streptomyces are well-recognized plant growth-promoting rhizobacteria with well-documented beneficial effects.
In treatment M, the relative abundances of the potentially pathogenic fungi Phoma and Phaeomycocentrospora were reduced, while those of bacteria involved in disease suppression and environmental remediation, such as Azoarcus , Geobacter , Azospira , Pseudogulbenkiania , and SM1A02, were increased. Both the LEfSe analysis and the random forest model predictions identified Azoarcus as a biomarker for treatment M. This genus, along with Gluconacetobacter and Herbaspirillum , is among the nitrogen-fixing bacteria found in disease-free host plant tissues . Azoarcus , Geobacter , and Azospira are nitrogen fixers that convert atmospheric nitrogen into ammonium available to plants, thus providing an essential nutrient for plant growth and development, significantly improving the nitrogen supply capacity of the soil, and reducing soil electrical conductivity. Together, these activities improve the microecological environment of plant roots . The abundance of the pathogenic fungus Fusarium , which causes multiple soil-borne diseases and reduces crop yields, was highest in treatment P . Most Fusarium species are plant pathogens, such as F. moniliforme , F. solani , and F. oxysporum ; their abundance in the pathogen-treated rhizosphere soil was as high as 16% . Taken together, our results suggest that the SynCom developed in this study enhances the soil environment by suppressing soil-borne, disease-causing fungi and recruiting antagonistic bacteria and growth-promoting microorganisms. This collective action contributes to controlling pathogens associated with apple disease and supports the maintenance of healthy plant growth.

Network topological parameters can be used to characterize the complexity and stability of a network . According to the co-occurrence patterns in the bacterial–fungal interkingdom networks, the number of edges, average degree, graph density, and average clustering coefficient in treatment A indicated that SynCom application stabilized the microbial community. The highest modularity coefficient (11.916) was found in treatment M, suggesting that the antagonistic effect of the SynCom on pathogens may have strengthened the clustering of the bacterial and fungal communities into a clearer association structure. The SynCom thus induced interactions among microbial communities and enhanced the stability of the microbial network.

Microbial community assembly processes were also investigated. Deterministic processes are those in which abiotic and biotic factors determine the presence or absence and relative abundance of a species and are thus related to ecological selection. Stochastic processes include random changes in the probability distribution and relative abundance of species (ecological drift) that are not the result of adaptation to the environment . Tao et al. found that the assembly of bacterial communities in healthy soil was mainly a stochastic process . In our study, the assembly of bacterial community groups in treatment M was a stochastic process, whereas in treatment P assembly was a deterministic process. These results imply that the SynCom was able to confer resistance to pathogen invasion and drive the transformation of diseased soil into healthy soil.
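As an illustration of how the topological parameters compared above (number of edges, average degree, graph density, clustering coefficient, and modularity) can be derived from a co-occurrence network, the following is a minimal sketch using networkx on a hypothetical edge list; it is not the network construction used in this study.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical co-occurrence edge list: pairs of OTUs whose abundance profiles were
# significantly correlated (e.g., |Spearman rho| > 0.6, FDR-corrected P < 0.05).
edges = [("OTU1", "OTU2"), ("OTU1", "OTU3"), ("OTU2", "OTU3"),
         ("OTU4", "OTU5"), ("OTU5", "OTU6")]

G = nx.Graph(edges)

n_nodes = G.number_of_nodes()
n_edges = G.number_of_edges()
avg_degree = 2 * n_edges / n_nodes            # mean number of links per node
density = nx.density(G)                        # realized fraction of possible edges
avg_clustering = nx.average_clustering(G)      # mean local clustering coefficient

# Modularity of a community partition found by greedy optimization.
communities = greedy_modularity_communities(G)
modularity = nx.algorithms.community.modularity(G, communities)

print(f"nodes={n_nodes}, edges={n_edges}, average degree={avg_degree:.2f}")
print(f"graph density={density:.3f}, average clustering={avg_clustering:.3f}")
print(f"modularity={modularity:.3f} across {len(communities)} modules")
```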
The physical and chemical properties of soil affect the structure and function of microorganisms and their interactions with plants, and changes in soil structure can therefore have a profound effect on the survival and metabolism of soil microorganisms. Conversely, microbes influence soil fertility and nutrient turnover through the secretion of metabolites and enzymes, which in turn can affect crop growth . In accordance with previous results, the amount of available nutrients in the soil increased after the application of the SynCom. Among environmental factors, AP and OM were the main limiting factors for microbial communities, and both were positively correlated with treatments A and S. Phosphorus is usually the limiting nutrient in agroecosystems. Microbial biomass itself is a large and dynamic phosphorus reservoir that responds rapidly to environmental changes and can drive phosphorus availability through transfer and fixation mechanisms; it is therefore an important regulator of phosphorus availability . The level of organic matter in soil shapes the structure of microbial communities, and carbon sources are a key ecological driver of microbial community dynamics . The vigorous growth achieved with treatment A indicates that increasing organic matter and AP may improve field plantings of apple seedlings.

Crop cultivation under disease stress can be improved by utilizing microbiomes from this environment. In this study, a SynCom promoted plant growth and increased the nutrient content of the soil, including organic matter and AP. It also increased bacterial diversity and the relative abundance of potentially beneficial bacteria in the rhizosphere while decreasing the relative abundance of potentially pathogenic microorganisms, and it improved the stability of the rhizosphere microbial community, promoting the growth of apple plants. Similar functional synthetic communities could be used to achieve sustainable agriculture, promoting plant growth while avoiding common apple diseases.
A New Data Repository for Pharmacokinetic Natural Product-Drug Interactions: From Chemical Characterization to Clinical Studies
Natural products (NPs) include herbal and other botanical products . Pharmacokinetic interactions involving NPs and conventional [e.g., approved by the US Food and Drug Administration (FDA)] drugs could result in reduced treatment efficacy or adverse effects . Although up to 88% of older adults use herbal medicinal products concurrently with conventional drugs , there are many gaps in scientific knowledge about the clinical significance of pharmacokinetic NP–drug interactions (NPDIs) in which the NP is the precipitant and a conventional drug is the object. Although 6 of the 40 top-selling herbal medicinal products in 2017 were implicated in clinically significant pharmacokinetic NPDIs, there was minimal or no supporting clinical evidence for potential NPDIs involving nine products . Similarly, data were insufficient to conclude the clinical relevance of 11 of the 15 potential pharmacokinetic NPDIs involving antiretroviral drugs . There are several unique challenges associated with pharmacokinetic NPDI research, including the large variability of phytoconstituents among marketed products, difficulty extrapolating results from animal and/or in vitro models to humans, variability in study design, and inadequate methods .

Based on these knowledge gaps and challenges, the National Center for Complementary and Integrative Health created the Center of Excellence for NPDI Research (NaPDI Center; www.napdi.org ) to provide leadership and guidance on the study of pharmacokinetic NPDIs . One objective of the NaPDI Center is to develop and apply a set of Recommended Approaches to determine the clinical relevance of pharmacokinetic NPDIs . A key deliverable of the Center is the development of an online repository for data generated by the NaPDI Center ( repo.napdi.org ). The repository combines data currently distributed across a variety of information sources into a single user-friendly format complemented by an information portal. This portal, also developed by the NaPDI Center, disseminates the Recommended Approaches on the optimal conduct of pharmacokinetic NPDI studies ( napdicenter.org ). Combined, these new resources will help advance pharmacokinetic NPDI research by providing Recommended Approaches and novel pharmacokinetic NPDI data.

Pharmacokinetic NPDI data include chemical characterization of NPs, metabolomics analyses, and in vitro and clinical pharmacokinetic experimental results. This new repository stores data from all of these types of investigations. It provides a user-friendly interface that enables users with limited informatics skills to effectively explore relevant data . As of March 2020, coverage of the repository is limited to four high-priority NPs, carefully selected using a systematic method for the purpose of demonstrating the Recommended Approaches : cannabis ( Cannabis sativa ), goldenseal ( Hydrastis canadensis ), green tea ( Camellia sinensis ), and kratom ( Mitragyna speciosa ). A prior Recommended Approach reported the inclusion of licorice ( Glycyrrhiza spp.). The Center later replaced licorice with kratom to 1) keep pace with public health needs in the face of an ever-changing NP market and 2) avoid redundancy with the research efforts of a longstanding botanical center ( https://pcrps.pharmacy.uic.edu/our-centers/uic-nih-center-for-botanical-dietary-supplements-research/ ). The current work describes the design of the repository, the standard operating procedures (SOPs) used to enter data, and the pharmacokinetic NPDI data that have been entered to date.
To illustrate the usefulness of the NaPDI Center repository, more details on two high-priority NPs, cannabis and kratom, are provided as case studies.

Construction and Content

Studies Conducted by NaPDI Center Investigators
To date, the repository has focused on original pharmacokinetic NPDI research conducted by NaPDI Center investigators, who are organized into three cores with complementary expertise . The Analytical Core is composed of NP chemists, analytical chemists, and clinical pharmacologists and serves multiple functions. This core chemically characterizes multiple commercially available products of a given NP, determines the contents of constituents in these products, and provides guidance on the proper selection of one or more commercially available products to be tested by the Pharmacology Core. The core also analyzes plasma and urine samples obtained from pharmacokinetic clinical studies for NP constituents and object drugs. The Pharmacology Core is composed of clinical pharmacologists and medicinal chemists. This core designs and conducts rigorous experiments to evaluate the potential for NPs to precipitate pharmacokinetic interactions with certain object drugs. The core also characterizes the pharmacokinetics of select NP constituents in human subjects. The data obtained are used to develop physiologically based pharmacokinetic models that can be applied to other object drugs and patient populations of interest. shows the variety of experiment types that the repository supports to store data from the NaPDI Center's interaction projects. The Informatics Core is composed of biomedical informaticists, computer scientists, and communication experts. This core compiles all data generated from NaPDI Center research activities into the data repository, which is accessible via the information portal. Prior to public release, NaPDI Center data are only accessible to researchers approved to access the site. Contributing researchers indicate when to make the data public. The data are made available according to a Recommended Approach for making pharmacokinetic NPDI research data findable, accessible, interoperable, and reusable (FAIR; https://www.w3id.org/hclscg/npdi ).

Data Types
A variety of data types are produced from pharmacokinetic NPDI studies ( Supplemental Table 1 ). Initially, the specification and subsequent characterization of the NP source materials generated a diverse set of data, including chromatograms from conventional high-pressure liquid chromatography with UV detection and ultrahigh-pressure liquid chromatography–mass spectrometry methods, spectral data from nuclear magnetic resonance and circular dichroism, and bioactivity fractionation data. These data include instrument tracings that are often not retrievable in digitized form; hence, the scanned image files are archived in the repository. Quantitative data on NP source materials, such as the content of individual phytoconstituents and specific impurities or contaminants, are organized in tabular format. The types of data generated from in vitro NPDI studies vary across the range of human-derived in vitro test systems, including enzymatic reactions involving recombinant enzymes, human tissue fractions (e.g., human liver microsomes), or cultured cells (e.g., hepatocytes), and drug transport experiments measuring uptake into membrane vesicles or efflux from transfected cells. Currently, the data repository tracks 82 measurements for quantitative data resulting from NPDI experiments. The full list is provided in Supplemental Table 1 ; it includes, for example, percent inhibition, IC 50 , K m , and V max .
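As a simple illustration of how one of these quantitative measurements, the IC50, can be estimated from percent-inhibition data, the following sketch fits a Hill-type inhibition curve with scipy; the concentrations and responses are made up for illustration and are not NaPDI Center data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative (made-up) data: percent inhibition of a P450 probe reaction at
# increasing inhibitor concentrations (µM). These are not NaPDI Center values.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
pct_inhibition = np.array([5.0, 12.0, 30.0, 55.0, 78.0, 90.0, 96.0])

def hill(c, ic50, h):
    """Simple Hill model: inhibition rises from 0 toward 100% around the IC50."""
    return 100.0 * c**h / (ic50**h + c**h)

(ic50_fit, hill_slope), _ = curve_fit(hill, conc, pct_inhibition, p0=[1.0, 1.0])
print(f"Estimated IC50 = {ic50_fit:.2f} µM, Hill slope = {hill_slope:.2f}")
```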
In addition, data generated from inhibition experiments involving drug metabolizing enzymes or transporters differ from those generated from induction experiments. Thus, the repository provides separate sets of data fields for each of these in vitro systems and mechanisms ( Supplemental Table 1 ). Pharmacokinetic data generated from clinical NPDI studies include human subject demographics, concentration-time data, and key pharmacokinetic endpoints (e.g., oral clearance, renal clearance, apparent volume of distribution, half-life, area under the plasma concentration vs. time curve, maximum plasma concentration, and time to reach maximum concentration). Statistical analyses of primary and secondary pharmacokinetic endpoints generated additional data sets.

Data Findability, Accessibility, Interoperability, and Reusability
There is a growing recognition by both researchers and funding agencies that pharmacokinetic NPDI study data sets should be more FAIR . The NaPDI Center repository is designed to ensure that data satisfy these four foundational principles of good data management and stewardship. summarizes the specific features of the repository that support FAIR pharmacokinetic NPDI data. Each feature is described in greater detail in a public and participative report that the NaPDI Center is developing in collaboration with the World Wide Web Consortium Semantic Web in Health Care and Life Sciences Community Group ( https://www.w3id.org/hclscg/npdi ).

Standard Operating Procedures for Data Entry
A major feature of the repository is that data are entered using validated SOPs. There are currently 11 SOPs, one for each experiment type listed in . Data collection forms have been developed for both internal and external NPDI researchers, such as contract research organizations. These forms are based closely on the SOP documents. Both the SOPs and data entry forms are publicly available on GitHub ( https://github.com/dbmi-pitt/NaPDI-SOPs ), and the SOP document for the enzyme inhibition experiment type is provided as an example in Supplemental Data .

Quality Control and Validation Processes
Given the variety of data types, close attention must be paid to enable accurate tracking and meticulous organization of the generated data. The structure, data organization, and concepts effectively used by the University of Washington's Drug Interaction Database , now Drug Interaction Solutions ( www.druginteractionsolutions.org ), have been applied to the NaPDI Center repository. These features have been validated over time with feedback from a large user base. To ensure the quality and consistency of the entry process, data are entered by experienced curators who are well versed in drug interactions, using the aforementioned SOPs. All data entry undergoes review by a second reviewer prior to public release.
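Before turning to the current status of the repository, the following minimal sketch illustrates how several of the clinical pharmacokinetic endpoints listed above (Cmax, Tmax, AUC, and terminal half-life) can be derived from a concentration-time profile; the profile is invented for illustration and does not come from a NaPDI Center study.

```python
import numpy as np

# Illustrative (made-up) plasma concentration-time profile after a single oral dose;
# not data from a NaPDI Center study.
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])   # time, h
c = np.array([0.0, 1.8, 3.2, 2.9, 2.0, 1.1, 0.6, 0.15])    # concentration, mg/L

cmax = c.max()                                   # maximum plasma concentration
tmax = t[c.argmax()]                             # time of maximum concentration
auc = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2)  # AUC(0-last), linear trapezoidal rule

# Terminal half-life from a log-linear fit of the last few time points.
slope, _ = np.polyfit(t[-4:], np.log(c[-4:]), 1)
half_life = np.log(2) / -slope

print(f"Cmax = {cmax:.2f} mg/L at Tmax = {tmax:.1f} h")
print(f"AUC(0-24 h) = {auc:.2f} mg*h/L, terminal t1/2 = {half_life:.1f} h")
```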
Current Status of the Repository
An overview of data entered into the NaPDI Center repository is provided for two of the high-priority NPs selected as case studies: cannabis ( C. sativa ) and kratom ( M. speciosa ). These NPs were chosen due to increasing use and public interest, and neither has been well studied with respect to NPDI potential. In the United States, a majority of states have legalized marijuana for recreational and/or medical purposes. Moreover, a growing number of products containing the nonpsychotropic phytocannabinoid cannabidiol are marketed every year; these products include the FDA-approved drug Epidiolex and numerous unapproved tinctures, oils, and extracts. Kratom, a member of the coffee family native to Southeast Asia, is touted for its analgesic and stimulant effects. Warnings about kratom toxicity have been raised by the US FDA and the Centers for Disease Control and Prevention . Calls to US poison centers involving kratom exposures increased 52-fold from 2011 to 2017 (from 13 to 682), with more than one-third of the calls involving co-consumption with prescription or illicit drugs . Each case study begins with a summary of NaPDI Center research activities focusing on the NP as a precipitant of pharmacokinetic NPDIs, followed by a description of how published evidence was added to the repository to both complement the data generated by the NaPDI Center and provide researchers with a more complete picture of the pharmacokinetic interaction potential of each NP.

NPDI Study Process
Four steps are crucial for conducting a rigorous research study on a given pharmacokinetic NPDI: NP selection; sourcing and chemical characterization of different commercial products of the selected NP; in vitro assessment of inhibition or induction of drug metabolizing enzymes and transporters by the NP; and, if necessary based on the prior data, a clinical study of potential pharmacokinetic NPDIs in human subjects . The upper half of shows the cannabis studies conducted by the NaPDI Center as of March 2020. Chemical characterization data for two products were obtained from the National Center for Natural Products Research at the University of Mississippi; one product was an extract enriched in delta-9-tetrahydrocannabinol (THC), and the other was an extract enriched in cannabidiol (CBD). Purified THC and CBD were tested as inhibitors of five major cytochrome P450 (P450) enzymes, namely CYP1A2, CYP2C9, CYP2C19, CYP2D6, and CYP3A4/5. The results informed the design of an ongoing clinical cannabis-drug interaction study. The lower half of shows the kratom studies conducted by the NaPDI Center as of March 2020. The Analytical Core conducted a metabolomics study involving 55 kratom products, informing the selection of one product for further in vitro and clinical studies. The selection criteria followed a published NaPDI Center Recommended Approach . The Analytical Core conducted chemical characterization of the selected product to quantify mitragynine, 7-hydroxymitragynine, and speciofoline . Extracts prepared from three kratom products, including the one eventually selected for the clinical study, were tested by the Pharmacology Core as inhibitors of three major P450s, specifically CYP2C9, CYP2D6, and CYP3A4/5. As with cannabis, the in vitro results informed the design of the ongoing clinical kratom-drug interaction study.

Literature Search Process
Additional data were identified from peer-reviewed published reports so that the data repository would provide greater research context for the NaPDI Center–conducted studies. Systematic literature searches were designed to retrieve studies on NP constituent pharmacokinetics and drug interactions involving either cannabis or kratom. The final search strategies are available in the Appendix. Queries were run in PubMed in July 2018 and again in February 2020.
The screening of titles and abstracts, and subsequently full text articles, was completed independently and in duplicate to identify experiments of the types shown in . Mechanistic experiments of interest included assessing the NP as an inhibitor or inducer of P450s, UDP-glucuronosyltransferases (UGTs), and transporters. Clinical experiments of interest included pharmacokinetic NPDIs involving cannabis or kratom. Experiments involving only synthetic analogs, pharmacodynamics, or nonhuman animal studies and review articles were excluded. Full text articles available only in non-English languages were also excluded. Published reports cited in a recent review by the NaPDI Center on cannabis pharmacology and pharmacokinetics ( n = 6) were added to the screening results.

Data Entry of Published Literature and Pharmacokinetic NPDI Studies
Data from the included published reports were entered into the repository following the aforementioned SOPs . When available, exact values from the text were entered. Otherwise, estimates were made from the study figures. Data extracted from each report were marked as "draft" during initial data entry and "pending" upon completion of data entry. After quality assurance by a second reviewer, the extracted data were made public. Data entry issues were tracked and addressed until quality assurance was complete for all studies.
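The draft-to-public workflow described above can be pictured as a small state machine; the following sketch is purely illustrative and is not the repository's actual implementation.

```python
from enum import Enum

class CurationStatus(Enum):
    """Record statuses described in the text: draft -> pending -> public."""
    DRAFT = "draft"       # initial data entry in progress
    PENDING = "pending"   # data entry complete, awaiting second-reviewer QA
    PUBLIC = "public"     # released after quality assurance

def advance(status: CurationStatus, qa_approved: bool = False) -> CurationStatus:
    """Move a record one step forward; public release requires reviewer approval."""
    if status is CurationStatus.DRAFT:
        return CurationStatus.PENDING
    if status is CurationStatus.PENDING and qa_approved:
        return CurationStatus.PUBLIC
    return status

record = CurationStatus.DRAFT
record = advance(record)                      # data entry finished -> pending
record = advance(record, qa_approved=True)    # second-reviewer QA passed -> public
print(record.value)
```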
Construction and Content
As of April 2020, the NaPDI Center repository contains data from 777 experiments . Currently, the most common experiment types are in vitro enzyme inhibition (405), in vitro enzyme induction (99), in vitro transport inhibition (78), and clinical pharmacokinetic NPDIs (57). The remaining 138 experiments are of various other types supported by the repository.
In line with FAIR recommendations, every experiment is assigned a unique and persistent identifier that also resolves to a downloadable copy of the data set. A clear description of each experiment's conditions is provided on the repository website. The repository publishes metadata about each experiment that is machine readable and confirmed to work with Google's Dataset Search ( https://datasetsearch.research.google.com/ ). To provide an optimal experience for researchers or editors wanting to search for data in the repository, an interactive and silent guided tour is provided on the home page (see the screen capture video in Supplemental Data ).
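To illustrate the kind of machine-readable metadata that dataset search engines such as Google's Dataset Search consume, the following sketch builds a minimal schema.org Dataset record in Python; every field value is a placeholder rather than actual repository metadata.

```python
import json

# Minimal schema.org "Dataset" record of the kind crawled by dataset search engines.
# All values below are placeholders, not actual NaPDI Center repository metadata.
dataset_metadata = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example in vitro enzyme inhibition experiment",
    "description": "Percent inhibition of a P450 probe substrate by a natural "
                   "product extract (illustrative record).",
    "identifier": "https://repo.napdi.org/experiment/EXAMPLE-ID",  # placeholder ID
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "keywords": ["natural product", "drug interaction", "pharmacokinetics"],
    "distribution": [{
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://repo.napdi.org/experiment/EXAMPLE-ID/data.csv",
    }],
}

# Embedded in a page as <script type="application/ld+json">, this is what a crawler reads.
print(json.dumps(dataset_metadata, indent=2))
```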
Utility
This section reports the results of NaPDI Center repository data entry for the two high-priority NPs selected as case studies: cannabis ( C. sativa ) and kratom ( M. speciosa ).

Cannabinoids
provides an overview of reported NPDI data for cannabis from both NaPDI Center studies and peer-reviewed published reports. Links to the specific experiments are provided in Supplemental Table 2 . Chemical characterization data obtained from the National Center for Natural Products Research ( https://pharmacy.olemiss.edu/ncnpr/ ) for two cannabis extracts and bulk plant material provided the exact concentrations of CBD, THC, and other cannabinoids. The data confirmed the CBD-enriched extract (CBD 59.34%, THC 1.96%) to have a higher concentration of CBD than the bulk plant (CBD 0.04%, THC 11.7%) or the THC-enriched extract (CBD 0%, THC 69.81%) . NaPDI Center experiments confirmed that CBD inhibited CYP2C9, CYP3A4/5, CYP2C19, and CYP2D6 and that THC inhibited CYP2C9, CYP2C19, and CYP2D6 (unpublished data). Data from a total of 22 published in vitro reports focusing on cannabis-drug interactions were entered into the repository (Holland et al., 2006, 2007, 2008; Zhu et al., 2006; Watanabe et al., 2007; Mazur et al., 2009; Alhamoruni et al., 2010; Tournier et al., 2010; Yamaori et al., 2010, 2011a,b, 2012, 2013, 2014, 2015; Jiang et al., 2011, 2013; Arnold et al., 2012; Al Saabi et al., 2013; Feinshtein et al., 2013a,b; Qian et al., 2019). As shows, experiments using either human liver microsomes or recombinant baculovirus–transfected insect cells expressing specific P450/UGT isoforms reported that cannabinoids inhibit CYP1A1, CYP1A2, CYP2C9, CYP2C19, CYP2D6, CYP3A4/5, and UGT (Mazur et al., 2009; Yamaori et al., 2010, 2011a,b, 2012, 2013; Al Saabi et al., 2013; Jiang et al., 2013; Qian et al., 2019). Yamaori et al. reported that CBD mechanistically inhibited CYP1A1 in vitro in recombinant baculovirus–transfected insect cells. Qian et al. reported that CBD and cannabinol inhibited carboxylesterase 1 in vitro in human embryonic kidney 293 cells . In vitro inhibition of P-glycoprotein–mediated efflux transport was reported for THC from experiments using transfected human embryonic kidney cells and for CBD using BeWo choriocarcinoma, LLC-PK1/MDR1, or MCF7/P-gp cells . An experiment using a human ovarian carcinoma cell line reported that cannabinol inhibited the efflux transporter multidrug resistance-associated protein 1 (MRP1 or ABCC1) . Experiments using BeWo, Jar, MCF7/P-gp, and MEF3.8/Bcrp A2 cell lines reported that CBD inhibited breast cancer resistance protein (BCRP or ABCG2), an effect that was also reported for THC and cannabinol using the MEF3.8/Bcrp A2 cell line . A total of nine published clinical reports focusing on pharmacokinetic cannabis-drug interactions were entered into the repository . Only one study reported an interaction involving smoked C. sativa , which was observed to increase the clearance of the CYP1A2 substrate theophylline . Clinical pharmacokinetic interactions between cannabis and docetaxel, fentanyl, indinavir, irinotecan, nelfinavir, or secobarbital were not evident based on bioequivalence limits . One clinical study compared the plasma concentrations of THC and CBD under fasting and fed conditions , whereas another reported estimated pharmacokinetic parameters for THC .

Kratom
provides an overview of pharmacokinetic NPDI data for kratom from both NaPDI Center studies and peer-reviewed published reports. Links to the specific experiments are provided in Supplemental Table 3 . The Analytical Core's metabolomics analysis of 51 kratom products highlighted differences in chemical compound profiles depending on the manufacturer, form, and geographic location where the plants grew. A principal components analysis of the data identified three principal components explaining 91% of the variability across the features included in the metabolomics analysis. Chemical characterization of the methanolic kratom extract used in the ongoing NaPDI in vitro and clinical studies (made from a clinical product) identified mitragynine (22.7 mg/g of sample), 7-hydroxymitragynine (0.57 mg/g of sample), and speciofoline (0.41 mg/g of sample). The in vitro inhibition studies showed that both the methanolic kratom extract and mitragynine inhibited CYP2C9, CYP2D6, and CYP3A4/5 to differing extents (unpublished observations). Data from nine published in vitro studies were entered into the repository . One study using recombinant P450 enzymes reported that a methanolic extract of kratom inhibited CYP2D6 but not CYP2C9 or CYP3A4 . One study using pooled human liver microsomes reported inhibition of CYP2C19 by 7-hydroxymitragynine , whereas another study using recombinant enzymes reported inhibition of UGT1A1 by 7-hydroxymitragynine . Mitragynine inhibition of CYP2D6 was reported in three different studies using pooled human liver microsomes , recombinant P450s , and a high-throughput in vitro fluorescent P450 assay . Mitragynine inhibition of CYP3A and CYP2C19 was reported with pooled human liver microsomes and the in vitro fluorescent P450 assay . Mitragynine inhibition of CYP2C8 was reported with pooled human liver microsomes , of CYP1A2 with an in vitro fluorescent P450 assay , and of CYP2C9 with recombinant P450 enzymes . Three studies reported inhibition of P-glycoprotein by mitragynine, two using Caco-2 cells and one using MDCK-transfected cells ; the same MDCK-transfected cell study also reported inhibition of P-glycoprotein by 7-hydroxymitragynine. One study reported CYP3A4 as the primary metabolizing enzyme for mitragynine . Another study reported downregulation of P-glycoprotein in Caco-2 cells by mitragynine .
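As an illustration of the kind of principal components analysis applied to the kratom metabolomics profiles described above, the following sketch uses scikit-learn on a stand-in feature table; the data are randomly generated, so the variance explained will differ from the 91% reported for the real profiles.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Illustrative stand-in for a metabolomics feature table:
# rows = kratom products, columns = measured chemical features (peak intensities).
rng = np.random.default_rng(0)
features = rng.lognormal(mean=0.0, sigma=1.0, size=(51, 200))

# Standardize features, then project onto the first three principal components.
scaled = StandardScaler().fit_transform(features)
pca = PCA(n_components=3)
scores = pca.fit_transform(scaled)   # product coordinates in PC space

explained = pca.explained_variance_ratio_ * 100
print(f"PC1-PC3 explain {explained.sum():.1f}% of the variance "
      f"({explained.round(1)}% individually)")
# scores[:, :2] would typically be plotted and colored by manufacturer or product form.
```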
The repository publishes metadata about each experiment that is machine readable and confirmed to work with Google’s Dataset Search ( https://datasetsearch.research.google.com/ ). To provide the most optimal experience to the researcher or editor wanting to search for data in the repository, an interactive and silent guided tour is provided on the home page (see the screen capture video in Supplemental Data ). This section reports the results of NaPDI Center repository data entry of the two high-priority NPs selected as case studies: cannabis ( C. sativa ) and kratom ( M. speciosa ). Cannabinoids. provides an overview of reported NPDI data for cannabis from both NaPDI Center studies and peer-reviewed published reports. Links to the specific experiments are provided in Supplemental Table 2 . Chemical characterization data obtained from the National Center for Natural Products Research ( https://pharmacy.olemiss.edu/ncnpr/ ) for two cannabis extracts and bulk plant material provided the exact concentration of CBD, THC, and other cannabinoids. The data confirmed the CBD-enriched extract (CBD 59.34%, THC 1.96%) to have a higher concentration of CBD than the bulk plant (CBD 0.04%, THC 11.7%) or THC-enriched extract (CBD 0%, THC 69.81%) . NaPDI Center experiments confirmed that CBD inhibited CYP2C9, CYP3A4/5, CYP2C19, and CYP2D6 and that THC inhibited CYP2C9, CYP2C19, and CYP2D6 (unpublished data). Data from a total of 22 published in vitro reports focusing on cannabis-drug interactions were entered into the repository (Holland et al., 2006, 2007, 2008; Zhu et al., 2006; Watanabe et al., 2007; Mazur et al., 2009; Alhamoruni et al., 2010; Tournier et al., 2010; Yamaori et al., 2010, 2011a,b, 2012, 2013, 2014, 2015; Jiang et al., 2011, 2013; Arnold et al., 2012; Al Saabi et al., 2013; Feinshtein et al., 2013a,b; Qian et al., 2019). As shows, experiments using either human liver microsomes or recombinant baculovirus–transfected insect cells expressing specific P450/UGT isoforms reported that cannabinoids inhibit CYP1A1, CYP1A2, CYP2C9, CYP2C19, CYP2D6, CYP3A4/5, and UGT (Mazur et al., 2009; Yamaori et al., 2010, 2011a,b, 2012, 2013; Al Saabi et al., 2013; Jiang et al., 2013; Qian et al., 2019). Yamaori et al. reported that CBD mechanistically inhibited CYP1A1 in vitro in recombinant baculovirus transfected insect cells. Qian et al. reported that CBD and cannabinol inhibited carboxylesterase 1 in vitro in human embryonic kidney 293 cells . In vitro inhibition of P-glycoprotein–mediated efflux transport was reported for THC from experiments using transfected human embryonic kidney cells and for CBD using BeWo choriocarcinoma, LLC-PK1/MDR1, or MCF7/P-gp cells . An experiment using a human ovarian carcinoma cell line reported that cannabinol inhibited the efflux transporter multidrug resistance-associated protein 1 (MRP1 or ABCC1) . Experiments using BeWo, Jar, MCF7/P-gp, and MEF3.8/Bcrp A2 cell lines reported that CBD inhibited breast cancer resistance protein (BCRP or ABCG2), an effect that was reported for THC and cannabinol using the cell line MEF3.8/Bcrp A2 . A total of nine published clinical reports focusing on pharmacokinetic cannabis-drug interactions were entered into the repository . Only one study reported an interaction involving smoked C. sativa , which was observed to increase the clearance of the CYP1A2 substrate theophylline . 
Clinical pharmacokinetic interactions between cannabis and docetaxel, fentanyl, indinavir, irinotecan, nelfinavir, or secobarbital were not evident based on bioequivalence limits . One clinical study compared the plasma concentrations of THC and CBD under fasting and fed conditions , whereas another study reported estimated pharmacokinetic parameters for THC . Kratom. provides an overview of pharmacokinetic NPDI data for kratom from both NaPDI Center studies and peer-reviewed published reports. Links to the specific experiments are provided in Supplemental Table 3 . The Analytical Core’s metabolomics analysis of 51 kratom products highlighted differences in chemical compound profiles depending on the manufacturer, form, and geographic location where the plants grew. A principal components analysis of the data identified three principal components explaining 91% of the variability across the features included in the metabolomics analysis. Chemical characterization of the methanolic kratom extract used in the ongoing NaPDI in vitro and clinical studies (made from a clinical product) identified mitragynine (22.7 mg/g of sample), 7-hydroxymitragynine (0.57 mg/g of sample), and speciofoline (0.41 mg/g of sample). The in vitro inhibition studies showed that both the methanolic kratom extract and mitragynine inhibited CYP2C9, CYP2D6, and CYP3A4/5 by differing extents (unpublished observations). Data from nine published in vitro studies were entered into the repository . One study using recombinant P450 enzymes reported that a methanolic extract of kratom inhibited CYP2D6 but not CYP2C9 or CYP3A4 . One study using pooled human liver microsomes reported inhibition of CYP2C19 by 7-hydroxymitragynine , whereas another study using recombinant enzymes reported inhibition of UGT1A1 by 7-hydroxymitragynine . Mitragynine inhibition of CYP2D6 was reported in three different studies using pooled human liver microsomes , recombinant P450s , and a high-throughput in vitro fluorescent P450 assay . Mitragynine inhibition of CYP3A and CYP2C19 was reported with pooled human liver microsomes and the in vitro fluorescent P450 assay . Mitragynine inhibition of CYP2C8 was reported with pooled human liver microsomes , CYP1A2 with an in vitro fluorescent P450 assay , and CYP2C9 with recombinant P450 enzymes . Three studies reported inhibition of P-glycoprotein by mitragynine, two using Caco-2 cells , and one using MDCK-transfected cells . The same MDCK-transfected cell study reported inhibition of P-glycoprotein by 7-hydroxymitragynine. One study reported CYP3A4 as the primary metabolizing enzyme for mitragynine . Another study reported downregulation of P-glycoprotein in Caco-2 cells by mitragynine . provides an overview of reported NPDI data for cannabis from both NaPDI Center studies and peer-reviewed published reports. Links to the specific experiments are provided in Supplemental Table 2 . Chemical characterization data obtained from the National Center for Natural Products Research ( https://pharmacy.olemiss.edu/ncnpr/ ) for two cannabis extracts and bulk plant material provided the exact concentration of CBD, THC, and other cannabinoids. The data confirmed the CBD-enriched extract (CBD 59.34%, THC 1.96%) to have a higher concentration of CBD than the bulk plant (CBD 0.04%, THC 11.7%) or THC-enriched extract (CBD 0%, THC 69.81%) . NaPDI Center experiments confirmed that CBD inhibited CYP2C9, CYP3A4/5, CYP2C19, and CYP2D6 and that THC inhibited CYP2C9, CYP2C19, and CYP2D6 (unpublished data). 
Although rigorous pharmacokinetic NPDI research can mitigate adverse interactions, the data and knowledge resulting from these experiments are currently distributed across a variety of information sources, making them difficult to find, access, and reuse. The new NaPDI Center repository is the first user-friendly online repository that stores and links pharmacokinetic NPDI data across chemical characterization, metabolomics analyses, and pharmacokinetic in vitro and clinical experiments. The design is expected to help researchers more easily arrive at a complete understanding of pharmacokinetic NPDI research on a particular NP. The repository will also facilitate multidisciplinary collaborations, as it links all of the experimental data for a given NP across the study types. For example, the repository links chemical characterization data with data from in vitro and clinical experiments and vice versa. This feature should help facilitate communication between multidisciplinary researchers working on different aspects of a particular pharmacokinetic NPDI. The mission of the NaPDI Center is to provide leadership and guidance on the study of pharmacokinetic NPDIs. Currently, only data on the four high-priority NPs under study by the NaPDI Center have been entered in the repository. Future work aims to expand the repository to include a larger selection of NPs and engage NPDI researchers external to the NaPDI Center. Toward that goal, pilot work has been completed that includes data from experiments involving P450 inhibition by three licorice species (i.e., Glycyrrhiza glabra, G. uralensis, and G. inflata) . The published report includes pharmacokinetic NPDI data specific to extracts of each licorice species and for individual constituents present in some or all licorice species. The repository links all of these data in a manner that allows researchers to explore P450 inhibition by licorice from a variety of perspectives (i.e., single or multiple licorice species and single or multiple licorice constituents). It is useful to emphasize that the NaPDI Center repository currently focuses on pharmacokinetic NPDI data. At the present time there are no plans to integrate pharmacodynamic NPDI data.
Though it has not been the focus to date, the format for data in the NaPDI data repository allows for setting the NP as the object drug, and there are a handful of experiments in the repository of this kind that have been entered as test cases. The inclusion of this kind of data might become the focus in the future depending on feedback from the NPDI research community and other stakeholders. Building upon this strong foundation, the NaPDI Center plans to create novel information visualizations to provide researchers a complete evidence-based overview of the potential of each NP to precipitate pharmacokinetic NPDIs. The Center also plans to permit other researchers to submit data using files or the repository’s web-based application programming interface with the goal of supporting medium- to high-throughput assays that generate megabytes or gigabytes of data. Researchers external to the NaPDI Center can enter data by first requesting an account and then following the SOP documents during data entry. After a researcher’s data entry is completed, a trained individual within the NaPDI Center will inspect the entered data before public release. Finally, the NaPDI Center plans to implement automatic FAIR quality analytic reports that will run each time a data submitter marks a new study entry as “pending.” Issues identified from the reports can then be addressed promptly by the data submitter. These functionalities, combined with the existing functionalities of the NaPDI Center repository, seek to facilitate pharmacokinetic NPDI research with the long-range goal of mitigating adverse interactions and improving public health.
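For readers unfamiliar with how repository entries become discoverable through Google’s Dataset Search, the sketch below shows the general shape of schema.org/Dataset metadata that the service indexes when it is embedded in a landing page. It is a minimal illustration only: the helper function and all field values are hypothetical and do not reproduce the NaPDI Center repository’s actual markup or API.

import json

def dataset_jsonld(name, description, url, keywords):
    # Build a minimal schema.org/Dataset record of the kind Google
    # Dataset Search can index from a dataset landing page.
    return {
        "@context": "https://schema.org/",
        "@type": "Dataset",
        "name": name,
        "description": description,
        "url": url,
        "keywords": keywords,
    }

# Hypothetical example entry; not taken from the actual repository.
record = dataset_jsonld(
    name="Example natural product-drug interaction experiment",
    description="In vitro CYP inhibition data for a hypothetical extract.",
    url="https://example.org/experiments/123",
    keywords=["pharmacokinetics", "natural product", "CYP450"],
)
print(json.dumps(record, indent=2))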
Medication-Assisted Treatment for Opioid Use Disorder in a Rural Family Medicine Practice
6e92fcd8-6b7e-476f-bc73-84ac97365841
7278292
Family Medicine[mh]
An opioid use disorder (OUD) is defined as a problematic pattern of opioid use that leads to serious impairment or distress. In the late 1990s, prescription opioid use increased in all regions of the United States, including rural areas. , Unfettered prescription opioid use was promoted, to a large extent, by the pharmaceutical industry, which had previously assured providers and patients that both long-acting forms of opioids and opioids prescribed for somatic pain were not addicting. Misuse and diversion of these medications became widespread; by 2017, an estimated 1.7 million people in the United States suffered from substance use disorders related to prescription opioid pain medications and 652 000 suffered from a heroin use disorder (not mutually exclusive). OUD is a cause of significant morbidity and mortality, and nearly 47 000 people died from an opioid overdose in the United States in 2018. Overall deaths due to opioid misuse and abuse are also on the rise, primarily due to respiratory depression, the risk of which is accentuated with the concomitant use of benzodiazepines. The abuse of novel opioids, which are based primarily on the potent opioid fentanyl and mixed with heroin, has increased in recent years. Opioids are highly addictive substances and misuse may result in fatal consequences, which disproportionately affects rural areas. The rate of increase in deaths due to opioid use in rural areas exceeds those in nonrural areas of the United States. From 1999 to 2015, there was a 325% increase in drug overdose in rural areas, compared with a 198% increase in urban populations. The Centers for Disease Control and Prevention (CDC) reported in October 2017 that “persistent limited access to substance abuse treatment services in rural areas” contributed to the excess risk in rural areas and that interventions should include better education about the role of opioids in treatment of chronic pain as well as improved access to medication-assisted therapy (MAT). MAT has a substantial potential to offset consequences for patients with OUD, if only it were more widely accessed. Morbidity associated with OUD, as measured by ED utilization, is as common as 20.1% monthly. MAT has been demonstrated to reduce ED utilization rate by 51%. Additionally, MAT has been shown to result in decreased criminal activity as well as human immunodeficiency virus and hepatitis C infections. Long-term data on the efficacy of MAT for OUD is limited; a randomized study of patients with OUD assigned to either methadone or buprenorphine/naloxone (Suboxone) demonstrated a 5-year abstinence from heroin rate of 33.2% (number needed to treat = 3) and 20.7% from all opioids. Despite recognition of the importance of MAT, it is estimated that only 11% of patients receive a prescription for a Food and Drug Administration (FDA)–approved medication for their OUD. Access to MAT is a nationwide problem, but rural communities face unique and significant barriers to opioid addiction treatment. There are fewer facilities, limited services, and greater distances required to travel in order to receive care. Overall, 88.6% of rural counties lack a sufficient number of opioid treatment programs. Outpatient primary care practices that offer MAT are exceptionally rare in rural areas; nearly 30% of rural residents live in a county without a buprenorphine provider compared with 2.2% of urban citizens. 
A survey of rural physicians found that lack of mentorship, concern about Drug Enforcement Administration (DEA) intrusion into their practice, and patient misuse of medications were barriers to offering MAT. A cornerstone of primary care–based programs in MAT is the use of buprenorphine/naloxone (Suboxone) in conjunction with careful patient assessment. Becoming a prescriber is not without its own barriers and limitations. Primary care physicians must both hold a valid DEA license and complete an 8-hour Substance Abuse and Mental Health Services Administration (SAMHSA)–approved course prior to applying for a DEA waiver in order to prescribe buprenorphine. Physician assistants (PAs) and nurse practitioners (NPs) are required to complete 24 hours of approved training. Providers should also be comfortable utilizing tools such as the Clinical Opiate Withdrawal Scale (COWS) to inform their patient assessment. In the initial waiver year, the provider is limited to treating 100 patients. In years following, a provider may apply to SAMHSA for approval to treat up to 275 patients. Our health system provides primary care to 77 000 patients in 11 rural clinics in the Midwest United States. Of the 64 primary care providers, only 3 possess a DEA waiver to prescribe buprenorphine and have collectively treated 20 patients. Following a simple process to complete inductions and follow-up appointments, patients are able to receive MAT in the normal workflow of our rural family medicine practice. Our patient was an otherwise healthy 43-year-old male who had intermittently taken prescription methadone, fentanyl, and oxycodone over a 14-year period for chronic low back pain. In the past 2 to 3 years, he used heroin after coming home from working the night shift; he would inject heroin intravenously to relax and fall asleep, and later would join his spouse and daughter for dinner before going back to work. Aware of the severity and progression of his problem, the patient had reached out to several local addiction treatment programs. Upfront costs, required travel to a treatment center, and the inability to be away from work prevented him from participating in any treatment program. On one occasion, the heroin he obtained was more potent than expected. He injected himself and fell asleep. When he did not answer his spouse’s calls, she came home from work and found him unresponsive and apneic; this was just moments before their pre-teen daughter would have come home from school. His spouse, who does not work in health care, was unable to locate naloxone in their home and performed cardiopulmonary resuscitation until first responders arrived. First responders administered 2 intranasal doses of naloxone as the patient was transported to the emergency department (ED) of the critical access hospital in the same town. There, he was medically stabilized and monitored overnight. Coincidentally, his family medicine physician, who cared for his entire family and was familiar with the patient, was working in the ED that night. He was aware of Suboxone therapy for OUD being offered by his colleagues in the outpatient practice and made an urgent referral to the MAT provider. The following morning, the patient was seen in the family medicine clinic. He was actively in withdrawal with a COWS score of 16, indicating moderate withdrawal. Suboxone therapy was initiated according to the MAT protocol, 2 mg initially and 2 mg every hour thereafter for a total of 4 doses.
He was stabilized over several days of follow-up at a dose of 8 mg of Suboxone twice daily. Follow-up consisted of weekly visits for the initial 4 weeks, monthly visits for 6 months, and then continued office visits every 3 months thereafter. At a follow-up visit after 6 months of MAT, the patient was motivated to share his positive experience with others and referred 2 people for MAT in our practice. One year after beginning MAT, he was still taking Suboxone at 8 mg twice daily and felt that he was ready to begin weaning to a lower dose. He was working, had received a promotion, was actively participating in family activities, and made it a point to attend all of his daughter’s school events. He and his family remain in our family medicine practice and are otherwise physically and emotionally well. The case presented illustrates how a rural family medicine practice can increase accessibility to MAT. Previously, access to MAT was limited by the inflexibility of the few other options existing in this rural area. Alternative MAT programs either require admission to an inpatient facility with the risk to patients of losing employment or require daily travel for over an hour to a nearby larger city for enrollment in a methadone program. These barriers prevented the patient in this case from receiving MAT sooner. This patient expressed that he truly believes he would not be alive if not for the simplicity of going immediately from the ED to the outpatient clinic to initiate MAT. MAT can be offered to patients in several formulations, including methadone, buprenorphine with or without naloxone, and naltrexone, but Suboxone is the most effective and practical option for incorporating MAT into an outpatient family medicine practice. A large comparative effectiveness study that included 40 885 adults with OUD examined 6 different treatment pathways and found that only treatment with buprenorphine or methadone was associated with reduced risk of both overdose and serious opioid-related acute care utilization compared with no treatment at 3 and 12 months of follow-up. Between these 2 options, there are additional barriers for the outpatient primary care provider to prescribe methadone compared with Suboxone. For a practitioner to administer and dispense methadone for OUD, they must obtain a separate DEA registration as a Narcotic Treatment Program. This type of activity requires additional approval and registration with the Center for Substance Abuse Treatment (CSAT) within SAMHSA of the Department of Health and Human Services (HHS), as well as the applicable state methadone authority. Given its relative effectiveness and practicality, Suboxone is the MAT treatment used in our rural family medicine practice protocol. A suggested clinic-based protocol developed and implemented in our rural outpatient family medicine practice to provide a pathway for patients with OUD to receive MAT with Suboxone is shown in the . The scheduling staff and team registered nurse (RN) have a list of specific tasks to prepare the patient for an initial consultation with a physician. By the time the patient is seen, appropriate laboratory tests, including liver function tests, human immunodeficiency virus and hepatitis C screening, sexually transmitted infection testing and pregnancy testing, where appropriate, have been completed.
The State Prescription Drug Monitoring Program (PDMP) database is queried, and the patient’s consent is obtained to receive records from previous treatment providers. Contact is made with the patient’s medical insurance company to determine if there is a preferred buprenorphine formulation and the cost of that medication to the patient. Prior authorization is obtained from the insurance company, to help ensure that the recommended product is available at the time of the initial clinic consultation. In addition, behavioral and social services covered by the patient’s medical insurance company are ascertained. Patients are typically placed on the provider schedule one week prior to their initial consultation, providing time to complete the pre-induction activities as above. Appointment slots early in the day and early in the week are preferred, to allow for monitoring during office hours and follow-up to occur during the work week. Urgent appointments are approved when acute detoxification has occurred and the patient is ready to begin therapy immediately. The COWS is administered when the patient comes to the clinic. A COWS score of 13 to 24 indicates that the patient is in opioid withdrawal, and further withdrawal symptoms will not likely be precipitated by initiation of MAT. If a patient has recently used opioids and is not yet in withdrawal, as indicated by a COWS score of less than 13, they are asked to return to the clinic or commence treatment the following day. Patients who have been abstinent from opioids for many days and are no longer in withdrawal, can start treatment immediately, or be discharged home for a home-based induction. Those patients that are sent home self-administer the medication and are contacted by the nurse via phone within 1 to 2 hours of the previously agreed-upon MAT initiation time. Home initiation follows the same dosing regimen as office-based initiation ( ). In our practice, office-based MAT initiation is required for those who have a functional status <4 METs (metabolic equivalents), atherosclerotic coronary artery disease, diabetes mellitus, multiple medical comorbidities, methadone transition (current methadone dose of <40 mg daily) and chronic pain disorders. MAT is initiated with 2 to 4 mg of the buprenorphine component given every hour until the patient is comfortable and cravings for opioids have resolved. Dosage is calculated using a morphine-equivalent dose of 1 mg of buprenorphine for 10 mg of oral morphine for those patients using prescription medication. Patients using heroin or with unknown opioid use are started with a 4-mg first dose. During the initiation period, the RN and physician alternate in seeing the patient every 15 to 30 minutes, allowing the physician to continue with a normally scheduled practice. Since many patients have a comorbid pain syndrome and opioid withdrawal may include unmasking some chronic pain, the RN visit includes assessment of the general well-being of the patient, vital signs, COWS, and a pain assessment. When the patient is comfortable, typically after receiving 2 to 3 doses of buprenorphine, they are discharged. The patient is contacted by telephone within 24 hours of induction and seen again within 1 week for follow-up. Thereafter, the patient is seen in the clinic weekly for a month and monthly for 6 months. Patients who are stable in the long term continue to be seen every 3 months. Urine drug screens are done at initiation and randomly and are completed at least every 6 months during MAT treatment. 
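To make the arithmetic of the protocol above concrete, the short sketch below applies the conversions as stated in the text: a 4-mg first dose for heroin or unknown opioid use, and roughly 1 mg of buprenorphine per 10 mg of oral morphine equivalents for patients on prescription opioids. It is an illustration of the stated ratios only, with hypothetical inputs, and is not clinical guidance or a complete dosing algorithm.

def first_buprenorphine_dose_mg(daily_oral_morphine_equiv_mg=None):
    # Illustrative starting-dose arithmetic from the protocol text:
    # heroin/unknown opioid use -> 4 mg first dose; prescription opioids ->
    # ~1 mg buprenorphine per 10 mg oral morphine equivalents.
    if daily_oral_morphine_equiv_mg is None:
        return 4.0  # heroin or unknown opioid use
    return daily_oral_morphine_equiv_mg / 10.0

# Hypothetical patient on 80 mg/day oral morphine equivalents -> 8 mg.
print(first_buprenorphine_dose_mg(80))   # 8.0
print(first_buprenorphine_dose_mg())     # 4.0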
In the initial 2 years of our program, 16 of the total 20 patients have sustained abstinence, 4 patients have not continued on buprenorphine, and none have been lost to follow-up. The 4 patients who have not continued buprenorphine have been previously prescribed methadone (n = 1), tramadol (n = 1), or hydrocodone (n = 2) for chronic pain syndromes. Three of these patients were referred to MAT due to failed opioid therapy agreements, and each patient previously took more than 50 mg morphine-equivalents per day. These patients were transitioned off of buprenorphine and are currently taking 10 mg morphine-equivalents or less of a prescription opioid. The program is still relatively new and in its first few years of development. Since failure of MAT treatment can occur after months or even years of therapy, we do expect that more patients will not be able to sustain abstinence in the future. A significant factor in the success of our program is that patients are able to be enrolled when they ask to be seen, throughout the week, and not just at a time set aside specifically for MAT. This means that patients may be seen on an urgent basis when they are actively in withdrawal or are ready for therapy on their terms. Because the induction process is not cumbersome to the provider’s time in clinic, urgently referred patients can be accommodated into the schedule of the provider following the same preestablished processes for any patient with any health condition, whether being seen for an initial consultation, induction, or for follow-up. Remote telehealth appointments may be offered to patients after their MAT treatment plan has been adequately established. In conclusion, improved access to MAT for OUD can be delivered in rural areas by groups of Family Medicine providers and trained staff, by incorporating MAT into the regular provision of primary care in their practice and community.
Identifying mesonephric‐like adenocarcinoma of the endometrium by combining
364e0238-04f4-4d7a-ba44-02311bd616dc
11649513
Anatomy[mh]
Mesonephric adenocarcinomas (MA) of the uterine cervix are rare, aggressive entities that are mainly associated with mesonephric remnants and/or hyperplasia. , , However, an intriguing subset of endometrial and ovarian carcinomas, referred to as mesonephric‐like adenocarcinomas (MLA), share morphological, immunophenotypical and molecular attributes with MAs of the cervix. MLAs are characterised by a clinically aggressive course, often presenting at advanced stages with a predilection for pulmonary metastases. , , , , Unlike cervical MAs, MLAs are not associated with mesonephric remnants and/or hyperplasia, but occur in the endometrium or are associated with endometriosis in the ovary. In 2016 McFarland, Quick and McCluggage delineated uterine corpus MLA from cervical MAs in a series including seven corpus and five ovarian examples. The entity has since been included under the ‘other endometrial carcinomas’ in the current World Health Organisation (WHO) classification of female genital tumours. MLAs present a diverse histological pattern, including tubular, glandular, spindled, solid and papillary structures, posing a diagnostic challenge due to morphological overlap with endometrioid carcinoma, serous carcinoma and carcinosarcoma. Immunophenotypically, MLAs exhibit positive staining for GATA3, TTF‐1, CD10 (luminal staining) and calretinin, while being negative for oestrogen receptor (ER) and progesterone receptor (PR), like MAs of the cervix. , , , Molecularly, these tumours also share key features with MAs of the cervix, including KRAS mutations, microsatellite stability and frequent gain of chromosome 1q, distinguishing them from other endometrial carcinoma types categorised based on molecular profiling. , , , , , Some studies also suggest that a subset of MLAs harbour alterations typically associated with endometrioid carcinoma, such as PTEN and PIK3CA mutations; however, this is controversial. , This molecular diversity has led to debates regarding the cellular origins of MLA. While morphological, immunohistochemical and molecular profiles suggest a mesonephric/Wolffian origin, the distribution of uterine tumours in the endometrium rather than the myometrium, coupled with associations with endometriosis and Müllerian‐type tumours, strongly supports a Müllerian origin. , , SOX17 (SRY‐box transcription factor 17), a key player in embryonic development, is widely expressed in endometrial tissue and various visceral organs. , , Recent studies link SOX17, a transcriptional regulator, to epithelial ovarian carcinoma, where it shares an expression pattern with PAX8 and jointly influences downstream genes related to the cell cycle and tissue morphogenesis. , Beyond ovarian carcinoma, SOX17 is implicated in cervical and endometrial carcinogenesis and is proposed as a tumour suppressor for endometrial adenocarcinoma. , , Notably, research on pathological specimens, including studies by our group, identifies SOX17 as a sensitive and specific marker for gynaecological carcinomas. , , However, SOX17 expression has not been explored in MLAs. In this study, we aimed to investigate SOX17 expression in MLAs together with other immunohistochemical (IHC) markers to differentiate MLAs from other endometrial carcinomas, and then use a rational combined IHC approach to retrospectively identify MLAs from a study cohort harbouring endometrial carcinomas diagnosed prior to the MLA definition.
Specimens The study cohort included 17 endometrial and ovarian MLAs from The Ohio State University (OSU), Brown University, The Johns Hopkins University and University of Texas MD Anderson Cancer Center during a study period from 2021 to 2023, and 652 endometrial carcinomas from OSU between 2012 and 2015. Tissue microarray construction The pathology database at the OSU Wexner Medical Center was searched to retrieve 652 endometrial carcinomas with hysterectomies between April 2012 and January 2015. , The corresponding clinicopathological findings were collected. A formalin‐fixed paraffin‐embedded tissue block representative of the tumour was collected from each hysterectomy. Tissue microarrays (TMA) with triplicate 1‐mm cores for each tumour were constructed at the OSU pathology core facility. Immunohistochemistry Immunohistochemical staining was performed on a Leica Bond III autostainer system (Leica Biosystems). Formalin‐fixed paraffin‐embedded tissue sections were deparaffinised/rehydrated and antigen retrieval was performed with Bond ER1 (Leica Biosystems, Richmond, VA, USA; equivalent to citrate buffer, pH 6.0) or Bond ER2 [Leica Biosystems; equivalent to ethylenediamine tetraacetic acid (EDTA) buffer, pH 8.0] at 100°C for 20 min. The primary antibody was incubated for 15 min at room temperature; it was detected using the Bond Polymer Refine Detection kit (cat. no. DS9800; Leica Biosystems) and diaminobenzidine chromogen. The tissues were then counterstained using Leica haematoxylin, provided as part of the Leica Bond Polymer Refine Detection kit. Normal endometrial tissues were used as a positive control. The primary antibodies used in this study are summarised in Table . Each IHC was reviewed initially by two pathologists (M.T. and Z.L.), and in difficult cases additional pathologists were consulted and consensus was reached. For SOX17, PAX8, ER, PR and TRPS1 (TRPS1 is a recently identified marker for breast carcinoma, but is also expressed in a small proportion of gynaecological tumours), positivity was defined using a cut‐off value of 10% of tumour cells with staining. The 10% cut‐off was used to eliminate non‐specific weak staining and increase specificity. The H‐score was calculated by multiplying staining percentage (0–100) by intensity (1–3) to obtain a value from 0 to 300. Non‐homogenous staining within fewer than 50% of targeted cells was described as a focal staining pattern. Statistical analysis Statistical analysis was performed using GraphPad Prism (GraphPad Software, Inc., La Jolla, CA, USA). Categorical data (IHC positivity) were summarised as frequency and percentage, and continuous variables (H‐scores) as medians and ranges. Fisher's exact test was used to compare each variable between different groups. An unpaired t‐test was used to analyse continuous variables. An adjusted P‐value of ≤ 0.05 was considered significant.
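As a concrete illustration of the H‐score defined in the Methods above (staining percentage 0–100 multiplied by intensity 1–3, giving a range of 0–300), the short sketch below computes it for a hypothetical staining readout. The worked values are invented for illustration and are not data from this study.

def h_score(percent_stained, intensity):
    # H-score = staining percentage (0-100) x intensity (1-3), range 0-300.
    if not (0 <= percent_stained <= 100 and 1 <= intensity <= 3):
        raise ValueError("percent must be 0-100 and intensity 1-3")
    return percent_stained * intensity

# Hypothetical example: 70% of tumour cells staining at moderate (2+) intensity.
print(h_score(70, 2))  # 140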
SOX17 expression in 17 diagnosed endometrial and ovarian MLAs (whole tissue sections) Seventeen endometrial and ovarian MLAs were collected from multiple institutions and further reviewed to confirm the diagnosis. All 17 MLAs showed diffuse strong staining for PAX8, variable staining for GATA3 and TTF1, luminal staining for CD10, patchy staining for p16, wild‐type staining for p53 and negative staining for ER and PR, consistent with previous findings. Surprisingly, all 17 MLAs showed either completely negative (n = 10) or focal weak/moderate (n = 7) staining for SOX17, which is almost always more diffuse and stronger than PAX8 in other subtypes of endometrial carcinomas. Additionally, TRPS1 was negative in all cases (Table ; Figure ). Screening 652 endometrial carcinomas using tissue microarray This result encouraged us to screen TMAs with 652 endometrial carcinomas diagnosed before the MLA era at The Ohio State University using SOX17 and PAX8. Most cases showed positive staining for both SOX17 and PAX8 (88.7%, 578 of 652), 45 cases (6.9%) showed positive SOX17 and negative PAX8 staining (SOX17+/PAX8−), 14 cases (2.1%) showed positive PAX8 and negative SOX17 staining (SOX17−/PAX8+) and 15 cases (2.3%) showed negative staining for both SOX17 and PAX8 (SOX17−/PAX8−).
Among 45 SOX17+/PAX8− cases, 38 had been diagnosed as endometrioid, six as malignant mixed Müllerian tumour (MMMT) and one as mixed carcinoma. Among 15 SOX17−/PAX8− cases, there were two endometrioid carcinomas, one serous carcinoma, six MMMT, one mixed carcinoma, four undifferentiated carcinomas and one other. Among 14 SOX17−/PAX8+ cases, eight were diagnosed as endometrioid, one as clear cell carcinoma, one as MMMT, three as mixed carcinomas and one as other (Table ). Identifying MLAs from SOX17−/PAX8+ cases We further studied 14 SOX17−/PAX8+ cases by examining the morphology and performing additional IHCs (TTF1, GATA3, ER and CD10) on whole tissue sections. Seven (50%) cases were reclassified as MLA based on morphology (typical MLA morphological features) and immunostain results (positive CD10, TTF1 and/or GATA3 staining). Five of the seven cases demonstrated aggressive clinical outcomes with advanced disease (distant metastasis). Additionally, these seven MLA cases showed strong PAX8 staining, while the non‐MLA cases (cases 8, 9, 10, 12, 13 and 14 in Table ) showed only weak to moderate PAX8 staining, except one case (case 11 in Table ). This case showed strong staining for PAX8 and ER and negative staining for SOX17 on TMA slides, but strong staining for both PAX8 and SOX17 on whole tissue sections, caused by intratumoural heterogeneity (Table ). Comparison of SOX17, ER and PAX8 expression (> 10%) in different types of endometrial carcinoma After diagnosing these seven cases as MLA, we further compared the expression of SOX17, PAX8 and ER in different types of endometrial carcinomas including endometrioid, serous, clear cell, MMMT, mixed carcinoma, undifferentiated carcinoma and MLAs. Overall, SOX17 was the most frequently positive marker and ER the least (cut‐off = 10%). SOX17 and PAX8 showed similar positivity in most tumour subtypes except FIGO grades 1 and 2 endometrioid carcinomas (SOX17 > PAX8) and MLAs (PAX8 > SOX17). ER showed less positivity than PAX8 or SOX17 in almost all subtypes except MLAs, which were negative or weakly positive for both ER and SOX17. Positive percentage and H‐scores were compared among these markers, with SOX17 showing the highest positive percentage and H‐scores and ER showing the lowest in all subtypes except MLAs (Table ; Figure ).
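The screening step above, which flags SOX17‐negative but PAX8‐positive cases for morphological review and additional immunostains (GATA3, TTF1, CD10, ER), can be summarised as a simple two‐marker rule using the 10% positivity cut‐off defined in the Methods. The sketch below is a hypothetical illustration of that triage logic only; it is not a validated diagnostic tool.

def flag_possible_mla(sox17_percent, pax8_percent, cutoff=10):
    # Flag SOX17-negative / PAX8-positive cases (10% positivity cut-off)
    # for morphological review and additional immunostains.
    sox17_negative = sox17_percent < cutoff
    pax8_positive = pax8_percent >= cutoff
    return sox17_negative and pax8_positive

# Hypothetical readouts: 2% SOX17 staining, 90% PAX8 staining -> flagged.
print(flag_possible_mla(2, 90))   # True
print(flag_possible_mla(60, 90))  # False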
The findings of this study shed light on the diagnostic significance of SOX17 expression in MLAs of the endometrium and ovary. MLAs represent a unique subset of gynaecological malignancies, sharing morphological and molecular characteristics with MAs of the cervix but lacking the association with mesonephric remnants or hyperplasia. , , , , While previous research has identified key immunophenotypical and molecular features of MLAs, , , , the role of SOX17 expression in these tumours has remained unexplored until the present. Our study revealed that SOX17 expression in MLAs is either completely negative or weakly focal, contrasting with its typical diffuse and strong staining observed in other subtypes of endometrial carcinomas. This finding suggests a potential utility of SOX17 IHC in distinguishing MLAs from other endometrial carcinoma subtypes, such as endometrioid and serous carcinomas, which often exhibit positive staining for SOX17. Furthermore, our study utilised a rational combined IHC approach, incorporating SOX17 and PAX8 IHCs, to retrospectively identify MLAs from a cohort of endometrial carcinomas diagnosed before MLA was an established diagnostic category.
This approach proved effective in identifying MLAs from cases with SOX17‐negative and PAX8‐strongly positive staining patterns as we were able to accurately diagnose MLAs from cases initially classified as other subtypes of endometrial carcinoma, highlighting the diagnostic utility of SOX17 IHC in this context. In addition, the comparison of SOX17, PAX8, and ER expression across different types of endometrial carcinomas revealed distinct staining patterns characteristic of different subtypes of endometrial carcinoma. Consistent with our and others’ previous findings, , , SOX17 emerged as the most sensitive marker, demonstrating higher positivity rates and H scores compared to PAX8 and ER across most endometrial carcinoma subtypes. Notably, MLAs exhibited negative or weakly positive staining for both SOX17 and ER but strongly positive staining of PAX8, further emphasising the unique immunophenotypical profile of these tumours. The origin of MLAs has been debated, with conflicting evidence regarding their embryological derivation from Müllerian ducts or Wolffian (mesonephric) ducts. , , While morphological, immunohistochemical and molecular profiling studies suggest a mesonephric/Wolffian origin for MLAs, their occurrence primarily within the endometrium and association with Müllerian‐type tumours challenge this hypothesis. , , In contrast to SOX17's diffuse and strong staining typically observed in Müllerian‐derived tumours, MLAs showed negative or weakly positive staining; however, this may be caused by the transdifferentiation phenomenon. Further research utilising advanced molecular techniques and lineage tracing studies is warranted to elucidate the precise cellular origins of MLAs and clarify their embryological lineage. Our study is limited by the small cohort size of MLAs, the study's retrospective nature and the use of TMAs. Future studies with large cohorts from multiple institutions are warranted to ascertain current findings and confirm the diagnostic utility of this combined IHC approach. In conclusion, the differential expression patterns of SOX17 and PAX8 observed in MLAs present an opportunity to improve the accuracy of MLA diagnosis. Specifically, the detection of strong nuclear labelling for PAX8 coupled with negative staining for SOX17 could serve as a reliable indicator of MLA. By incorporating both SOX17 and PAX8 into diagnostic algorithms, pathologists can enhance their ability to distinguish MLAs from other endometrial carcinoma subtypes. None to declare.
Dynamic culture system advances the applications of breast cancer organoids for precision medicine
fe2ac8bc-546d-488e-abb9-9931bc962845
11909168
Medicine[mh]
Breast cancer is the most common and lethal malignant tumor among women, ranking first in both incidence and mortality among female tumors . Breast cancer not only causes serious physical and psychological damage to patients, but also places a huge burden on society. In breast cancer treatment, chemotherapy has been the main means of adjuvant therapy; it can reduce tumor size before surgery (neoadjuvant chemotherapy) or eliminate residual cancer cells after surgery (adjuvant chemotherapy) . Combinations of multiple chemotherapy drugs are commonly used to cope with possible tumor resistance, but they can cause short-term to lifelong side effects that severely affect patients’ quality of life . Against this background, tumor organoid culture technology has received widespread attention since it was proposed. Tumor organoids are cultures with tissue-like characteristics, constructed by isolating cancer cells from patient tumor tissues and cultivating them under 3D matrix conditions with the addition of specific growth factors. Tumor organoids have genomic, histological and drug response characteristics similar to the parental tumors , . Currently, a large number of studies have utilized tumor organoids to perform chemotherapy drug sensitivity tests and obtained results consistent with clinical treatment outcomes – . The commonly used organoid culture method is the dome method, in which the isolated tumor cells are mixed with matrix gel and seeded in a well plate for culture. The specific growth factor combination and the three-dimensional structure of the matrix gel provide support for cell proliferation and morphogenesis. This method has been shown to support stable long-term culture of breast cancer organoids and subsequent drug stimulation experiments , . While the dome method can fulfill experimental requirements in the laboratory, it fails to fully meet the demands of clinical practice. The tissue site, sampling method and tumor cell proportion of the sample have been shown to be important factors that determine the culture success rate and efficiency. When the starting tissue is small, it is hard to expand the organoids quickly enough to reach the cell numbers required for drug sensitivity analysis, which prolongs the detection cycle. Breast cancer samples face the same situation, as clinicians prefer the less invasive puncture biopsy method for sampling , . Punctured tissue usually yields fewer tumor cells, which directly extends the organoid culture cycle in the existing culture system and makes it difficult to provide timely diagnosis and treatment suggestions for patients. Therefore, a new organoid culture system is urgently needed to shorten the detection cycle, and maintaining a high growth rate of breast cancer organoids is a direction worth exploring. Developmental biology has established that embryonic growth is intricately linked to vascular development, and a sufficient nutrient supply depends on a complex vascular system . Tumor organoid culture faces the same situation: owing to their relatively simple cellular composition, organoids cannot form the necessary vascular structures, so their nutrient acquisition depends solely on passive diffusion. As a result, organoids struggle to obtain a sufficient nutrient supply, and their maximum size is limited. Currently, microfluidic systems are considered to be one of the solutions to this problem , .
We therefore used microfluidic systems as an entry point to provide a continuous and stable nutrient supply for breast cancer organoids, in order to maintain high-level proliferation of the organoids and shorten the clinical test cycle. A dynamic culture system facilitates the growth of organoids Breast cancer organoids were established from three samples of breast invasive ductal carcinoma. Single cell suspensions were then obtained from the breast cancer organoids and cultured in Matrigel using two methods: the static dome method (Dome) and the fluidic dome method (Flow) (Fig. a). The growth of the breast cancer organoids was continuously monitored for two weeks. By the end of the observation period, the diameters of the Flow group were significantly larger than those of the Dome group in all three samples (3/3) (Fig. b). Notably, the organoid morphology in the Dome group changed from solid to hollow across all the samples, whereas this phenomenon was not observed in the Flow group (Fig. c). Such a morphological difference between the two groups may complicate the analysis of proliferation rates. To address this, we measured the cell viability of organoids after 15 days of culture using Alamar Blue, and found that organoids in the Flow group consistently showed higher viability than the Dome group in all three samples (Fig. d). Furthermore, we performed immunohistochemical staining on the organoids at the end of the observation period and found that their molecular marker expression patterns matched the parental breast cancer tissue, except for the estrogen receptor (ER) and progesterone receptor (PR) markers (Fig. e). These results demonstrated that breast cancer organoids cultured by both the Dome and Flow methods preserved the molecular characteristics of the parental tissue, while the Flow method enhanced organoid growth and cell viability. Consistent sensitivity between fluidic and static organoids Drug stimulation studies were conducted on the three established organoid models using olaparib, capecitabine (5’-fluorouracil substitute), cisplatin, gemcitabine, sacituzumab govitecan (SN-38 substitute), and pharmorubicin. The relative activity of drug-treated organoids decreased gradually with increasing drug concentrations, except for those organoids that showed a degree of drug resistance (Fig. a). The three organoid models displayed a sensitivity to pharmorubicin that was consistent with the clinical responses observed in patients. Of these, patient BC1 and patient BC2 were administered pharmorubicin following mastectomy, and no tumor recurrence was observed during one year of follow-up. Patient BC3 received neoadjuvant therapy with pharmorubicin, and after six cycles of treatment, the tumor size decreased from 2.41 × 1.91 cm to 1.46 × 1.24 cm. Concurrently, to assess the impact of culture protocols on organoid responsiveness to drugs, we observed the drug responses of Flow group organoids under identical conditions. This comparison revealed equivalent activity between the two groups (Fig. b, Figure ). Further curve fitting and two-way ANOVA comparing the groups’ responses showed non-significant differences for 88.88% of comparisons (Fig. c), suggesting similar predictability of drug responses regardless of culture method. The fluid’s mechanical effects promote breast cancer organoid growth Fluid conditions can affect the growth of organoids in two ways: by maintaining a stable nutrient supply and through the mechanical influence of fluid shear stress.
To determine which factor is the primary cause of the difference in proliferation capacity, we added an extra group (Dome-sp) to the culture process of sample BC3, in which the medium was changed daily to replenish any nutrient consumption (Fig. a). After the 15-day observation period, the Dome-sp group did not exhibit the expected advantage in proliferation capacity, and only the Flow group showed a higher level of diameter change (Fig. b). Furthermore, we performed a viability assay and found that the overall cell viability of the Flow group was significantly higher than that of the Dome and Dome-sp groups (Fig. c). This indicates that the culture method used in the Dome group can fully satisfy the growth requirements of breast cancer organoids; therefore, nutrient stability is not the factor that caused the significant difference in diameter in our experiment. Notably, hollow organoids were widespread in the Dome-sp group (Fig. d). This suggests that the mechanical effects of the fluid may be the primary reason for the higher proliferation level and morphological changes in breast cancer organoids. Fluid shear stress alters the morphological characteristics and gene expression of breast cancer organoids Furthermore, we aimed to investigate whether the hollowing observed in breast cancer organoids was a natural transformation that occurs during prolonged culture, and whether the presence of fluid shear stress only delayed this process. We extended the continuous culture time from the 15-day cycle of the Flow group to 30 days (Fig. a). Central hollowing of the organoids was not observed under the microscope during long-term culture under fluid conditions, and the organoids retained a solid appearance (Fig. b). Owing to the continuous increase in volume, neighboring organoids began to fuse with one another. The organoids at this stage were processed for staining and verification. Compared with the staining results of the organoids cultured for 15 days, the organoids cultured for 30 days still exhibited molecular marker expression characteristics similar to the parental tissue, except for ER (Fig. c). However, we observed cell detachment and chromatin marginalization in the center of some larger organoids, with internal cells displaying typical features of apoptosis (Fig. d). Some organoids began to form concentric ring structures (Fig. e), and the arrangement of cells within the organoids shifted from disordered aggregation toward a multilayered arrangement. However, none of them showed the hollow morphology observed in the Dome group, indicating that fluid shear stress may have altered the morphology that organoids otherwise adopt under static culture. We tested the expression of genes related to drug resistance and proliferation, and found that the introduction of fluid conditions had a significant impact on gene expression (Figure ). This indicates that fluid shear stress can induce changes in the gene expression characteristics of the organoids.
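The drug-response comparison above relies on fitting dose-response curves to relative-viability data before testing for group differences. A minimal sketch of one common approach, a four-parameter logistic fit with scipy, is given below; the data points are invented for illustration, and the original analysis may have used different software or a different model.

import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    # Four-parameter logistic dose-response model.
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical relative-viability data (fraction of untreated control).
conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0])       # drug concentration (uM)
viability = np.array([0.98, 0.90, 0.62, 0.25, 0.08])  # relative activity

params, _ = curve_fit(four_param_logistic, conc, viability,
                      p0=[0.0, 1.0, 1.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"estimated IC50 ~ {ic50:.2f} uM")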
Tumor organoids have emerged as a cutting-edge focus in precision medicine, but long culture times and high culture costs limit their applications. We demonstrated that fluid conditions can enhance the proliferation of breast cancer organoids, resulting in an increased number of cells within the same culture time. Surprisingly, the key factor in this process was not an abundant and stable supply of nutrients: there was no significant difference in diameter between the Dome group and the Dome-sp group, and the results of the viability assay confirmed this. This allows us to conclude that the existing culture system and method can be considered adequate in terms of nutrient supply. Faithfully reproducing the basic characteristics of the parental tissue is the fundamental reason why tumor organoids are favored in the field of precision medicine. We showed that organoids cultured under fluid conditions display marker characteristics consistent with those cultured statically, and their drug sensitivity results are likewise consistent.
This indicates that organoids cultured in fluid have the same application basis as organoids cultured in conventional ways. Moreover, fluid conditions can effectively shorten the culture cycle, leading us to believe that organoids cultured in fluid have promising application prospects in the field of precision medicine. A large number of studies have shown that, under specific culture conditions, tumor organoids can be cultured and expanded stably for a long time. In addition, we observed the loss of ER and PR expression, which may correlate with aberrant activation of the NOTCH signaling pathway. Moreover, the culture systems used in these studies typically do not involve fluid shear stress. In the normal human body, all types of tissue cells are continuously exposed to shear forces caused by fluid flow in the tissue microenvironment. Shear force can significantly affect cell fate and plays a pivotal role in cell growth and differentiation. An effect of shear force on gene transcription is therefore almost inevitable, and our experimental results reflect this change. We focused on several genes related to drug resistance and cellular proliferation, particularly PIM1, MCL-1, MYC, CDK6, and JAK2. PIM1 and c-Myc have been found to synergistically promote the expansion of tumor cells. MCL-1 plays a regulatory role in cell differentiation and apoptosis. MYC and MCL1 are co-amplified in drug-resistant breast cancer. Lee et al. revealed that MYC and MCL1 cooperate to maintain cancer stem cells resistant to chemotherapy by increasing mitochondrial OXPHOS, ROS production, and HIF-1α expression. Moreover, elevated levels of CDK6 and JAK2 have been observed in drug-resistant patients. Our findings demonstrate that different fluidic conditions can significantly influence the expression levels of these drug-resistance genes, although the changes are not uniform across the three samples. We also noticed that organoids cultured under fluid conditions exhibited morphological characteristics different from those cultured statically. We observed apoptosis in the centers of larger organoids in the Flow-30 group, caused by their excessive size and insufficient internal nutrient supply; organoids cultured in the Dome group may encounter the same problem. Hollow morphogenesis, triggered by cell movement, is a natural response of tumor cells to hostile environments, and shear stress alters this process. However, we do not consider this change to be negative. Fluid mechanical force is a common condition in tissues and is closely related to tissue and cell growth and development. We believe that the presence of fluid shear force, while inducing these changes, maintains the stability of the tissue structure represented by the organoids. We observed that the cell arrangement in breast cancer organoids cultured under fluid conditions for a long time transitioned from the previous disorderly aggregation mode to a multilayer arrangement. The results of Florian et al. reveal that this phenomenon is associated with the development and differentiation of the mammary duct lumen. Similar conclusions were reached by Cho et al. This indicates that fluid shear force is critical and necessary for tissue reconstruction in vitro.
Tissue collection
Breast cancer tissues were obtained from three patients with invasive ductal carcinoma (Table ).
The acquisition and use of samples for this study were reviewed and approved by the Ethics Committee of Shanxi Provincial Cancer Hospital (record number KY2023039). All research methods were conducted in accordance with approved protocols. All samples were collected with written informed consent from patients and in compliance with all relevant ethical regulations, including the Declaration of Helsinki. Patient information was de-identified before any processing and analysis of the tissue samples. The tissue samples were placed in Advanced DMEM/F12 (12634010, Gibco) containing 10 µM Y-27632 (HY-10071, MedChemExpress), 5% FBS (10100147, Gibco), and 100 µg/mL Primocin (ant-pm-1, InvivoGen), and transported directly to the laboratory at 2–8 °C. These cancer tissue samples were used to establish primary organoid cultures and for parental tumor analysis.
Establishment and culture of breast cancer organoids
After the obvious connective tissue was removed, a piece of tissue was cut for paraffin embedding. The remaining tissue was cut into fragments of about 1 mm³ and dissociated in Advanced DMEM/F12 containing 10 µM Y-27632 and 2 mg/mL Collagenase Type I (17018029, Gibco) on a constant-temperature shaker at 37 °C. The mixture was shaken for 30 min to 1 h and mixed at five-to-ten-minute intervals using a Pasteur pipette. An equal volume of Advanced DMEM/F12 containing 2% FBS was added to the collected mixture. After centrifugation, the cell pellet was resuspended in Matrigel (354230, Corning), and 40 µL per drop was seeded in a 24-well cell culture plate. The plate was incubated at 37 °C for 20 min; after the Matrigel had completely solidified, 500 µL of breast cancer organoid culture medium (KCJ-7, KINGBIO) was added, and culture was continued at 37 °C in 5% CO₂. The medium was changed every 3 days. Organoids were digested and passaged every 7–15 days, using 2 mg/mL Dispase II (17105041, Gibco) to recover the organoids and Accutase (A1110501, Gibco) to dissociate them. The breast cancer organoids used for the experiments were between the second and fifth passages. The organoids were dissociated into single cells before the experiment and were seeded at 20,000 cells per 100 µL Matrigel. The experiments were conducted on three samples, each divided into two experimental groups, carried out simultaneously, with six replicates per group. The equipment used for fluid culture was the Quasi Vivo® chamber (Kirkstall, QV500), with the equipment schematic shown in Figure .
Immunohistochemistry
Tissues and organoids were fixed with 4% paraformaldehyde. Following paraffin embedding, 6-µm-thick sections were cut and treated with Citrate Antigen Retrieval Solution (C1032, Solarbio) for antigen retrieval. The sections were washed 3 times in PBS (5 min each), treated with 0.3% H₂O₂ in distilled water, blocked, and stained. The antibodies used for immunohistochemistry, at their respective dilutions, were as follows: PR (Abcam, ab32085) at 1:100, ER (Abcam, ab108398) at 1:200, HER2 (Abcam, ab134182) at 1:1000, CK7 (Abcam, ab181598) at 1:8000, GATA3 (Abcam, ab199428) at 1:500, E-cadherin (Abcam, ab40772) at 1:500, and Ki-67 (Abcam, ab15580) at 1:1000. The secondary antibody was Goat anti-Rabbit IgG (31460, Thermo Fisher) at 1:500. Images were acquired using a Nikon ECLIPSE E100 microscope with an MShot MS60 camera, and image processing was performed using Adobe Photoshop.
Gene expression analysis
Gene expression levels were quantified by RT-qPCR.
Total RNA was extracted using Trizol reagent (Beyotime, R0016), reverse transcription was performed using PrimeScript RT Master Mix (TaKaRa, RR036A), and the qPCR reactions were performed using TB Green Advantage qPCR premixes (TaKaRa, 639676). Primer sequences were as follows: MYC-for: CCTACCCTCTCAACGACAGC, MYC-rev: CTCTGACCTTTTGCCAGGAG; MCL1-for: AGAAAGCTGCATCGAACCAT, MCL1-rev: CCAGCTCCTACTCCAGCAAC; CDK6-for: CCGTGGATCTCTGGAGTGTT, CDK6-rev: CTCAATTGGTTGGGCAGATT; GAPDH-for: GACAGTCAGCCGCATCTTCT, GAPDH-rev: TTAAAAGCAGCCCTGGTGAC.
Organoid proliferation assays
The organoid proliferation assay was performed using Alamar Blue (YEASEN, 40202ES80). The reagent was added to the organoid culture medium at the required proportion, with a reaction time of 3 h. For detection, the mixed solution was pipetted into a 96-well plate. Matrigel drops without cells were seeded in a 24-well plate as a negative control, and a 100% reduced Alamar Blue solution was used as a positive control. Absorbance values were measured at 595 nm and 630 nm, and the reduction rate was calculated according to the manufacturer's instructions.
Organoid diameter measurements
Three fields of view were randomly selected at 100× magnification, and all clearly visible organoids within each field were measured and counted. Measurements were taken every five days.
Drug screen
Drug sensitivity analysis of organoids was performed using CellTiter-Glo (Promega, G9241). The drugs tested were olaparib (HY-10162, MCE), capecitabine (5-fluorouracil substitute; 100187, National Institutes for Food and Drug Control), cisplatin (S1166, Selleck), gemcitabine (100622, National Institutes for Food and Drug Control), sacituzumab govitecan (SN-38 substitute; HY-13704, MCE), and pharmorubicin (130560, National Institutes for Food and Drug Control). 0.2 mg/mL Dispase II was used to recover the organoids, and Accutase was used to dissociate the organoids. Cells were resuspended in organoid culture medium containing 5% Matrigel and seeded into a 384-well plate at a density of 1,000 cells per well. After overnight incubation, the drug solutions were added. Cell viability was assayed using CellTiter-Glo 3D (Promega) according to the manufacturer's instructions following 3 days of drug incubation. Data analyses were performed using GraphPad Prism 8.0.2 (GraphPad Software, San Diego, California, USA, www.graphpad.com), and the dose-response curves were calculated by applying nonlinear regression (curve fit).
Statistical analyses
Data are presented as the mean ± SD. Normality was tested with the Shapiro-Wilk test. Comparative analyses used the two-tailed Student's t-test, Wilcoxon's test, and the Kruskal–Wallis test. p < 0.05 was considered significant unless otherwise stated. The number of asterisks indicates the level of statistical significance (* p < 0.05; ** p < 0.01; *** p < 0.001). All data analyses and graphics were performed using GraphPad Prism 8.0.2.
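For readers who wish to reproduce the two-group comparisons described under Statistical analyses, a minimal Python/SciPy sketch is given below. The function name, the example diameter values, and the rule of falling back to a nonparametric test when either group fails the Shapiro-Wilk check are illustrative assumptions of ours, not details stated in the original methods (Kruskal–Wallis would replace the rank-sum test for more than two groups).

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Shapiro-Wilk normality check, then a two-tailed Student's t-test or,
    if either group fails normality, a Wilcoxon rank-sum (Mann-Whitney) test.
    The parametric/nonparametric switch is an assumed decision rule."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    _, p_a = stats.shapiro(a)
    _, p_b = stats.shapiro(b)
    if p_a > alpha and p_b > alpha:
        _, p = stats.ttest_ind(a, b)  # two-tailed by default
        return "Student's t-test", p
    _, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    return "Wilcoxon rank-sum (Mann-Whitney U)", p

# Hypothetical organoid diameters (µm) for the Flow and Dome groups
flow = [312, 298, 335, 341, 307, 322]
dome = [251, 244, 268, 259, 240, 263]
test_used, p_value = compare_two_groups(flow, dome)
print(test_used, p_value)
```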
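Likewise, the dose-response curves in the drug screen were fitted in GraphPad Prism by nonlinear regression; the sketch below shows an equivalent four-parameter logistic (Hill) fit in Python. The concentrations and viability values are placeholders, and the specific model equation is our assumption rather than a detail reported in the methods.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, bottom, top, log_ic50, hill):
    """Four-parameter logistic dose-response model (decreasing viability)."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_conc - log_ic50) * hill))

# Hypothetical normalized viability (%) at increasing drug concentrations (µM)
conc = np.array([0.01, 0.1, 1, 10, 100])
viability = np.array([98, 90, 64, 31, 12], dtype=float)

log_conc = np.log10(conc)
p0 = [viability.min(), viability.max(), np.median(log_conc), 1.0]  # initial guesses
params, _ = curve_fit(four_pl, log_conc, viability, p0=p0, maxfev=10000)

bottom, top, log_ic50, hill = params
print(f"Estimated IC50 ≈ {10 ** log_ic50:.2f} µM (Hill slope {hill:.2f})")
```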
Below is the link to the electronic supplementary material. Supplementary Material 1
ACC/AHA/ASE/ASNC/ASPC/HFSA/HRS/SCAI/SCCT/SCMR/STS 2023 Multimodality Appropriate Use Criteria for the Detection and Risk Assessment of Chronic Coronary Disease
e0114385-d122-4604-a930-4569118f831b
10585920
Internal Medicine[mh]
Since the introduction of AUC in 2005, the ACC has produced a number of documents that synthesize evidence for specific cardiovascular procedures into appropriate use standards. The AUC were developed to support utilization of high-quality patterns of procedure use (ie, appropriate use) while informing efforts to reduce resource use when benefits to patients are unlikely . The range of tools used to evaluate cardiovascular disease has expanded over the past decade, especially in the field of noninvasive imaging. The purpose of this document is to delineate the appropriate use of various invasive and noninvasive testing modalities for the diagnosis and/or evaluation of CCD across common patient presentations (clinical scenarios), including the following: Patients with symptoms of ischemia: without prior testing (Table ), with prior testing but without myocardial infarction (MI) or revascularization (Table ), and with prior MI or revascularization (Table ) Patients without symptoms of ischemia: testing for risk of ASCVD events (Table ), and with prior MI or prior revascularization (Table ) Patients seeking to initiate a physical exercise or cardiac rehabilitation program (Table ) Patients with other cardiovascular conditions such as heart failure, arrhythmias, or syncope (Table ) Writing Group At the outset of the AUC development process, the Solution Set Oversight Committee (SSOC) appoints 1 to 2 experts to serve as chair, cochairs, or chair/vice-chair of the writing group. The SSOC, in collaboration with the chair(s), then appoints additional members to serve on the multidisciplinary writing group, which usually ranges in size from 5 to 9 members. The goal of the writing group is to develop patient scenarios that are likely to be encountered in clinical practice and to categorize those scenarios based on symptoms, anatomy, and/or disease state. Patient presentation varies widely, and not all clinical factors will be fully captured in the scenarios. Where possible, the writing group maps the scenarios to relevant guidelines, clinical trials, and other data sources. Recommendations for writing group members may be solicited from ACC Member Councils as well as relevant professional societies. In accordance with the ACC’s Diversity and Inclusion principles, every effort is made to ensure that the writing group members vary in age, sex, and ethnicity/race. In addition, one or more early-career physicians, fellows-in-training, or cardiovascular team members are included. Other important considerations for the group’s makeup include specialty, appropriate organizational/content expertise, practice setting, and geographic location. SSOC considers relevant relationships in consideration of ACC’s RWI Policy in the formation of all writing groups. Reviewers SSOC identifies a group of reviewers to provide feedback to the writing group prior to sending the scenarios to the rating panel. Similar to both the writing group and rating panel, reviewers are solicited from varied sources both internal to the College as well as other relevant societies and organizations. Specifically, reviewers provide feedback on whether the scenarios are comprehensive and represent typical patients, and whether the document provides accurate definitions and assumptions, as well as acceptable evidence mapping. Rating Panel The rating panel is responsible for rating each clinical scenario. 
To maximize the input from a broad array of stakeholders, the rating panel is composed of experts in cardiovascular medicine, general internal medicine/hospital practice, and outcomes research. The SSOC is also responsible for appointing members to the rating panel. The membership usually includes 15 to 17 individuals, including practicing clinicians with expertise in the clinical topic being evaluated, practicing clinicians with expertise in a closely related discipline, and often a primary care physician, an expert in statistical analysis, and an expert in clinical trial design. An individual from the public sector and/or a payer representative may also be included. The panel includes clinicians other than cardiologists to reduce the potential for bias among clinicians with expertise in individual testing modalities or treatment methods. The SSOC has a strong interest in maintaining balance between specialists who use the technology or treatment methods addressed in the specific set of AUC, and other professionals who represent referring clinicians, including general cardiologists, outcome specialists, and/or primary care physicians. Specialists whose key area of practice is the main AUC topic under consideration represent < 50% of the panel. Similar to the writing group, recommendations for rating panel members are solicited from varied sources. Every effort is made to adhere to the ACC’s Diversity and Inclusion principles, and relevant RWI is taken into consideration. Additionally, SSOC strives to include one or more early career physicians, fellows-in-training, or cardiovascular team members as part of the panel. All rating panels have an odd number of individuals to ensure that the final median score reflects a whole number. The methods for development of AUC have evolved over time and were recently updated . This document summarizes the diagnostic and prognostic capabilities of a multitude of cardiovascular tests to inform choices for testing in common clinical scenarios for the evaluation and management of CCD. Both symptomatic and asymptomatic clinical scenarios are considered, as well as presentations for patients with and without a prior history of CCD. This document intends to provide testing recommendations based on the decisions that would be applicable to providing real-world patient care and should stand as a reference for cardiovascular specialists and referring physicians. The document is intended not to determine a single best test for each clinical scenario, but rather to provide recommendations for a range of testing options that may or may not be reasonable for a specific clinical scenario. It is critical to understand that the AUC should be used to assess an overall pattern of clinical care rather than being the final arbitrator of specific individual cases and should not be used as the sole determination of payment by payors. The ACC and its collaborators believe that an ongoing review of one’s practice using these criteria will help guide more effective testing and, ultimately, better patient outcomes. 2.1. Clinical Scenario Construction The clinical scenarios have been developed by a diverse writing group composed of individuals who are experts in both general cardiology and also noninvasive or invasive cardiac diagnostic testing. The writing group sought to create sets of clinical scenarios that cover the majority of situations for which known or suspected CCD patients are referred for cardiovascular testing. 
Wherever possible during the writing process, the group members mapped the scenarios to relevant clinical guidelines and key publications or references (see Additional file ). This included diagnosis-oriented guidelines and modality-specific guidelines. Major consideration was given to trying to cover as many clinical scenarios as possible, in balance with usability and ease of navigation of the document. The writing group recognizes that patient presentations vary widely, and not all clinical factors are fully captured by these clinical scenarios.
2.2. Rating Process and Scoring
After the scenarios were created, they were reviewed and critiqued by the SSOC and by external reviewers, including general cardiologists, preventive cardiologists, imaging experts, electrophysiologists, cardiac surgeons, and physicians in internal medicine and hospital medicine. After revision by the writing group based on feedback from the reviewers, the scenarios were sent to an independent rating panel. To maximize the input from a broad array of stakeholders, the rating panel was composed of experts in cardiovascular medicine, general medical practice (internal medicine/hospital medicine), and outcomes research. Noncardiologists were included in the process to reduce the potential for bias among physicians with expertise in individual testing modalities. The rating panel was provided with relevant evidence and guidelines to inform their ratings. Formal leadership roles were established for facilitating panel interaction at the subsequent face-to-face meeting. Although panel members were not provided explicit safety and cost information to help determine their appropriate use ratings, they were asked to implicitly consider safety and cost as additional factors in their evaluation of appropriate use. In rating these scenarios, the AUC Rating Panel was asked to assess whether the use of the test for each scenario was Appropriate (A), May Be Appropriate (M), or Rarely Appropriate (R) (see definitions in the following text). When scoring each scenario, the raters were instructed to assume that each modality is locally available, performed on appropriate equipment, and interpreted by individuals with relevant training and expertise. The first step in the process was for members of the rating panel to evaluate and score the clinical scenarios independently (referred to as the first-round rating). Then, the panel held a virtual, online meeting where panel members were given their scores and a blinded summary of their peers' scores. The panel discussed the scenarios and the scores, and then panel members were asked again to independently provide scores for each clinical scenario (second-round rating). After the second-round rating, the results were sent back to the writing group for review. At this point, the writing group had a final chance to clarify clinical scenarios and, if necessary, return to the rating panel for rescoring. A more detailed description of the methods is provided in a previous publication, "ACCF Proposed Method for Evaluating the Appropriateness of Cardiovascular Imaging," which was updated in 2018. Based on these multiple rounds of review, scoring, and revision, each scenario was classified as Appropriate, May Be Appropriate, or Rarely Appropriate. Although ratings for the clinical scenarios are categorized into 3 groups based on appropriateness, the appropriateness of testing is most accurately viewed as a continuum, depending on the variations of benefits and risks in individual patients.
Appropriate, median score 7 to 9: An appropriate option for management of patients in this population because benefits generally outweigh risks; an effective option for individual care plans, although not always necessary, depending on physician judgment and patient-specific preferences (ie, procedure is generally acceptable and generally reasonable for the clinical scenario). May Be Appropriate, median score 4 to 6: At times, an appropriate option for management of patients in this population due to variable evidence or agreement regarding the benefit-risk ratio, potential benefit based on practice experience in the absence of evidence, and/or variability in the population; effectiveness for individual care must be determined by a patient's physician in consultation with the patient on the basis of additional clinical variables and judgment along with patient preferences (ie, procedure may be acceptable and may be reasonable for the clinical scenario). Rarely Appropriate, median score 1 to 3: Rarely an appropriate option for management of patients in this population due to the lack of a clear benefit/risk advantage; rarely an effective option for individual care plans; exceptions should have documentation of the clinical reasons for proceeding with this care option (ie, procedure is not generally acceptable and is not generally reasonable for the clinical scenario). The level of agreement among panelists as defined by RAND was analyzed on the basis of the RAND/UCLA modified Delphi Panel method rule for a panel of 14 to 17 members. Ratings were considered to be in agreement when fewer than 5 panelists' ratings fell outside of the 3-point region containing the median score. Disagreement was defined as when 5 or more panelists' ratings fell in both the Appropriate and the Rarely Appropriate categories. Any clinical scenario having disagreement was categorized as May Be Appropriate regardless of the final median score.
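The score bands and the RAND/UCLA agreement rule described above are mechanical enough to express in code. The Python sketch below is purely illustrative and is not part of the AUC methodology; the function and variable names are ours, and the example ratings are hypothetical.

```python
from statistics import median

def classify_scenario(ratings):
    """Classify one clinical scenario from panel ratings (integers 1-9):
    median 7-9 -> Appropriate, 4-6 -> May Be Appropriate, 1-3 -> Rarely
    Appropriate, with any scenario showing disagreement recoded as
    May Be Appropriate regardless of the median."""
    med = median(ratings)  # odd-sized panels yield a whole-number median

    # Disagreement: 5 or more ratings fall in BOTH the Rarely Appropriate
    # (1-3) and the Appropriate (7-9) bands.
    low = sum(1 for r in ratings if 1 <= r <= 3)
    high = sum(1 for r in ratings if 7 <= r <= 9)
    if low >= 5 and high >= 5:
        return "May Be Appropriate (disagreement)"

    if med >= 7:
        return "Appropriate"
    if med >= 4:
        return "May Be Appropriate"
    return "Rarely Appropriate"

def panel_in_agreement(ratings):
    """Agreement rule: fewer than 5 ratings fall outside the 3-point band
    (1-3, 4-6, or 7-9) containing the median score."""
    med = median(ratings)
    band = range(1, 4) if med <= 3 else range(4, 7) if med <= 6 else range(7, 10)
    outside = sum(1 for r in ratings if r not in band)
    return outside < 5

# Hypothetical 15-member panel rating a single scenario
ratings = [8, 7, 7, 9, 8, 6, 7, 8, 7, 5, 8, 7, 9, 7, 8]
print(classify_scenario(ratings), panel_in_agreement(ratings))
```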
To limit inconsistencies in interpretation, the following assumptions and considerations should be applied when interpreting the ratings. Each test is performed, interpreted, and reported in compliance with published criteria for quality cardiac diagnostic testing, as provided by national laboratory accreditation standards and societal quality guideline documents, including the following. Exercise ECG Coronary artery calcium scans Stress echocardiogram Radionuclide myocardial perfusion imaging (MPI) CMR CCTA Invasive coronary angiography Radiation Use of these AUC assumes that each modality is locally available, performed on appropriate equipment, and interpreted by individuals with acceptable training and expertise. The diagnostic and prognostic value of a previous test generally decreases over time. The clinical status of the patient should be assumed to be valid as stated in the clinical scenario (eg, a thorough history has been obtained and a physical examination has been conducted such that an asymptomatic patient is truly asymptomatic for the scenario in question). The clinical scenarios in this AUC document are not intended for patients with acute conditions (such as acute coronary syndrome or acute decompensated heart failure), although they may be applicable to evaluating hospitalized patients undergoing an evaluation for CCD. All patients are receiving optimal standard care, including guideline-based risk factor modification for primary or secondary prevention of ischemic heart disease unless specifically noted. In the event of an equivocal or inconclusive noninvasive test (stress electrocardiogram [ECG], stress imaging, or CCTA), where further testing is clinically warranted, a different test modality should be performed. In the event of equivocal or inconclusive results on a coronary angiogram, physiological testing (eg, using fractional flow reserve [FFR] or nonhyperemic indexes, noninvasive stress testing, or intravascular ultrasound for left main coronary artery assessment) may be performed as needed. A variety of additional technologies are available to augment the diagnostic and prognostic information yielded by noninvasive imaging techniques (eg, computed FFR for CCTA, myocardial perfusion for stress echo, novel detector arrangements for single-photon emission computed tomography [SPECT], myocardial blood flow reserve for CMR and positron emission tomography [PET], etc.); however, these technologies are not always routinely available.
Details about when these technologies are appropriate is beyond the scope of this document, and individual ratings do not assume that these technologies were necessarily used or performed. Before performing a noninvasive stress imaging study, relevant diagnostic information should be reviewed for alternative explanations of the symptoms being evaluated . For example, before stress echo, the baseline resting imaging performed should include a screening assessment of cardiac structure and function, including global and segmental ventricular function, chamber sizes, wall thickness, and cardiac valves, unless assessment of these has already been performed. For CMR and CCTA, scout images should be reviewed for any relevant chest pathology. If the patient’s characteristics are captured under more than 1 clinical scenario, the presence of symptoms should generally be the primary criterion for navigating the flowchart in Fig. and test selection from the tables. Clinical scenarios that describe routine or surveillance imaging imply that the test is being considered solely because a period of time has elapsed, not because of any change in clinical circumstances or any need to consider a change in therapy (Table ). When considering testing that includes an exercise component, it should be assumed that the patient has no limitations that would preclude exercising to a symptomatic endpoint, achieving at least 80% of their age- and sex-predicted workload or ≥ 85% of their age-predicted maximal heart rate. Similarly, unless otherwise stated, it should be assumed that the ECG is interpretable. Selection for and monitoring of contrast agent use is assumed to be in accordance with published standards . The clinical scenarios are, at times, purposefully broad to cover an array of cardiovascular signs and symptoms and to account for the ordering physician's best judgment as to the risk of ischemic heart disease. Clear documentation of the reason for ordering the test or procedure should be included in the medical record. Additionally, there are likely clinical scenarios that are not covered in this document. In some clinical scenarios, it may be reasonable to either perform or not perform a test. To reflect this, a column labeled “defer testing” is provided to indicate that testing may be deferred at this time, until a change in the patient’s status warrants reappraisal. Individual test modalities have unique limitations as well as advantages that provide information supplementary to the detection of coronary artery disease and myocardial ischemia. In some cases, these limitations and advantages would make a specific test modality superior to others for an individual patient. Examples are listed in Table A. Multimodality-Specific Assumptions/Considerations Comparative Rating 18. Testing modalities are rated for their level of appropriateness specific to clinical scenarios rather than a rank order comparison against other testing modalities. The goal of this document is to identify any and all tests that are considered reasonable for a given clinical scenario. As such, more than 1 test type or even all tests may be considered “Appropriate,” “May Be Appropriate,” or “Rarely Appropriate.” 19. If more than 1 modality falls into the same appropriate use category, it is assumed that clinician judgment; test advantages and disadvantages (Table A); and available local expertise, facilities, and equipment will be considered to determine the optimal test for an individual patient. 20. 
Clinical scenario ratings contained herein supersede the ratings of similar clinical scenarios contained in previous AUC documents. Risk/Benefit 21. Each test modality considered in this document has inherent risks that may include but are not limited to radiation exposure, sensitivity to iodinated or gadolinium-based contrast agents, other bodily injury, and interpretation error. For any given patient, it is assumed that the ordering and performing clinicians have accounted for these individual risks in their choice of test. 22. Clinical scenarios, such as but not limited to, advanced malignancy, frailty, unwillingness to consider testing, technical reasons rendering testing infeasible, or comorbidities likely to markedly increase procedural risk are beyond the scope of this document but should be taken into consideration in test selection. These may relate to clinical appropriateness for revascularization. 23. Unless explicitly stated, it should be assumed that patients presenting with a specific clinical scenario are potential candidates for all of the test types and do not have any contraindications. Radiation Safety 24. Users of the AUC are aware that the generally applied assumption among experts in radiation biology and epidemiology is a linear no-threshold relationship between radiation exposure and subsequent risk of cancer and that radiation exposure for any given test will be as low as reasonably achievable (ALARA). Tests that impart ionizing radiation will be performed by laboratories that have adopted contemporary dose-reduction techniques . 25. Testing without radiation or a no-testing strategy should be considered for low-risk premenopausal women . Cost/Value 26. In selecting a test, clinical benefits are considered first. Cost and value may also be considered, although estimating these for an individual patient may be difficult due to: Differences in reimbursement depending on region, setting, and payer Differences in cost between cardiovascular testing options Differences in charges versus reimbursement Downstream or serial testing Cost to reduce an adverse event or to add quality-adjusted life expectancy Detection of noncardiac conditions, both positive (occult malignancy) and potentially negative (incidental findings) Evidence Review 27. Clinical scenarios were rated based on the best available data and were mapped to relevant clinical practice guidelines. 28. Newer technologies should not be considered more or less appropriate compared with older technologies. Appropriate test: A test in which the expected clinical benefit exceeds the risks of the procedure by a sufficiently wide margin, such that the procedure is generally considered acceptable or reasonable care. For diagnostic imaging procedures, benefits include incremental information that, when combined with clinical judgment, augments efficient patient care. These benefits are weighed against the potential negative consequences (risks include the potential hazard of missed diagnoses, radiation, contrast agents, and/or unnecessary downstream procedures). ASCVD: Clinical ASCVD is defined by a history of acute coronary syndrome; stable angina; coronary or other arterial revascularization; or stroke, transient ischemic attack, or peripheral arterial disease presumed to be of atherosclerotic origin. ASCVD risk estimation : For decision-making about appropriateness of testing, some clinical scenarios are based on ASCVD risk. 
Several different risk calculators are available for clinicians to use with individual patients to estimate the long-term likelihood of ASCVD events. Clinicians are suggested to use a calculator that has been validated in the population of patients they are evaluating. For North American populations, the ACC ASCVD Risk Estimator is recommended. Clinical scenario : A specific set of patient characteristics that define a unique situation for which cardiovascular testing may be considered. CCD : Diseases of the heart related to current or prior myocardial ischemia in a stable phase, including history of acute coronary syndrome, obstructive atherosclerosis with or without coronary revascularization, ischemia with no obstructive coronary atherosclerosis, or ischemic heart failure. Patients with CCD may be asymptomatic or may have active symptoms, including angina pectoris, dyspnea, and/or fatigue. These symptoms may or may not be related to exertion. Definitions for Table Likely anginal symptoms: Chest/epigastric/shoulder/arm/jaw pain, chest pressure/discomfort, when occurring with exertion or emotional stress and relieved by rest, nitroglycerin, or both. Less-likely anginal symptoms : Symptoms including dyspnea or fatigue when not exertional and not relieved by rest/nitroglycerin; also includes generalized fatigue or chest discomfort occurring in a time course not suggestive of angina (eg, resolves spontaneously within seconds or lasts for an extended period and is unrelated to exertion). Noncardiac explanation: An alternative diagnosis, such as gastroesophageal reflux, chest trauma, anemia, chronic obstructive pulmonary disease, or pleurisy, is present and is the most likely explanation for the patient’s symptoms. Definitions for Table Coronary artery calcium data and reporting system (CAC-DRS): A standardized reporting system to report the degree and extent of coronary artery calcification for either quantified measurements (eg, Agatston score) or visual estimates of coronary calcification. Coronary artery disease-reporting and data system (CAD-RADS): A standardized reporting system to provide greater consistency of reporting the degree of coronary stenosis measured on a CCTA. Abnormal ECG : An ECG with findings concerning for ischemia or prior infarction such as resting ST-segment depression or T-wave inversions, Q waves, or left bundle branch block. Normal exercise treadmill test : Adequate exertional effort with no evidence of ischemia and no reproduction of symptoms. Inconclusive exercise treadmill test : An exercise stress test that does not provide a sufficient level of confidence for clinical care, such as < 85% maximum predicted heart rate achieved, ST segments that are uninterpretable due to baseline abnormalities, or ST-segment changes that resolve rapidly or are nonspecific. Inconclusive stress imaging: A SPECT, PET, echo, or CMR imaging stress study that does not provide adequate or reliable information to allow a diagnosis or therapeutic strategies to be established to a sufficiently high level of clinical confidence (Table B). Normal stress imaging : No evidence of ischemia or infarction. Mild ischemia : Ischemia is present but affects < 10% of the myocardium on stress nuclear imaging, < 4 of 32 subsegments (epicardial and endocardial subsegments of 16 segments) on stress CMR, or < 3 of 16 segments on stress echo or stress CMR. Moderate to severe ischemia : Moderate to severe ischemia has been defined as an estimate of ≥ 5% annual risk of cardiac death or nonfatal MI. 
This level of risk correlates as follows: for stress nuclear imaging, ≥ 10% ischemic myocardium; for stress echo, ≥ 3 of 16 newly dysfunctional segments during stress; and for stress CMR, ≥ 4 of 32 subsegments with ischemic perfusion defects during vasodilation stress or > 3 of 16 segments with new or worsened dysfunction during exercise stages or progressive inotropic stress. Categories of invasive coronary angiography results: Mild or none: maximal coronary diameter stenosis is 0% to 39% Intermediate: maximal coronary diameter stenosis is 40% to 69% Obstructive: maximal coronary diameter stenosis is ≥ 70% OR left main coronary artery stenosis ≥ 50%) Invasive physiological testing : The results of coronary physiological testing are generally reported as continuous variables (ranging from 0–1). Although clinical studies of these tests have been performed using dichotomous cutpoints, the results of these tests should not be considered only dichotomously. Lower values correlate with more severe ischemia and worse clinical outcomes, and there may be values above a cutpoint that do not rule out myocardial ischemia. This definition does not assume that a comprehensive assessment for microvascular dysfunction was performed. Definitions for Table Incomplete revascularization : Coronary revascularization by percutaneous coronary intervention (PCI) or coronary artery bypass graft with suspected or known residual obstructive epicardial coronary artery stenosis that may or may not be amenable to revascularization, or unrevascularized coronary arteries following an acute coronary syndrome. Examples include an incomplete surgical or percutaneous revascularization (unrevascularized territories due to poor targets, chronic occlusion, or diffuse disease), prior MI without culprit artery revascularization, or prior MI with residual obstructive coronary artery disease (CAD) in a non-infarct-related artery. Similar to prior ischemic episode : Patients who are presenting with symptoms that are similar in character to those which occurred at the time of a prior acute coronary syndrome or stable angina event. Likely anginal symptoms : Chest/epigastric/shoulder/arm/jaw pain, chest pressure/discomfort, when occurring with exertion or emotional stress and relieved by rest, nitroglycerin, or both. Less-likely anginal symptoms : Symptoms including dyspnea or fatigue when not exertional or relieved by rest/nitroglycerin; also includes generalized fatigue or chest discomfort occurring in a time course not suggestive of angina (eg, resolves spontaneously within seconds or lasts for an extended period and is unrelated to exertion). Definitions for Table ASCVD risk : See definitions provided in Table . Nontraditional risk factors : In addition to traditional risk factors, there are several conditions that are associated with premature atherosclerosis or rapid progression of atherosclerosis. In some cases, these risk factors may also be associated with greater morbidity and/or mortality in the setting of an acute coronary syndrome. As such, the presence of such conditions may influence a clinician’s decision to evaluate a patient for the presence of coronary atherosclerosis or SIHD. Examples are provided in Table C. 
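The ischemia and angiography thresholds defined above translate directly into simple classification logic. The Python sketch below encodes the stress nuclear MPI ischemia bands and the invasive angiography categories as stated; it is an illustrative aid only, the function names are ours, and (per the definitions above) a "normal" study additionally requires the absence of infarction, which is not modeled here.

```python
def angiography_category(max_stenosis_pct, left_main_stenosis_pct=0):
    """Categorize invasive coronary angiography results using the bands above:
    0-39% mild or none, 40-69% intermediate, >=70% (or left main >=50%) obstructive."""
    if left_main_stenosis_pct >= 50 or max_stenosis_pct >= 70:
        return "Obstructive"
    if max_stenosis_pct >= 40:
        return "Intermediate"
    return "Mild or none"

def nuclear_ischemia_category(ischemic_myocardium_pct):
    """Grade ischemia on stress nuclear MPI: none, mild (<10% of myocardium),
    or moderate to severe (>=10%), per the definitions above."""
    if ischemic_myocardium_pct == 0:
        return "No ischemia"
    if ischemic_myocardium_pct < 10:
        return "Mild ischemia"
    return "Moderate to severe ischemia"

print(angiography_category(55))                               # Intermediate
print(angiography_category(45, left_main_stenosis_pct=50))    # Obstructive
print(nuclear_ischemia_category(12))                          # Moderate to severe ischemia
```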
Definitions for Table Incomplete revascularization: Coronary revascularization by PCI or coronary artery bypass graft with suspected or known residual obstructive epicardial coronary artery stenosis that may or may not be amenable to revascularization, or unrevascularized coronary arteries following an acute coronary syndrome. Examples include an incomplete surgical or percutaneous revascularization (unrevascularized territories due to poor targets, chronic occlusion, or diffuse disease), prior MI without culprit artery revascularization, or prior MI with residual obstructive CAD in a non–infarct-related artery. Prior high-risk PCI: Revascularization posing a higher-than-normal risk for restenosis or closure (eg, PCI of a diffusely diseased saphenous vein graft, treatment of recurrent in-stent restenosis) or a higher risk for adverse sequelae should restenosis occur (eg, left main coronary artery PCI or single remaining vessel/conduit). Definitions for Table Frequent premature ventricular contractions (PVCs): More than 30 PVCs per hour . Infrequent PVCs : Thirty or fewer PVCs per hour. Sustained ventricular tachycardia (VT) : Cardiac arrhythmia of consecutive complexes originating in the ventricles at a rate > 100 beats/min (cycle length: < 600 ms) lasting > 30 s or requiring termination due to hemodynamic compromise in < 30 s. Nonsustained VT : Cardiac arrhythmia of ≥ 3 consecutive complexes originating in the ventricles at a rate > 100 beats/min (cycle length: < 600 ms) that self-terminates in < 30 s and without hemodynamic compromise. Heart failure : Stages B, C, and D heart failure, as defined by the ACCF/AHA Guideline for the Management of Heart Failure . Syncope: A symptom that presents with an abrupt, transient, complete loss of consciousness, associated with inability to maintain postural tone, with rapid and spontaneous recovery. The presumed mechanism is cerebral hypoperfusion. There should not be clinical features of other nonsyncopal causes of loss of consciousness, such as seizure, antecedent head trauma, or apparent loss of consciousness (ie, pseudosyncope) . AUC = Appropriate Use Criteria CAD = coronary artery disease CMR = cardiac magnetic resonance CCTA = coronary computed tomography angiography ECG = electrocardiogram Echo = echocardiogram MPI = myocardial perfusion imaging PCI = percutaneous coronary intervention PVC = premature ventricular contraction SIHD = stable ischemic heart disease VT = ventricular tachycardia The final ratings for Multimodality AUC on the Detection and Risk Assessment of CCD are listed by clinical scenario in Tables , , , , , , and . The final score reflects the median score of the 15 rating panel members and has been labeled according to the categories of Appropriate (median 7 to 9), May Be Appropriate (median 4 to 6), and Rarely Appropriate (median 1 to 3) (Additional file ). The discussion section highlights further general trends in the scoring related to specific patient populations. See Tables , , , , , , . The foundation for this AUC document is the 2013 AUC for Multimodality Imaging in SIHD, one of the first documents to shift away from a test-modality–specific focus toward a clinical focus . In this revision, the writing group sought to produce a balanced document that offered ease of use and a comprehensive list of clinical scenarios. The writing group established a formal definition of CCD, which had not been done in prior ACC documents, to delineate the scope of the document. 
Substantial changes were made to the organizational flow chart, and some tables were simplified or removed. In a few instances, the writing group felt that expansion of scenarios was warranted to capture clinically relevant situations that were not acknowledged in the prior version. Because the ACC has a standalone AUC document being developed on the management of heart disease in the perioperative/periprocedural setting, those clinical scenarios were removed from this document. As with the prior version, this document refers only to patients with stable conditions, and a separate AUC addressing acute chest pain syndromes is being considered by the ACC. Because of these changes, this document consists of 20% fewer clinical scenarios compared with the prior iteration. Although ratings in this document supersede those in the 2013 document, it should be noted that the ACC has sponsored other AUC documents that may have some overlap with scenarios in this document. For example, the 2017 AUC for valvular heart disease provide recommendations on ischemia testing modalities in patients with syncope and palpitations. The American College of Radiology maintains many appropriateness documents that have a categorization structure that differs from the ACC’s. This represents an area of ongoing uncertainty for clinicians and for health policy because similar scenarios in documents developed through different methods may have discordant appropriateness ratings. Aside from changes in clinical scenarios, one of the most substantial changes in this version of the AUC is the inclusion of a “no testing” column alongside the noninvasive and invasive testing columns. In terms of precedent for this change, the 2018 AUC for peripheral artery intervention included “continue or intensify medical therapy” as an option alongside invasive management options. The writing group for the 2013 AUC of multimodality imaging for SIHD acknowledged in the discussion that a “no test at all” rating may also be considered an option for some clinical scenarios. The writing group for this document felt it was time to adopt a “no test” column to formally acknowledge that testing may be safely deferred in some situations. Rating of the “no test” option was omitted for selected scenarios where the writing group did not think it applicable. Clinicians should remain aware that the appropriateness of testing deferral, as with the appropriateness of other testing modalities, may change when there is a change in the patient’s clinical scenario. If such a change occurs, the appropriateness of deferring testing and other options should be evaluated under the newly applicable clinical scenario. The inclusion of the “no test” column introduces some novel considerations and potential implications. First, there are generally fewer data examining the clinical impact on outcomes and safety of not performing testing compared with performing testing. Clinical scenarios in which testing was considered but not pursued are difficult to capture in medical records, which makes evaluation of deferred testing challenging to audit. Second, the presence of a “no test” option provides an opportunity to engage in shared decision-making with patients, allowing personal values and preferences to weigh on the choice to perform a test. Third, the writing group strongly advises against use of this document and its ratings for making blanket insurance coverage or reimbursement decisions.
If both testing and “no test” are rated appropriate in a given clinical scenario, clinical decision-making should be informed by the individual patient’s situation. In this version of the AUC, the summary flowchart (Fig. ) has been rearranged with a reduced hierarchy to try to more closely follow the flow of clinical decision-making. This was intended to make navigation to the desired clinical scenario easier. The prior version of the AUC for the detection and risk assessment of SIHD noted in the assumptions, “If the patient’s characteristics are captured under more than 1 indication, the patient should be categorized according to the hierarchy provided in Fig. ” . In the current version, clinicians will have to rely on clinical judgment in situations where a patient fits into more than 1 clinical scenario. By starting the hierarchy with a yes/no question about symptoms, the document potentially favors those clinical scenarios that are more often rated as appropriate (in symptomatic patients) compared with other scenarios in which a patient is asymptomatic. The writing group suggests that when a patient fits more than 1 scenario, the scenario best matching the predominant clinical question should be applied. Throughout the writing process, the writing group had several discussions about whether to divide certain testing modalities into subtypes. For example, CT could be further identified as coronary CT angiography alone or with CT-based FFR, or nuclear MPI as PET or SPECT. Ultimately, this was not done for several reasons. First, although there are potential clinical reasons to perform 1 type of test over another, those reasons may not always be captured within the clinical scenarios. For example, if PET provides superior image quality to SPECT in patients with obesity, but the clinical scenarios do not specifically address testing in obese vs normal-weight patients, then the appropriateness ratings are not likely to be different and would add unnecessary complexity to the tables. Second, for the clinical scenarios that were included, the writing group did not think that identifying the specific subtypes within a given imaging modality would result in any substantial difference in the ratings (eg, for a patient with recurrent anginal symptoms after PCI, both SPECT and PET could be appropriate). Third, the addition of more columns could increase the complexity and reduce the usability of the tables. Fourth, essentially all modalities have subtypes, and the writing group did not believe it would be appropriate or beneficial to include 1 test modality subtype preferentially without including all subtypes as separate columns. The potentially relevant differences for individual imaging modalities are acknowledged in Table A and should be incorporated with clinical features, clinical judgment, and local availability and expertise when selecting a testing strategy. As a result of the effort to simplify application of the AUC in this version of the document, the terms for classifying angina were changed. The prior version of this document used the terms typical angina , atypical angina , and nonanginal symptoms , whereas this version of the AUC uses the terms likely anginal and less-likely anginal symptoms. Although atypical angina has a specific definition based on criteria from Diamond and Forrester’s symptom classification, this term is known to be applied incorrectly in clinical practice. 
For example, for patients with symptoms that may be ischemic, conscious or unconscious bias on the part of the clinician may result in the symptoms being labeled atypical to justify not performing a test. However, for patients with symptoms unlikely to have an ischemic origin, the term atypical angina can be used to justify testing. In Table , we have included a clinical scenario where a clear, noncardiac etiology is present to demonstrate for clinicians that testing should typically not be performed “just to be sure.” Due to the separate processes and the methodology specific to guideline and AUC development, the terms used in this document do not mirror the “cardiac” and “possibly cardiac” terms used in the 2021 chest pain guideline. For users of this AUC, the writing group considers the terms “likely anginal” and “cardiac” to be equivalent, as well as “less likely anginal” and “possibly cardiac.” In clinical scenarios for symptomatic patients with no prior testing, the recommendation to calculate the pretest likelihood of obstructive coronary disease has been removed (Table ). The primary reason for this change is that the pretest likelihood strategy, as described in the prior version of the AUC, does not perform well at identifying patients who could safely defer testing or those at high pretest likelihood of obstructive CAD. Contemporary cohort data has demonstrated how changes in the epidemiology of CAD warrant rethinking these traditional strategies . The writing group elected to use the simplified symptom profiles described earlier, recognizing that for many patients with symptoms, testing for CCD is appropriate. By adopting this strategy, this version of the AUC for imaging in CCD is the first to incorporate patient risk factors, not just age and sex, as relevant considerations when deciding on a test for CCD. The approach to symptomatic patients with prior testing has been redesigned in this AUC document (Table ). Based on the available literature on how AUC for CCD were being used in clinical practice, Tables 2.0 to 2.3 in the 2013 AUC were rarely used. By collapsing these scenarios into a single table, the flowchart was substantially simplified. The 2013 document used a cutoff of 90 days to define sequential tests performed as part of a continued evaluation for a given clinical presentation vs an older test with less clinical relevance. Although this is an important clinical distinction, the writing group believed that the 90-day time cutoff was arbitrary and elected to provide 1 table to cover all recommendations for sequential testing. Clinical scenarios related to the assessment of patients with prior revascularization have also been revised, now based on symptom status (Table ). Specifically, patients with prior revascularization are now categorized based on whether their symptoms are anginal or similar in quality to prior CCD episodes. This was done with the intent of acknowledging that patients with prior revascularization may experience a wide array of symptoms, some of which are more likely to be ischemic, and some of which are clearly noncardiac in origin. In the former, invasive testing may be warranted, but in the latter, ischemia testing can often be deferred. 
In light of the results of recent studies, such as the ISCHEMIA (International Study of Comparative Health Effectiveness With Medical and Invasive Approach) trial, either testing or deferral of testing may be suitable for symptomatic patients with prior revascularization based on their preferences and individual clinical situations. The clinical scenarios for asymptomatic patients without known ASCVD (Table ) are significantly modified from the prior document. Instead of using global CAD risk and ECG interpretability or the ability to exercise, these scenarios intended for ASCVD screening have been modified based on the categories of 10-year ASCVD risk and the presence of risk-enhancing factors. Prior chest radiation, coronary artery calcifications on chest imaging, and prior chemotherapy with vasotoxicity potential are included as additional considerations. The reason for these changes was to better align recommendations for CCD testing with the patient groups described in the clinical guidelines on prevention and the management of blood cholesterol. The remaining tables, Tables , , and , include a few additional clinical scenarios closing potential gaps in the prior AUC and acknowledging ongoing changes in clinical practice. In Table , scenarios have been added for assessing graft patency before redo sternotomy, for viability assessment, and for management of patients with or at risk for silent ischemia. Table now provides recommendations for unsupervised exercise prescriptions in patients with and without known heart disease. Last, Table adds guidance on screening for transplant vasculopathy, testing in new paroxysmal sustained VT and atrial flutter, and a new heading for cardio-oncology and assessment of patients with a history of chest radiation. This table includes scenarios for syncope that have changed to align this AUC document with the 2017 ACC/AHA/HRS syncope guideline, which provides recommendations for cardiovascular testing based on history, physical examination, and ECG. Because of these changes to the clinical scenarios, it is difficult to compare the ratings for individual scenarios and tests with those in prior documents (Table ). Substantial changes to scenarios for the assessment of patients with prior testing and prior MI/revascularization make comparisons to the prior document immaterial (Tables and ). Although patients without symptoms in Table are categorized in a different fashion than in the 2013 document, the rating panel felt that most testing is not likely warranted for these patients. One exception is CAC scoring, which has greater support across the spectrum of risk. Ratings in Tables and are largely unchanged. In Table of this document, many of the scenario ratings are identical to those from 2013. Testing in the setting of new-onset atrial fibrillation is generally considered rarely appropriate in this document, whereas some test options were previously rated as may be appropriate. Future Directions The ACC is well into 2 decades of publishing AUC to help guide clinicians on the appropriateness of tests and procedures for patients. We anticipate that these documents will continue to play an important role in day-to-day practice and may soon have a larger role in measuring quality at a health system level and through societal clinical registries.
Current decision-support systems are often difficult to navigate, and we are hopeful that electronic health record vendors will continue to work on strategies to implement AUC in a way that automatically gathers relevant data for making appropriateness determinations. At present, administrative data lack the clinical granularity necessary to capture the relevant details of clinical scenarios to apply appropriateness criteria. In the future, patient-reported symptom profiles may help enhance the patient voice and further automate the process. Limitations As with all previous versions of the AUC, there are limitations to the exercise of trying to simplify myriad patient presentations to a brief list of clinical scenarios. Some patients will inevitably not fit the precise definitions provided. The time scale for drafting and revising such documents means the recommendations will inherently lag behind published evidence. For example, work on developing the clinical scenarios and rating the test options preceded the publication of recent chest pain guidelines as well as the pending chronic coronary disease management guidelines by multiple years. Although the writing group worked internally with the ACC to eliminate any disagreements with these documents, they could not be inherently part of the development of these AUC. The ACC is developing new strategies to “chunk” guidelines and other documents so that they will be easier to update on a shorter timetable. The 2023 AUC for multimodality imaging in CCD has been substantially revised in an effort to make application easier and more closely aligned to how clinical decisions are made in practice. Special attention has been paid to aligning this document with clinical practice guidelines and contemporary scientific studies. Several innovations have been introduced, most notably a column of ratings for “no test,” reinforcing the concept that not every patient encounter warrants cardiovascular testing.
ACC President and Staff:
B. Hadley Wilson, MD, FACC, President
Cathy C. Gates, Chief Executive Officer
Joseph M. Allen, MA, Team Lead, Clinical Standards and Solution Sets
Amy Dearborn, Team Lead, Clinical Policy Content Development
María Velásquez, Project Manager, Appropriate Use Criteria
Grace Ronan, Team Lead, Clinical Policy Publications
Additional file 1. Guideline Mapping File. Additional file 2. Relationships with Industry and Other Entities (Comprehensive).
Transcriptomic and metabolomic insights into flavor variations in wild and cultivated
ef4ed589-6ba9-4e04-9531-876fc1b0d8dc
11953371
Biochemistry[mh]
A. bisporus is among the most popular and common edible mushrooms and has been widely cultivated worldwide, given its excellent functional properties. A. bisporus is characterized by good texture, high nutritional value, and unique flavor. Compared with other vegetables, it has higher protein content and lower fat content. Most importantly, it is widely thought to have certain medicinal values. A. bisporus has been cultivated for over 300 years. After a relatively long period of artificial culture, A. bisporus has been subject to artificial selection to a certain extent and has become different from wild A. bisporus . It is well established that there are differences in flavor between cultivated and wild A. bisporus , primarily due to their distinct chemical compositions. In addition, the protein, dietary fiber, vitamin, mineral, and amino acid composition of cultivated and wild mushrooms may differ. Nowadays, the majority of globally produced A. bisporus mushrooms are from cultivated sources. The challenge we face is how to cultivate mushrooms with superior flavor and higher nutritional value. Therefore, revealing the metabolites and underlying molecular mechanisms of cultivated and wild A. bisporus mushrooms is crucial for understanding their cultivation and enhancing the mushroom industry. Next-generation sequencing (NGS) technology is extensively used in the life sciences for genome sequencing, transcriptome sequencing, and metagenomics sequencing. Comparative transcriptome analysis is an effective way to compare gene expression patterns between different subjects and to provide insights into biological processes. By analyzing the sequencing data, we can screen the key differentially expressed genes and the biological processes involved and then reveal the molecular differences between wild and cultivated A. bisporus . Many transcriptome studies have been performed for mushroom species, including Lentinus edodes , Ganoderma lucidum , Agrocybe aegerita , Auricularia polytricha , Pleurotus eryngii subsp. tuoliensis and Cordyceps militaris . Transcriptome analysis is therefore an effective tool for revealing the differences between cultivated and wild A. bisporus . Detection of mushroom metabolites can reveal changes in their main flavor compounds. In addition, organic acids are important compounds that affect the accumulation of flavor substances in mushrooms. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) has proven to be an efficient and powerful tool for studying these products. Here, we present the first combined transcriptomic and metabolomic investigation of wild and cultivated A. bisporus . By analyzing 43 strains (23 wild, 20 cultivated) via RNA-seq and LC-MS/MS, we: (1) identify key differentially expressed genes (DEGs) and metabolites linked to flavor divergence; (2) construct gene-metabolite correlation networks to reveal regulatory hubs; and (3) uncover cultivation-specific metabolic shifts, including upregulated methionine permease genes (AGABI2DRAFT_188981/191000) and altered organic acid profiles. This study not only delineates the molecular basis of flavor differentiation but also provides actionable targets for precision breeding, addressing a critical bottleneck in sustainable mushroom industry development. Test mushroom strains and culture conditions for fruiting A total of 43 A. bisporus strains, divided into 23 wild strains (abbreviated as W) and 20 cultivated strains (abbreviated as C), were tested.
These mainly included wild ARP strains from the United States, wild strains from China, and traditional cultivated strains from Europe and the United States. The strains were stored and provided by the Institute of Edible Mushroom, Fujian Academy of Agricultural Sciences, Fujian, China. The strain numbers and sources are shown in Table S1. The tested strains were cultivated in the artificial-climate mushroom room of the Institute of Edible Mushroom, Fujian Academy of Agricultural Sciences. A plastic basket with a growing area of 40 cm × 50 cm, filled with 17.0 kg of wet compost, was used for plot cultivation. A total of 200 g of wheat spawn was used per strain. The humidity of the mushroom house was kept at 70–80%, and the temperature of the culture material was controlled at 24 ± 1℃. At the mushroom emergence stage, the humidity was controlled at 85–90%, the temperature of the mushroom house was controlled at 16–18℃, and ventilation time and volume were regulated. The covering soil thickness was 4 cm. Fresh fruiting bodies could be obtained 40 days after sowing. In the first flush, vigorously growing fruiting bodies were selected to observe the color and morphological characteristics of the fruiting bodies. Fruiting of representative test strains is shown in Fig. . Fresh fruiting body samples (10 g) were cut and quickly frozen in liquid nitrogen at -196℃ for further analysis. Three biological replicates were performed for each test strain, and subsequent assay experiments were carried out. Mushroom total protein and amino acid content determination The protein content in the samples was determined by the Kjeldahl nitrogen determination method (Kjeltec 8400 Kjeldahl Nitrogen Analyzer, Foss Company), and the amino acid content was determined by acid hydrolysis followed by an automatic amino acid analyzer (Hitachi L-8900 High-Speed Amino Acid Analyzer, Japan), using the method described by Jie et al. . RNA Sequencing and analysis RNA sequencing with total RNA extracted from the fruiting bodies was performed as described previously . Briefly, total RNA was extracted from the mushroom fruiting bodies using the RNeasy Plant Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer’s protocol. The extracted RNAs were then treated with DNase I. The DNase-I-treated RNA samples were purified using an RNeasy column (Qiagen, Hilden, Germany). RNA quality and quantity were analyzed by UV spectrophotometry and gel electrophoresis. The cDNA library was constructed using a TruSeq RNA sample prep kit v2 (Illumina, San Diego, CA, USA), and RNA sequencing on an Illumina HiSeq 4000 sequencing system (Illumina, San Diego, CA, USA) was performed by Biomics (Beijing) Biotech Co., Ltd (Beijing, China). The raw sequencing data were filtered to obtain clean data using Trimmomatic (v.0.33) with default parameters . We then used HISAT2 (v.2.10) to map the clean reads to the reference genome ( Agaricus bisporus var. bisporus H97, NCBI BioProject: PRJNA61005), which has a chromosome-level assembly, and employed StringTie (v.1.3.4) to calculate each gene’s FPKM value . All genes were annotated using local BLASTX programs against the Nr, SwissProt, GO and PFAM databases. The RNA-Seq data were subjected to Gene Ontology (GO, www.geneontology.org/ ) analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG, www.genome.jp/kegg/ ) analysis.
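For readers less familiar with the FPKM values reported by StringTie, the underlying normalization can be illustrated with a minimal sketch (our own illustration; StringTie's estimator additionally handles isoform assignment and multi-mapping reads):

```python
def fpkm(fragments_for_gene: int, gene_length_bp: int, total_mapped_fragments: int) -> float:
    """Fragments Per Kilobase of transcript per Million mapped fragments.

    FPKM = fragments_for_gene / ((gene_length_bp / 1e3) * (total_mapped_fragments / 1e6))
    """
    return fragments_for_gene * 1e9 / (gene_length_bp * total_mapped_fragments)

# Example: 500 fragments on a 2-kb gene in a library of 20 million mapped fragments
print(fpkm(500, 2000, 20_000_000))  # 12.5
```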
DESeq2 software (differential expression analysis for sequence count data, v.1.10.1) was used to identify differentially expressed genes; p-values were calculated with DESeq2's standard negative binomial model, and genes with P < 0.05 were considered differentially expressed. Metabolome sequencing and analysis A total of 100 mg of fruiting body sample was collected and powdered with an automated grinder (JXFSTPRP-24/32, Shanghai Netcom, China; at 60 Hz for 2 min) and an ultrasonic cleaner (SB-5200DT, Ningbo Scientz Biotechnology Co., Ltd, Ningbo, China; for 30 min). Chloroform (200 µL) was added for lipid extraction, followed by 30-min sonication. After that, the samples were centrifuged at 12000 rpm for 10 min at 4℃ and dried in a vacuum centrifuge concentrator (LNG-T98, Huamei Biochemistry Instrument, Taicang, China). Then, 1 mL of 50% pre-cooled methyl alcohol and 20 µL of L-2-chloro-phenylalanine were used for sample extraction, followed by cooling at -20°C for 2 min and grinding (at 60 Hz for 2 min). Homogenates were centrifuged, and the supernatants were collected, dried, dissolved in 50% pre-cooled methyl alcohol, vortexed for 60 s, ultrasonic-treated for 30 s, and finally centrifuged (at 12000 rpm for 10 min at 4℃) and dried. All samples were then treated with methoxamine hydrochloride in pyridine under rotation (2 min) and sonication (at 37°C for 90 min) . System stability and accuracy were validated using QC samples at an interval of 5 samples . MS raw data (total ion current, TIC) were converted into a usable file format using ChemStation (version E.02.02.1431, Agilent Technologies Inc). ChromaTOF (version 4.34, LECO, St Joseph, MI) was used to analyze the data, and the NIST and Fiehn databases were used to annotate the metabolites. After alignment with the Statistic Compare component, the ‘raw data array’ (.csv) was obtained from the raw data, including peak names, retention time-m/z and peak intensities. All internal standards and pseudo-positive peaks were removed . Data were log2 transformed and then imported into the SIMCA software package (14.0, Umetrics, Umeå, Sweden, https://www.sartorius.com/en/products/process-analytical-technology/data-analytics-software/mvda-software/simca ). Unsupervised principal component analysis (PCA) and orthogonal partial least-squares discriminant analysis (OPLS-DA, with 7-fold cross-validation and response permutation testing, 200 random permutations) were performed to visualize the metabolic differences between groups . Metabolites with variable importance in projection (VIP) > 1 and p-value < 0.05 by two-tailed Student’s t-test were used to identify differential metabolites. Metabolites between groups with |fold change (FC)| ≥ 1 were considered differential metabolites. The KEGG pathways associated with the differential metabolites were identified from the KEGG database ( http://www.genome.jp/KEGG/pathway.html ) with a threshold of corrected p < 0.05 . Redundancy analysis (RDA) was performed using CANOCO (version 5.0, Biometris, Netherlands). Quantitative real-time PCR (qRT-PCR) The samples were ground with liquid nitrogen, total RNA was extracted with a Magen total RNA extraction kit (R4151-02, Magen, Guangzhou, China), and cDNA was synthesized from 1 mg of total RNA via cDNA synthesis supermix (CAT: 11141ES60, Yeasen, Shanghai, China) according to the manufacturer’s instructions.
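As an illustration of how the stated cutoffs (VIP > 1, two-tailed t-test p < 0.05, and the fold-change criterion, interpreted here on the log2 scale) could be applied to a metabolite intensity table, a minimal sketch follows; the table layout, column names, and pandas-based workflow are our own assumptions rather than the SIMCA output format:

```python
import pandas as pd
from scipy import stats

def differential_metabolites(intensity: pd.DataFrame, vip: pd.Series,
                             wild_cols: list[str], cult_cols: list[str]) -> pd.DataFrame:
    """Flag metabolites meeting VIP > 1, t-test p < 0.05 and |log2 FC| >= 1.

    intensity: metabolites x samples table of log2-transformed intensities.
    vip: VIP scores per metabolite taken from the OPLS-DA model.
    """
    w, c = intensity[wild_cols], intensity[cult_cols]
    pvals = stats.ttest_ind(w.to_numpy(), c.to_numpy(), axis=1).pvalue
    log2_fc = w.mean(axis=1) - c.mean(axis=1)   # difference of log2 means = log2 fold change
    result = pd.DataFrame({"VIP": vip, "p_value": pvals, "log2FC": log2_fc})
    return result[(result.VIP > 1) & (result.p_value < 0.05) & (result.log2FC.abs() >= 1)]
```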
qRT‒PCR analysis was performed with three biological replicates and three technical replicates with Hieff™ qPCR SYBR Green Master Mix (Low Rox Plus, part number 11202ES08, Yeasen, Shanghai, China) on a QuantStudio 6 Flex PCR system (ABI). All amplifications consisted of denaturation for 10 s at 95 °C, followed by 40 cycles of 5 s at 95 °C and 30 s at the primer-specific annealing temperature. The specificity of each RT-qPCR reaction was tested using a dissociation curve (gradient from 60 °C to 95 °C). The sequences of the reference β-actin gene and gene-specific primer pairs and their amplicon sizes are shown in Table S5. For the analysis of the qRT-PCR output, the 2^−ΔΔCT method of relative quantification was used . The data are shown as the means ± standard deviations (SDs) of six independent experiments.
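To make the relative quantification step concrete, a minimal sketch of the 2^−ΔΔCT calculation is given below; the numeric CT values are invented for illustration, with β-actin as the reference gene as stated above:

```python
def relative_expression(ct_target_sample: float, ct_ref_sample: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """2^-ΔΔCT relative quantification.

    ΔCT = CT(target) - CT(reference); ΔΔCT = ΔCT(sample) - ΔCT(control/calibrator).
    """
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(delta_ct_sample - delta_ct_control)

# Example: a target gene in a cultivated sample vs a wild calibrator, β-actin as reference
print(relative_expression(24.0, 18.0, 26.5, 18.2))  # ≈ 4.9-fold higher in the sample
```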
Mushroom total protein and amino acids We initially analyzed the total protein content of all mushroom samples. Amino acid and protein content, as indicated by strains A325 and A319, were higher in the C group, whereas strains A409 and A408 showed higher levels in the W group. Overall protein content levels between the C and W groups did not exhibit significant differences (Fig. ). Sequence data summary Following quality control of the raw sequencing data, the percentages of Q20 and Q30 reads were 97.05% and 92.28%, respectively (Table S2), indicating the high quality of the sequencing data. The sequence alignment of the clean reads to the reference genome indicated that overall mapping rates in most samples ranged between 46.53% and 78.82%. The concordant pair alignment rate was 50–65% in most samples (Table S2). The percentage of reads with unique mapping sites in each sample was as high as 99%. Identification of DEGs A total of 103 DEGs were identified between wild-type and cultivated A. bisporus , with 45 and 58 DEGs highly expressed in the W and C groups, respectively (Fig. A). The expression of these DEGs is shown in Fig. B. Among the DEGs, AGABI2DRAFT_133726 (gamma-glutamylcyclotransferase, GGCT ) and AGABI2DRAFT_134468 (hypothetical protein AN958_09571) were the top two DEGs highly expressed in cultivated A. bisporus ; AGABI2DRAFT_62224 (protein disulfide-isomerase A1, PDIA1 ) and AGABI2DRAFT_68765 (H + antiporter, TC-CPA1 ) were the top two DEGs highly expressed in wild-type A. bisporus (Table S3). In addition, GO and KEGG enrichment analyses of the 103 DEGs were conducted. Significant enrichment in 7 key GO terms was found (Fig.
S1A), including cation transport (GO:0006812), amino acid transmembrane transport (GO:0003333), carbohydrate metabolic process (GO:0005975), proteolysis (GO:0006508), metabolic process (GO:0008152), transmembrane transport (GO:0055085), and oxidation-reduction process (GO:0055114). Sixteen DEGs were enriched in these GO terms (Fig. C). KEGG pathway analysis results (Fig. S1B) showed that the Phosphatidylinositol signaling system (ko04070), Inositol phosphate metabolism (ko00562), Fructose and mannose metabolism (ko00051) and Homologous recombination (ko03440) pathways were significantly enriched. DEGs including AGABI2DRAFT_139182, AGABI2DRAFT_191352, AGABI2DRAFT_203654, AGABI2DRAFT_188981, AGABI2DRAFT_192972 and AGABI2DRAFT_191000 were involved in these pathways (Fig. D). Identification of differential metabolites There were 44 differential metabolites between wild-type and cultivated A. bisporus (Fig. A, Table S4). A comparison of the levels of differential metabolites between W and C is shown in Fig. B. The top 10 differential metabolites according to content level were fumaric acid, 2-hydroxybutanoic acid, isoleucine, phenylalanine 1, uridine 2, proline, alpha-ketoglutaric acid, valine, O-phosphonothreonine 1 and palmitic acid. In the KEGG enrichment analysis (Fig. S2), Citrate cycle (TCA cycle) (ath00020), Alanine, aspartate and glutamate metabolism (ath00250), Aminoacyl-tRNA biosynthesis (ath00970), Valine, leucine and isoleucine biosynthesis (ath00290), and Valine, leucine and isoleucine degradation (ath00280) were significantly enriched. Five differential metabolites participated in these pathways, namely alpha-ketoglutaric acid, valine, fumaric acid, isoleucine and proline (Fig. C). The level of fumaric acid was the highest among all differential metabolites. Combined analysis of metabolome and transcriptome Redundancy analysis (RDA) using CANOCO showed that AGABI2DRAFT_181624 and O-phosphonothreonine 1 were positively correlated, and AGABI2DRAFT_212894 was associated with proline, uridine 2, and phenylalanine 1. Moreover, AGABI2DRAFT_191352 was positively correlated with 2-hydroxybutanoic acid. The results are shown in Fig. . RT-qPCR verification results In order to validate the differentially expressed AGABI2DRAFT_181624, AGABI2DRAFT_212894 and AGABI2DRAFT_191352 in our RNA-seq data, we performed RT-qPCR analyses of the expression of these genes. The expression trends of these three genes were similar to those of the transcriptome data (Fig. ).
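The combined analysis above was performed with RDA in CANOCO; as a much simpler illustration of how a single gene-metabolite association could be screened (not the RDA procedure itself, and with invented per-sample values), one could compute a Pearson correlation:

```python
import numpy as np
from scipy.stats import pearsonr

def gene_metabolite_correlation(expression: np.ndarray, metabolite: np.ndarray):
    """Pearson correlation between one gene's expression and one metabolite's level
    across the same set of samples. Returns (r, p)."""
    return pearsonr(expression, metabolite)

# Hypothetical per-sample values (e.g., FPKM of AGABI2DRAFT_191352 vs 2-hydroxybutanoic acid)
fpkm_values = np.array([12.1, 15.3, 9.8, 20.4, 18.7, 11.2])
metabolite_levels = np.array([0.8, 1.1, 0.6, 1.5, 1.4, 0.7])
r, p = gene_metabolite_correlation(fpkm_values, metabolite_levels)
print(f"r = {r:.2f}, p = {p:.3f}")
```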
The rising consumer demand for edible mushrooms, coupled with heightened food safety awareness, has positioned cultivated A. bisporus as a predominant choice in the market. However, artificial cultivation induces pronounced divergence between cultivated and wild strains, particularly in gene expression and metabolic profiles. While amino acids and proteins are traditionally considered pivotal drivers of mushroom flavor , this study revealed no statistically significant associations between these macromolecules and strain-specific flavor differentiation. This discrepancy may stem from the subtle flavor variations observed between cultivated and wild A. bisporus ,
implying that secondary metabolites likely exert a more pronounced influence on flavor nuances. Integrated transcriptomic and metabolomic analyses identified key differentially expressed genes (DEGs), notably AGABI2DRAFT_188981 and AGABI2DRAFT_191000, which encode high-affinity methionine permease (MUP1) and were upregulated in cultivated strains. Methionine, a sulfur-containing amino acid, is a critical precursor for flavor compounds in mushrooms . MUP1-mediated regulation of methionine uptake and metabolism may indirectly modulate flavor by altering the production of sulfur-derived metabolites (e.g., sulfides and thiols), which are known contributors to mushroom flavor profiles. Although MUP1 itself is not a direct flavor determinant, its metabolic role highlights a potential regulatory node linking cultivation-induced genetic changes to flavor diversification. These findings advance our understanding of how cultivation practices reshape the flavor landscape of A. bisporus and provide actionable targets for precision breeding of flavor-enhanced cultivars, thereby supporting sustainable development in the mushroom industry. Functional characterization identified AGABI2DRAFT_191352 as a critical gene encoding 6-phosphofructokinase (PFK, EC 2.7.1.11), the rate-limiting enzyme in glycolysis responsible for generating phosphoenolpyruvate (PEP) . PEP serves as a central precursor for the biosynthesis of diverse organic acids . The elevated expression of AGABI2DRAFT_191352 in wild-type mushrooms correlates with enhanced glycolytic flux, potentially driving higher accumulations of organic acids. Notably, redundancy analysis (RDA) revealed a significant positive correlation ( r = 0.82, p < 0.05) between AGABI2DRAFT_191352 expression and 2-hydroxybutanoic acid levels, suggesting that PFK-mediated glycolytic activity may directly modulate the synthesis of this flavor-related metabolite. Functional characterization also identified AGABI2DRAFT_212894 as a critical gene encoding a cytochrome P450 enzyme; cytochrome P450s, with their broad substrate range, catalytic versatility, and frequent participation in biosynthetic pathways, play an important role in the biosynthesis of fungal natural products . This gene is regulated during fruiting body development and maturation of A. bisporus . The function of AGABI2DRAFT_181624 is related to transport and catabolism; it provides energy for A. bisporus by participating in the metabolism of carbohydrates such as glucose and galactose, thereby supporting growth and development . According to the metabolomics results, fumaric acid, isoleucine, phenylalanine 1 and palmitic acid exhibited higher levels in W than in C, whereas 2-hydroxybutanoic acid, uridine 2, proline, alpha-ketoglutaric acid, and O-phosphonothreonine 1 were lower in W than in C. An increasing body of evidence suggests that organic acids impact the taste and aroma of mushrooms . The differences in these organic acids might be the key to flavor differences between cultivated and wild mushrooms. In this study, fumaric acid was the most abundant organic acid in A. bisporus . Fumaric acid exists naturally in bolete mushrooms, Icelandic moss and lichen, and human skin naturally produces the acid when exposed to sunlight . A synthetic form of fumaric acid is used as a food additive to enhance flavor and sourness . We found that fumaric acid might account for flavor differences between cultivated and wild mushrooms. In addition, uridine 2 exhibited the most pronounced fold change between wild-type (W) and cultivated (C) strains.
Previous studies have identified uridine in P. giganteus mushrooms, where it enhances phosphorylation of extracellular signal-regulated kinases (ERKs) and protein kinase B , and is hypothesized to mitigate neurodegenerative pathologies such as Alzheimer’s disease (AD) through promoting neurite outgrowth and synaptic plasticity . Intriguingly, our metabolomic profiling revealed significantly higher uridine abundance in cultivated A. bisporus (3.2-fold increase vs. wild-type, p < 0.01), underscoring its potential as a medicinally valuable metabolite in artificially propagated strains. The observed metabolic divergence between wild and cultivated A. bisporus likely stems from cultivation-induced metabolic reprogramming, particularly in response to substrate composition and environmental controls (e.g., humidity, temperature). This study elucidates the molecular and metabolic foundations of flavor differentiation and establishes a framework for precision breeding strategies, such as targeted gene editing or metabolic engineering, to enhance both organoleptic and nutraceutical properties in mushroom cultivars.
Developmental validation of a novel multiple genotyping assay with 24 Canine STR loci
b6dc5bac-f8f1-4c8b-be5a-8ad76cf58b39
10591528
Forensic Medicine[mh]
Introduction Dogs ( Canis familiaris ) are the most common household pets worldwide. In China, the number of domestic dogs in urban households exceeded 54 million in 2021 (pethadoop.com), not including rural households. This large population makes accurate individual identification and parentage testing crucial (Linacre et al. ), and such testing is also needed to evaluate evidence types (e.g. hair, feces) in criminal cases involving canines. People consider dogs constant companions and share a close relationship with them. Therefore, canine DNA in biological materials, such as saliva, blood, hair, or feces, which are abundant in daily surroundings, may sometimes be associated with criminal cases (Kun et al. ). Canine DNA can represent criminal evidence, whether a dog is directly involved or not (Pádár et al. ; ; Eichmann et al. ; Halverson and Basten ; Kanthaswamy et al. ; Tom et al. ). To exploit the potential information available from criminal evidence, a reliable method for typing canine DNA samples is required (Eichmann et al. ). In 2008, we proposed the Canine 11 A STR kit as one of the first commercially available canine STR genotyping tools in China (Du ; Du et al. ). The kit consisted of 11 canine autosomal short tandem repeat (STR) loci (PEZ1, PEZ2, PEZ3, PEZ5, PEZ6, PEZ8, PEZ12, FH2010, FH2054, FH2132, and FH2611). In 2011, four novel STR loci (PEZ15, PEZ20, PEZ21 and FH2079) with high polymorphism and one canine sex-determination marker, DAmel, were selected and added to the kit. This construction generated a new canine multiplex STR amplification system, the Canine 17 A STR kit, using five fluorescent dyes for labeling (Ye et al. ; Qian et al. ). This nomenclature system has been applied to over 536 canines from 15 breeds, with a combined power of discrimination (CPD) of 0.999 999 992 and a combined power of exclusion (CPE) of 0.995 026. In forensic cases involving limited DNA samples, implementing a multiplex polymerase chain reaction (PCR) targeting highly polymorphic marker sites enhances efficiency and gives superior-quality outcomes (Dayton et al. ). This implies that panels with more genetic markers can achieve greater validity in forensic casework, particularly given the higher likelihood of excluding potential matches due to the anticipated inbreeding prevalent across various dog breeds. This study aimed to develop a higher-quality canine STR amplification kit with improved detection efficiency. Recently, hundreds of canine STR markers have been discovered through many research efforts (Zenke et al. ; Ogden et al. ), and commercial assays such as the Canine Genotypes™ Panel 1.1 Kit (catalog number: F860S) and the Canine Genotypes™ Panel 2.1 Kit (catalog number: F864S) have been developed (Dayton et al. ; Kanthaswamy et al. ; Lee et al. ). Manufactured by Thermo Fisher Scientific, these two commercial STR amplification kits, consisting of 19 canine STRs, have been successfully used in fields such as canine breed identification, genetic analysis, and kinship analysis. Specific loci from these kits were used as references in developing the novel system. More specifically, we updated the previous system by supplementing eight additional loci (FH2328, VGL3112, PEZ17, FH3313, FH2088, FH2001, FH2017, and FH2107) and integrating them into a novel system, the Canine 25 A kit. The Canine 25 A kit is characterized by more novel markers and improved identification capability, making it a powerful tool for canine individual identification and paternity testing.
Multiple validation studies were conducted following the Scientific Working Group on DNA Analysis Methods (SWGDAM) guidelines to assess the performance of this new system. Materials and methods 2.1. Marker selection and primer design Twenty-four STR loci distributed across different chromosomes, together with a sex-determination marker, were selected and assembled into a single system. Except for locus PEZ3, all the STR markers contained tetranucleotide repeat motifs. The repeat motifs, size ranges, and chromosomal locations of these markers are presented in the corresponding table. Primer pairs were designed with amplicon sizes of 80–500 bp, based on the same parameters, using Primer 5.0. The primer specificity was verified by the Basic Local Alignment Search Tool (BLAST) function of GenBank on the National Center for Biotechnology Information (NCBI). Several pairs of primers were designed for each locus, but finally, only one pair exhibiting high efficiency and clean profiles with few pseudo peaks was identified as a candidate. To facilitate their arrangement in the multiplex, according to their amplicon sizes, the primers of the 24 STR loci and the sex-determining locus amelogenin were divided into five groups, and one primer of each pair was labeled with a fluorescent dye (FAM, HEX, TAMRA, ROX, or PUR). Internal standard segments for size analysis, Maker SIZ-500 (AGCU ScienTech, Wuxi, China), were labeled with VIG. The marker and dye configuration is shown in the corresponding table. 2.2. DNA samples Different types of biological canine samples, including blood, buccal swabs, and feces, were selected for testing. Remnants of samples (blood, buccal swabs, feces) following veterinary examination were provided by the Jiangxi Provincial Key Laboratory of Police Dog Breeding and Behavior Science. Samples were obtained from 500 dogs representing 16 breeds widespread in China, with the owners' consent. Samples used for species-specificity studies (from chickens, cattle, fish, mice, pigs, rabbits, sheep, Escherichia coli ( E. coli ), and humans) were collected from Homy Genetics Inc. (China). The blood spot samples were punched to 0.5 mm in diameter using a hole puncher for direct PCR amplification. The other biological samples, which could not be amplified directly, were extracted by the Chelex-100 or TIANamp Genomic DNA kit protocol (TIANGEN BIOTECH, Beijing, China). 2.3. Positive controls A single canine sample with good amplification and a complete STR genotype was chosen as the standard positive control to ensure the accuracy and reliability of canine profiling. The DNA of the positive control was extracted with the TIANamp Genomic DNA kit, and the extracted DNA concentration was quantified using a NanoDrop Spectrophotometer (Thermo Fisher Scientific, Waltham, USA) per the protocols. 2.4. Multiplex PCR amplification Coamplification of the 24 canine STR markers and one sex-related marker was performed on a Mastercycler® nexus thermal cycler (Eppendorf Corporate, Hamburg, Germany) with the following standard thermal cycling conditions: 2 min initial denaturation at 95 °C; 30 cycles of 30 s at 94 °C, 1 min at 60.5 °C, and 50 s at 72 °C; followed by a final extension for 60 min at 60 °C. The total reaction was optimized to 10 μL of volume, including 0.5–2 ng of DNA template, 2.0 μL 5× primer set (0.04–0.43 μM of each primer), 0.2 μL heat-activated C-Taq polymerase (5 U/μL), and 4.0 μL 2.5 × PCR mix (containing 125 mM Tris-HCl buffer, 125 mM KCl, 7.5 mM dNTPs, 5.0 mM MgCl 2 ). The volume was adjusted to 10 μL using ddH 2 O.
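To make the reaction set-up above easier to reproduce at the bench, the following sketch scales the 10 μL recipe to a batch of reactions; the ~10% pipetting overage is our own common-practice assumption, not a kit specification:

```python
# Per-reaction volumes (uL) for the 10 uL multiplex PCR described above.
RECIPE_UL = {
    "5x primer set": 2.0,
    "C-Taq polymerase (5 U/uL)": 0.2,
    "2.5x PCR mix": 4.0,
    # The DNA template (0.5-2 ng) and ddH2O make up the remaining volume per reaction.
}

def master_mix(n_reactions: int, overage: float = 0.1) -> dict:
    """Return master-mix volumes (uL) for n_reactions, with a pipetting overage."""
    factor = n_reactions * (1 + overage)
    return {component: round(vol * factor, 1) for component, vol in RECIPE_UL.items()}

print(master_mix(24))
```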
The C-Taq polymerase and PCR mix were from AGCU ScienTech, Wuxi, China. 2.5. Electrophoresis and analysis Samples for fragment separation were prepared as follows: the PCR product was first blended with a 10.0 μL mixture consisting of 9.5 μL deionized Hi-Di Formamide and 0.5 μL Maker SIZ-500 size standard (AGCU ScienTech, Wuxi, China). The samples were denatured at 95 °C for 3 min and then immediately chilled on ice for 3 min. Electrophoresis was performed on an Applied Biosystems 3130 Genetic Analyzer using POP4 Polymer (Thermo Fisher Scientific, Waltham, USA), with the spectral calibration file established using the 6 Dye Matrix Standards (Du et al. ). Fragment sizes and genotypes were analyzed with GeneMapper ID-X software (Thermo Fisher Scientific, Waltham, USA). 2.6. Construction of allelic ladders 2.6.1. Monocloning of the target alleles A single-plex PCR procedure was performed using non-fluorescently labeled primers to obtain the frequent alleles of each marker for the allelic ladders. The resulting PCR products were separated using 1.7% agarose gel electrophoresis and visualized by ethidium bromide staining. Subsequently, the target products were retrieved from the gel and purified. The isolated target fragments were cloned into the pMD18-T vector and transformed into E. coli DH5α, followed by overnight incubation in Luria-Bertani medium (Chen et al. ). Recombinant colonies were identified using the blue-white screening method. After plasmid extraction and gene sequencing, the recombinant plasmids were preserved, and the bacterial cultures were stored in glycerol. 2.6.2. Allelic fragment amplification and single-locus ladder preparation Each allelic fragment from the same locus was amplified using the recombinant plasmid as a template, resulting in 151 allelic fragments. These selected fragments were then individually combined to create 25 single-locus ladders, ensuring a balanced concentration based on the peak height ratio, with a target peak height ratio above 70% (Zhou et al. ). The ladders of each single locus were extracted and purified using chloroform and were subsequently stored in the dark at −20 °C. 2.6.3. Assembling of allelic ladders The allelic ladders of all 25 canine loci were combined and adjusted based on the average peak height ratio of each single-locus ladder to achieve an average peak height value of over 400 relative fluorescence units (RFUs). Additionally, the average peak height ratio between loci labeled with the same fluorescent dye was kept above 60%, as was the average peak height ratio between loci labeled with different fluorescent dyes. This careful adjustment and mixing of the allelic ladders ensured optimal performance and accurate analysis in subsequent experiments. 2.7. Developmental validation 2.7.1. PCR conditions/procedures PCR conditions for the Canine 25 A kit were optimized by testing ranges of reaction component concentrations, annealing temperatures, and cycle numbers. Control DNA was used as the template and maintained at 0.5 ng in each reaction. Meanwhile, the reaction components, including the primers, reaction mix, and C-Taq polymerase, were used in a series of concentrations, namely, 0.5×, 0.75×, 1×, 1.25×, and 1.5× (1× corresponds to the concentrations described in ). Furthermore, amplification was performed at different annealing temperatures (59, 59.5, 60.1, 60.5, 61.2, 61.7, and 62.5 °C) and cycle numbers (29, 30, and 31). 2.7.2.
Sensitivity Assay sensitivity was evaluated using serial dilutions of control DNA. One microliter of each DNA dilution was amplified in triplicate with the following template amounts: 2 ng, 1 ng, 500 pg, 250 pg, 125 pg, and 62.5 pg. 2.7.3. Species specificity Species specificity studies were performed using DNA samples from E. coli , humans, and different animal species (e.g. chickens, cattle, fish, mice, pigs, rabbits, and sheep). A DNA input amount of 1 ng was amplified with a Canine 25 A kit. 2.7.4. Mixture study Mixture analysis experiments were conducted using well-characterized canine DNA samples. The mixtures were prepared at ratios of 1:1, 1:2, 1:5, 1:10, and 1:20 to evaluate the impact of DNA mixtures on the analysis, with the total amount of DNA kept constant at 1 ng. Additionally, mixtures combining dog saliva and human blood at the ratios mentioned above were designed to demonstrate the applicability of the multiplex assay in forensic casework involving bites by domestic dogs. The human DNA within these mixtures was detected using the AGCU Expressmarker 16CS Kit (AGCU ScienTech, Wuxi, China). 2.7.5. Inhibitor study To evaluate the anti-interference capacity of the Canine 25 A kit, six common forensic inhibitors, hematin, hemoglobin, indigo, humic acid, calcium ion, and ethylene diamine tetraacetic acid (EDTA), were added to the PCR systems at the following concentrations: 25, 50, 75, 100, or 150 μmol/L hematin; 50, 75, 100, 150, or 200 μmol/L hemoglobin; 4, 8, 12, 16, or 20 mmol/L indigo; 10, 15, 20, 25, or 30 mg/L humic acid; 0.4, 0.8, 1.2, 1.6, or 2.0 mmol/L calcium ion; and 0.3, 0.6, 0.9, 1.2, or 1.5 mmol/L EDTA (Chen et al. ). The amount of control DNA was maintained at 0.5 ng (Chen et al. ). 2.7.6. Reproducibility Thirty randomly selected samples from the population study were analyzed independently by three separate laboratories, and the data were compared to determine reproducibility. Furthermore, control DNA was typed on both an Applied Biosystems 3130XL Genetic Analyzer and a 3500XL Genetic Analyzer to validate the consistency of genotype results. 2.7.7. Size precision The size precision and accuracy of the Canine 25 A kit were evaluated by collecting fragment size data from three full injections of the allelic ladder on the Applied Biosystems 3130XL Genetic Analyzer. The mean size was calculated for each allele, and the standard deviation was determined for the common alleles at each locus. 2.7.8. Balance of peak height Eighty different dog samples from the population study were used to calculate the peak height balance for allele pairs at each locus. Homozygous and heterozygous peaks were first normalized by averaging the RFU values of the two alleles of heterozygous genotypes and dividing the RFU value of each homozygous peak by 2. 2.7.9. Stutter analysis This study utilized a subset of 100 unrelated canine individual blood samples for stutter analysis. Stutter peaks were identified as peaks one repeat unit smaller or larger than the true allele. The peak height of each stutter peak was divided by the peak height of the corresponding true allele to calculate the stutter ratio. The analytical threshold for the minimum stutter peak height was set at 20 RFUs to ensure the inclusion of all stutter peaks. The average stutter value, standard deviation, and stutter threshold were determined for each analyzed sample. These parameters characterize the variability and distribution of stutter peaks within canine DNA profiles, which is essential for accurately interpreting and comparing genetic profiles in forensic and genetic research applications.
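As an illustration of how such per-locus stutter statistics translate into the analysis-software filter reported later (the average stutter value plus three standard deviations), a minimal sketch is given below. It is an assumed workflow, not the GeneMapper ID-X implementation, and the stutter ratios used are hypothetical.

```python
# Minimal sketch (assumed workflow, hypothetical data) of deriving a per-locus
# stutter filter as "mean stutter ratio + 3 standard deviations".
import statistics

def stutter_filter(stutter_ratios, n_sd=3.0):
    """Return (mean, sd, filter threshold) for one locus's observed stutter ratios."""
    mean = statistics.mean(stutter_ratios)
    sd = statistics.stdev(stutter_ratios)
    return mean, sd, mean + n_sd * sd

if __name__ == "__main__":
    # Hypothetical stutter ratios (stutter peak height / true-allele peak height)
    observed_ratios = [0.17, 0.21, 0.19, 0.18, 0.22, 0.20]
    mean, sd, threshold = stutter_filter(observed_ratios)
    print(f"mean={mean:.3f}, sd={sd:.3f}, recommended filter={threshold:.3f}")
```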
2.7.10. Case sample study Five casework samples were extracted with Chelex-100 resin and amplified using a Canine 25 A kit. DNA was obtained from bloodstains taken from three reference dogs (a Rottweiler, a German Shepherd Dog, and a Poodle) and from saliva swabs taken from a shirt and a pair of pants showing dog bite marks. The likelihood ratio (LR) was used for individual identification based on the DNA evidence. This entails evaluating the probability of observing the DNA evidence under two hypotheses: LR = P(E | S)/P(E | NS), where P(E | S) is the probability of observing the DNA evidence given that the suspect is the true source of the DNA sample, and P(E | NS) is the probability of observing the DNA evidence given that the suspect is not the true source of the DNA sample (Buckleton et al. ; Sands ). In this particular case, the DNA evidence was derived from stains on clothing, and the LR can be construed as follows: P(E | S) denotes the likelihood that the stains on the clothing were left by the suspect's dog, whereas P(E | NS) represents the probability that the stains were left by a random dog unrelated to the case. The LR measures the strength of evidence for individual identification, with higher LR values indicating more robust support for one hypothesis over the other. Bloodstains from one father–son duo family of the Shiba breed were also tested. The paternity index (PI) of each locus and the combined paternity index (CPI) were calculated. The STR allele frequencies used in the LR and PI calculations were taken from the population genetic analysis in this study. 2.7.11. Population studies A total of 500 canine samples of different types from the Nanchang Police Dog Base, including 432 bloodstains, 63 buccal swabs, and 5 fecal samples, were profiled using a Canine 25 A kit. Population statistical parameters such as allele frequencies, observed heterozygosity (Hobs), polymorphic information content (PIC), discrimination power (PD), power of exclusion (PE), typical paternity index (TPI), and the p values of Hardy-Weinberg equilibrium tests were analyzed with STRAF (Gouy and Zieger ; Khacha-Ananda and Mahawong ). Additionally, the combined discrimination power (CDP) and the combined power of exclusion (CPE) were calculated as CDP (or CPE) = 1 − ∏_{i=1}^{k} (1 − P_i), where P_i is the PD (or PE) value of the i-th locus (Chen et al. ).
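The combination formula above is simply one minus the product of the per-locus complements, which the following minimal sketch makes explicit. The per-locus values in the example are hypothetical placeholders, not results from this study.

```python
# Illustrative sketch of the combination formula used for CDP and CPE:
# combined value = 1 - product over all loci of (1 - per-locus value).
from functools import reduce

def combined_power(per_locus_values):
    """Return 1 - prod(1 - p_i) for per-locus PD (or PE) values p_i."""
    residual = reduce(lambda acc, p: acc * (1.0 - p), per_locus_values, 1.0)
    return 1.0 - residual

if __name__ == "__main__":
    example_pd = [0.90, 0.85, 0.92, 0.88]  # hypothetical per-locus discrimination powers
    print(f"Combined discrimination power over {len(example_pd)} loci: "
          f"{combined_power(example_pd):.6f}")
```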
Results and discussion 3.1. Allelic ladders By adjusting the mixing volume ratios of the allelic ladder, the peak height of each allele exceeded 400 RFUs, and the average peak height ratio between loci exceeded 50%; presents the electropherogram of the allelic ladder. 3.2. PCR conditions/procedures The recommended optimal reaction volume for the Canine 25 A kit is described in ; however, some laboratories decrease the total reaction volume to reduce costs.
Such a reduction in the reaction system may decrease the success rate of sample detection or even cause detection failure. The primers, reaction mix, and Taq polymerase are critical for multiplex PCR assays. Herein, we optimized and adjusted these three components to maximize the robustness and performance of the Canine 25 A kit. illustrates the DNA profiles detected at the different component concentrations. Complete profiles were obtained at all concentrations of C-Taq DNA polymerase. However, when the reaction mix concentration was reduced to 0.5×, only 32% of the loci were detected. Regarding primers, the percentage of detected loci was 0% and 56% when the concentration was reduced to 0.5× and 0.75×, respectively. Increasing the primer concentration above 1.25× resulted in the gradual appearance of spurious allelic peaks. displays the electrophoresis profiles corresponding to the different component concentrations. For a 10.0 μL reaction volume system, the recommended optimal conditions are as follows: 4 μL of PCR mix, 0.4 μL of C-Taq DNA polymerase, and 2 μL of primers. Implementing these conditions resulted in the efficient acquisition of DNA profiles, ensuring the robustness and performance of the novel system. presents the electrophoresis profiles corresponding to the annealing temperatures and cycle numbers. The greatest product yield occurred at annealing temperatures between 59 and 60.1 °C, with accurate products obtained between 61.2 and 62.5 °C. When the annealing temperature decreased to 60.1 °C, unwanted impurity allelic peaks appeared. Nevertheless, when the annealing temperature was increased to 62.5 °C, the peak heights of some loci decreased, and PEZ1/2 and DAmel showed allelic dropout. Therefore, profiles were efficiently and accurately obtained at an annealing temperature of 60.5 °C, which was the optimal and recommended annealing temperature. A total of 0.5 ng of control DNA was amplified at 29, 30, and 31 cycles on an Eppendorf Mastercycler Nexus Thermal Cycler. Full profiles were consistently obtained at all three cycle numbers, while the peak heights increased as the number of cycles increased. A cycle number of 30 was indicated to be optimal for the newly developed Canine 25 A kit because this cycle number maximized reagent sensitivity and minimized the occurrence of impurity peaks. 3.3. Sensitivity For the sensitivity test of the Canine 25 A kit, a series of control DNA amounts (2.0, 1.0, 0.5, 0.25, 0.125, and 0.0625 ng) were used as templates in a reaction volume of 25 μL ( ). Full profiles of the 24 STR loci and the sex-determining locus amelogenin were consistently obtained with the reaction containing 0.125 ng of DNA ( ). When the DNA input was less than 0.125 ng, allelic dropouts or amplification failure occasionally occurred, indicating insufficient template DNA. Consequently, in the 25 μL reaction system, the sensitivity of the Canine 25 A kit was determined to be 0.125 ng/25 μL. 3.4. Species specificity The newly developed Canine 25 A kit was tested using DNA samples from different animal species to determine any cross-reactivity (Liu et al. ). No amplification products were detected for samples from chickens, cattle, fish, mice, pigs, rabbits, sheep, E. coli, and humans ( ), which indicated that the system is robust and unlikely to be affected by the presence of genetic material from these species. 3.5.
Mixture study A mixture study was conducted to evaluate the ability of the Canine 25 A kit to identify the minor contributor (Ensenberger et al. ). In the dog-dog mixed DNA samples, all alleles were distinguishable in the 1:1, 1:2, and 1:5 mixtures; displays the DNA profiles of mixture samples with a mixing ratio of 1:5. However, in the 1:10 mixture, some alleles of the minor contributor were indistinguishable from stutter products, resulting in a loss of information. Notably, when the mixture ratio increased to 1:20, no profile of the minor contributor could be obtained. In the mixtures of dog saliva and human blood, all alleles were successfully identified at ratios of 1:1, 1:2, and 1:5 (with dog saliva DNA as the minor contributor). However, when the mixing ratio reached 1:10, more than half of the peaks began to diminish. As expected, complete human DNA profiles were obtained at all five ratios using the AGCU Expressmarker 16CS Kit; illustrates the genotyping of the mixture of dog saliva and human blood at a ratio of 1:10. These findings suggest that the Canine 25 A kit exhibits promising performance in achieving complete genotyping for dog-dog mixed samples and dog saliva-human blood mixed samples, even when the contribution of dog DNA is as low as 20%. 3.6. Inhibitor study As DNA samples from crime scenes usually contain inhibitors that may interfere with PCR amplification and sometimes even cause complete amplification failure, it is important to validate the robustness and anti-interference quality of the newly developed system (Green et al. ; Zhu et al. ). shows that complete profiles were obtained with concentrations up to 25 μmol/L hematin, 50 μmol/L hemoglobin, 4 mmol/L indigo, 10 mg/L humic acid, 0.8 mmol/L calcium ion, and 0.6 mmol/L EDTA. When the concentrations exceeded these levels, obvious inhibition of the system occurred, reflected as allelic dropout at some loci. 3.7. Reproducibility A reproducibility study was performed to validate the reliability and accuracy of the developed Canine 25 A kit by testing 30 samples in different laboratories. The results showed that the allele calls of all samples were identical across the different analyses. To demonstrate the suitability of the system for use on different capillary electrophoresis (CE) platforms, control DNA was examined using both the ABI 3130XL and the ABI 3730XL. The genotypes obtained at each locus were fully consistent between the two CE platforms ( ). 3.8. Size precision study The size precision results indicated that very little size variation was observed at each locus of the Canine 25 A allelic ladder, with most allele deviations around 0.05 bases. The maximum standard deviation was close to 0.19 bases, at locus VGL3112 ( ). These results demonstrate that the Canine 25 A kit has sufficient ability to ensure accurate genotyping of samples. 3.9. Balance of peak height Two different types of peak balance were evaluated to assess the overall performance of the Canine 25 A STR kit. The intracolor balance was calculated to assess the peak balance of loci marked with the same fluorescent dye (Zhou et al. ). illustrates that the peak height ratios between different loci within the same dye channel were 59.23% for the blue dye, 46.04% for the green dye, 41.37% for the yellow dye, 54.09% for the red dye, and 55.57% for the purple dye. displays the average RFU values of each group of loci.
The average RFU values were approximately 4071 for the blue dye, 3976 for the green dye, 2750 for the yellow dye, 3578 for the red dye, and 5459 for the purple dye. Therefore, the lowest peak height ratio between loci labeled with different fluorescent dyes was the minimum average RFU (2750) divided by the maximum (5459), equal to approximately 50.4%. These results showed that the Canine 25 A kit has a good balance. 3.10. Stutter analysis summarizes the relevant parameters for assessing stutter in the Canine 25 A kit, including the minimum, maximum, and average stutter values, standard deviation, and stutter threshold for each locus. Notably, the FH2107 locus exhibited the highest values across all parameters, indicating a higher propensity for stutter at this locus. The average stutter percentage for FH2107 was determined to be 19.40%. Accordingly, the recommended stutter filter for GeneMapper ID-X software in the Canine 25 A kit was defined as the average stutter value plus three standard deviations. 3.11. Case samples study Canine STR profiling with the Canine 25 A kit was used to test and analyze samples from a dog attack incident. All three bloodstains from the different suspected dogs yielded full canine-specific STR profiles, whereas the saliva stains on the shirt and pants provided only partial profiles ( ); reveals that the Rottweiler sample shared 15 identical loci with the shirt and 20 with the pants. The combined discrimination power (CDP) for these 15- and 20-locus sets was calculated to be 0.999 999 999 983 876 and 0.999 999 999 999 982, respectively, satisfying the system efficiency criteria for individual identification. The likelihood ratios were determined to be 1.9620 × 10^22 and 5.9361 × 10^28, supporting that the samples taken from the shirt, the pants, and the Rottweiler originated from one individual. Full profiles were obtained in the father-son duo paternity case, and no locus showed genetic incompatibilities ( ). The paternity index value per marker was calculated, and the final cumulative paternity index was 4.8568 × 10^8, confirming the parentage in the tested family. 3.12. Population studies Full DNA profiles were successfully obtained from the different sample types of bloodstains, buccal swabs, and feces. depicts the forensic parameters of the 24 canine STR loci in 16 canine breeds, revealing no significant deviations from Hardy-Weinberg expectations after Bonferroni correction ( p > 0.05/24 = 0.0021), except for the FH2079 locus in Schnauzer, FH2088 in Rottweiler, PEZ15 in Chinese Kunming dog, FH2328 in Golden Retriever, PEZ2 in Poodle, and PEZ6 and PEZ8 in Labrador Retriever. The PIC reflects a marker's ability to reveal polymorphism in a population; FH2107 and FH3313 showed high genetic polymorphism, with PIC values over 0.6 in each canine breed (Khacha-Ananda and Mahawong ). illustrates detailed information on these breeds and their cumulative paternity indices. The CDP values for each breed exceeded 0.999 999 999 999, and the CPE was over 0.9999. These results suggest that the Canine 25 A kit is suitable for individual identification and paternity testing, providing high accuracy and reliability.
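To make the population-genetic quantities above concrete, the sketch below computes two of them, expected heterozygosity and PIC, directly from allele frequencies. It is purely illustrative: the study obtained these statistics with STRAF, and the allele frequencies in the example are hypothetical.

```python
# Illustrative computation of per-locus expected heterozygosity and PIC from
# allele frequencies (hypothetical values; the study used STRAF for these statistics).
from itertools import combinations

def expected_heterozygosity(freqs):
    """He = 1 - sum(p_i^2)."""
    return 1.0 - sum(p * p for p in freqs)

def pic(freqs):
    """Polymorphic information content: 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    return (1.0 - sum(p * p for p in freqs)
            - sum(2.0 * (pi ** 2) * (pj ** 2) for pi, pj in combinations(freqs, 2)))

if __name__ == "__main__":
    example_freqs = [0.35, 0.25, 0.20, 0.15, 0.05]  # hypothetical allele frequencies
    print(f"He  = {expected_heterozygosity(example_freqs):.3f}")
    print(f"PIC = {pic(example_freqs):.3f}")
```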
Conclusion The Canine 25 A kit was developed to enhance canine individual identification and parentage determination. This multiplex system examines 25 canine loci (24 STR markers plus a sex-determining marker) and allows full genotyping with DNA template amounts as low as 0.125 ng.
Several tests were conducted to validate the analytical performance of this kit, including PCR condition, cross-reactivity, and stability studies, all yielding consistently reliable profiles. The validation study firmly established that the Canine 25 A kit exhibits high sensitivity, high inhibitor tolerance, the ability to detect the canine contributor within a mixture, species specificity, and precise genotype determination. In the canine population analysis, the Canine 25 A kit demonstrated its effectiveness by generating complete and distinct STR profiles for 500 individuals sourced from various sample types and representing 16 different breeds. It demonstrated a high level of accuracy and reliability, with CDP values surpassing 0.999 999 999 999 for each breed and a CPE exceeding 0.9999. These results indicate that the Canine 25 A kit is suitable for precise and dependable individual identification and paternity testing. Furthermore, the kit was successfully applied to casework samples from real cases, including a dog attack incident and a father-son paternity case. The genetic evidence derived from the Canine 25 A kit contributed significantly to resolving these cases by furnishing critical investigative insights. The Canine 25 A kit has thus proven robust and effective for canine forensic testing, and its application should contribute to the progression of canine forensic genetics.
Potential of uPAR, αvβ6 Integrin, and Tissue Factor as Targets for Molecular Imaging of Oral Squamous Cell Carcinoma: Evaluation of Nine Targets in Primary Tumors and Metastases by Immunohistochemistry
fe1edf44-74b5-4bb7-9b25-23ce92eae8e1
9962929
Anatomy[mh]
Despite advances in diagnostic techniques and postoperative treatment, poor survival and high recurrence rates remain for patients with oral squamous cell carcinoma (OSCC). The primary curative treatment is surgery, where adequate resection margins (>5 mm) are one of the most important prognosticators. Achieving radical resection margins is challenging when the tumor is surrounded by multiple functionally and aesthetically critical structures and the border between the tumor and normal tissue is not clearly delineated. This is reflected in a positive margin rate of 12–30% for OSCC, one of the highest rates among all solid tumors. Additionally, the detection and removal of regional lymph node metastases by neck dissection is a challenge due to a significant risk of occult microscopic disease that is not detected by conventional preoperative imaging. Currently, there are no established real-time intraoperative imaging techniques for distinguishing healthy tissue from tumor tissue in OSCC. Surgeons rely on preoperative imaging and intraoperative visual and tactile information. Intraoperative margin assessment may be performed using frozen section microscopy, which is time-consuming and prone to sampling and interpretation errors. Molecular imaging is a rapidly emerging field for the diagnosis and treatment of cancer, particularly head and neck cancer, in which several targets and modalities have been studied and are under development. Due to advancements in imaging hardware and fluorophore biochemistry, targeted fluorescence guided surgery (FGS) is one of the most promising real-time intraoperative imaging techniques. Fluorophores with excitation and emission in the near-infrared (NIR) spectrum, such as indocyanine green (ICG) and IRDye800CW, have been investigated in particular because of their relatively high penetration depth compared with other wavelengths. Despite intensive research, no clinically approved tumor-specific imaging agents for head and neck cancer surgery are currently available. The identification of biomarkers with a high and homogenous expression in tumor tissue and minimal expression in normal tissue is essential for the development of new molecular imaging targets in head and neck cancer. The vascular endothelial growth factor receptors 1 and 2 (VEGFR1 and VEGFR2) play important roles in tumor angiogenesis. A high expression of both receptors has been reported in OSCC, and several studies have investigated these receptors as targets for molecular imaging in different cancers. Integrin αvβ3 is another receptor expressed by tumor cells that plays an important role in tumor angiogenesis and molecular imaging, and it has been explored in several different cancers with promising results. Integrin αvβ6, a member of the same family, has been more thoroughly studied. Integrin αvβ6 is important for cell migration as it facilitates cell-to-cell and cell-to-extracellular matrix adhesion. In OSCC, integrin αvβ6 has been found to be upregulated, especially at the invasive margin, and involved in different hallmarks of cancer including epithelial to mesenchymal transition, invasion, and migration. The epithelial cell adhesion molecule (EpCAM), like the integrins, is a cell adhesion receptor implicated in metastasis. It has been identified as being overexpressed in several malignancies, including OSCC, and several studies have already investigated the use of both fluorescence and radionuclide probes.
Cathepsin E and Poly(ADP-ribose)polymerase-1 (PARP-1) are both intracellular enzymes that have been shown to be overexpressed in a variety of malignancies. PARP-1 has been examined as a PET-imaging target and a target for fluorescence imaging in OSCC, whereas Cathepsin E expression in OSCC has not been previously described. However, a fluorescence probe has been developed for Cathepsin E and tested in vivo. The urokinase-type plasminogen activator receptor (uPAR) is a GPI-anchored cell membrane receptor that turns plasminogen into plasmin at the cell surface, thus degrading the extracellular matrix. uPAR has been found to be upregulated in most solid cancers, where it facilitates cell invasion and metastasis, and a high expression has been associated with poor prognosis and metastases. The tissue factor, a transmembrane glycoprotein that stimulates the extrinsic coagulation pathway, is thought to have a significant role in tumor progression. An overexpression of the tissue factor has been reported in several malignancies and is associated with poor clinical outcomes. Our aim was to investigate the immunohistochemical (IHC) expression of the nine imaging targets mentioned above in both primary tumor and matched metastatic tissue from OSCC to assess their potential as targets for molecular imaging. For a subgroup of patients, tissue from recurrent disease was also evaluated. 2.1. Patient Characteristics In this population of 41 patients with OSCC, the median age at diagnosis was 58 years (range 23–81 years), and 26 (63%) of the patients were male. The majority of tumors (73%) were moderately differentiated, and the tumors were located in the floor of the mouth (56%) and the oral tongue (44%). All pathologic T-stages were represented. The majority of tumors were in stage T1 or T2 at the time of surgery, and 38 patients (93%) had histologically confirmed lymph node metastases. Surgery aiming at radical resection was the first-line treatment for all patients. 2.2. Immunohistochemical Staining Primary tumor tissue was obtained from all 41 patients. In a number of patients, there was insufficient remaining tumor tissue to perform IHC staining for all nine targets, and normal mucosa was absent or present in only some sections. Formalin-fixed, paraffin-embedded (FFPE) blocks containing metastatic tissue were available for 28 patients, while local recurrence tissue was obtained from eight patients. A representative image of each target's immunohistochemical staining is shown in , and the three most promising biomarkers in matched tumor samples from the same patient are shown in . The intensity, proportion, and total immune staining scores for all targets are shown in . An overview of the final expression category of all biomarkers in primary tumors and metastases is illustrated in and , respectively. 2.2.1. Integrin αvβ6 Integrin αvβ6 expression was seen in nearly all tumor samples (97%), with strong membrane and cytoplasmic staining in most tumor cells. There was a distinct demarcation between tumor cells and immune cells in the lamina propria and the surrounding tissue in the submucosa. The staining was homogenous in 80% of all tumor samples. The median staining scores (interquartile range) for primary tumor, lymph node metastases, and local tumor recurrence were 12 (12–12), 12 (9.75–12), and 12 (12–12), respectively. Except for weak staining of muscle cells and moderate staining of salivary gland ducts, no other normal cells in the subepithelial layers were positive.
Integrin αvβ6 was also expressed in normal epithelium. 2.2.2. uPAR The overall expression rate was 97% with highly tumor-specific staining, which was rated as homogeneous in 51% of the samples. uPAR was expressed in 23/24 metastases (96%). Both membrane and cytoplasmic staining were found in tumor cells. The total immune staining scores for primary tumor cells, lymph node metastases, and local tumor recurrence tissue were 6 (6–9), 6 (4–8), and 6 (6–9.75), respectively. Normal epithelium exhibited no staining, except for in four cases where weak epithelial staining was seen. In one case, moderate staining of a lichen planus lesion was observed in the periphery of the tumor. There was a clear contrast between tumor and surrounding tissue at the deep tumor margin. Weak to moderate staining was observed in granulocytes. 2.2.3. Tissue Factor The overall expression rate of tissue factor in tumor tissue was high (86%), but only with a homogenous pattern in 3% of tumor samples. In half of the primary tumor samples, tissue factor showed moderate to intense expression. In lymph node metastases, expression was mainly weak and moderate. Staining scores for primary tumor cells, lymph node metastases, and local recurrence tumor tissue were 6 (2.5–7.5), 2 (1–5.5), and 4 (0–6), respectively. Normal epithelium expressed tissue factor in approximately 80% of the samples, although the staining in this compartment was mostly weak. Salivary duct and acini cells also showed a weak expression of tissue factor. 2.2.4. PARP-1 A high overall expression rate was seen for PARP-1 (97%), with positive staining of tumor nuclei, albeit heterogeneously. For primary tumor cells, lymph node metastases, and local recurrence tumor tissue, the staining scores were 6 (4–9), 6 (6–9), and 6 (4–8), respectively. Nevertheless, the staining was not very tumor-specific, as several normal cells were also stained. Lymphocytes, endothelium, muscle tissues, nerve fibers, salivary gland tissues, plasma cells, and normal epithelium exhibited variable nuclei staining. 2.2.5. VEGFR1 All tumors were positive for VEGFR1, but the staining was not tumor-specific and contrasted poorly with the normal stroma and epithelium. The VEGFR1 staining scores for primary tumor, lymph node metastases, and recurrent tumor tissue were 8 (6–8), 8 (6.5–8), and 8 (3–6), respectively. Macrophages, plasma cells, nerve fibers, endothelium, muscle tissues, and salivary gland tissues had expression of VEGFR. 2.2.6. EpCAM EpCAM was expressed in 57% of all tumor samples, but only 3% exhibited a homogenous pattern. In tumor cells, membrane and cytoplasmic stains were seen. The intensity of EpCAM positive tumors varied but was generally weak to moderate. Total IHC scores were 0.5 (0–2.5), 1.5 (0–3), and 1 (0–6) for primary tumor cells, lymph node metastases, and local recurrence tumor tissue, respectively. Rarely were EpCAM-positive macrophages and plasma cells observed. Normal epithelium exhibited no staining. 2.2.7. VEGFR2 The overall expression rate of VEGFR2 was 79%, with no tumors displaying homogenous expression pattern. The VEGFR2 antibody staining was present in the cytoplasm of the tumor cells, although it was mainly weak. The staining scores for primary tumor tissue, lymph node metastases, and local recurrence were 2 (1–4), 2 (1–4), and 2 (0–2), respectively. Moderate to weak expression was also seen in normal oral squamous epithelium in 29% of samples. No expression was seen in the stroma surrounding tumor. 2.2.8. 
Cathepsin E and Integrin αvβ3 Only one primary tumor and three lymph node metastases showed Cathepsin E expression. No expression of integrin αvβ3 was observed in primary tumors, metastases, or tissue from local recurrence. The staining scores for both biomarkers for primary tumor cells, lymph node metastases, and local recurrence tumor tissue were 0 (0–0), 0 (0–0), and 0 (0–0). 2.3. Intensity of Staining in Normal Oral Mucosal Epithelium vs. Tumor Tissue The mean staining intensity score between normal epithelium and tumor tissue was compared for all samples where both components were present. The staining intensity was significantly higher in tumors compared to normal epithelium in uPAR ( p < 0.001, n = 37), VEGFR2 ( p = 0.002, n = 41), VEGFR1 ( p = 0.001, n = 41), PARP-1 ( p = 0.003, n = 40), and tissue factor ( p < 0.001, n = 47). No difference in staining intensity between tumor tissue and normal epithelium was seen for integrin αvβ6 ( p = 0.380, n = 47) or EPCAM ( p = 0.130, n = 39). 2.4. Biomarker Expression in Primary Tumor Compared to Lymph Node Metastases and Tissue from Local Recurrence (T-Site) We examined the correlation between total immune staining scores in primary tumors and lymph node metastases for each target in cases where tissue from both locations were available. We identified 28 primary cancers with accessible tissue from lymph node metastasis. All targets with tumor staining exhibited a positive Spearman rank correlation value. However, only uPAR (spearman correlation = 0.554, p = 0.014), tissue factor (spearman correlation = 0.615, p = 0.001), and VEGFR2 (Spearman correlation = 0.765, p < 0.001) had a significant positive correlation between total immune staining scores in primary tumor and lymph node metastases. Due to small numbers of cases with recurrence, no significant correlation was found between the total immune staining scores in primary tumors and tumor tissue from local recurrence, but a tendency toward positive correlation was seen for uPAR (spearman correlation = 0.395; p = 0.510), EpCAM (spearman correlation = 0.111, p = 0.834), and PARP-1 (spearman correlation = 0.064, p = 0.905).
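As a point of reference for readers who wish to reproduce this type of paired analysis, the short Python sketch below shows how the tumor-versus-normal intensity comparison and the primary-tumor-versus-metastasis correlation described above could be computed with SciPy. The score arrays are hypothetical placeholders, not data from this study, and the original analysis was performed in SPSS.

```python
# Minimal sketch of the paired analyses described above (hypothetical data).
# Assumes SciPy is installed; the study itself used SPSS, not this code.
from scipy.stats import wilcoxon, spearmanr

# Paired staining-intensity scores (0-3) in tumor vs. matched normal epithelium
tumor_intensity  = [3, 2, 3, 2, 3, 1, 2, 3, 2, 3]   # hypothetical values
normal_intensity = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1]   # hypothetical values

stat, p_paired = wilcoxon(tumor_intensity, normal_intensity)
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p_paired:.4f}")

# Total immune staining scores (0-12) in primary tumors and matched lymph node metastases
tis_primary    = [6, 9, 6, 12, 4, 8, 6, 9]           # hypothetical values
tis_metastasis = [6, 8, 4, 12, 2, 6, 6, 9]           # hypothetical values

rho, p_corr = spearmanr(tis_primary, tis_metastasis)
print(f"Spearman correlation: rho={rho:.3f}, p={p_corr:.4f}")
```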
In this study, we have evaluated nine potential molecular imaging targets from 41 OSCC patients with tissue samples from the primary tumor, lymph node metastases, and local recurrence. This is, to the best of our knowledge, the first study to investigate and compare multiple potential targets in both primary OSCC tumors and their metastases. Based on immunohistochemical expression levels and expression patterns in the tumor, normal epithelium, and surrounding tissue, it was revealed that uPAR, integrin αvβ6, and tissue factor represent attractive molecular imaging targets in OSCC due to high overall expression rates of 97%, 97%, and 86%, respectively. The high expression rates of uPAR (96%) and integrin αvβ6 (100%) in lymph node metastases indicate a potential in FGS for detecting lymph node metastases during sentinel lymph node biopsy or neck dissection, which could potentially spare healthy nodes. We found a highly tumor-specific uPAR expression in most tumor samples (97%), with a moderate to intense staining in both primary tumors and metastases. Our results are in accordance with previous immunohistochemical studies that have also found a high tumor-specific expression of uPAR in OSCC, with an absence of staining in the surrounding normal squamous epithelium and weak expression in tumor-associated inflammatory cells (macrophages, neutrophils, and fibroblasts), with a sharp demarcation at the deep tumor margin . Interestingly, our current study demonstrated uPAR expression in 96% of metastases, which indicates that combined targeted strategies against the tumor as well as metastatic disease seem possible. Different molecular imaging modalities have been explored for uPAR. In clinical trials, uPAR-targeted PET imaging using a peptide-based tracer has been studied for several cancers including OSCC, where a prognostic value was demonstrated . No studies have yet investigated the diagnostic potential of uPAR-targeted PET imaging in OSCC, but a Phase II clinical trial is currently underway (NCT02960724). Few clinical studies have been conducted on FGS using uPAR-directed probes. In a cell-line-based xenograft proof-of-concept study conducted at our institution, it was shown that uPAR-targeted optical near-infrared fluorescence imaging using ICG conjugated to AE-105 can be used to identify small lymph node metastases during surgery . Boonstra et al. also investigated uPAR-targeted FGS in cell-line-based xenograft models with an antibody-based tracer (hybrid ATN 658) conjugated to a fluorophore (ZW800-1), and showed that this modality could also identify primary tumors and lymph node metastases . Clinical trials investigating uPAR-targeted FGS are ongoing in patients with oral cancer, lung cancer, and glioblastoma (EudraCT no. 2022-001361-12, 2021-004389-37 and 2020-003089-38). The tissue factor also demonstrated a tumor-specific expression, but at a lower rate (86%) and with a more heterogeneous pattern than uPAR. The expression of the tissue factor in lymph node metastases was lower than in the primary tumor tissue.
These results are consistent with similar immunohistochemistry studies on primary tumor tissue from oral and oropharyngeal squamous cell carcinoma, which found tissue factor expression rates of 58% and 76%, respectively . As, an imaging target tissue factor has been poorly investigated in OSCC, but the potential in several other cancers has been explored. In preclinical studies, the tissue factor has been investigated as a target for FGS, SPECT, and PET using tissue factor-specific monoclonal antibodies in both anaplastic thyroid cancer, glioblastoma, and pancreatic cancer xenografts with promising effect . In 2021, an antibody drug (tisotumab vedotin)-targeting tissue factor was approved by FDA for treatment of metastatic cervical cancer . Subsequently the tissue factor-targeted PET-imaging with a protein (FVIIa) labeled with 18 F was successfully tested first in a human study and proposed as a future diagnostic tool prior to tissue factor-targeted treatment . The high expression of the tissue factor in OSCC and the recent development of tissue factor-targeted tracers in other solid cancers makes it a promising imaging agent in OSCC. Integrin αvβ6 was also highly expressed in our study, with a clear contrast at the deep tumor margin. However, a high integrin αvβ6 expression was also seen in normal squamous cell epithelium without a significant difference in the intensity score between a tumor and normal epithelium. Our findings suggest that molecular imaging drugs targeting integrin αvβ6 may provide a distinct contrast at the deep margin but less at the superficial margins. These results are in line with those obtained by Baart et al., who investigated the immunohistochemical expression of integrin αvβ6 in both OSCC and cutaneous squamous cell carcinoma of the head and neck . They also proposed integrin αvβ6 as a target for FGS in OSCC, especially due to the clear discrimination at the deep margin and compared to EGFR, they found less staining of the normal epithelium. Integrin αvβ6 has been studied as a PET-imaging target in different cancers. In 2019, Hausner et al. successfully performed a first in human studies by exploring PET/CT with a radiolabeled integrin αvβ6-binding peptide in patients with metastatic colon, breast, and pancreas cancer . Later, Quigley et al. tested a Ga-68-labeled peptide (Ga-68-Trivehexin) for human PET/CT imaging of head, neck, and pancreatic cancer, with results showing a high tumor-specific uptake and no uptake in tumor-associated inflammation . Integrin αvβ6 has, to our knowledge, not been tested as a target for fluorescent imaging in OSCC patients. However, Ilyia et al. showed imaging potential in in vitro head and neck cancer models with quantum dots conjugated to an integrin αvβ6-specific peptide . A human trial by de Valk et al. has studied integrin αvβ6-targeted near-infrared fluorescent peptides (cRGD-ZW800-1) in 12 patients with colon carcinoma and was able to show cancer-specific imaging in both open and laparoscopic surgery . Studies investigating integrin αvβ6 as a target for fluorescent imaging in OSCC have not yet been published, but a clinical trial with cRGD-ZW800-1 (NCT 04191460) is planned to investigate whether this modality can improve the rate of adequate surgical resection margins in OSCC. 
PARP-1 showed mostly moderate and moderate to high expression levels in the tumor nuclei, but it appears less suitable as an imaging target compared to uPAR, αvβ6, and tissue factor, owing to the non-specific staining of several different cell-types in the lamina propria and submucosa as well as the staining of normal squamous epithelium. Even though some expressions of PARP-1 are present in normal tissues, this biomarker might not be excluded as a target for molecular imaging, because the density of the nuclei in tumor cells are higher compared to normal tissues . Kossatz et al. recently investigated a topically applied PARP-1-specific fluorescence agent for the use of early diagnosis of OSCC in a Phase 1 study with 12 patients, where the fluorescence signal showed a tumor to normal ratio > 3 . However, the topical approach is probably confined to early stage disease or screening of mucosal lesions, as the penetration depth is limited (300 μm in the trial by Kossatz et al.). VEGFR1 and VEGFR2 did not appear promising for imaging purposes in our study, as their expression was limited, and the tumor specificity was low. No studies have yet examined the molecular imaging of these targets in OSCC, but different angiogenesis inhibitors for the treatment of head and neck squamous cell carcinoma have been thoroughly investigated, with bevacizumab being the most promising . This study has some limitations. First, a biomarkers appropriateness as an imaging target is determined by several factors in addition to its overexpression. The target selection criteria system has been suggested as a tool to identify potential imaging targets and consists of seven different criterions. However, several of these are either difficult to measure (tumor to normal ratio greater than 10) or questionable (internalization of the tracer) . Second, immunohistochemistry has several inherited limitations, including the selection of an antibody clone, which can affect the intensity and proportion of the stained tumor tissue substantially. In addition, both a manual and semi-quantitative scoring method were used, and several different scoring systems exists. This is a subjective estimate and interobserver variability is unavoidable. Third, the small sample size of tissues from lymph node metastases and tissues from local tumor recurrence compared to primary tumors limits the interpretation of the results. This study does not provide new diagnostic methods in pathology to diagnose OSCC earlier than with current methods, but rather focuses on the potential future targets for molecular imaging. 4.1. Patient and Tissue Selection From an existing, well-defined database consisting of patients diagnosed with OSCC between 2000–2011 and surgically treated at the department of Otolaryngology, Head and Neck Surgery and Audiology at Rigshospitalet (Copenhagen, Denmark), we randomly selected 41 patients. Microscopy slides were retrieved from the archives of the Department of Pathology and one FFPE tissue block containing both tumor tissue and normal epithelium were selected from each patient for following IHC staining. Of the 41 patients, 28 patients also had available tissue from lymph node metastases and 8 patients from recurrent disease. Clinicopathological data were obtained from medical and pathology reports. The 7th edition of the TNM Union for International Cancer Control (UICC) staging system was used. 4.2. 
Selection of Imaging Targets Through literature search, we identified nine targets with previously described overexpression in several cancers, including head and neck, and for which there is a potential for rapid translation into clinical settings due to earlier research/probe development. The following biomarkers were selected: integrin αvβ6, tissue factor, poly(ADP-ribose) polymerase 1 (PARP-1), urokinase plasminogen activator receptor (uPAR), vascular endothelial growth factor receptor 1 (VEGFR1), epithelial cell adhesion molecule (EpCAM), vascular endothelial growth factor receptor 2 (VEGFR2), Cathepsin E, and integrin αvβ3. Immunohistochemical staining for cytokeratin 5 (CK5) was used to visualize tumor location. Despite its great imaging potential, epidermal growth factor receptor (EGFR) was not included as it is very well characterized in OSCC and clinical trials with targeted tracers are currently being performed (NCT03134846 and NCT03733210). 4.3. Immunohistochemistry The expression of all targets was determined for both the primary tumor, metastasis, and tissue from local recurrence. Tumor tissue had been fixated in 10% formalin solution at room temperature for 24 h and then embedded in paraffin at the time of collection. FFPE blocks were stored at room temperature. Tissue sections of 4 μm were cut and IHC staining with integrin αvβ3, integrin αvβ6, tissue factor, and EPCAM were performed using a semi-automated autostainer, Ventana Benchmark Ultra (Roche Diagnostics). Manual staining was performed for the following biomarkers: Cathepsin E, PARP-1, uPAR, VEGFR1, and VEGFR2. Antibodies, reagents, and methods used for IHC analysis are listed in . Briefly, the slides were incubated at 60 °C for 60 min before being deparaffinized in HistoClear solution, rehydrated in graded ethanol, and submerged in water. Different antigen retrieval methods were used depending on the target. All antibodies were used at optimal dilutions, which were determined using positive and negative control staining (data not shown). Secondary staining with HRP-conjugated antibody was performed by incubation for 30–40 min. The reaction was visualized with Envision DAB+ for the manual staining and with DAB+ chromogen solution for the autostainer. Digital pictures for were obtained using Zeiss Axioscan with 10 × zoom. 4.4. Assessment of Immunohistochemical Staining Two specialized head and neck pathologists (GL and AF) reviewed and scored all samples blinded to clinical data. In the event of a disagreement, individual slides were examined together to obtain a consensus score. Each sample was assessed according to highest staining intensity in tumor compartment, proportion of stained malignant tumor tissue in the total tumor area, expression pattern in tumor tissue (homogenous or heterogeneous), and intensity in normal epithelium. Proportion and intensity scores were generated using a point system: 0% (0), 1–10% (1), 11–50% (2), 51–75% (3), and 76–100% (4), and none (0), weak (1), medium (2), and strong (3), respectively. The staining intensity of normal epithelium around the tumor tissue was scored in the same way. The proportion and intensity scores for tumor tissue were multiplied to provide a single combined score and a total immune staining score (TIS), which is similar to previous studies . This resulted in a score ranging from 0 to 12, which was divided into four final expression categories: 0 = absent; 1–5 = low; 6–8 = intermediate; and 9–12 = high expression. 
For each target, the proportion of patients categorized as low, intermediate, and high expression was calculated. The expression rate was calculated as the proportion of samples with low, intermediate, and high expression. 4.5. Statistical Analysis Statistical analysis was performed using IBM SPSS statistics 25.0. The median and interquartile range of the staining score were calculated for primary tumor, lymph node metastases, and recurrence. Wilcoxon signed-rank test was used to compare the intensity of immunohistochemistry staining of tumor to normal oral mucosal epithelium. Correlation between total immune staining scores in primary tumor and in lymph node metastases was tested using Spearman’s correlation test. Results were considered statistically significant at the level of p < 0.05. Bar charts were made using GraphPad Prism version 9.3 for PC, GraphPad Software, La Jolla, California, USA.
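To make the scoring scheme described in Section 4.4 concrete, the following Python sketch implements the proportion score, intensity score, total immune staining score (TIS), and final expression category using the cut-offs given above; the example values at the end are hypothetical and only illustrate the arithmetic.

```python
# Minimal sketch of the total immune staining score (TIS) described in Section 4.4.
# Thresholds mirror the point system in the text; example values are hypothetical.

def proportion_score(percent_stained: float) -> int:
    """Proportion of stained tumor tissue: 0% (0), 1-10% (1), 11-50% (2), 51-75% (3), 76-100% (4)."""
    if percent_stained <= 0:
        return 0
    if percent_stained <= 10:
        return 1
    if percent_stained <= 50:
        return 2
    if percent_stained <= 75:
        return 3
    return 4

INTENSITY = {"none": 0, "weak": 1, "medium": 2, "strong": 3}

def total_immune_staining_score(percent_stained: float, intensity: str) -> int:
    """TIS = proportion score x intensity score, giving a value from 0 to 12."""
    return proportion_score(percent_stained) * INTENSITY[intensity]

def expression_category(tis: int) -> str:
    """Final categories: 0 = absent; 1-5 = low; 6-8 = intermediate; 9-12 = high."""
    if tis == 0:
        return "absent"
    if tis <= 5:
        return "low"
    if tis <= 8:
        return "intermediate"
    return "high"

# Hypothetical example: 80% of tumor cells stained with strong intensity -> TIS 12, "high".
tis = total_immune_staining_score(80, "strong")
print(tis, expression_category(tis))
```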
In conclusion, uPAR, integrin αvβ6, and tissue factor are promising imaging targets for OSCC. Molecular imaging based on a single target that could be used for both pre- and intraoperative imaging of a primary tumor, lymph node metastases, and, in cases of recurrence, recurrent disease would be a powerful tool for the diagnosis and treatment of OSCC.
The incidence of radiolucent lines in cemented attune total knee arthroplasty– a retrospective clinical and radiological study
8bc23fdb-5eff-4d0f-a544-53d56e51fc5f
11926027
Surgical Procedures, Operative[mh]
Total knee arthroplasty (TKA) has become a successful and safe treatment option for end stage osteoarthritis of the knee. However, aseptic loosening remains the most common reason for revision surgery in the long term [ – ]. The Attune™ knee arthroplasty system was introduced based on the clinically approved implant design of the P.F.C.™ Sigma ® knee in order to improve clinical outcome, patient satisfaction and implant survival. There are different configurations available for the Attune total knee system with a mobile bearing cruciate retaining (CR) and posterior stabilized (PS) rotating platform, as well as a CR, PS and a medial stabilized (MS) fixed bearing tibial platform. In addition, the Attune knee is available as cemented and cementless TKA. Despite good initial results reported in the literature for the cemented Attune TKA , some authors have raised concerns regarding higher revision rates due to early debonding and incomplete seating of the Attune tibial component . In a cohort study conducted by Lachiewicz et al., a revision rate of 11.5% for the Attune TKA at an average follow-up period of 30.3 months was reported. Among these revisions, tibial component loosening was observed in 17 out of 19 knees, accounting for 90% of the revised cases . Hoskins et al. reported radiological findings for the cemented Attune TKA with a 23.8% incidence of radiolucent lines at an average follow-up period of 21 months (range 3–51 months) . Various factors can contribute to aseptic implant loosening. While late aseptic loosening is often associated with wear-related problems such as osteolysis due to excessive polyethylene wear, early loosening is usually a consequence of an inadequate initial implant fixation . For example, a poor cementation technique with insufficient cement-interdigitation, leg axis malalignment, or patient specific factors such as high body weight or pronounced osteoporosis, as well as implant specific design characteristics can contribute to an early implant debonding . The detection of radiolucent lines on plain radiographs can be an indication for implant loosening at an early stage, especially if they progress over time . Staats et al. found a significantly higher incidence of radiolucent lines for the Attune knee compared to its predecessor, the P.F.C. knee, 12 months after implantation . In addition, Jaeger et al. observed an increased risk for incomplete seating of the Attune tibial component in an experimental biomechanical study . However, the impact of these findings on the clinical performance of this implant design remain unclear. Therefore, the aim of the present study was to report the clinical, functional and radiological results of the cemented Attune total knee arthroplasty from a non-designer center and to investigate the rate of radiolucent lines at a 4-year median follow-up. Our hypothesis was that cemented TKA using the Attune knee system demonstrated good clinical and functional results with low revision rates and acceptable rates of RLL at short- to mid-term follow-up. This study was a single-center, retrospective cohort study investigating a consecutive cohort of 165 patients, who underwent cemented TKA using the Attune knee system at our institution between February 2014 and December 2017. The local ethics committee approved the study (No. S-804/2019) and written informed consent forms were obtained from all patients. 
The inclusion criteria were adult patients (> 18 years of age) with severe osteoarthritis of the knee who had received a cemented Attune total knee prosthesis at our department at least 24 months ago. The exclusion criteria were patients who did not consent to participate in the study, legally cared for patients, and patients with comorbidities that impaired the ability to give consent (e.g. dementia, intellectual disability, psychiatric illness) or language barrier. Based on the retrospectively analyzed data set, the patients were either contacted by phone or invited by letter to participate in the study as part of the routine follow-up examination. At the latest follow-up, clinical and radiological parameters as well as postoperative complications were analyzed and validated clinical outcome scores (PROMs) were assessed as described below. Operative technique All surgeries were performed with the patient under spinal or general anesthesia using a medial parapatellar approach and a standard surgical technique according to the manufacturer’s recommendations using the INTUITION™ instrumentation. A femur-first measured resection technique was used in all patients. Distal femoral resection was performed using the intramedullary jig for varus/valgus adjustment and the extramedullary resection guide was used for tibial resection. Extension and flexion gap assessments were performed using the spacer blocks of the Intuition™ instrumentation. According to the intraoperative findings either a cruciate retaining or posterior stabilized femoral component was used. The original design of the cemented Attune fixed bearing tibial component was used in all cases. A tourniquet was applied during cementation and high-pressure pulsatile saline lavage irrigation of the bone was performed prior to cementation. The vacuum mixed bone cement was applied on both the tibial and femoral component as well as on the bone surface via cement gun pressurization. The tibial component and the femoral components were inserted in a single step. The patella was selectively resurfaced, according to the intraoperative findings, if required. The same postoperative rehabilitation protocol with early mobilization and immediate full weight-bearing as tolerated was applied for all patients. Clinical and radiographic evaluation Clinical outcome parameters were determined both before and after surgery using clinical assessments and questionnaires. The following scores were assessed during the latest follow-up: Oxford Knee Score (OKS) , Veterans RAND 12 Item Health Survey (VR-12) , Knee injury and Osteoarthritis Outcome Score (KOOS) , and the clinical and functional American Knee Society Score (AKS) . The AKS score was categorized into four groups: very good result (90 to 100 points), good result (80–89 points), satisfactory result (70–79 points), and unsatisfactory result (< 70 points). Overall patient satisfaction regarding the result of the knee surgery was assessed using a 4-point verbal rating scale (very satisfied, satisfied, neutral, or dissatisfied). Furthermore, subjective patient satisfaction and pain assessment were evaluated using the “Forgotten Joint Score (FJS)” , University of California at Los Angeles (UCLA) activity score and the 11-point Numerical Rating Scale (NRS) . All these instruments are validated, reliable and widely used in clinical outcome studies. The range of motion as well as coronal leg axis alignment were clinically measured using a goniometer. 
An anatomical valgus angle of 5° to 10° was considered a neutral leg axis, 0° to 4° as mild varus, and < 0° as severe varus deformity. An anatomical valgus angle of 11° to 15° was categorized as mild valgus, and > 15° as severe valgus deformity. At the latest follow-up, standard anteroposterior and lateral knee radiographs were taken under fluoroscopy to ensure an appropriate evaluation of radiolucent lines around the tibial and femoral component. Fluoroscopically guided radiography has proven to be superior for the detection of radiolucent lines following unicondylar and total knee replacement when compared to conventional radiographs . Radiolucent lines were defined as periprosthetic radiolucencies less than 2 mm in width with a sclerotic border, which were located at either the cement-bone or cement-implant interface and showed no signs of progression on serial radiographs. Two independent investigators (RS, TR) assessed all radiographs on the basis of the Modern Knee Society Radiographic Evaluation System . Figure illustrates the radiological classification system used for the standardized assessment of radiolucent lines. Data management and statistical analysis All preoperative and postoperative data were documented in an Excel spreadsheet (Microsoft Excel 2019). The graphical representation was done with Excel (Microsoft Excel 2019). Statistical analysis was performed using the program SPSS Statistics for Windows (version 25.0; SPSS IBM Corp., Chicago, IL, USA). Data were evaluated descriptively as arithmetic mean, standard deviation, median, minimum, and maximum. All data were checked for normal distribution using the normality test by D’Agostino and Pearson . Student’s t-test was used to compare means between two groups. Wilcoxon test and McNemar’s test were applied to compare groups when data were not normally distributed. Kaplan-Meier survivorship analysis was performed with revision for any reason as the endpoint. The level of significance was set at p < 0.05.
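For illustration only, the sketch below encodes the clinical leg-axis categories defined above and the AKS result groups mentioned earlier as simple Python functions. The cut-offs are taken directly from the text; the assumption that angles are recorded in whole degrees reflects the goniometer-based clinical assessment and is not stated explicitly in the original.

```python
# Minimal sketch of the categorical cut-offs used in this study (illustrative only).
# Cut-offs are taken from the text; angles are assumed to be whole degrees.

def leg_axis_category(anatomical_valgus_deg: int) -> str:
    """Clinical leg axis from the anatomical valgus angle (goniometer, whole degrees)."""
    if anatomical_valgus_deg < 0:
        return "severe varus"
    if anatomical_valgus_deg <= 4:
        return "mild varus"
    if anatomical_valgus_deg <= 10:
        return "neutral"
    if anatomical_valgus_deg <= 15:
        return "mild valgus"
    return "severe valgus"

def aks_category(aks_points: int) -> str:
    """American Knee Society Score result groups as defined in the methods."""
    if aks_points >= 90:
        return "very good"
    if aks_points >= 80:
        return "good"
    if aks_points >= 70:
        return "satisfactory"
    return "unsatisfactory"

# Hypothetical examples
print(leg_axis_category(7))   # -> "neutral"
print(aks_category(85))       # -> "good"
```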
Demographic data of the study cohort In this study, a consecutive series of 165 patients (177 knees) were retrospectively evaluated following TKA with the original Attune knee system. The mean age of the patients at time of surgery was 64.7 ± 10 years. In 51% of the patients TKA was performed on the right side and in 49% of patients on the left side. Seven patients (4%) died during the follow-up period of causes unrelated to the knee operation.
Eighteen patients (11%) refused to participate in the study and 25 patients (15%) were lost to follow-up. Twelve patients received an Attune TKA on both sides. At the final follow-up, a total of 115 patients (69% female and 31% male) with 127 TKAs were available for clinical and radiological assessment at a mean follow-up of 47.8 ± 12.9 months. Survival analysis The revision-free survivorship of the Attune knee with the endpoint revision for any reason was 98.9% (95% confidence interval [CI]: 96–99%) in this cohort at a mean follow-up of 4 years. No patient had to be revised due to aseptic implant loosening. In one patient, a DAIR procedure with an inlay exchange due to a suspected infection was performed after 14 months. Another patient was revised for patellar resurfacing due to symptomatic retropatellar osteoarthritis after 13 months. Clinical evaluation At the latest follow-up, 84.3% of patients reported to be very satisfied or satisfied with the results of the knee replacement, while 15.7% of patients claimed to be dissatisfied or neutral. 101 patients (87.8%) reported absence of pain at rest and 83 patients (72.2%) reported absence of pain during movement at the latest follow-up. A total of 98 knee joints were clinically examined for range of movement and clinical leg axis assessment using a goniometer at the last follow-up. The mean postoperative knee flexion angle in the study cohort was 117 ± 17.1 degrees (range 80°–145°). Postoperative clinical leg axes were within the neutral range in 94.9% of patients, compared to 29.1% of patients preoperatively (p-value < 0.01). The results of the objective clinical assessment and range of movement are presented in Table . Clinical outcome scores demonstrated a significant improvement in AKSS, OKS, UCLA and NRS up to the latest follow-up (p-values < 0.01). The results of patient reported outcome measures investigated in this study are summarized in Table . Radiographic results At latest follow-up, no femoral or tibial component showed radiographic signs of loosening. Radiographs demonstrated an incidence of femoral radiolucent lines of 24% and an incidence of tibial RLL of 26%. RLL were predominantly located in zone 1 and zone 2 of the anterior-posterior (AP) tibial region, comprising 21% and 15% of occurrences. Figure demonstrates the characteristic appearance of tibial RLL seen in this cohort. Lateral radiographs demonstrated RLL predominantly located around the femoral components, with an incidence of 17% and 12% in zone 1 and zone 2, respectively. The distribution of RLL in relation to their location is presented in Figs. , and .
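As an illustration of how the revision-free survivorship quoted above could be reproduced, the sketch below uses the lifelines package (an assumption; the original analysis was performed in SPSS) with hypothetical follow-up data, taking revision for any reason as the event and unrevised knees as censored observations.

```python
# Minimal Kaplan-Meier sketch with revision for any reason as the endpoint.
# Hypothetical follow-up data; assumes the lifelines package is installed.
from lifelines import KaplanMeierFitter

# Follow-up time in months per knee; event = 1 if revised for any reason, 0 if censored.
followup_months = [48, 52, 14, 36, 60, 13, 47, 55, 41, 50]   # hypothetical values
revised         = [0,  0,  1,  0,  0,  1,  0,  0,  0,  0]    # hypothetical values

kmf = KaplanMeierFitter()
kmf.fit(durations=followup_months, event_observed=revised, label="Attune TKA")

# Estimated revision-free survivorship at 48 months and the full survival curve
print(kmf.predict(48))
print(kmf.survival_function_)
```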
The goal of introducing a new primary knee system is to improve clinical and functional outcome and to increase patient satisfaction after TKA. Despite good initial results reported in the literature [ , , ], there has been increasing evidence in recently published studies that the original design of the cemented Attune knee is associated with an increased risk for periprosthetic radiolucencies , which could ultimately result in an increased risk of early debonding and implant failure. The etiology of premature tibial debonding remains poorly understood. Proposed hypotheses encompass factors such as cement viscosity, surface finish, diminished stem length, reduced rotational stabilizers, and decreased cement pockets . The aim of this study was to investigate the clinical and functional results of the first Attune TKA performed at our institution and to assess the incidence of radiolucent lines at a 4-year median follow-up. The rate of RLL in this cohort was relatively high with an overall incidence of 24% for femoral RLL and 26% for tibial RLL. The regions predisposing to a high rate of RLL were identified at the medial and lateral aspect of the tibial baseplate on anterior-posterior radiographs (zone 1 and 2), and on lateral radiographs behind the anterior and posterior flange of the femoral component.
The occurrence of radiolucencies at these locations around the tibial and femoral components suggests a potential association with inadequate bone cuts, incomplete component seating during cementation, or insufficient cementation technique. Staats et al. hypothesized that the increased number of radiolucent lines in Attune patients is primarily due to technique-related issues, allowing excessive movement during the cement interlocking phase. Additionally, the prosthesis design, particularly the cement pockets, may also contribute to this phenomenon . As a result of reports of early aseptic loosening and tibial debonding, the original design of the Attune tibial component was changed in 2017. The new design of the tibial baseplate (Attune S+) features an undercut cement pocket area and a greater surface roughness of 3.0–6.5 Ra, in order to enhance the mechanical interlock at the cement-implant interface and to increase cement bonding . Van Duren et al. compared the radiological and clinical results of different TKA designs, including the Attune knee, and found no significant differences in the incidence of RLL or in the revision risk between the standard Attune and the Attune S+ group . Despite the high incidence of RLL, evident in approximately one quarter of our patients, there was no implant revision due to aseptic loosening in this cohort, nor any radiological evidence of implant loosening at final follow-up. Similar results were found by Giaretta et al. , who reported radiolucent lines in 22.4% of the investigated Attune knees after a mean follow-up of three years. Two patients had to be revised after 7 and 13 months due to aseptic loosening of the tibial component . Staats et al. reported comparable results in a one-year follow-up study, with RLL being present in 35.1% of the knees; however, no patient had to be revised as a consequence of aseptic loosening in their cohort. Staats and Giaretta suggested that patients with radiolucent lines should be monitored closely at regular intervals, in spite of good clinical results or absence of symptoms. Several recent studies analyzing the short-term outcome of Attune TKAs reported low revision rates [ , , , ]. Prodromidis et al. investigated the incidence of RLL in a recently published meta-analysis, which included a total of 3,861 Attune total knee arthroplasties. They found an overall RLL rate of 21.4%. Notably, the incidence of implant loosening and the revision rate due to aseptic loosening were 1.2% and 0.9%, respectively. These findings are in accordance with the results of our study. O’Donovan et al. also reported a higher incidence of radiolucent lines, observed predominantly at the tibial baseplate at the implant–cement interface, during a five-year follow-up. Despite this, the revision rate was only 2.2%. This outcome is comparable to data from the Australian Orthopaedic Association National Joint Registry (AOANJR) , which reported a 5-year cumulative revision rate of approximately 3% for the Attune knee. Despite the low revision rates, the clinical significance and long-term implications of the relatively high rate of radiolucencies found in our study need to be further investigated. It should be noted that the presence of RLL alone does not necessarily indicate failure or imply an indication for implant revision. However, affected individuals should be monitored closely in order to detect potential implant failure in symptomatic as well as asymptomatic patients at an early stage.
Patient satisfaction after TKA is also affected by various additional factors such as excessive patient expectations, persisting pain, the occurrence of postoperative complications, or misalignment of the prosthesis . Bourne et al. presented patient satisfaction rates of 75–89% following total knee replacement. This is in agreement with the results of our study, in which a substantial majority of 84.3% of patients expressed satisfaction with their knee replacement. A significant portion reported relief from rest pain after two years, and many experienced improved walking comfort. Usage of pain relief medication notably declined post-surgery, and the majority of participants achieved painless ambulation for extended distances. The mean range of motion increased from 108.4 degrees of flexion preoperatively to 117 degrees at the last follow-up, marking a substantial improvement, although the difference was not statistically significant. Concurrently, the clinical leg axis improved postoperatively, which is posited to be linked with reduced pain during passive movement and a normalized leg axis that promotes knee joint mobility. The findings of Ranawat et al. and White et al. corroborate this association between postoperative pain reduction and improved clinical leg axis alignment, as well as improved knee joint mobility following TKA. The clinical scores and their corresponding outcomes observed in our study collectively displayed favorable results at the 4-year median follow-up assessment. In a two-year study by Moorthy et al. involving 100 Attune knee replacements, Oxford Knee Scores improved significantly, which is consistent with our findings. The mean UCLA Activity Score in our study significantly increased from 4.1 ± 2.4 points to 6.1 ± 2 points, which is consistent with the findings of Turgeon et al. and Kim et al. , emphasizing improved postoperative activity levels following TKA. There are limitations to our study. First, the study is limited by its retrospective character and its relatively short follow-up duration. A prospective study design with repetitive radiological examinations and a longer follow-up duration would be helpful to assess the occurrence and progression of periprosthetic RLL over time and to investigate the clinical impact of these radiolucencies on the long-term performance of the implant design. All patients in this study population were followed up for a minimum of two years after surgery, which ensures an adequate assessment of clinical and functional outcomes, as well as failure rates, at a 4-year median follow-up. Another limitation that has to be acknowledged is the lack of a control group that would have enabled a comparison of our radiological findings with those of a clinically proven implant design, such as the PFC Sigma knee. Lastly, a total of 15% of patients were lost to follow-up and 11% declined participation in the study. The latter were predominantly elderly patients who refused to travel to our institution for a follow-up examination because of health-related problems and restricted mobility. However, this high drop-out rate represents a limitation and could have potentially biased the results of our study. Despite these limitations, the results of our study support the findings of other authors who reported a high incidence of radiolucent lines associated with the original design of the Attune knee at short- to mid-term follow-up.
The clinical and radiological findings of this study suggest good clinical and functional results of the cemented Attune knee system with significant improvement in patient reported outcome scores, low rates of implant loosening and acceptable revision rates at a median follow-up of 4 years. However, a relevant proportion of patients demonstrated RLL around the femoral or tibial components, the clinical significance of which remains unclear. Affected individuals should be monitored closely at annual intervals in order to detect debonding or implant failure at an early stage. Further studies with longer follow-up durations are necessary to investigate the natural course of these radiolucencies and their impact on the long-term performance of this knee system.
Delirium prevalence and delirium literacy across Italian hospital wards: a secondary analysis of data from the World Delirium Awareness Day 2023
f92e8d93-b8b1-448a-b7ef-21b09803c928
11614987
Health Literacy[mh]
Delirium, a neuropsychiatric syndrome characterized by an abrupt onset and fluctuating disruption in consciousness, attention, and cognitive function , poses a significant challenge across various clinical settings. Its incidence and prevalence vary considerably, depending on the context and patient demographics . Delirium is relatively uncommon in community dwellers and outpatients, whereas it is more frequent in individuals with acute and exacerbated chronic illnesses . Consistent evidence shows that, on average, one in five hospitalized patients aged 65 years and above experience delirium daily, regardless of the hospital ward type . Delirium occurrence is independently associated with several adverse outcomes, including prolonged hospital stays, increased vulnerability to complications (e.g. pressure ulcers, incontinence, and falls), high mortality rates, and impaired physical and cognitive recovery . Consequently, it also carries substantial implications for healthcare expenditures . Furthermore, as the likelihood of adverse outcomes increases with delay in delirium diagnosis , the critical importance of early detection and proactive management strategies is evident. Current primary management approaches encompass the utilization of validated screening tools and multidomain interventions targeting precipitating conditions, medication review, distress management, complications mitigation, and addressing environmental factors to sustain patient engagement . Despite their well-documented effectiveness , integrating these strategies into acute care settings has proven challenging for healthcare organizations . Key barriers to successful integration include time and staffing constraints, inadequate multi-professional collaboration, and insufficient knowledge among personnel . These barriers contribute to the lack of routine screening for delirium and, consequently, its suboptimal management. In Italy, there is a notable gap in understanding the extent to which healthcare centers incorporate evidence-based protocols for preventing, diagnosing, and treating delirium into daily clinical practice. This gap is particularly concerning due to the adverse prognostic implication of delirium, compounded by its prevalence. Previous nationwide studies have reported a delirium point prevalence of 22% and an elevated risk of short-term mortality among hospitalized older persons with delirium , suggesting that detection and appropriate management of this condition should be a priority for healthcare systems. We hypothesize that the attitude to delirium screening and implementing appropriate prevention and management strategies within hospital wards may be influenced by their level of delirium knowledge and understanding (i.e. delirium literacy). Therefore, this study aims to assess the reported point prevalence of delirium and explore management strategies based on delirium literacy levels across Italian hospitals. Furthermore, it seeks to identify current perceived barriers and future priorities in delirium practice and research. This study is a secondary analysis of Italian data derived from a global delirium prevalence study on World Delirium Awareness Day (WDAD) on March 15th, 2023. Ethical approval for the study was obtained from the Institutional Review Board of the University Mannheim (2022–617) and registration was completed with the German Clinical Trials Register (DRKS00030002, https://drks.de/search/de/trial/DRKS00030002 ). 
A request for participation was disseminated through social media platforms, professional networks, and personal contacts. National coordinators were responsible for recruiting clinicians and distributing the survey on the specified study day. All participating clinicians provided informed consent for the research at the outset of the questionnaire, which was administered online via SurveyMonkey . 
Survey content 
The questionnaire comprised 39 questions divided into fourteen sections. The first six sections covered data protection and consent, as well as the demographics of the professionals completing the survey. Additionally, these sections collected hospital and ward/department-specific data. The other sections covered data related to delirium assessment, structure, and process, focusing on management and implementation strategies, barriers, and perspectives. Delirium point-prevalence was evaluated both at 8 a.m. and 8 p.m. The respondents were instructed not to directly assess the presence of delirium but to report the assessment method used, the number of patients in the ward/unit at each time point, and the number of patients with and without delirium identified by ward/unit personnel. Importantly, no patient-level sensitive information was collected. Further details on study design, preparation, inclusion and exclusion criteria, and data collection procedures have already been described elsewhere . 
Sample characteristics 
For study purposes, starting from 112 completed unique national surveys, we initially excluded those from long-term care settings (e.g., rehabilitation, nursing home, intermediate care; n = 25). Subsequently, surveys from ICU and high-acuity units were also excluded (n = 29) to maintain consistency in examining delirium within non-intensive care settings. 
Delirium literacy levels 
Delirium literacy levels were determined based on two criteria: (i) the routine utilization of a validated delirium assessment tool and (ii) the presence of a written protocol for delirium management. The former aspect was ascertained by assessing whether the tool had been acknowledged in the literature as reliable and validated . High delirium literacy (HL) was defined by the fulfillment of both criteria simultaneously. 
Outcomes 
Delirium point-prevalence was calculated by dividing the number of patients reported with delirium by the total number of patients assessed for delirium at both 8 a.m. and 8 p.m. within each delirium literacy group. Delirium management was appraised by evaluating the adoption of non-pharmacological interventions in accordance with the Hospital Elder Life Program (HELP) protocol and identifying differences in pharmacological treatments between units/wards exhibiting high and low delirium literacy. Additionally, the study explored qualitative aspects related to the perceived current barriers and future priorities in delirium practice and research. 
Statistical analysis 
Nominal data are presented as frequency (n) and percentages, while metrical non-normally distributed data are described using the median and interquartile range (IQR). Comparisons based on delirium literacy were conducted using the Chi-square test or the Fisher exact test to explore differences between groups. Statistical significance was set at the level of p < 0.05 for two-tailed tests. The analysis was performed with R software, version 4.2.3 .
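As a concrete illustration of the point-prevalence calculation and the between-group comparison described above, the sketch below uses Python with scipy; the ward-level counts are invented placeholders chosen only to be roughly consistent with the aggregate figures reported in this paper, not the actual study data.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Illustrative counts only (not the study data): patients reported with and
# without delirium at the 8 a.m. assessment, split by delirium literacy group.
hl = {"delirium": 62, "no_delirium": 442}   # high-literacy (HL) wards
ll = {"delirium": 51, "no_delirium": 626}   # low-literacy (LL) wards

def point_prevalence(group):
    """Reported delirium cases divided by all patients assessed for delirium."""
    assessed = group["delirium"] + group["no_delirium"]
    return group["delirium"] / assessed

print(f"HL point prevalence: {point_prevalence(hl):.1%}")
print(f"LL point prevalence: {point_prevalence(ll):.1%}")

# 2 x 2 contingency table: rows = literacy group, columns = delirium status.
table = [[hl["delirium"], hl["no_delirium"]],
         [ll["delirium"], ll["no_delirium"]]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"Chi-square p-value: {p:.3f}")

# Fisher's exact test is the usual fallback when expected cell counts are small.
odds_ratio, p_exact = fisher_exact(table)
print(f"Fisher exact p-value: {p_exact:.3f}")
```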
As shown in Table , fifty-eight hospital wards participated in the survey, with the majority being medical/non-surgical units. Twenty-five (43.1%) wards were classified into the HL group as they fulfilled both selected criteria. Further characteristics are shown in Supplementary Table 1s. 
Delirium screening, prevalence, and management 
Overall, the reported point prevalence of delirium was 9.6% (n = 113/1181) in the morning and 10.4% (n = 110/1057) in the evening. Notably, reported delirium prevalence was significantly higher in the HL group than in the LL group, both in the morning (12.3% vs. 7.4%, p = 0.006) and in the evening (13.4% vs. 7.7%, p = 0.003) (Fig. ).
In the HL group, the 4AT constituted the predominant delirium screening tool, used in 84.0% of cases, with the remaining using various versions of the Confusion Assessment Method (CAM). Conversely, within the low literacy group, the assessment of delirium predominantly relied on personal judgment, accounting for 60.6% of cases, followed by psychiatric consultation, Diagnostic and Statistical Manual of Mental Disorders (DSM) criteria, absence of formal tools, or other unspecified methods (see Supplementary Fig. 1s). In terms of delirium management, despite the lack of statistically significant differences, the HL group exhibited greater adherence to key components outlined in the HELP protocol compared to the LL group. This included higher rates of mobilization (88.0% vs. 66.7%, p = 0.116), sleep hygiene (76.0% vs. 57.6%, p = 0.237), verbal re-orientation and cognitive stimulation (32.0% vs. 18.2%, p = 0.364), and adequate fluid intake (92.0% vs. 69.7%, p = 0.080) (Fig. , panel a). 
Differences in pharmacological and non-pharmacological management 
In the HL group, the most common pharmacological interventions for patients with delirium were haloperidol (100%), quetiapine (76.0%), reduction of potentially delirium-inducing drugs (44.0%), lorazepam (40.0%), specialist medication consulting (24.0%), and diazepam (16.0%). Conversely, in the LL group, the most common interventions included haloperidol (84.8%), lorazepam (54.5%), quetiapine (57.6%), diazepam (45.5%), and midazolam (27.3%) (Fig. , panel b). Significant differences between the two groups were observed for diazepam (p = 0.037) and the reduction of potentially delirium-inducing drugs (p = 0.033), which were respectively less and more prevalent in the HL group. Pharmacological management strategies in the HL group were more frequently based on standard operating procedures/protocols (56.0% vs. 6.1% in the LL group, p < 0.001), and individualized approaches depending on patient characteristics and side effects (80.0% vs. 42.4% in the LL group, p = 0.009) or delirium symptoms (60.0% vs. 24.2% in the LL group, p = 0.013). Additionally, recommendations for withdrawal of delirium-related drugs were reported to be more frequently included in the HL than in the LL group (44.0% vs. 15.2%, p = 0.033). No other significant differences emerged (see Supplementary Table 2s). Further differences between the two literacy groups regarding the general management protocols enforced in the wards/units are presented in Supplementary Table 3s. 
Delirium-related structures and processes in the ward 
Table provides additional information about delirium-related structures and processes within the two groups. In the HL group, the delirium assessment was primarily conducted by physicians (56.0% vs. 24.2%), whereas in the LL group, it was carried out by unspecified mixed professionals (60.6% vs. 8.0%). Regarding interventions aimed at enhancing delirium awareness, no significant differences were found in terms of educational training or the availability of informational materials. However, the presence of delirium experts (6.1% vs. 36.0%, p = 0.011) and the communication of delirium screening rate (12.1% vs. 48.0%, p = 0.006) were found to be higher in the HL group. The most reported barriers against the implementation and/or utilization of evidence-based strategies included a shortage of personnel/staff (39.7%), difficulties in assessing complex patients (36.2%) and limited time (34.5%).
No significant differences between the two groups were noted, except for the absence of an appropriate score for delirium assessment, which was more prevalent in the LL group (48.5% vs. 8.0%, p = 0.003). 
High-priority areas for future delirium care and research 
In the analysis of free text comments regarding high-priority areas for future delirium care and research, several themes emerged, as outlined in Table . Regarding delirium care, the predominant theme centered on the need for enhanced staff education to improve delirium care. The second core theme that emerged was non-pharmacological management . Respondents stressed the importance of prioritizing these strategies, such as “encouraging care for occupational therapy” or “family engagement”, as crucial priorities. Additionally, there was a call for a multidisciplinary approach and diagnostic strategies to better address delirium-related care challenges. In terms of delirium research, the predominant theme focused on the prevention of delirium. Additionally, emphasis was placed on pharmacological management , particularly in ensuring “adequate drug management”. Furthermore, one respondent underscored the importance of “assessing the economic impact of non-pharmacological treatments to influence policymakers to allocate resources to the prevention of delirium”.
In this secondary analysis of data from World Delirium Awareness Day 2023 in Italian hospitals, we observed a reported delirium point prevalence of approximately 10%. Notably, the reported prevalence was two-fold higher in the HL group compared to the LL group, both in the morning and evening assessments. Moreover, the HL group demonstrated greater adherence to appropriate delirium management approaches, including both pharmacological and non-pharmacological strategies. Previous nationwide studies conducted in older hospitalized patients reported a higher delirium point prevalence . Several factors may contribute to the observed discrepancy in prevalence rates between our survey and those studies.
First, previous studies focused primarily on inpatients aged 65 or older, whereas this survey included a more heterogeneous age range. Given that delirium is more prevalent in older inpatients , this difference in age distribution could partially account for the variation in observed prevalence between studies. Moreover, this aspect could also contribute to the discrepancy in the observed prevalence of delirium between the two literacy groups, as wards/units with low literacy tended to have younger patients. Second, unlike previous studies that actively sought to detect delirium using the 4AT, this survey did not require direct assessment. Respondents were required to report the tool commonly used for delirium assessment within the ward/unit, along with the number of patients screened and identified as delirious at both time points. This variance in assessment methodology may have impacted the observed prevalence rates, as delirium tends to be underestimated without active screening . Finally, this variability could partially account for the difference in reported prevalence of delirium between the HL and LL groups, since increased delirium knowledge may have facilitated more rigorous and consistent screening practices. Another finding of our study concerns the differences in the implementation of delirium management strategies between each group. We aimed to assess the application rate of non-pharmacological approaches in accordance with the Hospital Elder Life Program (HELP) protocol and of pharmacological treatments within delirium literacy groups. There was a noticeable inclination towards greater adherence in the HL versus LL group, although without a statistically significant difference. Additionally, regarding pharmacological management, the HL group demonstrated a greater attitude toward discontinuing delirium-inducing drugs and a tendency to prescribe fewer benzodiazepines. These differences suggest that specific protocols for pharmacological management within the HL group, along with increased attention to drug side effects and patient symptoms and characteristics, may have contributed to the observed trends. Furthermore, as previously demonstrated , the implementation of delirium management strategies has been shown to reduce delirium incidence. This could potentially explain the lower delirium prevalence observed in our survey compared to previous Italian studies. Consistent with previous literature , our survey identified similar barriers against the implementation of evidence-based strategies for delirium management, which remained consistent across both literacy groups. These barriers included inadequate resources in terms of time and staff, difficulties in assessing specific patient populations (such as those with dementia), and insufficient awareness of delirium. Notably, the latter emerged as one of the key areas for future high-priority initiatives in delirium care. Furthermore, it is intricately intertwined with other priorities emphasized by the respondents, such as the use of appropriate scoring systems and the prioritization of non-pharmacological interventions. These components are essential for ensuring adequate identification and subsequent management of delirium. In general, our findings suggest that there is still a large potential for improvement of delirium management within our country. Addressing this challenge demands the implementation of multifaceted strategies. 
Initiatives should commence by integrating delirium-specific training into university curricula, ensuring healthcare professionals are adequately prepared. Comprehensive awareness campaigns among healthcare personnel, ongoing professional development programs, and interdisciplinary collaboration can further enhance healthcare providers' ability to recognize, prevent, and manage delirium effectively. Concurrently, forthcoming research on delirium should prioritize prevention strategies, foster the development of tailored approaches, and comprehensively evaluate the economic implications over both the short and long term. This will be pivotal in influencing policymakers to allocate resources toward personnel training, preventive measures, and management strategies to overcome current barriers. Limitations of this study include its survey design, which precluded verification of data collection and entry strategies. Participation bias may have influenced results, as clinicians with an interest in delirium were more likely to participate. Furthermore, the validity of reported delirium assessments also warrants careful consideration. Additionally, the absence of direct delirium assessment and the potential assessment by different individuals in the morning and evening could introduce bias. Finally, delirium motor subtypes were not explored. This study also exhibits several strengths, including the involvement of an interprofessional team and its nationwide scope. Moreover, conducting delirium assessment twice daily provided a more comprehensive clinical perspective. Lastly, this study may serve as a model for future quality improvement projects aimed at overcoming barriers to delirium management, thereby contributing to increased awareness about delirium. In conclusion, our secondary analysis of WDAD 2023 data provides valuable insights into current delirium care practices within Italian hospitals. Our findings emphasize the importance of enhancing awareness and implementing evidence-based strategies for delirium detection and management. These efforts are essential for optimizing delirium care, improving patient outcomes, and alleviating the burden of delirium in hospital settings.
Assessing the Diagnostic Accuracy of Physicians for Home Death Certification in Shanghai: Application of SmartVA
c12a61a9-e353-4df4-81c6-ac8aca81874b
9247331
Forensic Medicine[mh]
Accurate data on causes of death are essential for policymakers and public health experts to plan appropriate health policies and interventions to improve population health. In Shanghai, a mega-city with a population of 24 million, the vital statistics registration system registers almost all deaths of the resident (Hukou) population ( ). Deaths that occur in the hospital are certified by the attending doctor. For the 30% of deaths in Shanghai that occur at home or are otherwise not medically attended, the family members of the deceased present to Community Health Centers (CHC), usually with available medical documentation, such as discharge summaries, medical records, and laboratory test results, and the CHC doctor on duty reviews the records and issues a death certificate. In such cases, the recorded cause of death (COD) may be less reliable than that for hospital death. Verbal Autopsy (VA) is a practical method that can help determine causes of death in regions where most deaths occur at home or where medical certification is limited or unreliable ( , ). Automated VA does not require physician review of the responses to a questionnaire to ascertain signs and symptoms preceding death; rather, the most probable COD is predicted from the application of a diagnostic algorithm. Where physicians are available to immediately review the outputs of a verbal autopsy and certify the COD, a specific tool, SmartVA for Physicians, has been developed to facilitate physician diagnoses. This innovation produces a summary of all endorsed symptoms, as reported by family members, providing more information for the certifying physician to determine the COD for people who die outside of hospitals ( , ). The validity of SmartVA as a diagnostic tool has been demonstrated in a diverse range of low- and middle-income populations ( – ). To ascertain whether routine application of the method would improve the quality (i.e., diagnostic accuracy) of death certification in Shanghai (especially for deaths occurring outside health facilities), SmartVA for Physicians was applied to a sample of community deaths for which the true cause had been separately established via an independent medical record review study. The findings were compared with diagnoses from routine practice to ascertain the value, if any, of incorporating SmartVA into the diagnostic practices of physicians in Shanghai certifying the cause of home deaths. 
SmartVA Auto-Analyze Package 
The SmartVA Auto-Analyze is a software package that builds on SmartVA Analyze and includes the Population Health Metrics Research Consortium (PHMRC) shortened VA questionnaire, the Open Data Kit (ODK) suite for data collection, and the modified Tariff 2.0 algorithm for computer analysis of the VA interview responses ( – ). The SmartVA Auto-Analyze was developed to be used by physicians in real time, and produces a list of up to three most likely causes of death at the individual level, commonly referred to as SmartVA for Physicians (for brevity, we use the term SmartVA in this article). The PHMRC shortened questionnaire was validated in terms of quantifying the decline in diagnostic accuracy as a function of deleting symptom questions in the long form of the questionnaire, using formal item reduction methods ( ). Subsequently, the shortened questionnaire has been applied to selected China CDC sites and validated against local diagnostic practices in these sites ( ).
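To make the Tariff idea concrete, here is a deliberately simplified sketch of tariff-style cause ranking: each endorsed symptom contributes a precomputed tariff score to every candidate cause, and the causes with the highest summed scores are returned. The tariff values, symptom names, and cause list below are invented for illustration; they are not the actual SmartVA/Tariff 2.0 matrices or implementation.

```python
# Hypothetical tariff matrix: TARIFFS[cause][symptom] is a precomputed score
# reflecting how strongly an endorsed symptom points toward that cause.
TARIFFS = {
    "Stroke":                      {"paralysis_one_side": 9.0, "sudden_onset": 5.0, "chest_pain": -1.0},
    "Ischemic heart disease":      {"chest_pain": 8.0, "sudden_onset": 3.0, "paralysis_one_side": -0.5},
    "Chronic respiratory disease": {"breathlessness": 7.0, "chronic_cough": 6.0, "chest_pain": 1.0},
}

def rank_causes(endorsed_symptoms, top_n=3):
    """Sum tariff scores over endorsed symptoms and return the top-ranked causes."""
    scores = {}
    for cause, tariffs in TARIFFS.items():
        scores[cause] = sum(tariffs.get(s, 0.0) for s in endorsed_symptoms)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Example interview: the respondent endorsed these two symptoms.
print(rank_causes({"paralysis_one_side", "sudden_onset"}))
# -> stroke scores highest, followed by the other candidate causes
```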
Training and Administration 
A local VA team, trained by experts in SmartVA from the University of Melbourne, trained 32 CHC doctors as VA interviewers. User manuals with detailed instructions and Standard Operating Procedures (SOP) were introduced during the training and were made available for use by the Shanghai Municipal Center for Disease Control and Prevention (SCDC) project staff. In addition, the interviewers received training on correct death certification practices as well as training on operating Android-based tablets to conduct SmartVA interviews and implement troubleshooting. After the training, the interviewers underwent supervised field practice to ensure that they had the requisite skills and conceptual knowledge to carry out VAs as required. A local information technology (IT) technical/data management staff member, with support from the University of Melbourne technical team, installed the Open Data Kit Collect software, the electronic SmartVA questionnaire and media file onto tablets, and SmartVA-Auto-Analyze onto computers, and prepared all devices for SmartVA data collection. 
Data Collection and Diagnostic Procedures 
Previous experience with similar validation studies suggests that at least 20 gold standard (GS) cases are required for each cause to establish the COD accuracy and validity within acceptable uncertainty bounds ( ). For investigating diagnostic accuracy of the top 20 causes of death, therefore, at least 400 GS cases were required. To allow for VA interview refusals, poor quality medical records to establish GSs, etc., we applied multistage sampling to select 16 community health centers (CHC) from three districts, chosen as representative of urban, suburban, and urban-suburban areas in Shanghai. Minhang District, Songjiang District, and Pudong District, each of which contains urban, suburban, and urban-suburban areas, were first selected. Then, five CHCs from Minhang District, five CHCs from Songjiang District, and six CHCs from Pudong District were selected to meet our stratification criteria. All home deaths (1,648) in these CHCs which met our inclusion criteria were eligible for inclusion in our study, although it was expected that the final number of cases would be lower due to refusals, medical record quality and availability, etc. Each home death that occurred between December 2017 and June 2018 was investigated by a trained CHC doctor on duty. Doctors identified an appropriate respondent (>18 years of age, cared for the deceased, or most familiar with the symptoms and terminal phase of the deceased) from among the family members who came to report the death to the CHC, requested their consent to participate in the pilot study, and interviewed them. The various diagnoses associated with each case included in the study are shown in . At the end of the interview, the CHC doctor assigned an Initial diagnosis with an underlying cause of death (UCOD) selected according to usual practice and procedures in place, which included a review of the outpatient clinical records and any other documentation brought by the family when reporting the death to the CHC. Next, the physician ran the SmartVA-Auto-Analyze program for each death, which suggested up to three possible UCODs; these predicted diagnoses from the Tariff diagnostic algorithm are labeled as the SmartVA diagnosis (Tariff COD 123) .
Finally, the physician reviewed the Initial diagnosis in the context of the additional information provided by the SmartVA diagnoses, including the list of endorsed symptoms provided by SmartVA, and used this information to assign a Post-VA diagnosis (as shown in ). 
Ethics Approval 
Ethics approval was obtained from Shanghai CDC (Ethics ID: 2016-28) and the University of Melbourne Ethics Committees (Ethics ID: 1647517.1.1). All participants were provided with a participant information sheet and consent forms in the local language. 
Monitoring and Evaluation 
Each CHC doctor was asked to complete a Microsoft Excel spreadsheet (“COD information form” box in ) with the data on demographics, initial diagnosis, SmartVA (Tariff) diagnosis, and the post-VA diagnosis of the UCOD for each case. This spreadsheet was submitted to SCDC by the CHC doctor at the end of each month, for monitoring the progress and quality of the study implementation. After 6 months of data collection, a program manager from SCDC integrated the data from all 16 CHCs and performed further analysis. 
GS UCOD and Data Analysis 
The medical records for all deaths for which a VA was carried out in these three districts were carefully evaluated by an independent Medical Record Review (MRR) team. The medical records of each home death were carefully audited according to the ex-ante study protocols adopted from the PHMRC study. The MRR team members were experienced district CDC coders/physicians. The members were trained on how to review a medical record by the University of Melbourne team, as well as in the definition and interpretation of the standard diagnostic criteria and GS levels. The MRR team assigned each death a “GS” UCOD, which we define here as the MRR UCOD , based on the GS criteria for each COD developed by the PHMRC, and as applied in several studies ( , – ). Under these criteria, GS1 refers to the highest standard of (i.e., confidence in) diagnostic accuracy of the UCOD, progressing down to GS4, for which diagnostic confidence following the MRR was lowest. For example, the GS1 criterion for a case to be diagnosed as lung cancer is based on histological confirmation, whereas GS4 would be used for cases where the MRR concluded that there was only an unsupported clinical diagnosis. Causes of death from the application of SmartVA, as well as the UCOD from the MRR, were transformed to the SmartVA cause list (as shown in ) to facilitate comparison, given this was an abbreviated list of causes as appropriate for VA. Based on the SmartVA cause list, we carried out the following comparisons: (i) concordance between the initial diagnosis and MRR UCOD (to ascertain the accuracy of current diagnostic practice); and (ii) concordance between the initial and post-VA diagnosis (to ascertain the impact of applying SmartVA on diagnostic accuracy). In addition, we developed a misclassification matrix by cause to identify the pattern and extent of certification errors. For the misclassification matrices, only the 16 leading causes of death based on MRR UCODs have been included to facilitate interpretation of findings; all other diseases were merged into a residual group, labeled “others.” Standard validation metrics, such as sensitivity, positive predictive value (PPV), Cohen's kappa, chance-corrected concordance (CCC), and cause-specific mortality fraction (CSMF) accuracy, were calculated to assess concordance. The statistical analysis was performed using R 3.6 software.
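A minimal sketch of how the concordance metrics named above can be computed from paired cause assignments is given below, following the formulas proposed by the PHMRC investigators for CCC and CSMF accuracy; the two cause lists, the cause-list size, and the resulting numbers are toy examples, not the study data.

```python
from collections import Counter

N_CAUSES = 17  # illustrative cause-list size used for the chance correction

def per_cause_metrics(reference, predicted, cause):
    """Sensitivity, PPV and chance-corrected concordance (CCC) for one cause."""
    tp = sum(r == cause and p == cause for r, p in zip(reference, predicted))
    fn = sum(r == cause and p != cause for r, p in zip(reference, predicted))
    fp = sum(r != cause and p == cause for r, p in zip(reference, predicted))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    # CCC_j = (sensitivity_j - 1/N) / (1 - 1/N), with N the number of causes
    ccc = (sensitivity - 1 / N_CAUSES) / (1 - 1 / N_CAUSES)
    return sensitivity, ppv, ccc

def csmf_accuracy(reference, predicted):
    """1 - sum_j |CSMF_ref,j - CSMF_pred,j| / (2 * (1 - min_j CSMF_ref,j))."""
    n = len(reference)
    ref, pred = Counter(reference), Counter(predicted)
    causes = set(ref) | set(pred)
    abs_error = sum(abs(ref[c] / n - pred[c] / n) for c in causes)
    return 1 - abs_error / (2 * (1 - min(ref.values()) / n))

# Toy example: MRR ("gold standard") causes vs. physician-assigned causes.
mrr =      ["Stroke", "Stroke", "IHD", "Lung cancer", "CRD", "IHD"]
assigned = ["Stroke", "Other CVD", "IHD", "Lung cancer", "CRD", "Stroke"]
print(per_cause_metrics(mrr, assigned, "Stroke"))
print(round(csmf_accuracy(mrr, assigned), 3))
```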
Of the 1,648 deaths reported to the study CHCs during the defined period, only 619 (37.6%) could be included in this study. This was because many cases did not meet the study's inclusion criteria for eligible respondents or refused to participate. Of the 619 deaths for which a SmartVA interview was conducted, 570 cases also had available medical records that enabled the establishment of a GS1 and GS2 diagnosis following MRR. There was no significant difference in the age and sex composition between the 570 deaths and the total number of CHC deaths in the same area and time period (as shown in ; p = 0.862 for sex and p = 0.135 for age). The majority of deaths were among those aged 70 years and above. The CSMFs for all the home deaths in the 16 CHCs in 2017, and the CSMFs based on the VA results from this study, conducted in the same 16 regions in 2018, showed a similar COD distribution based on the common SmartVA cause list (as shown in ). From the MRR of the deaths analyzed by SmartVA for Physicians, stroke was the leading COD, accounting for 17.8% of deaths, followed by other cancers (15.6%) and ischemic heart disease, lung cancer, and chronic respiratory diseases (CRDs), accounting for 12.6, 12.1, and 11.2% of deaths, respectively. All other causes accounted for <5% of deaths. Broadly speaking, CSMFs for causes of death diagnosed by SmartVA were similar to those based on existing diagnostic practices in Shanghai, with only slight changes in the ranking of causes of death ( ). The concordance between the initial diagnosis and the MRR UCOD (assessing the accuracy of existing diagnostic practices) and between the post-VA diagnosis and MRR UCOD (assessing the impact of SmartVA on diagnostic accuracy) was measured using chance-corrected concordance (CCC; – ). This metric evaluates the extent of agreement (average sensitivity) of individual diagnoses between the two sources, corrected for chance. Additionally, the CSMF accuracy was evaluated by measuring the absolute deviation of the CSMFs for the initial diagnosis and the SmartVA CSMFs from the MRR UCOD ( , , ). The closer this value is to 1, the higher the concordance of the results. Sensitivity and positive predictive value (PPV) were both high for the top six CODs. PPV was low for diabetes and other infectious diseases, indicating that some of the initial diagnoses that were not diabetes or other infectious diseases were reallocated to other diseases after the VA investigation.
Although not dramatic, overall CSMF accuracy improved from 0.93, based on the initial diagnoses, to 0.96 after the application of SmartVA (as shown in ). In terms of specific causes, the CCCs for the top six causes of death (stroke, other cancers, ischemic heart disease (IHD), lung cancer, CRD, and stomach cancer, accounting for over 75% of deaths) all increased to more than 0.90 after VA-assisted diagnosis. Detailed metrics are shown in – . Some CODs, especially other non-communicable diseases (NCDs) and other infectious diseases, had noticeable increases in CCC following the application of SmartVA. Of interest is the change in CCC for other cardiovascular diseases (CVDs) and falls; both decreased after the VA investigation. This suggests that CVDs are being used as a convenient diagnostic category for some deaths, possibly those where it was difficult to establish the UCOD from the outpatient clinical records, which were subsequently reclassified following an investigation with SmartVA. shows the misclassification matrix based on the initial diagnosis compared with that from the MRR and is thus a rigorous test of the diagnostic accuracy of existing practices in the CHCs; 86.3% (492/570) of cases were correctly diagnosed by the initial diagnosis. The extent of misclassification was reduced following the VA investigation ( , ), with overall diagnostic accuracy increasing to 90.5% (516/570) among the post-VA diagnoses. Based on the results of the initial diagnosis before the VA investigation, other CVDs and other infectious diseases were more likely to be mis-assigned to other causes; nearly one-third of other CVDs were misclassified as stroke (6/17; ). The accuracy of CSMFs increased following the application of SmartVA, except for the categories of other CVDs and falls ( ). As mentioned, other CVDs were often (6/17 or 35.3%) misclassified cases of stroke, when compared with the MRR diagnoses ( ). Analysis of the VA results with SmartVA Auto Analyze resulted in the causes of 53 deaths, or just under 10% of the sample, being reclassified from their initially assigned causes. This was particularly the case for chronic kidney diseases (CKDs), CRD, and cirrhosis, as well as falls, IHDs, other CVDs, and undetermined causes. Among the 53 cases where the method led to a change in the COD, only 22.6% (12/53) were assigned correctly before VA ( ), whereas 67.9% (36/53) of the new CODs were assigned correctly according to MRR ( ). The number of misclassified conditions, compared with MRR, was also reduced. Among the 53 cases with a change in COD, all the causes assigned before VA ( ) had a high degree of misclassification, except for cirrhosis and falls. After VA, the misclassification was greatly reduced, except for falls and other CVDs ( ). Four undetermined deaths were reallocated to other diagnoses (as shown in – ). For the 53 deaths where the UCOD changed after the application of SmartVA, the initially assigned CODs (initial diagnoses) were distributed reasonably randomly across the 15 causes. In the initially assigned CODs, no cases were assigned to leukemia/lymphoma, diabetes, or other cancers. However, according to the MRR results, other cancers should be the third leading COD in this sample of 53 deaths. SmartVA suggested that the fraction was 17%. CRD was only half as important as a cause (9.4 vs. 17%) according to the initial diagnosis compared with both SmartVA and the MRR. 
CKD, undetermined causes, other injuries, pneumonia, cervical cancer, and esophageal cancer were not among the UCOD identified by the MRR, or by SmartVA (except for other injuries), while the CHC doctors assigned them as UCODs after initial diagnosis. Overall, the CSMF pattern identified by the application of SmartVA for these 53 cases was much closer to the true pattern suggested from MRR than the initial diagnosis. This suggests a need for greater care when assigning these diseases as UCODs ( , ). With the assistance of SmartVA, the majority of misdiagnosed deaths were assigned to other NCDs (20.8%), CRD (17.0%), and other cancers (17.0%). Though a small degree of misclassification persisted, the post-VA diagnosis of the UCOD agreed more closely with the reference standard (MRR) than the initial diagnosis ( ). Although Shanghai has an established and well-functioning CRVS system, SmartVA for Physicians contributed to an improvement in the accuracy of death certification, as measured by the CSMF, which increased from 0.93 to 0.96 following the introduction of SmartVA. In addition, SmartVA may be a useful tool for inferring some special causes of death, such as those CODs classified as undetermined, which while less of an issue for Shanghai, is a common problem in civil registration systems worldwide ( – ). In our study, four undetermined CODs were reclassified after the application of SmartVA. With the help of this tool, the Shanghai CRVS system could reduce the fraction of undetermined deaths. Among the 53 cases where the UCOD was misclassified according to the VA investigation, the largest impact was for CRD (17 vs. 9.4% suggested by initial diagnosis), other NCDs (17 vs. 3.8%), as well as other cancers (13.2 vs. 0%), suggesting that for causes such as these, a more careful examination of the available medical history may be needed by the certifier before assigning the UCOD. The fact that only 53 cases were misclassified out of a sample of 570 reflects the rigor of the diagnostic practices routinely applied in Shanghai, but given the clustering of these cases around certain causes of death (COPD, residual NCDs, and residual cancers), selective application of the methodology might help to improve diagnostic accuracy even further. The improvement in COD data following the application of SmartVA in this study could be attributed to several factors. First, the SOPs for COD assignment that were followed during the SmartVA investigation ensured a structured and consistent approach, leading to a more accurate COD assignment. Second, the SmartVA procedure has systematic and comprehensive questions about symptoms, which can help to ensure that all relevant medical information regarding the decedent's morbid conditions is captured at the time of certification of the COD. Third, the improvement attributed to Smart VA could in part be due to the comprehensive training in seeking information about symptoms and signs from the family, which is more systematic and comprehensive than current procedures. As Shanghai is highly developed with a relatively advanced CRVS system, the routine use of SmartVA is unlikely to result in a significant improvement in the accuracy of COD data in the Shanghai system, nor is it a cost-effective way to do so. The routine application of SmartVA would add a further 15–30 min to the diagnostic process for each death, which, given the already high diagnostic standards and procedures in place, is not justifiable. 
Shanghai CHC doctors' routine work already comprises checking and correcting MCCOD data, including re-interviewing the family of the deceased. In contrast, for other cities in China, especially in the remote areas in the west, that do not have a well-functioning death registration system, SmartVA may be more beneficial. There are several reasons why not all the home deaths can be investigated. The high refusal rate undoubtedly reflects the fact that urban, comparatively well-off populations engaged in non-agricultural occupations have little time or inclination to respond to questionnaires, particularly at a time when the family of the deceased is still grieving, making the investigation more difficult to conduct. Second, conducting the SmartVA investigation requires systematic training from an expert team. Aside from regular medical certification of COD training, the training courses include how to install the software for the SmartVA tool, how to connect the tablet to the computer to transmit the survey data, etc. This is further complicated by the mobility of CHC physicians, which is quite high as their workload is heavy. Shanghai CDC has subsequently developed the WeChat version of the Smart VA questionnaire in 2021 and is conducting a new round of home death investigation for those UCODs which were initially assigned as R codes. Our SmartVA study has some limitations. First, the SmartVA tool, especially the cause list, is not perfectly suited to the actual mortality fractions observed in Shanghai ( ). For example, liver cancer is not on the SmartVA cause list, and therefore the program does not assign it as a COD to any deaths, whereas liver cancer accounted for more than 2.6% of all deaths in Shanghai in 2018. This is due to the fact that validation metrics for liver cancer in the original PHMRC study were considered too low to justify the inclusion of liver cancer as a target cause for SmartVA ( , ). Second, in this study, the VA investigations were conducted after the certifier reviewed the previous outpatient medical histories of the deceased. This may have biased the certifier when considering the diagnostic information suggested by SmartVA. Third, in this study in Shanghai, the SmartVA procedure was implemented in 16 communities. As the number of deaths in these communities was not very high and may not be representative of the whole population, further research should be done to determine the generalizability of SmartVA for Physicians before it can be extended to all districts and counties in China where home deaths are common. Fourth, the GS dataset used in the MRR to establish true causes of death is not without errors, given that it has been derived from available medical records, which can themselves contain errors. The surest way to ensure an error free GS is through autopsy, but this is not practical or affordable in most settings. Rather, by adopting ex-ante diagnostic procedures with clearly defined diagnostic criteria for specific causes of interest, the PHMRC methods and exclusion criteria applied in this study are likely to dramatically reduce, but not eliminate, diagnostic uncertainty and subjectivity. Last, the cause pattern identified by SmartVA is constrained to the causes associated with the symptom questions asked in SmartVA. 
While these causes collectively would likely account for the vast majority of deaths in most low- or middle-income countries, important local causes, such as liver cancer in China, may be omitted due to the criteria and methods used to validate the tool. While we have focused on the applicability of SmartVA in the Shanghai context, it should be kept in mind that there are alternative automated (electronic) VA methods, such as InSilicoVA and InterVA, which could also be applied to assist physician diagnoses ( – ). In our study, the strengths and weaknesses of different automated diagnostic methods were not discussed. Rather, we have focused on the applicability of generic automated VA methods as a diagnostic aid for physicians who need to certify the cause of home deaths, often in the absence of good clinical records ( – ). The SCDC plans to adapt the workflow and operational specifications of SmartVA to maximize the effectiveness of the method in improving COD diagnosis and lowering the proportion of undetermined causes of death. This will most likely be through selective application and integration into the existing CRVS system. In addition, SCDC is also considering using SmartVA to identify the possible causes of death in cases with incomplete medical history information. Increasing the diagnostic accuracy of any dataset that is likely to be used to guide public policy is, or should be, a priority for data custodians. Our research has demonstrated that COD accuracy in Shanghai is very good, but it is not without errors. Furthermore, our study has shown that the application of SmartVA can improve diagnostic accuracy even further, if only marginally. As a result, the routine use of SmartVA is unlikely to be a cost-effective strategy to further improve the diagnostic accuracy of an already well-performing system, but its application to improve the diagnoses of certain conditions appears justified. This marginal application would likely further improve confidence in the use of Shanghai COD data for some public health purposes. This research illustrates that although Shanghai has an established and well-functioning CRVS system, SmartVA for Physicians contributed to an improvement in the accuracy of death certification. In addition, SmartVA may be a useful tool for inferring some special causes of death, such as those CODs classified as undetermined. The data that support the findings of this study are available upon request from the Shanghai Municipal Center for Disease Control and Prevention (Shanghai CDC), but restrictions apply to the availability of these data, which were used under license for the current study, and so they are not publicly available. Data are available from the authors upon reasonable request and with the permission of Shanghai CDC. The studies involving human participants were reviewed and approved by the Shanghai CDC (Ethics ID: 2016-28) and the University of Melbourne Ethics Committees (Ethics ID: 1647517.1.1). The patients/participants provided their written informed consent to participate in this study. RJ and AL devised the study and were responsible for the study design. ZY, TX, and CW oversaw the research. LC and HL were members of the writing group. HL, TA, CW, HY, and AL provided feedback on data analysis, results, and discussion. RR, ZG, BF, AL, and DM revised the manuscript critically for important intellectual content. 
All authors contributed to the framework construction, results interpretation, manuscript revision, and approved the final version of the manuscript. The corresponding authors attest that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. The research was funded by the Bloomberg Philanthropies Data for Health Initiative and by the Clinical Research Project of the Health Industry of Shanghai Health Commission in 2020 (Award number: 20204Y0205). The funders had no role in study design, data collection and analysis, decision to publish, or the preparation of the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Impact of COVID-19 pandemic on the clinical activities related to arrhythmias and electrophysiology in Italy: results of a survey promoted by AIAC (Italian Association of Arrhythmology and Cardiac Pacing)
6217be09-71f9-490b-9b02-437fe10b1eb0
7474489
Physiology[mh]
The World Health Organization declared COVID-19 a pandemic on March 11, 2020, and Italy was the first European country that had to take urgent decisions to limit the transmission of SARS-CoV-2 in the population . On March 8th, Italy became the second most affected country in the world after China, and specific rules for restricting social contacts in the whole country were applied by the Italian Government in March 2020 . At the end of June 2020 the total number of subjects found affected by COVID-19 in Italy was reported to be around 240,000, with more than 34,000 deaths . The COVID-19 outbreak had a devastating and massive impact on the organization of social activities, as well as a disruptive impact on the organization of care in Italy, with a dramatic reduction in traditional contacts for ensuring care for non-COVID-19 diseases . As a matter of fact, hospital admissions for acute myocardial infarction were significantly reduced during the early phase of the COVID-19 pandemic across Italy, with a parallel increase in fatality and complication rates . Moreover, a 52% increase in the occurrence of out of hospital cardiac arrests was documented in some provinces of Lombardy in the first 2 months of the pandemic, and this increase was associated with worse in-hospital outcomes . Within this complex scenario, corresponding to a profound re-arrangement of health care system organization in Italy, no data are available on the different aspects of care in the field of arrhythmia and electrophysiology, either with regard to the period of lockdown or with regard to the so-called "Phase 2" (post-COVID-19 recovery phase) that started on May 4, 2020 and was targeted at a re-organization of all activities, including health care, after the period of massive emergency. The Italian Association of Arrhythmology and Cardiac Pacing (AIAC) launched a survey among its members in order to report the situation of cardiac care for arrhythmia in these particular phases. From April 24 to May 30, 2020, a survey endorsed by the AIAC was published on the official AIAC website ( https://aiac.it/ ). The survey was open to physicians operating in all Italian centres involved in arrhythmia care. Participation in the survey was voluntary. The questionnaire could be completed by more than one physician from the same centre. The questionnaire consisted of 18 questions: five of them focused on the characteristics of the participating centre (i.e. involvement of the centres and of the physicians in the management of suspected and confirmed patients with COVID-19, and volume of annual CIED implantations and ablation procedures); seven of them focused on the impact of the COVID-19 pandemic on the number of CIED implantations and ablation procedures performed in both elective and emergency settings, and on the number of cases of acute pharmacological and non-pharmacological treatment of atrial fibrillation (AF) in the emergency setting; two of them focused on the impact of the COVID-19 pandemic on the management of remote monitoring (RM) of CIEDs; and the remaining four focused on possible organizational strategies for the post-COVID-19 recovery phase. Seventeen of the 18 questions were multiple-choice questions (see online Supplementary material for details). Statistical analysis Descriptive statistics were reported as means for normally distributed continuous variables. Continuous variables with skewed distribution were reported as medians with 25–75th percentiles. 
Categorical data were expressed as percentages, reported in contingency tables, and compared by means of the χ2 test or Fisher's exact test, as appropriate. p values < 0.05 were considered statistically significant. Participating centres A total of 104 physicians from 84 Italian arrhythmia centres took part in the survey. For 15 centres, more than one physician responded to the survey (mean: 2; range: 2–4). A complete list of participating centres is reported in Appendix. The centres which participated in the survey accounted for 22.6% of all 372 arrhythmia centres operating in Italy in 2019 . The participating centres displayed a wide geographical distribution (Fig. b): a mean of three centres per region (range: 0–13; interquartile range: 1–7) responded. In six regions there were five or more participating centres. The response rate was similar in Northern, Central and Southern Italy (21.6, 30.7, and 18.0% of all operating centres, respectively, p = 0.089). After dividing the Italian regions into four groups, according to incidence of COVID-19 cases (confirmed cases < 1.0, from 1.1 to 3.0, from 3.1 to 5.0, and > 5.0 per 1000 population, Fig. a), the response rate was similar in the regions with higher incidence of COVID-19 cases (confirmed cases > 5 per 1000 population, n = 6) compared to other regions ( n = 14) (22.9 vs. 22.4%; p = 0.921). Many participating centres (29.8%) had three operators, 4.8% had only one operator, and 6.0% > 6 operators (Fig. c). Fifty-nine of 84 participating centres (70.2%) were located in hospitals designated to treat patients with COVID-19. Of these, 43 (72.9%) reported that during the COVID-19 pandemic at least one operator (median: 1; range: 1–12) was directly involved in the management of patients with COVID-19. In these centres a mean of 71.6% of operators was involved in assistance to patients with COVID-19, and in 21 centres (49% of those involved in the care of patients with COVID-19) all the operators of the electrophysiology team were involved in assistance to patients with COVID-19. The majority of participating centres (54.8%) had implanted from 200 to 500 CIEDs during 2019; 21.4% had implanted from 100 to 200 CIEDs, and the remaining 23.8% < 100 or > 500 (Fig. d). In 34.5% of centres, < 50 ablation procedures had been performed during 2019; in 28.6%, from 100 to 200 ablation procedures; and in 20.2%, > 200 ablation procedures (Fig. e). Impact of COVID-19 pandemic on the activity of participating centres Procedures performed in elective setting The vast majority of participating centres (95.2%) reported a significant reduction in the number of elective pacemaker (PM) implantation procedures during the two months March–April 2020 compared to the corresponding two months (March–April) of 2019. Specifically, 50.0% of centres reported a reduction of > 50%. Only 4.8% of centres reported no significant variations (Fig. a). Similarly, 92.9% of participating centres reported a significant reduction in the number of implantable cardioverter-defibrillator (ICD) implantations for primary prevention in the same period. 
The majority of these (65.5%) reported a reduction > 50%. Only 7.1% of centres reported no significant variations (Fig. b). COVID-19 pandemic seemed to have an impact also on the number of ICD implantations for secondary prevention; in fact, 72.6% of centres reported a significant reduction (of > 50% in 44.0% of centres), while 27.4% reported no significant variations (p < 0.001 compared to ICD implantations for primary prevention, Fig. b). No significant difference was found in the answers between the centres located in regions with higher incidence of COVID-19 cases and the other ones (Figure S1, panel A–C). The majority of participating centres (77.4%) reported a significant reduction in the number of elective ablations performed during the two months March–April 2020 compared to the 2 months March–April 2019 (reduction of > 50% in 65.5% of the centres); 22.6% reported no significant variations (Fig. c). The impact of the pandemic on the number of elective ablations performed was greater in the regions with higher incidence of COVID-19 cases where there was a significantly higher rate of the centres that reported a reduction in the number of procedures of > 50% (81.3 vs. 55.8%; p = 0.017), and a significantly lower rate of the centres that reported no significant variations (9.4 vs. 30.8%; p = 0.023) compared to other centres (Figure S1, panel D). During COVID-19 pandemic, the participating centres globally reported a mean reduction in the number of elective PM implantations, ICD implantations for primary prevention, ICD implantations for secondary preventions, and elective ablations of 52.0, 57.7, 40.9, and 52.4%, respectively. Procedures performed in emergency setting The majority of participating centres (70.0%) reported a significant reduction in the number of CIED implantation procedures performed in emergency setting (including temporary and definitive PM implantations for severe, life-threatening bradyarrhythmias, and ICD implantations for secondary prevention) during COVID-19 pandemic compared to the same period of the previous year; 22.6% of centres reported no significant variations; 10.0% reported a significant increase (of 10–30% in most cases, Fig. d). About half of the participating centres (54.8%) reported a significant reduction in the number of ablation procedures performed in emergency setting (including urgent ablation of electrical storm, or of refractory ventricular or supraventricular tachycardias) during COVID-19 pandemic compared to the same period of the previous year (of > 50% in 32.1% of the centres); 40.5% reported no significant variations; only 4.8% reported a significant increase (Fig. e). No significant difference was found in the answers between the centres located in regions with higher incidence of COVID-19 cases and the other ones (Figure S2, panel A and B). The majority of participating centres (65.5%) reported a significant reduction of cases of acute pharmacological and non-pharmacological treatment of AF in emergency setting (including pharmacological rate or rhythm control, and urgent electrical cardioversion); 28.6% reported no significant variations; only 6.0% reported a significant increase (Fig. f). In the regions with higher incidence of COVID-19 cases a significantly higher rate of centres reported a reduction of > 50% in the number of cases of acute AF treatment in emergency setting compared to other regions (43.8 vs. 11.5%; p < 0.001; Figure S2, panel C). 
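The between-group comparisons reported above (for example, participation rates across macro-regions, or the share of centres reporting a > 50% reduction in higher- versus lower-incidence regions) are standard contingency-table tests of the kind named in the statistical-analysis section. A minimal sketch is given below; the counts are hypothetical placeholders rather than the survey's raw data.

```python
# Sketch of the chi-squared / Fisher's exact comparisons described in the
# statistical-analysis section. All counts here are hypothetical placeholders.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: macro-region; columns: [centres that responded, centres that did not]
participation = np.array([
    [40, 145],   # North (hypothetical)
    [23,  52],   # Centre (hypothetical)
    [21,  91],   # South (hypothetical)
])
chi2, p, dof, expected = chi2_contingency(participation)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.3f}")

# For sparse 2x2 tables, Fisher's exact test is the appropriate choice.
table_2x2 = [[13, 3],    # high-incidence regions: [>50% reduction, not] (hypothetical)
             [29, 23]]   # other regions (hypothetical)
_, p_exact = fisher_exact(table_2x2)
print(f"Fisher's exact p = {p_exact:.3f}")
```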
During the COVID-19 pandemic, the participating centres globally reported a mean reduction in the number of urgent CIED implantations, urgent ablations, and cases requiring acute treatment of AF in emergency setting of 27.9, 29.2, and 30.5%, respectively. Based on the reported procedure volumes, we estimated that, during the two months March–April 2020 in the 84 centres that participated in the survey, globally about 2200 fewer CIEDs had been implanted and about 960 fewer ablations had been performed (in both elective and emergency settings) compared to the same period of the previous year. Remote monitoring of CIEDs Eighty-one of 84 participating centres (96.4%) used remote monitoring (RM) for the follow-up of patients with CIEDs. Almost half of these centres (48.8%) reported no significant variations in the number of patients followed by RM during the two months that we analysed (March–April 2020), while 33.3% reported a significant increase; 17.9% declared to offer RM to all available CIED patients (Fig. a). About half of the centres (53.6%) indicated that during the COVID-19 pandemic they performed in-office evaluation of CIED patients followed by RM only in case of alerts triggered by device/lead malfunction or by clinical events; 21.4% performed in-office evaluation only in case of alerts related to device/lead malfunction; finally, 21.4% declared that during the pandemic no in-office evaluation was performed (Fig. b). Strategies and perspectives for the post-COVID-19 recovery phase The following results refer to the whole group of 104 physicians who responded to the questionnaire. The majority of the interviewed physicians (56.7%) considered, as the main strategy for the post-COVID-19 recovery phase, the adoption of new organizational structures for patient admission in order to minimize the risk of infection. Besides, 33.7% of respondents considered as the main strategy the implementation of short-stay hospitalization for patients undergoing elective procedures (i.e. day-case admission or ordinary admission with a single night stay). Finally, 20.2% of respondents considered the main challenge for the post-COVID-19 phase to be overcoming patients' distrust of going to the hospital. For the majority of the interviewed physicians (73.1%), the procedures that could be performed under day-case admission were CIED replacements, followed by ablations of supraventricular tachycardias (SVTs) (22.1%) and by elective PM implantations (16.3%, Fig. a). Instead, the procedures that could be performed under ordinary admission with a single night stay were elective PM implantations for 60.6% of respondents, elective ICD implantations for 56.7%, ablations of SVTs for 39.4%, and CIED replacements for 32.7% (Fig. b). Concerning the time needed to return to pre-COVID procedure volumes, about a third of respondents (32.7%) thought that it will take at least 6 months; 26.9% that it will take from 1 to 2 years; and 12.5% thought that the pre-COVID-19 procedure volumes will be achieved within 3 months. 
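The shortfall figures quoted above (about 2200 fewer CIED implantations and about 960 fewer ablations) follow from pro-rating annual centre volumes to a two-month window and applying the reported mean reductions. The sketch below shows that arithmetic only in outline: the survey collected volume bands rather than exact counts, so the centre volumes here are hypothetical placeholders and the result is illustrative, not the authors' actual calculation.

```python
# Outline of the shortfall arithmetic: pro-rate each centre's 2019 annual
# volume to two months, then apply the mean reported reduction for that
# activity. Centre volumes below are hypothetical; only the reduction
# fraction echoes a mean reported in the survey.
def two_month_shortfall(annual_volumes, mean_reduction):
    """Estimated procedures lost over a two-month window across centres."""
    two_month_share = 2.0 / 12.0
    return sum(v * two_month_share * mean_reduction for v in annual_volumes)

# Hypothetical annual CIED volumes for five centres (survey centres mostly
# fell in the 100-200 and 200-500 implants/year bands).
cied_volumes = [350, 280, 150, 420, 90]
lost_cieds = two_month_shortfall(cied_volumes, 0.52)  # ~52% mean elective PM reduction
print(round(lost_cieds))
```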
The present survey highlights that the outbreak of the COVID-19 pandemic had a disruptive impact on health care organization that profoundly affected the organization of care in the Hospitals and the Cardiology Divisions of many areas, specifically in Northern Italy, with an important impact on the activities of the teams involved in the management of arrhythmias and electrophysiology. Indeed, around 70% of centres that participated in this survey were located in hospitals directly involved in the treatment of patients with COVID-19, and around 73% reported that during the pandemic at least one physician of the arrhythmia team was directly involved in the management of patients with COVID-19. The extraordinary consequences of the pandemic are even more evident considering that in 49% of the centres involved in the management of COVID-19, all the operators of the electrophysiology team were involved in tackling the emergency situation. The elective procedures related to device implants were markedly reduced in March–April 2020 compared to the same months of the previous year, with the majority, or even the large majority, of centres reporting a greater than 50% reduction in the number of elective PMs or of ICDs implanted for primary prevention of sudden cardiac death. The indication to limit hospital admissions to emergencies or non-deferrable procedures, combined with patients' fear of being infected in the hospital, are all factors that can explain this phenomenon, whose impact on future events is unpredictable. 
The reduction in ICD implants for secondary prevention was less impressive, but these data should be interpreted in a larger perspective, taking into account the increase in out of hospital cardiac arrests observed during the COVID-19 outbreak . It is noteworthy that no significant difference was found in the analysis on device implants between the centres located in regions with higher incidence of COVID-19 cases and the other ones, suggesting that the impact of the pandemic on patient behaviours and organization of care was, in general, independent of the peaks of COVID-19 epidemiological pressure. It is unknown whether the reduction in elective implants for prophylactic ICDs will imply in the future a relative increase in malignant ventricular tachyarrhythmias or cardiac arrests, leading to a rebound increase in ICDs implanted for secondary prevention. Ablation of AF is currently one of the main activities of Italian electrophysiology centres, and is performed with different approaches and techniques both in patients with no underlying heart disease and in selected patients with heart failure . In the present survey around 77% of the centres reported a significant reduction in the number of elective ablations performed in March–April 2020 compared to the previous years, but with some differences related to areas with higher incidence of COVID-19 cases. The elective nature of AF ablation procedures and the re-organization of care related to COVID-19, which obliged many centres to cancel elective procedures, may explain the heterogeneity of this finding. Also the electrophysiological procedures and the interventions performed in an emergency setting were markedly reduced during the observation period. In interpreting these findings, it should be considered that the COVID-19 outbreak markedly changed the pattern of referrals to Emergency Departments (EDs) in Italy, with reductions of up to 50% in accesses to hospitals and EDs unrelated to COVID-19 . A reduction of up to 50% in urgent pacemaker implants for severe bradyarrhythmias was previously reported, in agreement with our national survey, by analyses performed on a single hospital basis or on a regional basis . The reduction in urgent pacemaker implants may imply a lack of prevention of the potentially harmful consequences of bradyarrhythmias and, indeed, a relative increase in the proportion of patients presenting with syncope due to bradyarrhythmias was already observed . It is possible that this trend will increase also in the post-lockdown phase, and it will be interesting to analyse whether it will lead to a rebound in pacemaker implants. In recent years RM of implanted devices has been implemented in clinical practice in a substantial proportion of Italian centres, despite the problems linked to lack of reimbursement or lack of official general plans for large-scale implementation . As compared to patient monitoring with external devices, the use of remote monitoring with implanted devices offers the advantage of easy implementation, simply requiring patient and caregiver education coupled with the availability of dedicated transmitters. Therefore, COVID-19 offered a great chance to enhance the implementation of RM among patients with implanted devices , although to a variable extent from centre to centre. As a matter of fact, more than 50% of the centres participating in this survey reported some increase in the use of RM for the follow-up of their patients. 
However, the extent of RM implementation as a consequence of the limitations of the COVID-19 lockdown actually ranged from an increase in the range of 10–30% of assisted patients, to more than 50%, or even (in around 18% of centres) to a complete shift to a strategy based on offering RM to all available CIED patients. Although it is clear that the pressure of the limitations due to the COVID-19 lockdown offered a great opportunity for a larger implementation of RM, overcoming a series of bureaucratic and administrative barriers, a substantial heterogeneity in the extent of implementation of RM emerges, which should be the object of future re-assessments. The very drastic limitations linked to the period of massive pressure of COVID-19 are highlighted by the relatively important proportion of centres (one in five) reporting that during the pandemic no in-office evaluations were performed. Currently remote programming of implanted devices is not allowed, in view of safety concerns, so the adoption of specific recommendations for device programming according to patient profile remains crucial, thus minimizing troubleshooting during follow-up . In the specific context of the COVID-19 lockdown the potential advantages of RM should not be limited to device checks. As known, RM can be used for the purpose of remote device checks or for monitoring patients' status (heart rhythm, fluid overload, right ventricular pressure, oximetry, etc.), thus with a shift from strictly device-centred follow-up to perspectives centred on the patient (and patient-device interactions) . The organization of disease management of heart failure through RM in patients with implanted devices is complex, requires an interplay between competence on devices and heart failure management and, therefore, should be an object of promotion for the post-COVID-19 recovery phase. The assessment of the quality of care delivered through RM , with appropriate involvement of the patients and the caregivers , will become of primary importance for outcome improvement. Anyway, as stressed in official documents of the major international associations in the field of arrhythmia management, the crisis precipitated by the pandemic has surely catalysed the adoption of RM across many specialties, and heart rhythm professionals are in the front line for full adoption of this technological and clinical advancement even beyond the emergency of the COVID-19 pandemic, making RM the true standard of care in this field . AF is a very common arrhythmia and its acute management carries a high burden of workload for EDs and Cardiology Clinics . In view of its epidemiological profile, AF affects subjects in the range of age at highest risk of adverse outcomes if infected by SARS-CoV-2, and the caution in avoiding admissions to hospital may explain the important reduction in acute pharmacological and non-pharmacological treatments applied for AF in the emergency setting during the study period, as reported in this survey. Since appropriate prescription of oral anticoagulants in patients at risk of stroke is a major determinant of long-term outcome , it will be necessary in the near future to establish even stricter connections between hospital and out of hospital care, for a re-assessment of patients who presented with AF during these months with regard to clinical evaluation and appropriateness of treatment, for ensuring continuity of care. 
It will also be interesting to assess to what extent untreated or undiscovered AF occurring during the lockdown will result in major consequences, such as syncope, heart failure, or stroke/systemic embolism . It is surprising that the reduction in activities performed by Arrhythmia services during March–April 2020 also involved ablations performed in emergency setting (including urgent ablation of electrical storm, or of refractory VT or SVT), which require high competence and usually cannot be deferred . The patients' tendency to avoid hospitalization that characterized the peak phase of the COVID-19 pandemic could have resulted in an increased number of cardiovascular deaths occurring at home, but this is difficult to assess at present. The implications of the gap in care implied by the reduction in emergency ablations and electrophysiological interventions will require further assessment in the future and should suggest a reorganization of care, with networks able to guarantee these procedures, following an appropriate referral, even in case of national emergencies. One of the key questions after the outbreak of COVID-19 is how to re-organize care in the post-COVID-19 recovery phase, and our survey indicates what Italian physicians in the field consider necessary. According to our survey, there is an absolute need to adopt new organizational models for patient admission in order to minimize the risk of infection. A short-stay hospitalization for patients undergoing elective procedures (i.e. day-case admission or ordinary admission with a single night stay) appears to be a suitable strategy, although up to now it has been adopted with substantial heterogeneity, according to administrative reasons and reimbursement policies . According to the majority of respondents, not only device replacements but also ablations for SVTs and elective PM implants could be performed with a short hospital stay, with the advantage of improving the efficiency of the system. This perspective will require increased compliance with prospective registries on electrophysiological procedures , with our Scientific Association providing specific reports on complication rates and outcomes associated with the different procedures programmed in the field of interventional electrophysiology. This will also be the basis for working with policymakers and regulators for planning audits targeted to verify the quality of care, in a virtuous circle where daily practice provides continuous feedback on health care system performance . This will be the appropriate response to the challenging battle against COVID-19 and will make it possible to improve the performance of our health care system, with the premise of achieving full confidence of the citizens in the overall appropriateness and safety of our care processes. Study limitations Our survey has some limitations since it was not based on a precise computation of activities and procedures in every specific centre; however, this is a method that allows rapid feedback and was chosen to obtain a general view of the COVID-19 pandemic in Italy at a short time from its onset. Only 84 out of 372 arrhythmia centres operating in Italy took part in the survey (22.6% of the Italian centres). For this reason, our findings should be interpreted with caution, as they may not accurately reflect the impact of the COVID-19 pandemic on the activities of all Italian arrhythmia centres. Seventeen of the 18 questions of the questionnaire were multiple-choice questions. 
This type of questionnaire is time-efficient, and responses are easy to code and interpret. On the other hand, surveys based on multiple-choice questions have some limitations. Respondents are required to choose a response that may not exactly reflect their answer. In addition, the arbitrary design of questionnaires and multiple-choice questions with pre-conceived categories represents a biased and overly simple view of reality. The COVID-19 pandemic disrupted the entire organization of health care, particularly hospital care, and had a massive impact on the activities related to arrhythmia management and electrophysiology in Italy in March–April 2020. Our survey, focused on real-life activities in this field, showed that in hospitals with wards specifically dedicated to the care of patients with COVID-19, physicians usually involved in the field of arrhythmias and electrophysiology were frequently moved to take care of patients infected by SARS-CoV-2. In this period a reduction of > 50% in the number of implants of cardiac electronic devices was reported, involving both pacemakers and ICDs, with an important reduction not only in ICD implants for primary prevention of sudden death, but also in ICD implants for secondary prevention. The number of ablation procedures was markedly reduced, and the reduction also affected emergency procedures, especially for centres directly involved in the care of COVID-19. In this context, a wider use of RM among patients with implanted devices was achieved, although to a variable extent from centre to centre. It is clear that for the post-COVID-19 recovery phase there is an absolute need to adopt new organizational models for patient admission in order to minimize the risk of infection, and short-stay hospitalization for patients undergoing elective procedures (i.e. day-case admission or ordinary admission with a single night stay) appears to be a suitable strategy. Increased compliance with prospective registries on electrophysiological procedures will allow continuous monitoring of the type and number of interventions needed in this new phase, with potential differences with regard to historical series, and will also allow a check of centres' performance in specific procedures, with an enormous potential for quality improvement. 
Conceptualization, GB and RPR.; methodology, GB and PP; software, PP; validation, GB, RPR and GB; formal analysis, PP; investigation, GB; resources, GB; data curation, GB and PP; writing—original draft preparation, GB, PP; writing—review and editing and visualization, FG, MB, GZ, CL, PN, MA, GB, GBF, ML, AD, RPR and RDP. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 310 kb)
Davida Teller Award Lecture 2017: What can be learned from natural behavior?
91945fcc-2b63-4266-8f86-ce2275ac976f
5895074
Ophthalmology[mh]
Research in vision has always been strongly influenced by the technology available at the time. Until the 1970s, the primary device for presenting visual stimuli was the Maxwellian view optical system, which allowed precise control of stimulus size, duration, color, and luminance of patches of light. However, with only these basic parameters to control, the kinds of questions that could be asked were somewhat restricted. Vision research at the time therefore focused on early visual mechanisms, in step with the breakthroughs in retinal neurophysiology, with recording from photoreceptors and retinal ganglion cells. Maxwellian view systems required that the head be stabilized by a bite bar in order to control the retinal illuminance. Eye-tracking devices also required that the head be stabilized, and this constraint persists to a large extent in modern eye-tracking experiments, where the head is frequently stabilized with a forehead rest. The drawback of having the head fixed in space is that the repertoire of behaviors that the subject can engage in is limited. Vision is designed to function in the context of a constantly moving observer, executing goal-directed actions. While this has long been recognized, for example, in the context of the ecologically focused perception and action tradition, designing experiments to investigate vision in the context of active behavior has been quite challenging. Experimental convenience has always been a strong influence, and as display technology has become more sophisticated and eye and body monitoring in unconstrained observers has become easier, so too has the range of convenient experiments broadened. Head-mounted eye trackers have become lighter and less expensive, with higher spatial and temporal resolution. Head-mounted displays for virtual reality are now cheap and comfortable, eye tracking within virtual-reality displays has vastly improved, and realistic environments are easy to generate. Body-movement monitoring has also improved. These technical developments lead to a variety of exciting possibilities. However, it is important to analyze just what difference it makes to investigate vision in the context of ongoing behavior, given its attendant complexities and the reduction in experimental control. What insights can be gained from doing this? I will review some of the work in my lab and others over the last two decades to gain perspective on this question. I will focus in particular on situations involving ongoing natural behavior, extending over periods of several seconds or more, where there is only limited experimental intervention. This means that we are looking at sequences of actions chosen by the subject, in contrast to the traditional trial structure controlled by the experimenter. This means that we can examine the factors that influence the transitions from one action to the next, something which is harder to get at in more controlled paradigms. Natural behavior also allows us to ask just what information is available to vision and what computations or tasks need to be performed within a given context. Again, these questions are important but hard to answer without looking at natural behavior. Of necessity, there are many large gaps in this review, and a lot of important work is not covered. A more extensive review can be found in Hayhoe ( ). I first consider how to simplify the understanding of complex behavior by breaking it down into specific task components. 
I will focus on gaze control, since it is a central aspect of active vision. In natural behavior, gaze is used to acquire information about the world to choose and control actions. Looking at behaviors extended in time over periods of seconds or more, different sets of questions emerge, and the behavioral context provides clues to the answers. Consider ordinary behavior such as walking across the street, illustrated in . To accomplish a simple task like this, a person must identify a goal to determine the direction of heading, perhaps establish that the light is green, avoid tripping over the curb, locate other pedestrians or vehicles and their direction of heading so as to avoid bumping into them, and so on. Each of these particular goals requires some visual evaluation of the state of the world in order to make an appropriate action choice in the moment. We can think of this as a sequence of decisions about where to look and what direction to walk. How are these decisions made? What is controlling the gaze changes? Why does gaze move from one location to another so that the walker gets the visual information she needs at the right time? This example is more challenging in some ways than a task context such as making tea or sandwiches (Land, Mennie, & Rusted, ; Hayhoe, Shrivastava, Mruczek, & Pelz, ), where there is presumably a remembered task sequence that can guide the next action. Thus, when one has put peanut butter on the knife, the next action would be to look at the bread, then guide the knife to the bread, and so on. These tasks clearly reveal the extent to which fixations in a scene are tightly linked to momentary behavioral goals, in both space and time. During performance of tasks like making tea or a sandwich, over 95% of the fixations can be accounted for by the task (Land et al., ; Land & Hayhoe, ; Hayhoe et al., ). Tasks like walking across the street are more difficult, because the scene is less predictable and there is no obvious predetermined sequence of gaze locations. The default strategy has been to consider how properties of the visual image might attract gaze, leading to a large body of work on saliency (Borji, Itti, Liu, Musialski, & Wonka, ). While this will account for some fraction of gaze changes, it is not known how much of the visual information accrued in the course of everyday experience is the result of looking at salient stimuli, because much of the important information may not be particularly salient, and salient information might not be important (Tatler & Land, ). The approach taken here, instead, is to consider what information is needed in a task such as crossing an intersection and how gaze targets are chosen to gather that information. What determines task-driven changes in gaze location? Analysis of visually guided behavior in this way focuses on how selective attention is sequentially controlled to gather behaviorally relevant visual information. The first issue to consider is what the particular subtasks in natural behavior are and what information is required for them. This is something that we typically make assumptions about, but it requires a natural behavioral context to answer. Until we look at natural behavior, we do not know the properties of the stimulus milieu that the visual system must deal with. 
For example, optic flow is typically presented as a constant velocity pattern, but recent measurements of the stimulus array during locomotion reveal complex time-varying optic-flow patterns with rhythmic accelerations and decelerations linked to gait (Matthis, Muller, Bonnen, & Hayhoe, ). In a similar vein, W. Sprague, Cooper, Tosic, and Banks ( ) have shown that natural-image statistics depend on the convergence distances humans choose. We need to examine both the stimulus and the linked behavior in order to be confident about what visual information is required for different aspects of locomotion, such as control of heading direction, foot placement, way finding, and so on (e.g., Fajen & Warren, ). Many of these questions remain unresolved. The second issue is which of several tasks to choose. That is, should a walker execute a visual search for an obstacle or check the traffic light at a particular moment? This question has been investigated more directly, and I will examine several of the factors influencing task choice in what follows. An important factor that influences choice of gaze location is the value of the information for the current behavioral goal. It has been demonstrated that primary rewards, in the form of money or points in humans or juice in monkeys, influence eye movements in a variety of experiments (Navalpakkam, Koch, Rangel, & Perona, ; Gottlieb, ; Schütz, Trommershäuser, & Gegenfurtner, ). It remains to be established how to make the link between the primary rewards used in experimental paradigms and the secondary rewards that operate in natural behavior, where eye movements are for the purpose of acquiring information (Tatler & Land, ; Hoppe & Rothkopf, ; Tong, Zohar, & Hayhoe, ). In principle, the neural reward machinery provides an evaluation mechanism by which gaze shifts can ultimately lead to primary reward, and thus potentially allows us to understand the role that gaze patterns play in achieving behavioral goals. A general consensus is that this accounting is done by a secondary reward estimate, and a huge amount of research implicates dopamine in this role. It is now well established that cells in many of the regions involved in saccade target selection and generation are sensitive to expectation of reward, in addition to coding the movement itself (e.g., Platt & Glimcher, ; Sugrue, Corrado, & Newsome, ; Gottlieb, ; Yasuda, Yamamoto, & Hikosaka, ). There is also good evidence that the neural reward machinery acts in ways predicted by reinforcement-learning models (Schultz, ; Lee, Seo, & Jung, ). The challenge is to understand just how the rewards modulate momentary action selection in the context of ongoing behavior. One factor that is probably a pervasive influence on action choices is energetic cost. Matthis, Barton, and Fajen ( ) controlled the visibility of future footholds and showed that walkers need to have visual information from two steps ahead to take advantage of passive dynamics of the body, which acts like an inverted pendulum. Information from two or more steps ahead avoids braking, and so allows optimal energetic efficiency (Matthis, Barton, & Fajen, ). Further observations in natural outdoor walking have shown that walkers naturally choose to fixate locations that are two steps ahead, allowing minimization of energetic cost. 
When the terrain becomes rough, walkers also spend time looking three steps ahead, a strategy that may reflect the need to balance energetic costs with other needs such as choosing stable footholds (Matthis, Yates, & Hayhoe, ). Earlier work also attests to the importance of energetic costs. Ballard, Hayhoe, and Pelz ( ) investigated a scenario where subjects copied a model made up of eight colored Duplo blocks, as shown in . Typically, subjects make frequent looks back to the model pattern in the course of copying it. However, if the model pattern was located farther away from the location where the copy was made, separated so that a head movement was required in order to look at the model, subjects made fewer fixations on the model. This suggests that fixations on the model were more costly when a combined eye-and-head movement was required, so now memory was used more. Thus, the choice to fixate the model depended on the cost of the fixation. Subsequent work by Hardiess, Gillner, and Mallot ( ) and Solman and Kingstone ( ) has found similar results. There are other intrinsic costs that are revealed in natural behavior. For example, Jovancevic and Hayhoe ( ) measured gaze distribution while subjects walked around a room in the presence of other walkers. Some of the walkers behaved in an unexpected and potentially hazardous manner, by briefly heading toward the subject on a collision course before reverting to a normal avoidance path. Subjects rapidly modified their gaze-allocation strategies, and the probability of fixations on these pedestrians was increased. Perhaps more importantly, the latencies and durations of these fixations also changed, as shown in , so that fixations on the veering walkers became longer and occurred sooner after the walker appeared in the field of view. This tightly orchestrated aspect of gaze distribution suggests an underlying adaptive gaze-control mechanism that learns the statistics of the environment and allocates gaze in an optimal manner as determined by potential costs. The point of all these examples is that the momentary costs of actions factor into sensorimotor decisions that are being made on a timescale of tens of milliseconds. Thus, whether to step to the right or left of an obstacle, how to allocate attention, and exactly when to make the movement are flexibly adjusted to satisfy global task constraints. Rothkopf and Ballard ( ) and Tong, Zhang, Johnson, Ballard, and Hayhoe ( ) have shown that it is possible to recover an estimate of the intrinsic reward value of particular actions such as avoiding obstacles in a walking task. Thus, it seems likely that subjects learn stable values for the costs of particular actions like walking and obstacle avoidance, and that these subjective values factor into momentary action decisions. The unexpectedly low variability between subjects in many natural behaviors may be the result of a common set of costs and optimization criteria. By looking at natural behavior that extends over timescales of seconds, we can gain insight into the factors that affect momentary action choices, what the task structure might be, and what the subjective values of different actions are. The natural world is complex, dynamic, and unpredictable, so there are many sources of uncertainty about its current state. Consider the previously described example of crossing the street, illustrated in . At any moment there are a number of behavioral needs competing for gaze or attention. 
Suppose a walker is currently looking at the location of an obstacle in order to gather information to execute an avoidance action. The previous fixation might have been in the direction of the goal, to control heading. This information will be in the peripheral retina with poor spatial resolution, so goal position with respect to the body will probably be stored in working memory, which will decay over time and will also need to be updated as the observer moves in the scene, introducing additional uncertainty. Other relevant information acquired previously will also need to be held in working memory and will decay over time. The choice of the next gaze location will be determined by these various uncertainties. The need to include uncertainty to explain gaze choices stems from the fact that the optimal action choice is unclear if the state is uncertain (N. Sprague, Ballard, & Robinson, ). Thus, the probability of a change in gaze to update state increases as uncertainty increases (Sullivan, Johnson, Rothkopf, Ballard, & Hayhoe, ; Johnson, Sullivan, Hayhoe, & Ballard, ; Tong et al., ). Examination of precisely when a gaze change occurs can be revealing about the underlying mechanisms. In an exploration of how gaze probability is modulated by uncertainty, Hoppe and Rothkopf ( ) devised an experiment where subjects had to detect an event occurring at a variable time in either of two locations. The event could not be detected unless the subject was fixating the location, and the subjects learned to adjust the timing of the saccades between the locations in an optimal manner. Subjects readily learned the temporal regularities of the events and traded off event-detection rate with the behavioral costs of carrying out eye movements. Thus, subjects learn the temporal properties of uncertain environmental events and use these estimates to determine the precise moment to make a gaze change. While growth of uncertainty about task-relevant information appears to initiate a gaze change, there is also evidence for the complementary claim, that other tasks rely on memory estimates when the associated uncertainty is low. This has been shown in experiments by Droll, Hayhoe, Triesch, and Sullivan ( ) and Droll and Hayhoe ( ), illustrated in . In those experiments, subjects picked up virtual blocks on the basis of a feature such as color, and then sorted them on the basis of either the same feature (color) or a different feature (e.g., size). On some trials, the color was changed during the saccade after the block was picked up, as illustrated in the figure. When subjects were cued to place the block on the left or right depending on its color, they frequently acted as if the block was the original color that it was when they picked it up. This information was presumably held in visual working memory, and it was this information—not the actual color of the block on the retina—that was used for sorting. This occurred more frequently in conditions that encouraged subjects to use working memory, and less frequently in conditions when subjects made more frequent refixations of the blocks. Trials when subjects picked up blocks on the basis of their color and also sorted them on the basis of color on every trial are labeled Predictable One-feature trials in , and on these trials subjects used memory for sorting on over 90% of trials. 
In the trials labeled Unpredictable Two-feature, subjects always picked up the block on the basis of a feature such as color, but sorted on the basis of any of four features, and did not know which feature would be needed until they looked at the placement cue after they had picked up the block. Consequently, there was a heavier memory load in this condition and subjects frequently waited until after pickup to look at the block in hand to get the relevant information, so in this case they sorted on the basis of memory on only 21% of trials. Given that the increased memory load will also increase uncertainty about the block features, it appears that subjects use memory representations when they have low uncertainty about the state of the information, but use gaze to update state when they are more uncertain. This flexible, context-dependent use of memory versus immediately available information is an important feature of natural visually guided behavior. To summarize: The need to update information about task-relevant, potentially rewarding state is important in determining the location and timing of gaze changes, although it is not the only factor. There is some evidence to suggest that working-memory representations are used if they are reliable enough, thus obviating the need for a gaze change. The trade-off between memory and gaze deserves further exploration. Another insight that is made possible by investigating natural behavior is the role of memory in action decisions and control. Action decisions can be made on the basis of current sensory data, a memory representation, or some weighted combination of these. In natural behavior, subjects are immersed in a relatively stable environment where they have the opportunity to develop long-term memory representations, and the use of memory in targeting eye and body movements may allow more energetically efficient strategies. Thus, natural behavior introduces constraints that are not evident in standard paradigms. As an individual moves around in the environment, it is necessary to store information about spatial layout. One need for this information arises when orienting to regions outside the field of view. Land et al. ( ) noted instances when subjects made a number of very large gaze shifts to locations outside the field of view in a tea-making task. These gaze shifts involved a combination of eye, head, and body movements, and were remarkably accurate. When objects are within the field of view, subjects have the choice of searching for a target on the basis of its visual features, so may not need to use memory. However, it appears that memory is indeed typically used in this instance. Experiments by Epelboim et al. ( ) provide evidence that saccade targeting is facilitated by memory in tasks such as tapping a sequence of lights in known positions. In a task where subjects built a toy model, Aivar et al. ( ) showed that saccades were sometimes made to the remembered locations of targets that had subsequently been moved to new locations, revealing that subjects often planned saccades on the basis of a memory representation even in the presence of conflicting visual information, and then had to make corrective movements. The most likely reason for choosing memory-based targeting over visual targeting is that it allows planning ahead, and this presumably leads to more efficient movements. 
For example, eye–head–hand coordination patterns to known target locations appear to be designed so that all the effectors arrive at about the same time, which is presumably optimal in terms of executing the next action (Hayhoe, ). Another advantage of planning movements based on spatial memory is that it allows more efficient use of body movements. In a real-world search task, Foulsham, Chapman, Nasiopoulos, and Kingstone ( ) found that 60%–80% of the search time was taken up by head movements, so there is an advantage to minimizing the cost of these movements. Whole-body movements can also be minimized using spatial memory. An example can be seen in , where subjects searched for targets in a virtual apartment. After they searched for the target on three separate occasions, it was moved to another location. The figure shows the head and eye directed at the old target location even before the subject entered the room. The data revealed that subjects looked at the old location on 58% of trials (Li, Aivar, Tong, & Hayhoe, ). In addition, subjects rapidly encoded the global structure of the space and reduced the total path walked by eliminating regions where targets were unlikely to be, confining search to more probable regions. Memory of the large-scale spatial structure allows more energetically efficient movements, and this may be an important factor that shapes memory for large-scale environments. Another aspect of natural behavior is that it provides different sensorimotor information and may change the nature of the memory structures. Chrastil and Warren ( ) argue that idiothetic information deriving from efferent motor commands and sensory reafference generated by observer movements aids the development of spatial memory, and Draschkow and Võ ( ) found that active object manipulation influenced memory. Thus, spatial memory is likely to be a fundamental component of movement targeting, as it allows more efficient use of attentional resources and can be shared between different effectors, allowing more efficient movement patterns. Examination of natural behavior immediately makes apparent another factor, namely the central importance of prediction. Body movements are slow, so any action decisions need to be appropriate for the state of the scene hundreds of milliseconds in the future. It is commonly accepted that the proprioceptive consequences of a planned movement are predicted ahead of time using stored internal models of the body's dynamics (Wolpert, Miall, & Kawato, ; Mulliken & Andersen, ), and the comparison of actual and predicted somatosensory feedback is a critical component of the control of movement. Indeed, when somatosensory feedback is severely compromised by somatosensory loss, the consequences for movement can be devastating (Cole & Paillard, ). Perhaps not surprisingly, it is in the context of movements that prediction is most apparent, since movements generate a time-varying visual input. One clear-cut demonstration of prediction is in the context of visual stability, where the need to predict the consequences of one's own movements is readily apparent. These predictions appear to be revealed in the remapping of visual receptive fields before a saccade (Duhamel, Colby, & Goldberg, ; Melcher & Colby, ). Predictive remapping occurs not only in lateral intraparietal cells, but also in superior colliculus, frontal eye fields, and area V3. 
Evidence indicates that predictive remapping is mediated by a corollary discharge signal originating in the superior colliculus and the mediodorsal nucleus of the thalamus. Cicchini, Binda, Burr, and Morrone ( ) present evidence that this predictive remapping is part of a mechanism for visual stability that relates the pre- and postsaccadic images of a stimulus. Other evidence for prediction also comes from the oculomotor system. Both smooth pursuit and saccadic eye movements reveal prediction of the future visual stimulus in a variety of experimental paradigms (Madelain & Krauzlis, ; Orban de Xivry, Missal, & Lefèvre, ; Ferrera & Barborica, ; Kowler, 2011; Spering, Schütz, Braun, & Gegenfurtner, ). Predictive eye movements are also robust and pervasive in natural behavior, where trajectories are complex and predictions are presumably more difficult. Athletes playing cricket, table tennis, and squash make predictive eye movements to the ball's future location (Land & Furneaux, ; Land & McLeod, ; Hayhoe et al., 2012). Diaz, Cooper, Rothkopf, and Hayhoe ( ) investigated a more controlled setting using a virtual racquetball environment, where unskilled subjects intercepted a virtual ball that bounced prior to interception. Subjects made a saccade ahead of the ball, just before it bounced, to a location on the future ball trajectory. Gaze was held in this location during the bounce and until the ball passed within 1°–2° of the fixated location about 170 ms after the bounce. The location of the predictive saccade was dependent on the ball's elasticity as well as its velocity. The accuracy of the predictions both in time and in space, despite variation in ball properties, suggests that subjects rely at least in part on their history of experience with balls in order to target the eye movements to the ball's future location. The evidence for prediction in the visual system is not entirely clear. Zhao and Warren ( ) argue that actions are planned on the basis of current state using a mapping that has been found as a result of learning to be effective for future state. It may be necessary to take into account a variety of factors in order to understand any one situation. Belousov, Neumann, Rothkopf, and Peters ( ) have shown that predictive and reactive strategies may be optimal and operate in different regimes depending on how much time the observer has, the sensory latencies, and noise both in the observation and in the stored model. Within the framework of optimal probabilistic control, they show that the optimal policy depends on perceptual and internal prediction uncertainties, time to ground contact, and perceptual latency, and switches between generating reactive and predictive behavior based on the ratio of system to observation noise and the ratio between perceptual latency and task duration. Recent approaches to sensorimotor decisions formalize the process within statistical decision theory (Maloney & Zhang, ; Wolpert & Landy, ). This provides a useful framework for understanding natural visually guided behavior and shows how the various factors so far discussed relate to one another. Wolpert and Landy ( ) have reviewed a large body of work over the last 10–15 years within this framework, which is illustrated in . To make a good decision, the actor needs to evaluate the task-relevant state, and this requires both sensory data and a prior, as shown in the figure. 
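The core of this framework can be written compactly. As a minimal sketch in standard Bayesian decision-theoretic notation (the symbols below are generic and are not the notation of the original figure), the state estimate and the resulting action choice are

$$
p(s \mid z) = \frac{p(z \mid s)\,p(s)}{\sum_{s'} p(z \mid s')\,p(s')}, \qquad
a^{*} = \arg\max_{a} \sum_{s} R(a, s)\, p(s \mid z),
$$

where $s$ is the task-relevant world state, $z$ the current sensory data, $p(s)$ the prior, $R(a, s)$ the gain (or negative cost) of taking action $a$ when the state is $s$, and $a^{*}$ the action with the greatest expected gain.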
Thus, the probability of a particular world state depends on the likelihood of obtaining that sensory data, given a particular state, weighted by the prior probability of that state. These priors can be thought of as instantiations of memory representations, as already described. In order to understand how a particular goal affects behavior, we need to address the costs and benefits of the action in bringing about the goal. Sensorimotor decisions in the context of behavior reveal the pervasive effects of these costs and benefits in momentary decisions of where to look or walk. The framework is not strictly applicable for describing sequences of decisions in behavior, where we also need to consider the transitions from one decision to the next, leading to the reinforcement-learning framework. For simplicity this has been represented as the dotted arrows in the figure indicating where to look next, and I have discussed how uncertainty and the need to update state information factor into that decision. However, the decision-theoretic framework provides a useful structure for conceptualizing at least some aspects of natural behavior. The work reviewed here shows that investigation of natural behavior has contributed a number of insights to our understanding of visual guidance of actions. Natural behavior forces consideration of exactly what information is being gathered by the visual system from moment to moment. First, it allows a more accurate specification of exactly what the spatiotemporal properties of the visual stimulus are, as experienced by the observer in the context of active behavior. In addition, looking at behavior in situ, it becomes clear that knowing the immediate behavioral goals is critical, as it provides the rationale for momentary action decisions. Knowledge of the current behavioral context allows us to understand how various factors are integrated and how they might be modulated in different contexts. Analysis of natural behavior allows an evaluation of the importance of particular factors in behavior. For example, while it has long been accepted that memory can guide movements, it is only in a behavioral context that we can evaluate how important a factor memory actually is. Similarly, the critical role of costs and benefits emerges as a fundamentally important factor. The commonality of the stimulus milieu that humans experience, and the well-defined optimality criteria of much natural behavior, means that the behavioral measures are unexpectedly stable and similar between different individuals. This stability points to the lawfulness of the underlying principles. Finally, in contrast to standard paradigms—where the focus is on events during a single experimental trial—natural behavior focuses attention on behavior over timescales of seconds or minutes, so new questions emerge, such as what factors control the transition from one gaze location to the next within a larger-scale behavioral goal. Thus, while there are many daunting challenges in analysis of natural behavior, it allows the opportunity for exceptional insights.
C-type lectin receptor DCIR contributes to hippocampal injury in acute neurotropic virus infection
b24e4a62-57b0-42d2-bb85-57a24a231317
8664856
Anatomy[mh]
Neurotropic viruses target the brain and can cause asymptomatic or acute and fatal diseases , . Moreover, cognitive deficits and memory impairment, suggestive of hippocampal dysfunction, as well as an increased risk of developing epilepsy are often observed in patients surviving acute viral encephalitis – . Theiler’s murine encephalomyelitis virus (TMEV) is a neurotropic picornavirus that preferentially infects the hippocampus – . While TMEV persists in the central nervous system (CNS) of SJL mice, C57BL/6 mice eliminate the virus following acute polioencephalitis. C57BL/6 mice mount early robust antiviral immune responses, but are prone to develop hippocampal injury with neuronal loss during the acute infection phase , . TMEV infection was shown to increase the susceptibility to develop seizures in C57BL/6 mice , . Moreover, neuronal damage is associated with impaired cognition and spatial memory, making TMEV infection a valuable model for brain damage in neurotropic virus infections – . Innate immune responses during the initial phase significantly contribute to the development of antiviral T cell responses and virus elimination in TMEV-infected mice – . However, CNS-infiltrating macrophages and activated microglia also account for hippocampal degeneration following TMEV infection by releasing pro-inflammatory factors – . Surveillance of virus infection and initiation of innate immune responses are mediated by pattern recognition receptors (PRRs) on professional antigen presenting cells (APCs), such as dendritic cells (DCs). C-type lectin receptors (CLRs) are PRRs that recognise a variety of glycan structures present on pathogens, including viruses, damage-associated molecular patterns and self-glycoproteins – . The C-type lectin receptor Dendritic cell immunoreceptor (DCIR, human gene: CLEC4A, murine gene: Clec4a2) contains an immunoreceptor tyrosine-based inhibition motif (ITIM), which delivers inhibitory signals predominantly in DCs – . Thus, DCIR is a negative regulator of intracellular signalling of APCs, including microglia, and contributes to immune homeostasis in immune mediated disorders – . However, DCIR signalling seems to play an ambiguous role in infectious diseases , , . For instance, while DCIR signalling limits immunopathology in Chikungunya virus-infected mice, the receptor is thought to trigger brain pathology in cerebral malaria models , . The role of DCIR signalling in neurotropic virus infections and its impact on neuropathology have not yet been investigated. The aim of the present study was to investigate DCIR-mediated effects on the balance of immune responses, virus load and neuropathology in Theiler’s murine encephalomyelitis (TME). Genetic ablation of DCIR indicates that receptor signalling contributes to neuroinflammation and brain injury in C57BL/6 mice following TMEV infection. DCIR −/− mice show an improved hippocampal integrity and are able to control neurotropic virus infection more efficiently. In vitro studies reveal that DCIR deficiency enhanced T cell activation in a dendritic cell/T cell co-culture system. DCIR −/− mice show preservation of hippocampal integrity and reduced viral load in the brain The effect of DCIR deficiency on hippocampal and neuronal integrity following TMEV infection was determined (Fig. ). 
Two-way ANOVA yielded a significant effect of DCIR deficiency on neuronal integrity determined by histology (HE score; p = 0.0008) and immunohistochemistry (NeuN + area/mm 2 , p = 0.0006), as well as on TMEV load (TMEV + cells/mm 2 , p = 0.0257). Subsequent Mann–Whitney U tests at different time points post infection revealed diminished hippocampal damage in infected DCIR −/− mice, with a significant difference compared to WT mice at 14 dpi ( p = 0.005, Fig. a–c). Similarly, a significantly reduced loss of NeuN + neurons in the hippocampus of DCIR −/− animals compared to WT controls was found at 14 dpi ( p = 0.002, Fig. d–f). Although increased numbers of β-APP + axons (damaged axons) were found in hippocampal regions with severe neuronal damage and loss mainly in TMEV-infected WT mice, group differences did not reach the level of significance (Supplementary Fig. a). Likewise, an elevation of GFAP + astrocytes (astrogliosis) was present within the hippocampus of WT animals compared to DCIR −/− mice at 14 dpi, but differences did not reach the level of significance (Supplementary Fig. b). No hippocampal inflammation or damage was found in non-infected, age-matched WT and DCIR −/− animals. In addition, non-infected groups showed similar numbers of NeuN + neurons and GFAP + astrocytes in the hippocampus (Supplementary Fig. ). Viral quantification within the brain was performed by RT-qPCR and TMEV-specific immunohistochemistry. At 7 dpi, TMEV RNA concentration was significantly decreased in DCIR −/− mice compared to WT mice ( p = 0.047, Fig. a). Immunohistochemistry revealed a preferential infection of hippocampal neurons of infected mice in both groups at 7 dpi. Similar to the diminished viral RNA load, reduced numbers of TMEV-infected cells were observed in the brain of DCIR −/− mice at 7 dpi, but differences did not reach the level of significance ( p = 0.12, Fig. b). Both WT and DCIR −/− mice showed reduced viral RNA levels and TMEV antigen at 14 dpi, indicating viral elimination (Fig. a,b). TMEV RNA concentration in the brain did not differ significantly between both groups at 14 dpi ( p = 0.87, Fig. b). However, the number of TMEV + cells within the hippocampus was significantly reduced at 14 dpi in DCIR −/− mice compared to WT mice, indicating a reduced residual infection following the acute infection phase in DCIR-deficient animals ( p = 0.005, Fig. b–d). No TMEV was detected in non-infected WT mice and DCIR −/− mice by immunohistochemistry and RT-qPCR (data not shown). Data show that preserved hippocampal morphology in mice lacking DCIR is associated with an enhanced early virus elimination from the brain, indicating a refined induction of protective responses in DCIR −/− mice following TMEV infection. Weekly clinical examination, body weight recordings, Racine score evaluation and RotaRod performance test revealed no symptoms in either group, indicating a subclinical acute infection (Supplementary Fig. ). DCIR deficiency leads to diminished brain sequestration of effector immune cells Immunohistochemistry revealed a reduced infiltration of CD3 + T cells ( p = 0.007, Fig. a) and CD45R + B cells ( p = 0.005, Fig. b) in the hippocampus of DCIR −/− mice compared to WT controls at 14 dpi. At both time points, the hippocampus of DCIR −/− mice contained similar numbers of activated CD107b + macrophages/microglia in comparison to WT controls (Fig. c). Brain-infiltrating GrB + effector cells decreased in DCIR −/− mice at 14 dpi ( p = 0.05, Fig. d). 
Moreover, reduced numbers of CD4 + and CD8 + T cells were found in the hippocampus of DCIR −/− mice at 14 dpi (CD4 + T cells: p = 0.002, Fig. e, CD8 + T cells: p = 0.011, Fig. f), likely related to decreased virus-triggered immune responses in receptor-deficient animals. Comparison of CD4 + and CD8 + T cell proportions revealed a slightly reduced ratio of CD4 + to CD8 + T cells in the brain of DCIR −/− mice at 14 dpi ( p = 0.00045, Fig. g), showing a relative increase of cytotoxic T cells in animals lacking DCIR. Statistical analyses (Pearson’s correlation coefficient R) revealed negative correlations between neuronal integrity of the hippocampus (NeuN + area/mm 2 ) and the amount of CD107b + , arginase 1 + and CD45R + cells at 7 and 14 dpi. Foxp3 + cells and GFAP + astrocytes were negatively correlated with neuronal integrity at 14 dpi (Supplemental Table ). Non-infected control animals of both groups showed no leukocyte infiltration in the hippocampus. Collectively, these findings suggest that the reduced viral brain load in DCIR −/− mice leads to an accelerated termination of brain inflammatory responses. Reduced cerebral cytokine expression in DCIR −/− mice is associated with preserved hippocampal integrity At 7 dpi, mRNA levels of IL-1α ( p = 0.047, Fig. a), TNF-α ( p = 0.039, Fig. b) and IFN-β ( p = 0.009, Fig. c) were significantly reduced in TMEV-infected DCIR −/− animals compared to the WT group. Data show a reduced pro-inflammatory response to virus infection due to receptor deficiency during the early acute polioencephalitis phase (7 dpi). In addition to mRNA levels of IL-1α ( p = 0.018, Fig. a), TNF-α ( p = 0.022, Fig. b), and IFN-β ( p = 0.034, Fig. c), a significantly lower IFN-γ transcription was detected in DCIR −/− animals compared to WT mice also at 14 dpi ( p = 0.027, Fig. d). The mRNA levels of the pro-inflammatory cytokines IL-1β, IL-2, IL-6 and IL-23, as well as the anti-inflammatory cytokines IL-4, IL-5 and TGF-β1 did not differ significantly between both groups (Supplementary Fig. ). Non-infected DCIR −/− mice showed a significantly lower baseline mRNA expression of IL-5 compared to WT mice ( p = 0.021, Supplementary Fig. ). Other cytokine mRNA levels of non-infected controls showed no differences between DCIR −/− and WT animals (Supplementary Fig. ). Statistical analyses (Pearson’s correlation coefficient R) revealed negative correlations between neuronal integrity of the hippocampus (NeuN + area/mm 2 ) and mRNA expression levels of IFN-β and IL-1β, and a positive correlation with IL-2 at 7 dpi. IFN-γ was negatively correlated with neuronal integrity at 14 dpi (Supplemental Table ). Reduced pro-inflammatory cytokine expression in the brain at the later stage of acute polioencephalitis (14 dpi) in DCIR −/− mice is likely a direct consequence of reduced viral burden and accelerated termination of neuroinflammation in comparison to WT mice. Diminished induction of immunomodulatory responses in DCIR −/− mice following neurotropic virus infection The suppressive cytokine interleukin-10, secreted by regulatory T cells (Treg) and M2-type macrophages/microglia, is thought to exhibit neuroprotective effects in infectious disorders . On the other hand, Treg and M2-type myeloid cells may dampen effective antiviral responses and thus promote deleterious effects on tissue integrity – . 
In order to test whether neuronal preservation in the hippocampus in DCIR −/− mice is accompanied by reduced virus load or attributed to immunomodulatory mechanisms, Foxp3 + Treg, arginase 1 + M2-type macrophages/microglia and the key immunomodulatory cytokine IL-10 were quantified. At 14 dpi, numbers of Foxp3 + regulatory T cells ( p = 0.018, Fig. a) and Foxp3 mRNA copy numbers ( p = 0.009, Fig. b), determined by immunohistochemistry and RT-qPCR, respectively, were significantly increased in brain samples of mice with intact DCIR signalling (WT mice) compared to DCIR −/− mice upon infection. Similarly, an increase of arginase 1 + cells was found in the hippocampus of WT mice compared to DCIR −/− mice at both investigated time points with significant differences at 14 dpi ( p = 0.006, Fig. c). Moreover, significantly increased transcription of IL-10 was detected in WT controls compared to DCIR −/− mice at 14 dpi ( p = 0.034, Fig. d). In non-infected WT and DCIR −/− mice cerebral IL-10 mRNA expression was not detectable. Results suggest that TMEV infection elicits compensatory immune pathways, mediated by Treg and M2-type macrophages/microglia in the brain of DCIR intact C57BL/6 mice. Consequently, the observed neuroprotective effect of DCIR deficiency is likely not mediated by classical immunomodulatory mechanisms at the infection site, but rather due to improved virus elimination and timely onset of peripheral protective immune responses. In addition, diminished induction of immunomodulatory and suppressive mechanisms, including decreased numbers of arginase 1 + M2-type macrophages/microglia and Treg, may promote virus control in DCIR deficient animals. DCIR deficiency enhances peripheral T cell responses following neurotropic virus infection The accelerated TMEV elimination observed in the brain of DCIR −/− mice suggests an enhanced antiviral immune response. Priming of naïve T cells in lymphoid organs during the early phase of TME was shown to be crucial for robust antiviral responses , . Therefore, splenic cytokine mRNA expression was quantified by RT-qPCR and splenic T cell responses were analysed by flow cytometry. At 7 dpi, splenic IFN-γ mRNA levels were significantly increased in TMEV-infected DCIR −/− animals compared to the WT group ( p = 0.049, Fig. a), indicating an enhanced pro-inflammatory immune response. At 14 dpi, IL-1α ( p = 0.006, Fig. b) and IL-1β ( p = 0.001, Fig. c) mRNA expression was reduced in TMEV-infected DCIR −/− animals compared to WT mice, likely related to the reduced viral burden and accelerated termination of neuroinflammation in DCIR deficient mice. Splenic IL-2, IL-4, IL-5, IL-6, IL-10, IL-23, IFN-β, TGFβ1, and TNF-α mRNA levels did not differ significantly between both groups (Supplementary Fig. ). Flow cytometric analysis of TMEV-infected groups revealed an increased fraction of CD8 + T cells in the spleen of DCIR −/− mice compared to WT group at 7 dpi ( p = 0.016, Fig. f), while CD4 + T cell frequency remained unchanged at this time point (Fig. e). Accordingly, a significant shift of the CD4 + /CD8 + T cell ratio with dominance of cytotoxic T cells was observed at 7 dpi ( p = 0.009, Fig. g) as well as at 14 dpi ( p = 0.028, Fig. g). Subsequently, the portion of splenic CD4 + T cells decreased in DCIR −/− mice during the disease course, resulting in a significant difference between the groups at 14 dpi ( p = 0.028, Fig. e). 
Further characterization of splenic T cell subsets revealed a significantly higher proportion of activated CD4 + T cells, displayed by higher fractions of CD4 + CD62L low T cells in DCIR −/− mice compared to WT mice at 7 dpi ( p = 0.016) and 14 dpi ( p = 0.047, Fig. d,j). Moreover, the level of activated CD4 + CD44 + T cells was elevated at both time points in DCIR −/− animals compared to WT mice with statistically significant difference between the groups at 7 dpi ( p = 0.047, Fig. i). Similarly, CD8 + CD44 + cell populations were significantly increased in DCIR −/− animals compared to WT mice at 7 dpi ( p = 0.009) and 14 dpi ( p = 0.016, Fig. l). Splenic CD4 + - and CD8 + T cell subpopulations expressing CD25 as well as CD8 + CD62L low T lymphocytes did not significantly differ between groups at any time point (Fig. h,k,m). Flow cytometry of non-infected control mice showed a slight difference for the portion of CD8 + T cells and the CD4 + /CD8 + T cell ratio comparing the spleens of WT and DCIR −/− mice. However, no major differences in surface expression of T cell activation markers were observed between both non-infected control groups (Supplementary Fig. ). Thus, flow cytometry and cytokine expression analysis revealed an enhanced peripheral T cell activation and an early proportional shift towards cytotoxic CD8 + T cell responses in DCIR −/− mice during viral encephalitis. Identification of potential influencing factors for hippocampal damage using regression analyses Regression analyses were performed to identify factors that correlate with hippocampal damage following TMEV infection. Simple regression models confirmed that neuronal integrity of the hippocampus (NeuN + area/mm 2 ) was significantly associated with the TMEV load. In addition, hippocampal damage was significantly associated with the amount of CD107b + , CD3 + , CD45R + Foxp3 + , arginase 1 + , GFAP + , and granzyme B + cells in the hippocampus, as well as with IFN-β mRNA expression in the brain. The amount of TMEV + cells in the hippocampus remains the only significant parameter in the multiple regression model, pointing at a high collinearity among explanatory variables (Supplementary Table ). Reduced neuroinflammation in DCIR −/− mice following neurotropic virus infection correlates with diminished responses of antigen presenting cells At 14 dpi, significantly reduced CD11c mRNA copy numbers were detected within the brains ( p = 0.017, Fig. a) and spleens of DCIR −/− mice ( p = 0.035, Fig. e). Moreover, CD80 mRNA expression levels were significantly reduced within the brains ( p = 0.049, Fig. b) and spleens ( p = 0.034, Fig. f) of DCIR −/− mice in comparison to WT animals at 7 dpi. CD86 mRNA transcript levels were significantly diminished within the brains of DCIR −/− mice in comparison to WT mice at both time points (7 dpi: p = 0.049, 14 dpi: p = 0.001, Fig. c). Within the spleen, CD86 mRNA transcript levels showed no differences between DCIR −/− and WT animals (Fig. g). MHC-I mRNA expression was significantly reduced in spleens of DCIR deficient mice at 7 dpi ( p = 0.049, Fig. h), whereas cerebral MHC-I mRNA quantities were significantly decreased in DCIR −/− mice compared to WT animals at 14 dpi ( p = 0.003, Fig. d). Reduced expression of CD11c and co-stimulatory molecules in the brains and spleens of DCIR −/− mice may be linked to the accelerated resolution of TMEV infection and termination of neuroinflammatory response in comparison to WT mice. 
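For readers who want a concrete picture of the regression analysis described above, the following sketch fits simple (one predictor at a time) and multiple ordinary-least-squares models of hippocampal neuronal integrity against viral load and immune-cell counts, and computes variance inflation factors to expose the kind of collinearity that can leave only a single predictor significant in the joint model. The file name and column names are hypothetical placeholders; this is not the authors' analysis code.

```python
# Illustrative sketch only: hypothetical file and column names, not the
# authors' actual data or analysis scripts.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("hippocampus_measurements.csv")  # one row per animal (assumed)
outcome = "neun_area_per_mm2"
predictors = ["tmev_cells_per_mm2", "cd3_cells_per_mm2",
              "cd45r_cells_per_mm2", "foxp3_cells_per_mm2", "ifnb_mrna"]

# Simple regressions: each predictor modelled separately against the outcome.
for p in predictors:
    fit = sm.OLS(df[outcome], sm.add_constant(df[[p]])).fit()
    print(f"{p}: slope p-value = {fit.pvalues[p]:.4f}")

# Multiple regression: all predictors entered together.
X = sm.add_constant(df[predictors])
multi = sm.OLS(df[outcome], X).fit()
print(multi.summary())

# Variance inflation factors: large values flag collinearity among predictors,
# which can leave only one variable (e.g. TMEV load) significant in the joint
# model even though each is associated with the outcome on its own.
for i, p in enumerate(predictors):
    print(f"{p}: VIF = {variance_inflation_factor(X.values, i + 1):.2f}")
```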
Lack of DCIR in bone marrow-derived dendritic cells causes an increased CD8 + T cell response against Theiler’s murine encephalomyelitis virus in vitro MHC-I-restricted CD8 + cytotoxic T cells are important for TMEV elimination in C57BL/6 mice . To determine the impact of DCIR deficiency on early T cell responses upon TMEV infection in vitro , antigen presentation assays using WT and DCIR −/− MEGs or BMDCs were performed. T cells were isolated from OT-I TCR-transgenic mice, which specifically recognise the OVA-peptide presented via the MHC-I molecule H2-K b . T cells were co-cultured with MEGs or BMDCs, previously exposed to TMEV-OVA – . BMDCs were used for the in vitro stimulation of OT-I T cells since BMDCs from WT and DCIR −/− mice had been previously compared in a global and unbiased manner through genome-wide transcriptome analysis and thus represent a well characterized source of APCs . To analyse CD8 + T cell activation, cytokine release was measured by ELISA, and expression of the early T cell activation marker CD69 was measured by flow cytometry. Microglia, as part of the glial cell mixtures, represent the CNS’ local APC population. Incubating MEGs with TMEV-OVA, however, did not result in a difference between WT or DCIR −/− microglia-mediated CD8 + T cell response (Fig. a). Combined with the marginal levels of released pro-inflammatory cytokines (data not shown), these findings suggest that the potential of microglia to process and present antigens is limited. As an additional source of APCs, BMDCs were used in a co-culture system to stimulate antigen-specific CD8 + T cells. To avoid alterations in TMEV antigenicity and modification of (potential) DCIR ligands, live virus was used for incubation with BMDCs. However, to exclude a productive infection of BMDCs leading to classical antigen presentation via MHC-I molecules, viral RNA loads in TMEV DA-exposed BMDCs as well as viral titers in the supernatant were determined (Supplementary Fig. ). While an initial increase in viral RNA load in BMDCs between 2 and 6 h was observed (Supplementary Fig. a), viral titers in the supernatant decreased continuously from 2 to 22 h after TMEV DA incubation (Supplementary Fig. 7b). Thus, the initial increase of TMEV RNA in BMDCs may be mediated by initial TMEV replication, but it most likely does not reflect a productive infection of BMDCs, but rather an increased TMEV internalisation. In addition, incubation with live TMEV did not lead to a significant decrease in MEG and BMDC viability compared to OVA- or mock-stimulated samples and the vast majority of the cells remained viable (Supplementary Fig. ), further supporting that BMDCs present viral antigens to CD8 + T cells. Upon TMEV DA incubation, BMDCs were activated, but no difference between WT and DCIR −/− BMDCs was detected (Supplementary Fig. ). Similarly, the activation status of BMDCs did not differ between WT and DCIR −/− BMDCs following co-cultivation with OT-I T cells (Supplementary Fig. ). However, TMEV-OVA stimulation of DCIR −/− BMDCs led to an increased expression of CD69 by CD8 + T cells compared to WT BMDCs (Fig. b). Further, the release of IL-2, IFN-γ and GrB by CD8 + T cells was elevated if co-cultured with DCIR −/− BMDCs (Fig. c–e). These results indicate that DCIR deficiency in BMDCs impacts subsequent CD8 + T cell activation and T cell effector functions in this BMDC/T cell co-culture system. Possibly, DCIR in DCs may balance type I and II IFN signaling directly influencing T cell priming , . 
Additionally, cross-talk of DCIR with other immune receptors, such as Toll-like receptors, is conceivable, which can affect the quality of induced T cell responses even without alterations in the expression of co-stimulatory markers CD80 and CD86, as it was shown for human DCIR – . However, the mechanism by which the differential CD8 + T cell activation by DCIR −/− BMDCs shown here is mediated, remains to be determined in future studies. 
+ T cell response against Theiler’s murine encephalomyelitis virus in vitro MHC-I-restricted CD8 + cytotoxic T cells are important for TMEV elimination in C57BL/6 mice . To determine the impact of DCIR deficiency on early T cell responses upon TMEV infection in vitro , antigen presentation assays using WT and DCIR −/− MEGs or BMDCs were performed. T cells were isolated from OT-I TCR-transgenic mice, which specifically recognise the OVA-peptide presented via the MHC-I molecule H2-K b . T cells were co-cultured with MEGs or BMDCs, previously exposed to TMEV-OVA – . BMDCs were used for the in vitro stimulation of OT-I T cells since BMDCs from WT and DCIR −/− mice had been previously compared in a global and unbiased manner through genome-wide transcriptome analysis and thus represent a well characterized source of APCs . To analyse CD8 + T cell activation, cytokine release was measured by ELISA, and expression of the early T cell activation marker CD69 was measured by flow cytometry. Microglia, as part of the glial cell mixtures, represent the CNS’ local APC population. Incubating MEGs with TMEV-OVA, however, did not result in a difference between WT or DCIR −/− microglia-mediated CD8 + T cell response (Fig. a). Combined with the marginal levels of released pro-inflammatory cytokines (data not shown), these findings suggest that the potential of microglia to process and present antigens is limited. As an additional source of APCs, BMDCs were used in a co-culture system to stimulate antigen-specific CD8 + T cells. To avoid alterations in TMEV antigenicity and modification of (potential) DCIR ligands, live virus was used for incubation with BMDCs. However, to exclude a productive infection of BMDCs leading to classical antigen presentation via MHC-I molecules, viral RNA loads in TMEV DA-exposed BMDCs as well as viral titers in the supernatant were determined (Supplementary Fig. ). While an initial increase in viral RNA load in BMDCs between 2 and 6 h was observed (Supplementary Fig. a), viral titers in the supernatant decreased continuously from 2 to 22 h after TMEV DA incubation (Supplementary Fig. 7b). Thus, the initial increase of TMEV RNA in BMDCs may be mediated by initial TMEV replication, but it most likely does not reflect a productive infection of BMDCs, but rather an increased TMEV internalisation. In addition, incubation with live TMEV did not lead to a significant decrease in MEG and BMDC viability compared to OVA- or mock-stimulated samples and the vast majority of the cells remained viable (Supplementary Fig. ), further supporting that BMDCs present viral antigens to CD8 + T cells. Upon TMEV DA incubation, BMDCs were activated, but no difference between WT and DCIR −/− BMDCs was detected (Supplementary Fig. ). Similarly, the activation status of BMDCs did not differ between WT and DCIR −/− BMDCs following co-cultivation with OT-I T cells (Supplementary Fig. ). However, TMEV-OVA stimulation of DCIR −/− BMDCs led to an increased expression of CD69 by CD8 + T cells compared to WT BMDCs (Fig. b). Further, the release of IL-2, IFN-γ and GrB by CD8 + T cells was elevated if co-cultured with DCIR −/− BMDCs (Fig. c–e). These results indicate that DCIR deficiency in BMDCs impacts subsequent CD8 + T cell activation and T cell effector functions in this BMDC/T cell co-culture system. Possibly, DCIR in DCs may balance type I and II IFN signaling directly influencing T cell priming , . 
Additionally, cross-talk of DCIR with other immune receptors, such as Toll-like receptors, is conceivable, which can affect the quality of induced T cell responses even without alterations in the expression of co-stimulatory markers CD80 and CD86, as it was shown for human DCIR – . However, the mechanism by which the differential CD8 + T cell activation by DCIR −/− BMDCs shown here is mediated, remains to be determined in future studies. This study highlights the role of DCIR in neuropathology of C57BL/6 mice following acute TMEV infection. Genetic ablation of DCIR appears to exert a supporting effect on viral clearance from the CNS and ameliorates hippocampal damage following virus infection. While susceptible mouse strains (e.g. SJL mice) show an inefficient antiviral immunity and persistent TMEV infection in the CNS, C57BL/6 mice develop vigorous TMEV-specific responses during acute infection . The ability of C57BL/6 mice to eliminate TMEV is caused by robust MHC class I-restricted antiviral CD8 + T cell responses , , , – . As shown in the present study, the lack of DCIR contributes to a more effective priming of peripheral T cells with increased CD44 and reduced CD62L expression by CD4 + T cells together with an increased IFN-γ expression in the spleen during the early phase of TMEV infection. In general, DCIR −/− mice show an age-related increase of CD4 + CD44 high and CD4 + CD62 low T cells by expanding DC populations in lymphoid organs, demonstrating that DCIR deficiency predisposes to effector-memory T cell development. CD4 + T cells are required for protective immunity in TMEV infection, since CD4 deficiency has been shown to cause virus persistence in C57BL/6 mice. CD4 + helper T cells support antiviral CD8 + T cell responses by cytokine release (e.g. IL-2) and by improving the ability of DCs to prime cytotoxic T cell responses (DC licensing) , , . An increased frequency of splenic CD8 + T cells together with an upregulation of the activation marker CD44 was found in infected DCIR −/− mice, suggesting an enhancement of cytotoxic CD8 + T cell responses. The skewed ratio of CD4 + to CD8 + T cells observed in DCIR −/− mice indicates an early dominance of peripheral cytotoxic responses. Increased frequencies of circulating CD8 + T cells were shown to improve antiviral immunity and account for TMEV elimination in C57BL/6 mice , . In agreement with the present findings, enhanced T cell responses in DCIR −/− mice control experimental mycobacteria infection better than WT controls. Of note, DCs of DCIR −/− mice exhibit several transcriptional changes that promote Th1 immunity also under non-infectious conditions . Noteworthy, besides protecting from viral infection, T cell immunity has the ability to contribute to acute brain pathology following TMEV infection. Virus-specific CD8 + T cells target infected neurons of the hippocampus in acutely infected C57BL/6 mice. MHC class I-restricted cytotoxicity towards TMEV epitopes contributes to neuronal loss and brain atrophy , , . Moreover, cytotoxicity boosted by TMEV peptides leads to fatal CNS inflammation in infected C57BL/6 mice, demonstrating the difficulty of balancing immune responses in neurotropic virus infection , . As observed in experimental autoimmune encephalomyelitis, rheumatoid arthritis models, and experimental colitis, DCIR −/− mice are prone to develop autoimmunity and T cell-mediated immunopathology, respectively , , . 
Strikingly, despite enhanced peripheral cytotoxic responses in the present study, no exacerbated brain injury was observed in DCIR −/− mice, but on the contrary, a reduced hippocampal damage following TMEV infection. DCIR deficiency seems to fine-tune protective immune responses without evoking additional virus-mediated immunopathology in the TME model. The underlying mechanisms remain speculative, but might be associated with diminished pro-inflammatory cytokine responses found in the brain of DCIR −/− mice. Reduced expression of IFN-β and TNF-α in the brain of DCIR −/− mice during the early phase of polioencephalitis (7 dpi) indicates a diminished cytokine response at the infection site. Particularly, increased IFN-β mRNA levels were significantly associated with hippocampal damage in TMEV-infected mice as determined by correlation analyses. IFN-β (type I interferon) expression in the brain is driven by TMEV infection and involved in the induction of innate and adaptive immune responses . Robust antiviral immunity trigged by type I interferons accounts for viral elimination in C57BL/6 mice but also elicit neuronal damage following TMEV infection . Thus, reduced IFN-β expression might have contributed to diminished T cell sequestration in the brain and decreased hippocampal damage in DCIR −/− mice during advanced infection (14 dpi). Similarly, TNF-α is a cytokine produced by activated microglia and macrophages, which initiate protective responses against certain viral infections, including TMEV infection , . However, TNF-α also displays cytotoxic effects and contributes to hippocampal damage in C57BL/6 mice following TMEV infection , , . In addition, TNF-α has been shown to cause excitotoxicity and neuronal damage in murine HIV encephalitis models . Thus, in addition to the accelerated virus elimination, the alleviated brain cytokine response at the infection site might also contribute to the neuroprotective effect observed in DCIR −/− mice. Despite differences of hippocampal integrity and cytokine expression profiles, no obvious clinical changes were observed between DCIR −/− mice and WT mice (subclinical infection). More targeted diagnostic methods such as video/EEG monitoring and behavioral tests (e.g. Morris water maze) are needed to detect subtle clinical changes and fully discover the functional relevance CNS alteration in receptor deficient animals in future studies. In addition to an enhanced CD8 + T cell activation in the periphery during the early infection phase, an altered immune environment at the site of infection, including reduced infiltrations of Foxp3 + Treg and arginase 1 + M2-type cells in DCIR −/− animals at 14 dpi might have influenced TMEV control. Consistent with this, a decreased expression of genes specific for M2-type cells (including arginase 1) can be found in DCIR −/− mice following mycobacteria infection . Moreover, arginase 1 + myeloid cells have been shown to exert suppressive effects on antiviral immunity . For instance, ablation of arginase 1 in macrophages reduces the viral load and ameliorates tissue integrity after experimental Ross River virus infection of mice . Similarly, Treg are able to dampen antiviral responses during TMEV infection . The interplay between innate and adaptive immunity is mediated by APCs, such as macrophages, microglia and DCs, which have the ability to recognize pathogens and induce effector T cell responses , , . 
Microglia are CNS-resident APCs and play an important role in TMEV-mediated hippocampal damage and seizure development . However, within the present study, in vitro TMEV exposure of DCIR −/− and WT MEGs did not show differences in CD8 + T cell activation. Although there was a slight OVA- and TMEV-OVA-mediated increase of CD69 observed in the MEG/T cell co-cultivation assay, cytokine levels were not elevated. Thus, in comparison to DCs, the in vitro potential of adult microglia to perform APC function and present specific antigens is apparently limited, as previously shown , . DCIR is expressed on all DC subsets and exerts mainly inhibitory effects on immune responses via its intracellular ITIM , , , , , . The present study shows an enhanced activation of CD8 + T cells when DCIR −/− BMDCs were used to prime CD8 + T cells. In addition, the release of IL-2 by activated T cells was elevated upon co-cultivation with DCIR −/− BMDCs. DCIR deficiency results also in an increased production of IFN-γ by lymphocytes which was also observed in the present study . Likewise, Chikungunya virus infection of DCIR −/− mice causes an elevation of IFN-γ in vivo . However, in contrast to the TME model, intact DCIR signalling in experimental Chikungunya virus infection contributes to protection against virus-induced pathology of the joint, demonstrating that the effect of DCIR signalling on disease progression is clearly context dependent and differs between pathogens and the primarily affected organ in infectious disorders . Conclusively, DCIR deficiency seems to support antiviral immune responses of C57BL/6 mice during the initial phase of TMEV infection and to reduce virus-induced neuropathology. Previous studies highlight the potential of DCIR for cell specific targeting and immune modulation , , . Thus, this CLR represents a potential target for intervention strategies to selectively enhance protective immunity in neurotropic virus infection. Animals DCIR −/− mice (C57BL/6-Clec4a2 tm1.1Cfg /Mmucd; RRID:MMRRC_031932-UCD) were obtained from the National Institutes of Health-sponsored Mutant Mouse Resource & Research Center (MMRRC) National System . The mouse strain was backcrossed on C57BL/6 background over more than ten generations . DCIR −/− and respective C57BL/6 mice (WT) were used for the infection experiment. All mice were housed in the animal facility of the University of Veterinary Medicine (Hannover, Germany) in individually ventilated cages under controlled conditions (12 h light/12 h dark cycle, 22–24 °C, humidity 50–60%) with permanent access to water and standard rodent feed. Animal experiments were conducted in accordance with the German law for animal protection and the Directive 2010/63/EU of the European Parliament and of the Council on the protection of animals used for scientific purposes and the ARRIVE guidelines . The study was approved and authorized by the Niedersächsisches Landesamt für Verbraucherschutz und Lebensmittelsicherheit (LAVES), Oldenburg, Germany (permission number 33.19-42502-04-16/2225, date of approval: October 7, 2016). Virus and cell lines For intracerebral injection, the Daniels strain of TMEV (TMEV DA) was used . TMEV DA and live ovalbumin (OVA) peptide-expressing TMEV DA XhoI-OVA8 (TMEV-OVA) were utilized for in vitro bone marrow-derived dendritic cell (BMDC) and adult microglia-enriched glial cell mixtures (MEG)/T cell co-cultivation assays. 
TMEV-OVA was generated by integrating the coding sequence corresponding to the amino acid sequence OVA (251–267) of chicken egg albumin. Flanking sequences were included to assure natural processing of the immunodominant H-2 Kb restricted epitope OVA( 257–264 ; SIINFEKL). Previous use of this virus has demonstrated viral replication in the CNS of intracranial infected C57BL/6 mice . Furthermore this live virus vector has demonstrated robust generation of H-2 Kb restricted CD8 + T cell responses to the OVA (257–264) antigen after infection and in tumor models – . Virus strains were cultivated and passaged in BHK-21 cells and plaque assays were performed using L-cells for virus titration – .Virus isolation was performed by freezing and thawing. Plaque assays were performed as independent duplicates. Virus titres were determined by calculating the plaque forming units per ml (PFU/ml) as previously described , . Experimental design Five-week old female DCIR −/− and WT mice were anaesthetised with medetomidine (1 mg/kg, Domitor) and ketamine (100 mg/kg) and inoculated into the right cerebral hemisphere with TMEV DA in a total volume of 20 µl DMEM (Biochrom GmbH, Berlin, Germany) supplemented with 2% FCS (PAA Laboratories GmbH, Pasching, Austria) and 50 µg/kg gentamicin (Sigma Aldrich Chemie GmbH, Taufkirchen, Germany) as described . Weekly clinical examination included body weight recordings as well as clinical scorings with evaluation of “posture and outer appearance”, “behaviour and activity” and “gait” . Additionally, a 5 point scale scoring system according to Racine (Racine score) was applied for recording motor seizures . A RotaRod (TSE Systems GmbH, Bad Homburg, Germany) performance test for motor function and coordination was carried out weekly . At 7 and 14 days post infection (dpi) mice were anaesthetised as described above and euthanised with an overdose of medetomidine (1 mg/kg) and ketamine (200 mg/kg). The rostral part of the left cerebrum (contralateral to injection site) was formalin fixed and paraffin embedded (FFPE), and caudal part of the left cerebrum was snap frozen and stored at − 80 °C , . Spleens were taken for flow cytometry and parts of splenic tissue were snap frozen and stored at − 80 °C. Serial sections (2–3 µm thickness) of FFPE coronal brain sections at the hippocampal level (Bregma − 1.46 to − 1.82) were used for histology (hematoxylin and eosin staining) and immunohistochemistry, respectively , . In addition, non-infected age matched controls were used to determine baseline differences of splenic and hippocampal immune cell compositions as well as cerebral cytokine and transcription factor expression profiles between DCIR −/− and WT mice. Histologic scoring of hippocampal lesions Hippocampal damage was evaluated using a semiquantitative scoring system, assessing the integrity of the pyramidal neurons: score 0 = no obvious damage; score 1 = loss involving < 10% of neurons; score 2 = loss involving < 20% of neurons; score 3 = loss involving 20–50% of neurons; score 4 = loss involving > 50% of neurons . Immunohistochemistry Immunohistochemistry was used to detect macrophages/microglia (CD107b, arginase 1), T cells (CD3, CD4, CD8), B cells (CD45R), granzyme B (GrB), regulatory T cells (Foxp3), neurons (neuronal nuclei, NeuN), axons (β-amyloid precursor protein, β-APP), astrocytes (glial fibrillary acidic protein, GFAP), and TMEV capsid protein VP1 as described – . Used antibodies and staining procedures are listed in Supplementary Table . 
Primary antibodies were diluted in PBS including 1% bovine serum albumin (BSA). In brief, endogenous peroxidase was inhibited by 0.5% H 2 O 2 in ethanol for 30 min. For antigen retrieval (CD107b, arginase 1, CD3, CD45R, Foxp3, NeuN, GrB, β-APP), slides were incubated in citrate buffer within a microwave oven for 20 min. Blocking of unspecific bindings was conducted with either goat serum (TMEV, arginase 1, CD3, NeuN, GrB, β-APP, GFAP) or rabbit serum (CD107b, Foxp3, CD4, CD8). Following, primary antibodies were incubated over night at 4 °C. Biotinylated goat anti-rabbit IgG antibody was used as secondary antibody for TMEV-, arginase 1-, CD3-, and GFAP-specific immunohistochemistry. For CD107b-, CD4-, CD8-, and Foxp3-specific staining, a biotinylated rabbit anti-rat IgG antibody was utilised, and a biotinylated goat anti-mouse IgG antibody was used for NeuN- and β-APP-specific staining. Slides were incubated with the avidin–biotin-peroxidase complex. For visualisation, slides were incubated with 3.3-diaminobenzidine-tetrahydrochloride in PBS containing 0.125% H 2 O 2 and counterstained with Mayer’s hematoxylin. Hippocampi were digitalised and measured by using the bright field mode of the fluorescence microscope BZ-9000 BIOREVO (HS All-in-one fluorescence microscope, Keyence Corporation, Osaka, Japan) and BZ-II Analyzer software (BZ-H2AE, Keyence Corporation, Osaka, Japan). For CD3-, CD107b-, TMEV-, NeuN- and GFAP-specific immunohistochemistry, the proportion of immunolabelled area within the hippocampus was quantified by densitometric analysis. Moreover, for quantifying TMEV-infected cells, arginase 1 + macrophages/microglia, CD45R + B cells, Foxp3 + regulatory T cells, CD4 + T helper cells, CD8 + cytotoxic T cells, and GrB + effector cells, absolute numbers of labelled cells within the hippocampus were counted (cells/mm 2 ). In addition to densitometric analysis, neuronal loss (NeuN) was graded semiquantitavely as described above, too , . Evaluation of axonal damage (β-APP) was performed by a semiquantitative scoring system . Axonal damage in the hippocampus was graded as followed: score 0 = no β-APP + axons; score 1 = 1–25 β-APP + axons; score 2 = 26–50 β-APP + axons; score 3 = 51–75 β-APP + axons; score 4 = 76–100 β-APP + axons; score 5 = more than 100 β-APP + damaged axons . Ribonucleic acid isolation and reverse transcription For RNA isolation, snap frozen tissue of the cerebrum was homogenized in 1 ml QIAzol lysis reagent (Qiagen, Hilden, Germany) with Omni Tip PCR Tissue Homogenizing Kit (Süd-Laborbedarf GmbH, Gauting, Germany). Subsequently, homogenates were treated with RNeasy Lipid tissue Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer’s protocol. Likewise, RNA isolation of snap frozen splenic tissue has been performed using RNeasy Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer’s protocol. The purity and amount of RNA was measured with a Multiskan GO microplate spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) using a µDrop plate (Thermo Fisher Scientific, Waltham, MA, USA) and SkanIt software (version 3.2.1.4 RE, Thermo Fisher Scientific, Waltham, MA, USA) . Equal amounts of RNA were translated into cDNA using Omniscript Reverse Transcription Kit (Qiagen, Hilden, Germany), RNaseOUT Recombinant Ribonuclease Inhibitor (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA) and random primers (Promega Corporation, Madison, WI, USA). 
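The quantitative immunohistochemistry read-outs described above combine densitometry (fraction of immunolabelled hippocampal area), absolute cell counts per area (cells/mm²) and semiquantitative scores. A short R sketch of these conversions follows; all raw counts and areas are invented for illustration, and only the β-APP grading thresholds are taken from the scheme given above.

```r
# Hypothetical raw measurements for one hippocampus (not study data)
hippocampus_area_mm2 <- 1.8     # area of the evaluated region of interest
foxp3_cells          <- 23      # labelled cells counted within that area
labelled_area_mm2    <- 0.27    # immunolabelled area, e.g. NeuN staining

foxp3_per_mm2     <- foxp3_cells / hippocampus_area_mm2   # cells/mm2
labelled_fraction <- labelled_area_mm2 / hippocampus_area_mm2

# Semiquantitative beta-APP score (0-5) from the number of damaged axons,
# following the grading scheme described above
bapp_score <- function(n_axons) {
  cut(n_axons, breaks = c(-Inf, 0, 25, 50, 75, 100, Inf), labels = 0:5)
}
bapp_score(c(0, 12, 63, 140))   # returns scores 0, 1, 3 and 5
```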
Reverse transcription-quantitative polymerase chain reaction (RT-qPCR) To determine viral load (TMEV RNA) and mRNA expression levels of CD11c, CD80, CD86, Foxp3, interleukin (IL)-1α, IL-1β, IL-2, IL-4, IL-5, IL-6, IL-10, IL-23, interferon (IFN)-β, IFN-γ, MHC-I, tumor necrosis factor (TNF)-α, transforming growth factor (TGF)-β1, and the three housekeeping genes, β-actin, glyceraldehyde 3-phosphate dehydrogenase (GAPDH), and hypoxanthine–guanine phosphoribosyltransferase (HPRT), RT-qPCR was carried out using the Mx3005P Multiplex Quantitative PCR System (Agilent Technologies Deutschland GmbH, Waldbronn, Germany) and Brilliant III Ultra-Fast SYBR Green QPCR Mastermix (Agilent Technologies Deutschland GmbH, Waldbronn, Germany). Primer details are listed in Supplementary Table . Quantification of copy numbers was achieved by parallel, duplicate amplification of tenfold serial dilution of standards ranging from 10 8 to 10 2 copies/µl. Melting curve analysis proved specificity of each reaction . The geNorm software (version 3.4) was utilised for normalisation of qPCR data , . Flow cytometry of murine splenocytes Spleens were removed and immediately flushed mechanically with a syringe and 1 × PBS to a single cell suspension. Subsequently erythrocytes were lysed using RBC lysis buffer (10% 100 mM Tris–HCl [Tris-(hydroxymethyl)-aminomethanhydrochloride], 90% 160 mM NH 4 Cl [ammonium chloride], Carl Roth, Karlsruhe, Germany). Afterwards, cells were incubated with rat-anti CD16/32 monoclonal antibody (1:100) to block the Fc gamma receptor and therefore to avoid unspecific binding. Cell solutions were stained with following monoclonal anti-mouse antibodies: CD4-FITC, CD62L-PE-Cy7, CD4-PerCP-Cy5.5, CD25-FITC, CD44-APC, CD8a-PE, CD8a-APC and CD19-FITC. Details for all flow cytometry antibodies are listed in Supplementary Table . For fixation, cells were incubated with 1% paraformaldehyde (PFA, Carl Roth, Karlsruhe, Germany). Flow cytometry was performed at the Attune NxT cytometer (Thermo Fisher Scientific, Waltham, MA, USA). Data analysis was conducted with FlowJo software (version 10, FloJo LLC, Ashland, OR, USA) . Isolation of an adult microglia-enriched glial cell mixture (MEG) To isolate MEGs, a previously used method was modified . Brains of WT and DCIR −/− mice were dissected and stored temporarily in HBSS (Sigma Aldrich, St. Louis, MO, USA) containing 15 mM HEPES (Carl Roth, Karlsruhe, Germany) and 0.5% glucose (Carl Roth, Karlsruhe, Germany). For dissociation, brains were squashed with the top end of a syringe in a 6-well plate containing a digestion cocktail (HBSS, 1 mg/ml collagenase D, 5 U/ml DNase I; Roche, Basel, Switzerland). After 10 min of incubation at 37 °C , brains were gently dissociated manually. Afterwards, a 40% Percoll centrifugation (10 min, 350× g , 18 °C; GE Healthcare, Chicago, IL, USA) and erythrocyte lysis were performed. To check the percentage of microglia within the glial cell mixture, cells were blocked with anti-mouse CD16/32, stained with anti-mouse CD11b-PE and anti-mouse CD45-APC and fixed in 1% PFA. Flow cytometry was performed using an Attune NxT Flow Cytometer. Data analysis was conducted with FlowJo software . The purity of microglia (CD11b + /CD45 low+ ) within MEG used for co-culture experiments ranged between 40 to 60% for both WT and DCIR −/− cell suspensions. 
To check the percentage of microglia within the glial cell mixture, cells were blocked with anti-mouse CD16/32, stained with anti-mouse CD11b-PE and anti-mouse CD45-APC and fixed in 1% PFA. Flow cytometry was performed using an Attune NxT Flow Cytometer. Data analysis was conducted with FlowJo software . The purity of microglia (CD11b + /CD45 low+ ) within MEG used for co-culture experiments ranged between 40 and 60% for both WT and DCIR −/− cell suspensions.
Microglia-enriched glial cell mixture/T cell co-cultivation Following MEG isolation, glial cells were seeded with 4 × 10 5 cells/ml in culture medium (IMDM medium, 10% FCS, 2 mM L-glutamine, 100 U/ml penicillin, 100 µg/ml streptomycin; Pan-Biotech, Aidenbach, Germany) in a 96-well U-bottom plate and stimulated with EndoGrade ovalbumin (0.3 mg/ml, LIONEX, Braunschweig, Germany) or TMEV-OVA (MOI 200) at 37 °C for 22 h. T cells were isolated from spleens of 8 to 12 week old OT-I transgenic mice using magnetic activated cell sorting (MACS, Pan T Cell Isolation Kit II mouse, Miltenyi Biotec, Bergisch Gladbach, Germany). Purified T cells were adjusted to 1 × 10 6 cells/ml, added to the glial cells and co-cultured at 37 °C for 48 h. After incubation, supernatants were harvested and IL-2 and IFN-γ cytokine concentrations were analysed by ELISA (murine IL-2 and IFN-γ Standard ABTS ELISA Development Kit, PeproTech, Rocky Hill, NJ, USA). Co-cultured cells were blocked with anti-mouse CD16/32, stained with anti-mouse CD8a-FITC, CD62L-PE and CD69-APC and fixed in 1% PFA. Flow cytometry was performed using an Attune NxT Flow Cytometer. Data analysis was conducted with FlowJo software .
Bone marrow-derived dendritic cells/T cell co-cultivation To generate BMDCs, bone marrow cells were isolated from femurs and tibias of DCIR −/− and C57BL/6 control mice and differentiated into BMDCs by cultivation with differentiation medium (culture medium + 10% X63-GM-CSF supernatant) at 37 °C for 8 to 10 days. Following generation and differentiation, BMDCs were seeded with 2 × 10 5 cells/ml in culture medium in a 96-well U-bottom plate and co-cultivation was performed as described above.
Statistical analysis Statistical analyses were performed using SPSS for Windows (version 21, SPSS Inc., IBM Corp.) applying multiple Mann–Whitney U tests (Supplementary Table ) and the statistics software R (version 4.0.4) for nonparametric two-way analysis of variance (ANOVA). Moreover, R was used for applying simple and multiple regression models to study the influence of infiltrating immune cell composition, virus load and cytokine profile on hippocampal neuronal integrity. First, independent parameters were preselected by single regression models. Surviving parameters were subjected to multiple regression models, which were further reduced by automatic backwards variable selection. Due to lower sample sizes at individual time points, regression models were avoided for time-specific subgroup analyses. Instead, correlation analyses using Pearson's correlation coefficient were performed for the specific analyses at 7 dpi and 14 dpi. Graphs were designed using GraphPad Prism software (version 8, GraphPad Software Inc., San Diego, CA, USA). Statistical tests were performed with a significance level of α = 5%.
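The regression strategy summarised above (preselection with single-predictor models, a multiple model reduced by automatic backwards selection, and Pearson correlations for the time point-specific subgroups) can be sketched in R as follows. The data frame is simulated and merely stands in for the per-animal measurements; the selection criterion used by step() (AIC) is an assumption, as the criterion applied in the original analysis is not restated here.

```r
set.seed(1)
# Simulated per-animal data standing in for the real measurements:
# TMEV+ cells, CD3+ area and IFN-beta copies as candidate predictors of
# neuronal integrity (NeuN+ area) in the hippocampus.
dat <- data.frame(tmev = rpois(20, 30),
                  cd3  = runif(20, 0, 5),
                  ifnb = rlnorm(20, 5, 1))
dat$neun <- 50 - 0.8 * dat$tmev + rnorm(20, sd = 3)

# Step 1: preselection with simple (single-predictor) regression models
single_p <- sapply(c("tmev", "cd3", "ifnb"), function(v)
  summary(lm(reformulate(v, "neun"), data = dat))$coefficients[2, 4])

# Step 2: multiple regression with the surviving predictors, reduced by
# automatic backwards variable selection
keep    <- names(single_p)[single_p < 0.05]
full    <- lm(neun ~ ., data = dat[, c("neun", keep)])
reduced <- step(full, direction = "backward", trace = 0)
summary(reduced)

# Time point-specific subgroup analyses used Pearson's correlation instead
cor.test(dat$neun, dat$tmev, method = "pearson")
```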
The influence of the SARS-CoV-2 pandemic on in-hospital mortality in a gastroenterology service
Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) infection, emerged in late 2019 in Wuhan, China. On 11 March 2020, the World Health Organization (WHO) declared a state of pandemic. The first case in Spain was confirmed on 31 January in La Gomera (Canary Islands) and a state of alarm was declared throughout the national territory from 14 March to 21 June 2020. The pandemic caused by COVID-19 has brought about a change in the global health landscape, in which today the number of people infected is more than 517 million with more than 6.2 million deaths. Very recently, the WHO has published data that estimate the number of deaths from COVID, either directly or indirectly, at almost 15 million. This excess mortality includes those who died from Covid who were not diagnosed and those who died due to the impact of the health crisis. The health system was saturated and on the verge of collapse on several occasions, leading to a relocation of physical and material resources and a transformation of hospital care processes. Specialists from different areas have become part of multidisciplinary care units to deal with the high number of admissions caused by SARS-CoV-2. Hospitals in Spain have undergone an unprecedented transformation, increasing the number of beds (mainly intensive care), creating field hospitals in trade fair grounds and sports centres, transforming hotels into centres for minor patients and health professionals from other areas. The Gastroenterology Service, like practically all others, has been affected. Firstly, due to the variety of digestive symptoms (such as vomiting, diarrhoea and abdominal pain) and laboratory abnormalities (mainly hypertransaminasemia) that can be caused by SARS-CoV-2. Secondly, due to the need to restructure schedules, limiting the number of endoscopies, cancelling appointments and reducing the number of scheduled admissions. The pandemic has affected the different autonomous communities and Spanish cities to a greater or lesser extent. During the year 2020 there were clearly two temporary periods of maximum incidence of SARS-CoV-2 infection defined as “waves”. In Andalusia there was a first incidence peak (first wave) during the months of March, April and May; and a second peak, more marked, in the months of September, October and November. In Malaga, the behaviour was similar to the Andalusian community in terms of incidence rate and hospital admissions for SARS-CoV-2 pneumonia. During this period, the hypothesis arose from the specialists in the Gastroenterology Service regarding the existence of a probable absence or delay in the request for urgent care by patients with a digestive pathology, with the risks and increased morbidity/mortality that this could entail. This clinical research work has been designed to try to confirm or refute this hypothesis and assess its influence on hospitalisation/mortality in a Gastroenterology Service. The main objective of this study was to analyse global in-hospital mortality in a Gastroenterology Service after the start of the COVID-19 pandemic. As secondary objectives, in-hospital mortality was analysed in the different subgroups of digestive diseases, as well as general variables (demographic, stay in days, etc.) and specific variables of each subgroup of digestive diseases that could act as predictors of mortality. 
Patient selection and structure of the study This is a single-centre, observational and retrospective study that included 1039 patients admitted to the Gastroenterology Service of the Virgen de la Victoria University Hospital in the period between 1 December 2019 and 30 November 2020 (12 months). The inclusion criteria were: patients ≥ 18 years admitted urgently or scheduled in the Gastroenterology Service in the period described. The exclusion criteria were: patients < 18 years old, patients who were admitted to the Gastroenterology Service and changed to another service during admission (except Intensive Care-ICU) and patients who were admitted to other services (except ICU) and subsequently refereed to the Gastroenterology Service. The patients were divided into four time-related groups (by trimesters): from 1 December 2019 to 29 February 2020 (“pre-wave” period), from 1 March 2020 to 31 May 2020 (“first wave”), from 1 June 2020 to 31 August 2020 (“inter-wave” period) and from 1 September 2020 to 30 November 2020 (“second wave”) . The control group was the “pre-wave” period and the groups were compared with each other both globally (general mortality) and by different groups of diseases (biliopancreatic, non-variceal gastrointestinal bleeding, miscellaneous, liver, inflammatory bowel disease (IBD) and admissions scheduled). In addition, a subanalysis of the gastrointestinal tumours observed in each of the above groups was performed. This study has been carried out in accordance with current legislation and following the ethical principles established in the Declaration of Helsinki. Informed consent forms were not given to the patients included due to the retrospective nature of the study. Logistics and healthcare characteristics of the centre The Virgen de la Victoria hospital is a third-level university hospital that offers specialised care to 470,000 inhabitants of the province of Malaga. The Emergency Department, with three different points of entry, attends an estimated average of 750 emergencies per day for COVID and non-COVID pathologies. The Gastroenterology Service has 35 hospitalisation beds, 40 weekly consultations (General Gastroenterology, Hepatology and Inflammatory Bowel Disease), 4 conventional endoscopy rooms and 1 advanced endoscopy room (4 days/week). There is also a Hepatology Unit, a Comprehensive Inflammatory Bowel Disease Unit and an Endoscopy Unit. Variables studied - Overall analysis : mortality, sex, age, days from symptom onset to emergency room visit, hospital stay and mortality. - Non-variceal gastrointestinal bleeding : externalisation (high, low), cause (peptic, tumour or others), Glasgow-Blatchford score, haemoglobin on arrival, need for ICU, need for transfusion, complications, time from arrival at the emergency room to the first endoscopy. - Biliopancreatic diseases : diseases (acute pancreatitis, complicated biliary colic, acute cholecystitis, acute cholangitis, painless jaundice), BISAP score, Quick-Sofa score on arrival at the Emergency Department, pancreatic necrosis, drainage of collections, scheduled biliary drainage, surgical necrosectomy, need for ICU. - Liver diseases : diseases (acute hepatitis, variceal haemorrhage, hepatic encephalopathy, ascites, acute on-chronic liver failure, infections, others), haemoglobin on arrival, need for transfusion, need for tamponade, rescue TIPS, Child–Pugh–Turcotte stage, MELD score, need for liver transplant, need for ICU. 
- Inflammatory bowel disease : types (Crohn's disease, ulcerative colitis, indeterminate/unclassifiable colitis), immunosuppressive treatment, discontinuation of immunosuppressive treatment, modified Truelove–Witts index, Harvey–Bradshaw index, debut, local complications, need for surgery.
- Scheduled admissions : diseases (biliary, hepatology, IBD, tumour, others).
- Miscellaneous : diseases (inflammatory-ischaemic-infectious ileocolitis, endoscopic complication/surveillance, aphagia–dysphagia–emetic syndrome, others).
- Tumours : origin of the tumour, metastasis at diagnosis, attitude established by the tumour committee.
Statistical analysis Quantitative variables have been reported as means (± standard deviation – SD) and qualitative variables as whole numbers (percentages). The normal distribution of the variables was verified and a univariate analysis was carried out using Chi square to compare nominal and ordinal qualitative variables, and ANOVA test with Bonferroni adjustments to compare quantitative variables, between the different groups. The inferential study has been carried out with the support of IBM SPSS Statistics 23.
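As an illustration of the chi-square comparison of qualitative variables described above, the overall in-hospital mortality per period (reported in the Results) can be tabulated against survival. The group sizes below are back-calculated from the reported mortality percentages and are therefore approximate; the original analysis was run in SPSS, and this R sketch only reproduces the reported p-value roughly.

```r
# Deaths per period taken from the Results; admissions per period are
# back-calculated from the reported mortality percentages (approximate).
deaths    <- c(pre_wave = 17, first_wave = 7, inter_wave = 12, second_wave = 18)
admitted  <- round(deaths / c(0.057, 0.032, 0.045, 0.072))
survivors <- admitted - deaths

# 4 x 2 contingency table (period x outcome) and chi-square test
tab <- cbind(deaths, survivors)
chisq.test(tab)   # p-value lands in the region of the reported p = 0.23
```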
A total of 1039 admissions were recorded between 1 December 2019 and 30 November 2020. The disease groups that most frequently led to admission were biliopancreatic (43.5%), scheduled admissions (16.5%), non-variceal gastrointestinal bleeding (16.1%), miscellaneous (10.4%), liver (8.1%) and IBD (5.5%).
In-hospital mortality Overall in-hospital mortality was analysed with a total of 54 deaths (5.2%), by periods (17 pre-wave (5.7%), 7 first wave (3.2%), 12 inter-wave (4.5%) and 18 second wave (7.2%)), not reaching statistical significance ( p 0.23).
Non-variceal gastrointestinal bleeding No differences were found in terms of sex and age in the 4 groups. It was observed that the number of days that elapsed from the onset of symptoms to consultation in the emergency room was lower in the pre-wave group (2.06) than in the rest of the groups (5.1, 5 and 10.5 respectively), reaching statistical significance only for the difference with the second wave ( p 0.034). The most frequent cause was non-tumour non-peptic (62%). No differences were observed in the Glasgow–Blatchford scale, haemoglobin values on arrival, need for ICU admission, number of complications, need for transfusion, time to first endoscopy, mean stay, or mortality.
Biliopancreatic diseases The distribution of sex and age was similar in the 4 groups.
A clear predominance of acute pancreatitis was observed as the reason for admission (43.8%, p 0.04). There were no differences in the number of days from the onset of symptoms to emergency room visit or in the BISAP scale upon arrival, whereas a higher score was observed on the Quick-Sofa scale in the “second wave” compared with the “pre-wave” group ( p < 0.05). No differences were found in the presence of necrosis, the need for biliary drainage or collections. Mortality from this group of diseases was significantly higher in the second wave ( p 0.015).
Liver diseases Regarding liver-related diseases, a clear predominance of the male sex was observed in admissions (88.1%) with a mean age of around 60 years. The most frequent reason for admission was ascitic decompensation of underlying cirrhosis (29.8%), followed by hepatic encephalopathy (20.2%) and acute hepatitis (17.9%). Regarding the functional stage, we did not find statistically significant differences, although we observed how the percentage of patients with advanced cirrhosis (Child–Pugh C) was higher during the “first” and “second wave” compared to the “pre-wave” and “inter-wave” periods (52.7 and 75% vs 35.1 and 50%). The longest mean stay was recorded in the inter-wave period (12.3 days) followed by the second wave (11.3 days).
Inflammatory bowel disease No differences were found in terms of sex, age and time from the onset of symptoms to emergency room consultation. Only one patient of the 29 under immunosuppressive treatment abandoned it. No differences were observed in the Harvey–Bradshaw and Truelove–Witts indices at admission. Three patients required urgent surgery, all of them during the first wave ( p 0.04). The mean length of stay was longer in the “pre-wave” group (13 days) compared to the “inter-wave” group (7.5 days, p 0.01) and the “second wave” group (8.2 days, p 0.03). No deaths were recorded in patients hospitalised for IBD during the period evaluated.
Scheduled admissions The number of scheduled admissions was reduced during the first wave (23) compared to the “pre-wave” group (34), a situation that the service attempted to counteract during the two subsequent periods (60 and 54). The most frequent cause responsible for admission was biliary disease (56.7%, p 0.034).
Miscellaneous No differences were observed in sex, age, days from the onset of symptoms, mean length of stay, or mortality. The most frequent disease group independently was infectious/inflammatory/ischaemic ileo-colitis (34.3%).
Tumours The predominant sex was male with a mean age of around 70 years, without observing differences in the 4 periods. The number of days from the onset of symptoms to emergency room consultation was higher in the second semester, reaching statistical significance ( p 0.04) between the “pre-wave period” (19.03 days) and the “inter-wave period” (55.34 days). The most frequent origin of tumour was biliopancreatic (56%) followed by colorectal (20%). Mortality was significantly higher during the second semester.
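Quantitative variables such as the days from symptom onset to emergency-room consultation were compared across the four periods with ANOVA and Bonferroni-adjusted pairwise comparisons (see Statistical analysis). A minimal R sketch of this approach is shown below; the individual values are simulated, and only the group means loosely follow the figures reported for non-variceal gastrointestinal bleeding.

```r
set.seed(7)
# Simulated days from symptom onset to consultation; group means loosely
# follow the reported 2.06, 5.1, 5 and 10.5 days, individual values invented.
days   <- c(rexp(40, 1 / 2), rexp(30, 1 / 5), rexp(35, 1 / 5), rexp(38, 1 / 10.5))
period <- factor(rep(c("pre-wave", "first wave", "inter-wave", "second wave"),
                     times = c(40, 30, 35, 38)),
                 levels = c("pre-wave", "first wave", "inter-wave", "second wave"))

# One-way ANOVA followed by Bonferroni-adjusted pairwise comparisons,
# mirroring the approach described in the Methods
summary(aov(days ~ period))
pairwise.t.test(days, period, p.adjust.method = "bonferroni")
```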
Our study shows that the SARS-CoV-2 pandemic did not cause an increase in in-hospital mortality in a Digestive hospitalisation unit when comparing the different periods. A lower number of deaths was observed during the “first wave”, attributable to the lower number of admissions recorded in this period and not to a shortage of beds in the Gastroenterology Service caused by the increase in COVID-19 admissions. Unlike other countries such as the United States, where the most frequent cause of admission to a Digestive Service was gastrointestinal bleeding, in our area it was acute pancreatitis, accounting for approximately 1 out of every 5 admissions (19.05%), a percentage that remained stable during the 4 periods. Regarding biliary diseases, our study observed that a greater number of cases of complicated biliary colic were admitted during the first quarter (30 vs 15, 20 and 11), and in the second semester (“inter-wave” and “second wave” periods) the number of admissions for acute cholangitis increased. In addition, a higher score on the Quick-Sofa scale on arrival was observed during the two waves (especially during the second), together with an increase in mortality in this last period. All this suggests that many patients had symptoms at home for which they did not seek consultation early, which may have led to an increase in the number of infections and a worse prognosis, with a consequent increase in mortality. These facts coincide with a greater impact of the SARS-CoV-2 pandemic in the province of Malaga during the months of September, October and November 2020. Although in admissions for biliopancreatic diseases there was no delay in the demand for care by patients, the same did not occur in the case of non-variceal gastrointestinal bleeding. In the quarter prior to the start of the pandemic, patients admitted for this reason had taken an average of about 2 days to seek consultation after the symptoms had appeared. This delay roughly doubled during the “first wave” and the “inter-wave” period (around 5 days) and increased five-fold during the “second wave”, with an average of about 10 days. However, all this did not translate into a statistically significant increase in morbidity and mortality. Unlike other studies, in our cohort there was no increase in the time from arrival at the emergency department to the first endoscopy. As for liver diseases, there was a decrease in the number of emergency admissions after the start of the pandemic, with no higher in-hospital mortality observed. With reference to admissions for IBD, a shorter average stay was observed during the pandemic. A possible justification for this is the need to reduce the probability of nosocomial infection by SARS-CoV-2 in a group of patients who are often immunocompromised. On many occasions, this was possible thanks to the IBD unit facilities available in our department, which offers this type of patient comprehensive doctor-nurse care very early after hospitalisation. In relation to diagnosed tumours, it is noteworthy that the delay in requesting assistance was more than double in the second half of the period studied, with a consequent increase in mortality; this period coincides with the greatest impact of the SARS-CoV-2 pandemic in the province of Malaga.
As strengths of the study, it should be noted that it is the first study in the literature that attempts to analyse, comprehensively, how the SARS-CoV-2 pandemic may have influenced hospital admissions and mortality in a Gastroenterology Service. As weaknesses, the retrospective and single-centre nature of the study should be noted. In addition, our study only analyses short-term mortality and leaves out a potential increase in long-term mortality as a consequence of delays in the diagnostic process. On the other hand, an underestimation of the group of patients who developed COVID and did not go to the hospital despite their digestive disease, or of those in whom, despite going, respiratory symptoms prevailed over their digestive condition, cannot be ruled out. In conclusion, we can highlight that overall in-hospital mortality among patients hospitalised in the Digestive unit has not increased with the advent of the SARS-CoV-2 pandemic, although higher in-hospital mortality has been recorded in biliopancreatic diseases and digestive tumours diagnosed in-hospital during the semester between June and November 2020, accompanied, in the case of tumours, by a delay in the initial presentation to the Emergency Department. An analysis such as the one carried out in this study can help improve the organisation of devices and healthcare circuits in Gastroenterology Services in future waves or pandemics; however, more studies would be needed to corroborate our results. In the meantime, it would be opportune for health care providers to keep in mind the importance, in future pandemics, of reinforcing the duties of Gastroenterology Services and the need to establish clinical pathways agreed with general practitioners to provide quicker care for patients with alarming digestive symptoms. J.P.B. A.M.G.G and G.A.B: contributed to the design of the study. J.P.B and A.M.G.G: contributed to the writing of the manuscript. M.G.C and R.J.A: both of them carried out the revision of the final version of this paper. All authors read and approved the final version of the manuscript. Instituto de Investigación Biomedica de Málaga-IBIMA , Hospital Universitario Virgen de la Victoria , Universidad de Málaga, Málaga, Spain, Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBERehd) , Madrid, Spain. CM21/00074 (Rio Hortega contract: J.M.P.B.). None declared.
Evolution of an obstetrics and gynecology interprofessional simulation-based education session for medical and nursing students
adffcf2f-d79c-4b9f-b7c0-68b4e365ef58
7581067
Gynaecology[mh]
Introduction Simulation training has increased significantly across medical schools and residency programs as a way to teach learners valuable skills. Simulation can reproduce a wide range of clinical conditions; thus, novices can practice and hone their skills in a risk-free environment. This allows learners to approach clinical scenarios with more confidence, creating an atmosphere that puts patients at ease, improves patient safety, and decreases medical errors. Most medical students make the transition from the classroom to clinical settings in their third year of training; simulations may facilitate bridging that transition if students can get exposure and practice concepts in the year prior to their first interactions with patients. To ensure high-quality patient care, effective interprofessional collaboration between healthcare professionals is required. Interprofessional education (IPE) has a positive impact on teamwork and improves patient safety. In addition, Objective Structured Clinical Examination (OSCE) assessment of learners in simulated and controlled environments can promote competence in clinical skills and their application to real-life scenarios. This follows Miller's Pyramid Level 3 “Shows How” or Kirkpatrick's Model of Evaluation Level 3 “Behavioral Change.” The purpose of this report is to describe the evolution and progression of an Obstetrics & Gynecology (OBGYN) IPE simulation program for medical and nursing students (NS) over a 4-year period. Methods This was a prospective cohort educational and programmatic study from 2014 to 2017 conducted at the Oakland University William Beaumont School of Medicine (OUWB), with approval granted by the Oakland University Institutional Review Board. The conceptual framework used was deliberate interprofessional simulation practice, in which the teacher plans learning and provides immediate feedback. The active learning technique utilized was simulation. We utilized a deductive investigational pathway initiated from the hypothesis that a progressive IPE simulation program incorporating both faculty and interprofessional student collaboration would improve medical students' knowledge retention, comfort with procedural skills, positive teamwork, and respectful interaction between students. Our study utilized a step-by-step approach in a logical progression of 4 steps based on educational principles and needs assessment. From 2014 to 2017, progressive modification of the educational principles and the OBGYN curriculum concepts occurred as a collaboration between the co-directors, nurse clinical skills instructors, Maternal Fetal Medicine (MFM) fellows, and basic scientists, informed by feedback from the students' end-of-course assessments. From 2014 to 2017 all second-year medical students at OUWB and from 2015 to 2017 NS on their obstetrics rotation participated (inclusion criteria included a new cohort of second-year medical students and NS annually, with the exclusion of all other students). There was an obstetrical experience mismatch between the medical students and the NS; the medical students had no previous obstetrical experience, while the NS were finishing their obstetrical rotation and had training on vaginal delivery and fetal heart rate patterns. Both students and faculty evaluated the program. The program evaluation included the students' end-of-course assessments, which contained both qualitative comments and quantitative scores.
On procedures, students were assessed using Objective Structured Clinical Examination (OSCE) checklists, which were completed both by students (self-assessments) and by faculty (faculty assessments). All students completed survey questions based on attitude, knowledge, and perception (Table ). These surveys were completed before and after the educational intervention to determine significant changes in attitude, knowledge, and perception (see Appendix 1, Available at:). The four steps of the deductive educational pathway are as follows: Step 1, 2014: The first step in our deductive approach was an obstetrics simulation curriculum that was incorporated into the Reproductive Sciences Course for second-year medical students (MS2). The educational principles for the first step included a flipped classroom and OSCE-based obstetrical simulation. In 2014, the co-directors of the Reproductive Sciences course, in collaboration with OBGYN residents, developed an obstetrics simulation curriculum that was incorporated into the Reproductive Sciences Course for MS2. The first simulation was held in 2014 at William Beaumont Hospital Simulation Center, Royal Oak, Michigan. Faculty included OBGYN residents and generalists, MFM fellows and faculty, basic science faculty, nursing instructors, OBGYN nurses, a simulation technician, and an intrauterine device clinical specialist. Using a flipped classroom model, students received a pre-curriculum lecture on intrapartum obstetrics and fetal heart rate tracings and watched a brief video on labor. The simulation was performed with students in groups of 3 to 4 rotating through three stations for 20 minutes each. At the station on simulated vaginal delivery, each student was guided in delivering a baby by MFM faculty, with simulation technician support, using SimMom (Laerdal). An OBGYN resident gave an interactive workshop on fetal heart rate (FHR) tracings. Another OBGYN resident taught and assessed students on cervical dilation using “blinded” and “open” cervical models. A debriefing session occurred at the end to answer questions and obtain constructive feedback. Students completed surveys on attitude and knowledge on obstetrics and FHR concepts before, immediately after, and 4 months after the curriculum. A perception survey was also completed immediately after and 4 months after the curriculum (Appendix 1, Available at:). A standard Simulation Learning Center technical assessment survey was completed immediately after the course, covering themes such as communication, achievement of goals, teaching styles, and realism. Step 2, 2015: The second step in our deductive approach was an interprofessional obstetrics simulation curriculum involving nursing and medical students. The additional educational principles of the second step included the introduction of interprofessional interaction and OSCE. In 2015, to further develop IPE, the simulation curriculum was relocated to the Oakland University School of Nursing simulation center. The time for each station was increased to 30 minutes. The nurse clinical skills instructor (author SV) was instrumental in curriculum redesign, and the Noelle obstetrics simulator (Gaumard Scientific) was used for the simulated vaginal deliveries. NS were included but only participated in the FHR station, at which they gave a Situation, Background, Assessment, Recommendation (SBAR) report and asked for a management plan from the medical students. OSCE checklists completed by faculty were introduced in the FHR and cervical exam stations.
Knowledge and Attitude surveys were offered pre, post, and 8 months after the course. The Perception survey also occurred immediately after the course and after 8 months. Step 3, 2016: The third step in our deductive approach was expanding the interprofessional obstetrics simulation curriculum and adding gynecological simulation involving both nursing and medical students. In addition to the previous educational principles, the third step focused on teamwork and interaction of medical students and NS. In 2016, both medical students and NS completed FHR, delivery, and cervical exam training, plus a new contraception and intrauterine device insertion station. In the delivery station, NS gave the history and supported the delivery. Knowledge and Attitude surveys were only done pre and immediately post course. The Perception survey was done after the course. In 2014 and 2015, cervical clay models developed by clinical nursing instructors were used; to improve fidelity, professional cervical models (Lifeform Replicas from Nasco, Fort Atkinson, WI) were purchased and used in 2016. Step 4, 2017: The fourth step in our deductive approach was increasing procedural training and integration of NS. The additional educational principles of the fourth step included a focus on interprofessional student teaching, Patient Safety principles of teamwork, and the introduction of OSCE self-assessment by both nursing and medical students. In 2017, new additions compared to previous years were: (1) in the delivery station, NS resuscitated and assessed newborns with Apgar Scores and gave an SBAR report to the medical students, (2) both medical students and NS performed self-assessment and also received a faculty-assessment at the IUD insertion practice and cervical examination stations, (3) medical students performed self-assessment and also received a faculty-assessment in the delivery station, (4) both nursing and medical students participated in a knowledge quiz on family planning and contraception methods, (5) time for each scenario was increased from 30 to 45 minutes, (6) the FHR module was removed from the simulation and conducted separately as a 60-minute flipped classroom case-based workshop, in which students divided into groups to investigate and interpret a specific FHR case and presented their results to the whole group, and (7) an additional station on Obstetric Procedures was introduced, at which students had hands-on training with the B-Lynch suture, as well as postpartum hemorrhage management, forceps, vacuum extractors, scalp electrodes, and pressure catheters (Table ). The mean Likert scores of the pre- and post-surveys were compared using t tests to determine significant differences. The subjective outcomes studied included self-perceived confidence and comfort levels, and perception of the value of obstetrics simulation (Kirkpatrick's level 1, reaction). The objective outcomes were acquired knowledge, including the knowledge tests and the final examination for the course (laboratory practical examination and National Board of Medical Examiners (NBME) examination) (Kirkpatrick's level 2, learning). The behavioral outcomes were communication, professionalism, and procedural skills attainment (Kirkpatrick's level 3, behavior). Results 3.1 2014 In 2014, of 105 students who participated in the curriculum, 95 completed the pre and immediate post simulation survey. Fifty-six completed the 4-month post survey.
For the knowledge questions on obstetrics and FHR, students obtained a mean pre-score for correct answers of 12.82 (SD = 6.02), with post-simulation mean score increasing to 29.57 (5.15), P <.001. At 4 months the score was 20 (7.46), a significant decrease from the post-simulation score but still significantly higher than the baseline pre-simulation score. Similarly, for the attitude questions, students’ comfort level with obstetrical procedures increased significantly immediately post simulation but had decreased at 4 months. Again, the 4-month post-simulation score was significantly lower than the immediate post-simulation score but was still significantly higher than the baseline pre-simulation score. The perception survey was conducted post-curriculum with a mean score of 9.05 (0.99). When repeated 4 months later, the mean score dropped slightly, but significantly, to 8.43 (1.3), P = .001 (Table ). The Simulation Learning Center standard technical assessment was completed immediately after the course only in 2014. On a Likert scale of 1 to 4 results were: Objectives were communicated = 3.37 (0.61); Teaching methods adequate = 3.86 (0.35); Instructors Knowledge = 3.95 (0.23); Clinical content =3.84 (0.37); and Realistic program = 3.85 (0.36). Written comments were also analyzed. When asked to comment on “what went well,” 86% of students gave a positive comment and 14% no comments. There were no negative comments. On “what needs to be improved.” 74% felt improvement was required. The majority of the improvements suggested were to provide more time at each station. On “what should be discarded,” only 3% felt anything should be discarded and over 60% reported nothing needed to be changed. 3.2 2015 In 2015, 95 MS2 participated. The mean scores for the FHR OSCE (0–1, for OSCEs met/yes = 1; partially = 0.5; not met/no = 0) were: identifies FHR baseline = 0.97, identifies FHR variability= 0.73, provides accurate identification of periodic pattern = 0.73, identifies FHR category = 0.67, orders appropriate medical interventions = 0.93, communicates respect with IP health team = 0.91, professionalism reflected in IP interactions = 0.91. The comfort level scores with obstetrical procedures compared to baseline significantly increased post-simulation and were still significantly increased at 8 months compared to baseline. The 8-month score was however significantly lower than the immediate post simulation score. For the knowledge questions on obstetrics and FHR, students mean post-curriculum score increased significantly from pre-simulation. By 8 months it was not significantly different from baseline and was significantly lower than the immediate post simulation scores. This indicated the 8-month knowledge scores had returned to the baseline. As in 2014, the perception scores were significantly decreased at 8 months when compared to the post-simulation scores (Table ). Forty-one NS participated, and provided feedback, but they did not participate in the surveys. 3.3 2016 In 2016, 127 medical students participated in the curriculum. They only completed surveys pre and immediately post-simulation. The results were similar to the previous years, which showed a statistically significant increase in attitude and knowledge questions immediately post simulation (Table ). Forty-five NS participated in 2016. They gave general feedback during the debriefing session and written comments. 
Nursing student feedback included that they enjoyed the cervical examination practice and the IUD insertion practice, and that they appreciated new experiences with exposure to contraception and family planning, but they wanted to be more involved. 3.4 2017 In 2017, the program trained 116 medical and 51 NS. Both groups participated in all surveys and tests. The outcome measures we analyzed were IUD insertion self-assessment, IUD insertion faculty assessment, cervical examination scores, and the contraception knowledge quiz. Statistical analysis showed no significant differences between medical student and nursing student scores (Table ). There was a significant difference between the medical students' self-assessment score and the faculty-assessment score at the delivery simulation (8.63 ± 0.82 and 8.93 ± 0.30; P < .001). The end-of-course evaluation had 8 items and included the item “variety of instructional methods used”; on a Likert scale of 1-5, the score on this item increased from 3.91 in 2015 to 4.22 by 2017. This was the highest score of all the 8 items on the end-of-course evaluation in 2017. Furthermore, students' comments revealed that the IPE simulation was the highlight of the course and of high value to students' learning on the course. The mean NBME exam score for the Reproductive Sciences course was 85.62% (0.51) and the practical laboratory exam score 86.73% (0.57). A correlation analysis was performed between NBME scores and the outcome measures, and the only significant finding was a weak correlation between NBME scores and IUD insertion self-assessment (rho = 0.22, P = .02). Scores for professionalism and communication by medical students that addressed IPE engagement (eg, demonstrates willingness to listen to nursing student) were nearly perfect, ranging between 0.99 and 1 (range of scores = 0-1).
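As an illustration of the analyses reported above (paired t tests on mean pre/post survey scores and the Spearman correlation between NBME scores and the IUD insertion self-assessment), the following minimal Python sketch shows how such comparisons can be computed; all scores generated below are hypothetical and do not reproduce the study data.

```python
# Minimal sketch of the survey comparisons described above: a paired t test on
# pre/post scores and a Spearman correlation between two outcome measures.
# All data are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_students = 116  # illustrative cohort size

# Hypothetical pre/post knowledge scores for the same students
pre = rng.normal(loc=13, scale=6, size=n_students)
post = pre + rng.normal(loc=16, scale=5, size=n_students)
t_stat, p_paired = stats.ttest_rel(pre, post)
print(f"Paired t = {t_stat:.2f}, p = {p_paired:.3g}")

# Hypothetical NBME exam scores and IUD insertion self-assessment scores
nbme = rng.normal(loc=85.6, scale=5, size=n_students)
iud_self = np.clip(8 + 0.05 * (nbme - 85.6) + rng.normal(scale=1, size=n_students), 0, 10)
rho, p_rho = stats.spearmanr(nbme, iud_self)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```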
Discussion We have described a longitudinal interprofessional simulation-based education (IPSE) program as it evolved between a school of nursing and a school of medicine. It developed over 4 years to be inclusive of the needs of both nursing and medical students, as well as expanding from intrapartum obstetrics to several other aspects of OBGYN. A major focus of this simulation session was the teaching of the core competencies of Professionalism, Practice-Based Learning and Improvement, Interpersonal & Communication Skills, and Interprofessional Collaboration. OBGYN trainees had a major role in the development and sustainability of the program. Residents participated using the resident-as-teacher model and to meet ACGME resident research requirements. In the first year, OBGYN residents facilitated 2 of the 3 stations. MFM fellows became involved in the latter 2 years and facilitated 2 stations. From a scholarly perspective, 3 OBGYN residents and one medical student presented successive updates of this curriculum at professional conferences or used the data for their research requirements. A review of the literature shows that most previous reports were designed by faculty with no GME lead role. Similar to our study, but without the longitudinal approach, Nemer et al created a “Labor Game” by using the resident-as-teacher model, with students on the OBGYN clerkship rotating through 7 simulated obstetrics stations. Points were awarded at each station, and the student with the highest score won. Most previous reports of simulation for medical students in OBGYN have focused training around the third-year OBGYN clerkship. This is in contrast to our curriculum, in which we focused on MS2, with the simulation occurring in the fall, approximately 7 to 8 months prior to starting clinical rotations. Our goal was to provide early exposure and experience with clinical concepts and procedures, which could lead to more integration of basic science concepts and better preparedness for the clinical rotation. Furthermore, unlike most previous reports that focused mainly on obstetrical procedures, we expanded our curriculum to Gynecology in the last 2 years to include stations on family planning, contraception, and IUD insertion. Lerner et al also provided an extensive OBGYN simulation that, in addition to obstetrics procedures, included IUD insertion, hysteroscopy/cystoscopy, colposcopy/LEEP, and circumcision. However, this comprehensive two-week simulation-based elective course only trained 10 post-OBGYN clerkship third- and fourth-year medical students as a transition to OBGYN residency. Of all the stations, the concept of FHR patterns was the most challenging for the students, especially variability, periodic patterns, and tracing category. As a result, this station was removed from the IPSE and expanded into a PBL workshop to help students learn FHR tracing concepts better. For the 3 years that we performed assessments, we demonstrated that students' knowledge increased immediately after simulation; knowledge had diminished but was still significantly retained at 4 months post simulation, and had dissipated by 8 months post-simulation.
Students' comfort level, or confidence, in the OBGYN procedures also increased immediately post training and then decreased, but was still significantly higher than baseline at both 4 and 8 months post simulation. This data suggests that, for these skills and procedures, confidence is better retained than knowledge. Consistent with our findings, a review of the literature consistently demonstrates a significant immediate post-simulation increase in knowledge and comfort level. Holmstrom et al showed that students receiving simulation training were significantly more confident in performing a vaginal delivery immediately after assessment than control students; however, these differences narrowed by 4 weeks. Simulation students also scored significantly higher on examinations 4 weeks post-intervention. DeStephano et al compared a high-fidelity birth simulator versus a low-tech birth simulator on performance and exam scores at the end of the OBGYN clerkship, finding similar performance gains and scores for both forms of simulation. Our literature review did not reveal any other study reporting post-simulation long-term knowledge or comfort level gains after 6 weeks. Thus, our study with 8-month outcome results supports the utility of OBGYN simulation, particularly proximate to or within the clerkship. Furthermore, we assessed relationships between simulation and interest in OBGYN by our perception survey. Results showed a very high interest immediately post-simulation, but this decreased significantly at both 4 and 8 months. Many of the students' narrative comments stated that “they had forgotten” and “it was a long time ago”. This finding suggests that interest in a program generated immediately post-intervention may be very short lived. IPSE enables students from different professions to practice teamwork and communication skills in a controlled environment. The Liaison Committee on Medical Education standard 7.9, on Interprofessional Collaborative Skills, supports the inclusion of IPSE in the medical school curriculum. Similar to other studies, our IPSE involved medical students and NS and assessed both teamwork and communication. Additionally, we explored further possibilities of IPSE by creating a scenario in which medical students were able to learn from NS. NS had already learned newborn assessment, unlike the medical students; hence, in the delivery simulation station, NS demonstrated Apgar score assessment to the medical students. Furthermore, we allowed NS to learn and perform the same procedures as medical students and were able to show no difference in proficiency between NS and medical students. There were a number of limitations to this study. There were no controls and randomization was not performed; because of the LCME accreditation requirements and the known benefits of IPSE, we felt it would have been unethical not to offer the curriculum to all the students. Additionally, data collection evolved and varied across years, and we could not compare the anonymous survey results from this IPSE with performance in the OBGYN clerkship. Lastly, since this study was conducted at only one institution, results may not be generalizable. On the other hand, strengths included the use of 4 consecutive student classes; an interprofessional approach in curriculum development, faculty instruction and student participation; use of the flipped classroom model; and programmatic improvement based on student feedback.
Conclusions Over a 4 year period, our IPSE expanded to include nursing, physician and resident faculty instructors working with medical students and NS jointly. The session improved students’ short-term medical knowledge, comfort, and perception with some long-term persistence noted at 4 to 8 months. The program evolved to include OSCE assessments, which showed that students struggled more with learning complex processes like fetal heart rate interpretation. Medical students were more critical of their learning compared to their evaluation by faculty. Communication and professionalism of the medical students in their interaction with NS was stressed and assessed, and NS had the opportunity to teach medical students. Conceptualization: Dotun Ogunyemi, Stephanie Vallie, Thomas M. Ferrari. Data curation: Dotun Ogunyemi, Christopher Haltigin. Formal analysis: Dotun Ogunyemi, Christopher Haltigin. Investigation: Stephanie Vallie, Thomas M. Ferrari. Methodology: Dotun Ogunyemi, Stephanie Vallie, Thomas M. Ferrari. Project administration: Stephanie Vallie, Thomas M. Ferrari. Resources: Stephanie Vallie, Thomas M. Ferrari. Supervision: Dotun Ogunyemi, Stephanie Vallie, Thomas M. Ferrari. Writing – original draft: Dotun Ogunyemi. Writing – review & editing: Christopher Haltigin, Stephanie Vallie, Thomas M. Ferrari. Supplemental Digital Content
Pharmacogenomics in Psychiatry Practice: The Value and the Challenges
829519a9-0171-4ea3-bef2-317dc1575b28
9655367
Pharmacology[mh]
Psychiatric disorders are prevalent and associated with high levels of morbidity and mortality. Conditions such as depression and anxiety are among the leading causes of disease burden worldwide . According to statistics reports of the World Health Organization (WHO), more than 264 million patients suffer from depression . The Global Burden of Diseases measures the burden of disorders by using a disability-adjusted life-year (DALY) metric, which quantifies the burden of a disease in terms of mortality and morbidity . The highest DALY was reported for major depressive disorders in the age group of 30–34 years in 2020 . Increased burden of mental disorders has been recognized to be globally persistent since 1990 . In recent years, significant discoveries have been made in the management of psychiatric conditions, including psychological, pharmacological, and physical treatments. Effective psychotropic drugs, such as antidepressants, mood stabilizers, and antipsychotics, are commonly used to treat several psychiatric disorders. However, one of the major challenges for the prescribing clinician is how to select a safe and effective treatment option tailored to the needs of each patient. Unfortunately, many patients often go through a trial-and-error process characterized by poorly controlled symptoms and/or severe drug responses before the most suitable psychotropic drug and doses are established . Individualizing treatment plans for psychiatric patients by implementing pharmacogenomic (PGx) testing with the aim of prescribing precision therapies is the focus of the newly developed field of precision psychiatry . This promising alternative to conventional psychiatric prescribing utilizes our understanding of how certain genes and specific biomarkers may influence the individual’s response to medications, paving the way to more individually tailored approaches. For example, evidence supports the use of single nucleotide polymorphisms (SNPs) to measure treatment response and potential adverse drug reactions for antidepressants. Furthermore, it has been proposed that multi-omics and neuroimaging data can be used as biomarkers to predict responses based on newly developed artificial intelligence and deep learning frameworks . However, wide adoption of pharmacogene testing has not yet occurred in psychiatry, which may be due to a number of factors, including varying knowledge of genetics among psychiatrists, differing opinions on the efficacy of pharmacogenomic testing in clinical practice, and the presence of conflicting perceptions of the PGx tool evidence-base . In this review, we critically discuss the recent advances in our understanding of pharmacogenomics in psychiatry. We focus on the activity of cytochrome P450 enzymes, how they may be influenced by genetic and non-genetic factors, their influence on the metabolism of psychotropic medications, and the variation among different populations. We will also highlight the impact of HLA gene variation in predicting the potential adverse effects of psychotropic medications. Finally, we will discuss genetic susceptibility to obesity and metabolic syndrome, since both are recognized adverse effects of several psychotropic medications, especially second generation anti-psychotics. 1.1. CYP450 Genetic and Phenotypic Variations Cytochrome P450 (CYP450) enzymes are hemoproteins found in different human tissues, including intestines, kidneys, plasma, lung, and mainly in the liver . 
Their role encompasses detoxifying several endogenous and exogenous substances by oxidation, hydroxylation, epoxidation, and dealkylation mechanisms. Genetic variability of CYP enzymes strongly affects enzymatic activity across the different polymorphisms, resulting in a highly individualized pattern of drug metabolism . Allelic variants of CYP enzymes are commonly named according to the star (*) allele nomenclature system and translated to different phenotypes, including ultrarapid metabolizers (UMs), rapid metabolizers (RMs), normal metabolizers (NMs), intermediate metabolizers (IMs), and poor metabolizers (PMs) . In fact, several mutations are caused by CYP polymorphisms, such as alternative splicing and frame shifting, resulting in an altered structure and function of the enzymes . In particular, structural modification of the binding site regions has been noticed in multiple polymorphisms of CYP2C19, CYP2D6, and CYP2C9, as shown in . For instance, the CYP2C19*5B (rs56337013) and CYP2C19*8 (rs41291556) alleles are known to be catalytically inactive due to mutations in the heme binding site ( A) . Despite the fact that some alleles of CYP2D6 are associated with mutations in the binding sites ( B), recent studies of PGx in clinical psychiatry have focused on measuring the copy number of CYP2D6, such as gene duplication and deletion, due to the complexity of studying SNPs at the CYP2D6 locus . Remarkably, more than 50,000 CYP enzymes are found in nature. Around fifty-seven CYP450 genes have been investigated in humans; six of them are mainly involved in drug biotransformation, including CYP2C9, 2D6, 2C19, 3A4, 1A2, and 2E1. The most abundant human CYP enzymes are CYP2D6 and CYP3A4 . In particular, hepatic CYP isoenzymes commonly metabolize psychotropic medications: antipsychotics (CYP2D6), anticonvulsants (CYP2C9), and antidepressants (CYP2C19 and CYP2D6) . Although most psychotropic medications are metabolized in the liver, some of them, such as lithium, are directly eliminated by the kidneys . Moreover, advancements in pharmacogenomics research have progressively led to the corroboration of not only inter-individual but also inter-ethnic differences in drug pharmacokinetics and pharmacodynamics. Considering the inter-ethnic differences in allele frequencies, the prevalence of certain genes in one ethnic group over another can help prioritize those in need of testing before prescribing. This approach has been endorsed by the FDA and by clinical guidelines through the Clinical Pharmacogenetics Implementation Consortium (CPIC), which recommend genetic screening before prescribing certain drugs, such as carbamazepine. Most studies focus on genetic variations of the drug-metabolizing enzymes (DMEs) CYP2C9, CYP2C19, and CYP2D6, which are primarily responsible for phase I metabolism of around 40% of drugs in clinical use. Hence, several variant alleles have been uncovered; the most frequently reported are summarized in . 1.2. CYP450 Genetic Variations and Antidepressants Selective serotonin reuptake inhibitors (SSRIs) and serotonin and noradrenaline reuptake inhibitors (SNRIs) are among the most prescribed antidepressant medications. At higher doses, they may be associated with mild to severe potential side effects, such as sweating, serotonin toxicity, and sexual dysfunction . All SSRIs and SNRIs are metabolized in the liver by different CYP450 isoenzymes. The SSRIs citalopram and escitalopram are mainly metabolized by the highly polymorphic CYP2C19.
Consequently, the Clinical Pharmacogenetics Implementation Consortium (CPIC) published gene-based therapeutic guidelines for the SSRIs citalopram/escitalopram, based on the CYP2C19 genotype . Genetic variations in the CYP2C19 gene may cause an increase or decrease in the metabolic activity of the CYP2C19 enzyme, and thereby in the drug plasma concentration . For example, the wild-type allele CYP2C19*1 encodes a fully functional enzyme with normal phenotypic activity, whereas the *17 variation is associated with higher enzyme activity, and *2 with no activity. The phenotypes associated with star alleles are stated in . Therefore, depressed patients with poor metabolizing phenotypes demonstrate high citalopram plasma concentrations. Thus, the FDA recommends a 50% dose reduction for poor metabolizers to avoid the risk of developing QT prolongation . Furthermore, fluvoxamine is metabolized by CYP2D6. The CPIC dosing guidelines suggest a 25–50% reduction in the recommended initial dose of fluvoxamine for poor metabolizers, titrating to the maximum effective dose to avoid unwanted adverse effects, or the use of an alternative medicine not metabolized by CYP2D6 . Venlafaxine and paroxetine are also metabolized by the CYP2D6 enzyme . Moreover, paroxetine is capable of inhibiting the CYP2D6 enzyme; when it is given in combination with the selective noradrenaline reuptake inhibitor atomoxetine, higher steady-state concentrations of atomoxetine have been observed . Tricyclic antidepressants (TCAs), including imipramine, amitriptyline, trimipramine, and clomipramine, are among the earliest approved antidepressant agents and are primarily metabolized by CYP2C19, as summarized in . However, the CYP2D6 pathway is essential to catalyze further metabolic steps . On the other hand, the relatively newer antidepressant drug mirtazapine is mainly metabolized by the CYP2D6, CYP1A2, and CYP3A4 isoenzymes . 1.3. CYP450 Genetic Variations and Antipsychotics Antipsychotics are a class of psychotropic medications used to treat multiple psychiatric conditions, including schizophrenia spectrum disorders, bipolar disorder, and major depression . Psychiatric pharmacotherapy is strongly influenced by variations in the CYP2D6 and CYP2C19 genes, resulting in individualized drug efficacy and safety profiles . Around 40% of antipsychotic medications are metabolized by the highly polymorphic CYP2D6 enzyme . Accordingly, the Food and Drug Administration (FDA) has included phenotype-based dose recommendations in the labeling of 24 antipsychotics, including clozapine, in addition to recommending that the haloperidol dose be halved for CYP2D6 PM phenotypes . Several haplotypes of CYP2D6, CYP1A2, CYP3A5, and CYP3A4 significantly influence antipsychotic drug metabolism . In particular, CYP1A2 metabolizes mainly olanzapine, asenapine, and clozapine . CYP2D6 has a major role in the metabolism of aripiprazole and risperidone, while CYP3A4 metabolizes mainly aripiprazole, clozapine, quetiapine, and levomepromazine . In this regard, the missense mutation of the rs680055 variant of CYP3A4 highly impacts antipsychotic response . Conversely, quetiapine and aripiprazole are the drugs least affected by SNPs of CYP450 genes; however, a patient's genotype should be taken into consideration to minimize harm from antipsychotic drugs .
One study clinically assessed psychiatric patients' responses before and after performing pharmacogenetic testing for CYP2C19 and CYP2D6; based on judgments made by physicians, 23% reported improvement in patient outcomes, and 41% reported no change in terms of improvement. Importantly, none of them reported worse outcomes upon utilizing pharmacogenetic testing . Another study, conducted in 868 patients diagnosed with depression taking either nortriptyline or escitalopram, found that CYP450 genotyping analysis was ineffective in predicting any adverse drug reaction . In this context, CYP2D6 genotyping exhibited no reduction in the risk of hyperprolactinemia, which may be induced by many antipsychotics as a side effect . Moreover, Walden et al. reported that utilizing pharmacogenomic testing to predict the occurrence of side effects did not reach statistical significance . Most antipsychotic agents are given orally; therefore, analyzing pharmacogenetic parameters of antipsychotics must include studying different families of xenobiotic transporter genes instead of focusing only on CYP450 genes . For instance, mutation of HTR2C (serotonin receptor) is associated with weight gain and metabolic syndrome. In addition, high prolactin levels have been investigated in patients with the DRD2*A1 allele (dopamine receptor); interestingly, hyperprolactinemia was observed even in patients on clozapine, although this drug is the least likely to increase prolactin levels . 1.4. CYP450 Genetic Variations and Mood Stabilizers Valproic acid (VPA) and carbamazepine (CBZ) are commonly prescribed mood stabilizers, especially for bipolar disorder, which is characterized by mood alternations between depression and mania. However, the response and development of side effects to CBZ and VPA are highly variable among patients due to drug metabolism heterogeneity among different populations . For example, Japanese women with CYP2C19*3 or CYP2C19*2 alleles are more susceptible to weight gain during valproate therapy. These alleles significantly correlated with variant VPA plasma concentrations. Therefore, detection of the SNPs of CYP2C19 is likely to be beneficial to optimize the valproate blood concentration and, hence, to control drug responses and side effects . In addition, CYP2B6 and CYP2A6 are responsible for producing only 20–25% of valproic acid metabolites; however, the CYP2C9*3 allelic variant is primarily associated with the formation of the hepatotoxic 4-ene-VPA metabolite, which is more potent than the VPA substrate . Although carbamazepine metabolism to its active, equipotent metabolite is catalyzed by CYP2C8 and CYP3A4, the EPHX1 (epoxide hydrolase) gene is considered the major determinant of CBZ metabolism. Notably, the influence of CYP3A4 and EPHX1 polymorphisms is still under investigation . Furthermore, individuals with CYP3A5 variation exhibited a wide range of CBZ responses; the non-functional CYP3A5*3 allele strongly influences CBZ blood concentration, and using CBZ in combination with enzyme inducers also alters the CBZ clearance rate . It has been suggested that individuals' susceptibility to developing CBZ side effects could be assessed by genotyping CYP2C19, CYP3A5, and EPHX1 . Lithium, being a generally effective mood stabilizer, is quite unique since its renal elimination bypasses the CYP450 system. While it is still recommended for use as a first-line mood stabilizer , it is worth noting that its long-term use is associated with an impairment of thyroid and renal functions .
The findings of lithium-gene association studies revealed the importance of genetic factors in lithium response. As yet, the exact genes that co-segregate with lithium responses have not been fully detected .
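To make the genotype-to-phenotype translation discussed in this section concrete, the sketch below maps a few common CYP2C19 star-allele diplotypes to metabolizer phenotypes and attaches an illustrative prescribing note for citalopram modeled on the CPIC/FDA guidance cited above (e.g., an approximately 50% reduction of the starting dose for poor metabolizers). The allele-function assignments and notes are simplified summaries for illustration only and are not a clinical decision tool.

```python
# Simplified sketch of CYP2C19 genotype-to-phenotype translation and the kind
# of dosing note derived from it. Allele functions and notes are illustrative
# summaries of the guidance discussed in the text, not clinical advice.
from typing import Tuple

# Simplified functional assignment of common CYP2C19 star alleles
ALLELE_FUNCTION = {
    "*1": "normal",        # wild type, fully functional enzyme
    "*2": "no_function",   # loss-of-function variant
    "*3": "no_function",   # loss-of-function variant
    "*17": "increased",    # increased-activity variant
}

def cyp2c19_phenotype(diplotype: Tuple[str, str]) -> str:
    """Collapse a two-allele genotype into a coarse metabolizer phenotype."""
    functions = sorted(ALLELE_FUNCTION[a] for a in diplotype)
    if functions == ["no_function", "no_function"]:
        return "poor metabolizer"
    if "no_function" in functions:
        return "intermediate metabolizer"
    if functions == ["increased", "increased"]:
        return "ultrarapid metabolizer"
    if "increased" in functions:
        return "rapid metabolizer"
    return "normal metabolizer"

# Illustrative prescribing note for citalopram, echoing the cited guidance
DOSING_NOTE = {
    "poor metabolizer": "consider ~50% reduction of the starting dose (QT prolongation risk)",
    "intermediate metabolizer": "standard starting dose; monitor response and side effects",
    "normal metabolizer": "standard starting dose",
    "rapid metabolizer": "standard starting dose; watch for reduced response",
    "ultrarapid metabolizer": "consider an alternative drug less dependent on CYP2C19",
}

for genotype in [("*1", "*1"), ("*1", "*2"), ("*2", "*2"), ("*17", "*17")]:
    phenotype = cyp2c19_phenotype(genotype)
    print(f"CYP2C19 {genotype[0]}/{genotype[1]}: {phenotype} -> {DOSING_NOTE[phenotype]}")
```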
Among the antipsychotics, quetiapine and aripiprazole are the drugs least affected by SNPs in CYP450 genes; nevertheless, a patient's genotype should still be taken into consideration to minimize harm from antipsychotic drugs . One study clinically assessed psychiatric patients' responses before and after pharmacogenetic testing for CYP2C19 and CYP2D6; based on judgments made by physicians, 23% reported improvement in patient outcomes and 41% reported no change in terms of improvement. Importantly, none of them reported worse outcomes upon utilizing pharmacogenetic testing . Another study, conducted in 868 patients diagnosed with depression taking either nortriptyline or escitalopram, found that CYP450 genotyping was ineffective in predicting any adverse drug reaction . In this context, CYP2D6 genotyping exhibited no reduction in the risk of hyperprolactinemia, which may be induced by many antipsychotics as a side effect . Moreover, Walden et al. stated that utilizing pharmacogenomic testing to predict the occurrence of side effects was statistically insignificant . Most antipsychotic agents are given orally; therefore, analyzing the pharmacogenetic parameters of antipsychotics should include studying different families of xenobiotic transporter genes rather than focusing only on CYP450 genes . For instance, mutation of HTR2C (a serotonin receptor gene) is associated with weight gain and metabolic syndrome. In addition, high prolactin levels have been reported in patients carrying the DRD2*A1 allele (dopamine receptor); interestingly, patients on clozapine showed hyperprolactinemia, although clozapine is among the antipsychotics least likely to increase prolactin levels .

Valproic acid (VPA) and carbamazepine (CBZ) are commonly prescribed mood stabilizers, especially for bipolar disorder, which is characterized by mood alternations between depression and mania. However, the response and development of side effects to CBZ and VPA are highly variable among patients due to drug metabolism heterogeneity among different populations . For example, Japanese women with CYP2C19*3 or CYP2C19*2 alleles are more likely to gain weight during valproate therapy, and these alleles correlated significantly with variant VPA plasma concentrations. Therefore, detection of the SNPs of CYP2C19 is likely to be beneficial for optimizing the valproate blood concentration, and hence for controlling drug responses and side effects . In addition, CYP2B6 and CYP2A6 are responsible for producing only 20–25% of valproic acid metabolites; however, the CYP2C9*3 allelic variant is primarily associated with the formation of the hepatotoxic 4-ene-VPA metabolite, which is more potent than the VPA substrate . Although carbamazepine metabolism is catalyzed by CYP2C8 and CYP3A4 to form the active, equipotent CBZ metabolite, the EPHX1 (epoxide hydrolase) gene is considered the major CBZ metabolizer. Notably, the influence of CYP3A4 and EPHX1 polymorphisms is still under investigation . Furthermore, individuals with CYP3A5 variation exhibited a wide range of CBZ responses; the non-functional CYP3A5*3 allele strongly influences CBZ blood concentration, and combining CBZ with enzyme inducers resulted in different CBZ clearance rates . It has been suggested that an individual's susceptibility to developing CBZ side effects could be assessed by genotyping CYP2C19, CYP3A5, and EPHX1 . Lithium, being a generally effective mood stabilizer, is quite unique in that its elimination is renal and bypasses the CYP450 system.
While it is still recommended for use as a first-line mood stabilizer , it is worth noting that its long-term use is associated with impairment of thyroid and renal function . The findings of lithium-gene association studies revealed the importance of genetic factors in lithium response. As yet, the exact genes that co-segregate with lithium response have not been fully identified .

Generally, CYP450 enzymes contribute to cellular homeostasis by metabolizing several endogenous compounds, including dopamine, serotonin, cortisol, progesterone, and testosterone . Importantly, dopamine and serotonin are incapable of crossing the blood-brain barrier, consistent with the presence of CYP450 enzymes in the brain. In particular, in brain tissue, human CYP2D6 has demonstrated the ability to catalyze the aromatic hydroxylation of tyramine to dopamine (DA) and the O-demethylation of 5-methoxytryptamine to serotonin (5-HT) . As a result, a wide range of human behaviors and traits, including personality and neuropsychiatric disorders such as schizophrenia (SCZ), major depression (MD), and obsessive-compulsive disorder (OCD), are influenced by the commonly observed CYP2D6 variations . In contrast, CYP2C19 is expressed in human fetal brain tissue and disappears after birth; CYP2C19 is therefore thought to be involved in brain neurodevelopment and to significantly influence adult depressive phenotypes. In particular, the absence of CYP2C19 activity is correlated with a lower prevalence of depression. For example, one of the most common CYP2C19 alleles among Swedish subjects is CYP2C19*2, which is characterized by an inactive CYP2C19 enzyme and is thus associated with lower susceptibility to depressive moods . A recent study examined the genetic impact of CYP2D6 polymorphism on individuals' susceptibility to developing schizophrenia and suggested that CYP2D6 variations may alter the structure of the hippocampal white matter and the neurotransmission of dopamine, thus highlighting the neuronal connectivity underlying the pathophysiology of schizophrenia . For instance, PMs have a greater DA tone in the pituitary gland, combined with a lower serotonin tone, due to serotonin-mediated tonic inhibition . Overall, the contributions of CYP2D6 and CYP2C19 to the metabolism of endogenous substances are not fully understood, and further investigations are required to establish the physiological implications of CYP450 in the brain .

2.1. Dopamine Synthesis via CYP2D6

Dopamine is a neurotransmitter that also functions as a precursor to noradrenaline and adrenaline. It is synthesized initially from phenylalanine, which is converted by phenylalanine hydroxylase to tyrosine and then oxidized to dihydroxyphenylalanine (L-DOPA) by tyrosine hydroxylase; L-DOPA is ultimately metabolized by DOPA decarboxylase to dopamine . Alternatively, dopamine can be formed in the brain from p- and m-tyramine through aromatic hydroxylation by CYP2D6 . Notably, among the CYP450 enzymes, only CYP2D6 catalyzes the synthesis of dopamine .

2.2. Serotonin Metabolism via CYP2D6 and CYP2C19

Serotonin is a fundamental neurotransmitter that has been implicated in impulsive behavior, as well as in depressive and anxiety disorders. It is found in both vertebrate and invertebrate nervous systems . Serotonergic pathways in the brain originate from the 5-HT-containing groups (B1–B9) of neurons of the raphe nuclei in the brain stem; serotonin concentration depends mainly on free plasma tryptophan levels .
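To keep the enzymatic routes just described straight, here is a minimal structured restatement of them in Python; the substrates, products, and enzymes are those named in the text, and the snippet is purely a summary aid rather than a model of any kind.

```python
# Text-derived summary of the monoamine routes described above.

DOPAMINE_ROUTES = {
    "classical": [
        ("phenylalanine", "tyrosine", "phenylalanine hydroxylase"),
        ("tyrosine", "L-DOPA", "tyrosine hydroxylase"),
        ("L-DOPA", "dopamine", "DOPA decarboxylase"),
    ],
    "CYP2D6-mediated (brain)": [
        ("p-/m-tyramine", "dopamine", "CYP2D6 (aromatic hydroxylation)"),
    ],
}

SEROTONIN_ROUTE = [
    ("5-methoxytryptamine", "serotonin (5-HT)", "CYP2D6 (O-demethylation)"),
]

for route, steps in DOPAMINE_ROUTES.items():
    for substrate, product, enzyme in steps:
        print(f"{route}: {substrate} -> {product} [{enzyme}]")
for substrate, product, enzyme in SEROTONIN_ROUTE:
    print(f"serotonin regeneration: {substrate} -> {product} [{enzyme}]")
```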
In vivo and in vitro studies have documented the ability of CYP2D6 to regenerate serotonin from 5-methoxytryptamine (5-MT), suggesting the importance of CYP2D6 both for neuropsychological events in the central nervous system (CNS) and for drug metabolism among different individuals . Notably, variation in CYP2D6 activity in human populations has been correlated with different personality traits; people with the CYP2D6 EM phenotype were less anxious and more socially successful than PMs . Interestingly, in vivo investigations revealed that ultrarapid metabolizers (UMs) had higher serotonin levels in their platelets than extensive metabolizers (EMs) and poor metabolizers (PMs) . It has also been suggested that CYP2C19 is involved in the biotransformation of serotonin and is correlated with bilateral hippocampal volume: increased CYP2C19 expression has been noted alongside altered 5-HT1A downstream signaling and a reduction of hippocampal volume. However, further detailed studies are required to confirm the role of CYP2C19 in 5-HT1A biochemical signaling, and consequently, in hippocampal volume and depression .

The effect of gene variants on the metabolism of dopamine and serotonin (and potentially other neurotransmitters) adds to the complexity of predicting the effect of such variants on a patient's overall response to medications . Nonetheless, following dosing recommendations based on an individual's phenotype can aid in optimizing pharmacotherapy. provides a list of dosing recommendations for psychotropic drugs based on CPIC guidelines for CYP2C19, CYP2D6, and CYP2C9 phenotypes. lists the drugs that can be used as alternatives, so as to minimize the likelihood of pharmacokinetic variability accounted for by the CYP isoenzymes.
The human leukocyte antigens (HLA) are a group of genes encoding the major histocompatibility complex (MHC) proteins, which play a crucial role in immune and inflammatory responses . HLA variants should be added to the pharmacogenomic panels tested in psychiatric patients to ensure safe and effective individualized therapy. Recent studies have demonstrated the impact of genetic variations in the HLA gene cluster on the etiology of several psychiatric disorders. Interestingly, data showed that the highly variable HLA molecules play a major role in the etiology of bipolar disorder and schizophrenia, but not in depressive disorders and attention deficit hyperactivity disorder (ADHD) . In particular, HLA molecules were found to modulate neural signaling and synaptic integration, thus affecting behavior, learning, and memory; however, their exact contribution is not yet fully understood . Additionally, HLA genetic diversity is associated with psychotropic treatment response and with the development of adverse drug reactions (ADRs). For instance, Class I and II HLA alleles have been shown to partially mediate clozapine-induced agranulocytosis . A recent study showed a correlation between a double amino-acid variant at positions 62 and 66 of the HLA-A peptide-binding groove and a better response to treatment with risperidone in schizophrenia patients . HLA polymorphisms among different ethnic groups are significantly associated with SCARs (severe cutaneous adverse drug reactions), including Stevens-Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN), life-threatening adverse drug reactions presenting as serious skin hypersensitivity reactions . Therefore, the implementation of pre-emptive HLA genotyping in clinical practice might prevent these side effects . Among neuropsychiatric medications, phenytoin, carbamazepine, and oxcarbazepine, with the respective biomarkers HLA-B*15:02, HLA-A*31:01/HLA-B*15:02, and HLA-B*15:02/HLA-A*31:01, have recently been the most frequently documented with SJS/TEN .

We expect more gene variants to be added soon to the pharmacogenomic panel for psychiatric patients; some of the most significant are those associated with obesity and metabolic syndrome. Patients with psychiatric disorders are known to have higher morbidity and mortality compared with the overall population . This is likely due to the metabolic syndrome that these patients experience, predisposing them to cardiovascular diseases, type 2 diabetes, hypertension, dyslipidemia, hyperglycemia, and obesity .
Various aspects impact this high comorbidity, including genetic factors. Polymorphisms of different genes are known to be associated with the development of metabolic syndrome in psychiatric patients across different ethnic populations .

4.1. Genes Associated with Obesity

The fat mass and obesity-associated (FTO) gene encodes the alpha-ketoglutarate-dependent dioxygenase. Monogenic disorders related to mutations in the FTO region have also been identified in humans , and polymorphisms in the noncoding areas of this gene have been linked with obesity and several other diseases, particularly those for which obesity is a risk factor . While FTO gene polymorphisms associated with obesity show substantial variability in allele frequencies among ethnoterritorial groups, comparable allele frequencies of multiple SNPs across representatives of the same ethnoterritorial group also exist. The diversity between subpopulations within territorial groupings is similarly negligible for most SNPs, independent of interterritorial variations in allele frequency. It is important to note that the FTO gene polymorphisms linked to these traits are found in noncoding regions and have no effect on the structure or function of the alpha-ketoglutarate-dependent dioxygenase. Nonetheless, BMI and other anthropometric features representing the degree of obesity have been reported to be linked to rs1421085. The T-to-C substitution results in a twofold increase in the expression of two genes distal to FTO, namely IRX3 and IRX5 . During preadipocyte differentiation, the proteins encoded by the IRX3 and IRX5 genes shift their development from energy-dissipating to energy-storing adipocytes; an increase in IRX3 and IRX5 expression also leads to an increase in lipid accumulation . For this polymorphism, subpopulation differences in allele frequency between the geographical groupings analyzed in the scope of the 1000 Genomes Project did not surpass 8% (except for American subpopulations) . In addition, among 40 SNPs located in intron 1 that showed relationships with diseases based on GWAS, 35 SNPs exhibited less than 5% interpopulation variance in the frequency of one of the alleles. This can be explained by the fact that, in addition to significant population differentiation in SNP allele frequencies, large blocks of linkage disequilibrium have been discovered in the region of FTO intron 1 in European and Asian populations (the blocks of linkage are smaller in African populations) . Furthermore, association studies revealed that haplotypes can create variants with risk or protective effects on pathological conditions, including BMI and obesity. In another study, researchers in Hungary found that for 11 SNPs, risk allele frequencies differed considerably between two ethnic groups: the general Hungarian population and the Roma . Variants in the FTO gene (rs1558902, rs1121980, rs9939609, and rs9941349) showed a robust yet ethnicity-independent link with obesity. The Roma community showed stronger associations with obesity than the Hungarian general population, which was explained by ethnicity-associated behavioral and environmental factors. In a recent 2021 study by Boiko et al., four FTO SNPs were found to be significantly associated with body mass index in patients with schizophrenia, irrespective of the treatment regimen ; however, such an association was not confirmed for antipsychotic drug-induced metabolic syndrome. It therefore remains unclear whether the presence of specific FTO SNPs increases the risk of obesity in patients receiving antipsychotic medications.
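As an aside on the allele-frequency figures discussed in this subsection, carrier proportions can be approximated from an allele frequency under Hardy–Weinberg equilibrium; the numerical example below uses an arbitrary frequency purely for illustration. For a biallelic variant with risk-allele frequency $p$, the expected genotype proportions are

$$f_{\mathrm{hom}} = p^{2}, \qquad f_{\mathrm{het}} = 2p(1-p), \qquad f_{\mathrm{carrier}} = 1-(1-p)^{2}.$$

For example, $p = 0.12$ gives $f_{\mathrm{carrier}} = 1 - 0.88^{2} \approx 0.23$: roughly one person in four or five carries at least one copy of the risk allele even though homozygotes ($p^{2} \approx 1.4\%$) remain rare, which is one reason modest inter-population differences in allele frequency can translate into noticeable differences in the number of potentially affected patients.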
4.2. Genes Associated with Metabolic Syndrome

4.2.1. ADRA1A

Weight gain in people with schizophrenia has also been linked to a range of genetic variations. For instance, the alpha-1A adrenergic receptor (ADRA1A) gene has been linked to cardiovascular risk factors such as obesity and hypertension, and a positive association has been observed between the presence of the Arg347 allele of ADRA1A and the total number of metabolic syndrome (MetS) components . When the three alpha-1 adrenergic receptor genes in the human genome (ADRA1A, ADRA1B, and ADRA1D) were investigated in the United States population, haplotype blocks of various lengths were detected in the Caucasian and African American populations . Haplotypes exhibit substantial linkage disequilibrium over extended chromosomal regions . Therefore, the haplotype-block organization of the human genome can have crucial implications for successfully mapping genetic polymorphisms linked with complex diseases. Once the haplotype blocks of a candidate gene have been identified, a collection of haplotype-tagging SNPs that capture the haplotype diversity of the blocks may be chosen. This offers an effective method for screening each haplotype block for association, particularly because it allows the discovery of effects of any allele of moderate abundance and effect size, even if the causal allele is unknown . In the U.S. Caucasian population, all SNP markers fell within haplotype blocks in ADRA1A and ADRA1D. In general, shorter haplotype blocks were found in African Americans, and 30–40% of the genomic regions of ADRA1B and ADRA1D did not exhibit block structure in this population . The findings of this study confirm that the haplotype block architecture of the three alpha-adrenergic receptor genes shows demographic disparities between Caucasians and African Americans in the United States.

4.2.2. eNOS

Endothelial nitric oxide synthase (eNOS) produces NO in endothelial cells and platelets, and it is critical for maintaining vascular homeostasis, preventing platelet and leukocyte adhesion, and inhibiting vascular smooth muscle cell migration and proliferation. Moreover, clinical investigations have demonstrated that functional polymorphisms or haplotypes in the eNOS gene are linked to an increased risk of MetS . In a study that aimed to determine whether eNOS gene polymorphisms or haplotypes are linked to MetS vulnerability in children and adolescents, the distribution of genetic variants of three clinically important eNOS polymorphisms (T786C in the promoter, a VNTR in intron 4, and Glu298Asp in exon 7) in ethnically defined DNA samples was examined to calculate haplotype frequencies and look for correlations between these variants . In this analysis, Caucasians (34.5%) had a higher prevalence of the Asp298 variant than African Americans (15.5%) or Asians (8.6%), and a higher prevalence (42.0%) of the C-786 variant than African Americans (17.5%) or Asians (13.8%). African Americans (26.5%) had a higher prevalence of the 4a variant in intron 4 than Caucasians (16.0%) or Asians (12.9%). In each of the three groups, the most frequent predicted haplotype consisted exclusively of wild-type variants. This haplotype was more prevalent in Asians (77% vs. 46% in the other ethnicities).
In African Americans, the second most frequent haplotype contained the 4a variant together with wild-type variants; the Asp298 and 4a variants were negatively related in this group. Since the biological changes associated with the T786C polymorphism predispose children and adolescents to MetS, and given the aforementioned interethnic differences, genetic testing should be considered to address these variants clinically. Further studies are required to decipher the potential effects of these gene variants in patients receiving psychotropic medications and whether they can contribute to an increased susceptibility to, and/or severity of, their metabolic adverse effects.
5.1. CYP2D6 Isoenzyme

The highly polyallelic nature of CYP2D6 is associated with a poor metabolizer (PM) prevalence of around 8% in Caucasian populations, whereas Asian populations have a lower prevalence (1%). On the other hand, intermediate metabolizers (IMs) are more prevalent among Asian populations (35–55%) than in Caucasians (<2%) (5, 8). Although the CYP2D6*4 allele contributing to the PM phenotype is more frequently found in Caucasians, population studies reveal that Asians harbor the highest frequency of the decreased-function CYP2D6*10 allele (52%); the frequency is lower (3–7%) in white Europeans and Oceanians, and the allele is seldom seen in African populations . Saruwatari et al. analyzed the non-linear pharmacokinetic (PK) parameters of the Michaelis–Menten constant (Km) and maximum velocity (Vmax) in Japanese patients with major depressive disorder who were prescribed paroxetine, to investigate the effects of CYP2D6 polymorphisms, including CYP2D6*10, on plasma paroxetine concentrations. The results indicated significant differences between CYP2D6*10 carriers and non-carriers in the Km (24.2 ± 18.3 ng/mL vs. 122.5 ± 106.3 ng/mL, p = 0.008) and Vmax values (44.2 ± 16.1 mg/day vs. 68.3 ± 15.0 mg/day, p = 0.022). Owing to the inter-ethnic disparities in the CYP2D6*10 allele frequency, genotyping individuals could contribute to achieving optimal blood paroxetine concentrations.
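For readers less familiar with the non-linear parameters reported above for paroxetine, the underlying Michaelis–Menten relationship describes a saturable elimination rate,

$$v = \frac{V_{\max}\, C}{K_{m} + C},$$

where $C$ is the plasma drug concentration. When $C \ll K_{m}$, the rate is approximately proportional to $C$ (first-order elimination); as $C$ approaches or exceeds $K_{m}$, elimination saturates towards $V_{\max}$. A lower $K_{m}$, as estimated in CYP2D6*10 carriers, therefore means that saturation, and hence a disproportionate rise in concentration with dose, occurs at lower plasma concentrations.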
5.2. Ethnic Variation of CYP2C19

Galindo et al. compiled 138 research studies to classify the prevalence of CYP2C19 alleles by ethnic group and geographical region. Compared with the other populations, the CYP2C19*2 allele was most widespread in Native Oceanians (61.3%), followed by East and South Asians (30.3%). The CYP2C19*3 allele was likewise more frequent in Native Oceanians (14.42%) and East Asians (6.89%); in contrast, it was infrequent in the remaining ethnic groups. Moreover, the *17 allele was more prevalent in Mediterranean and South European populations (42%) and in the Middle Eastern region (24.87%). Frequencies from PharmGKB are listed in . Although the non-functional CYP2C19*2 and *3 are the most commonly genotyped alleles, the extensive and ultra-rapid metabolizer phenotypes associated with CYP2C19*17 exhibit the highest inter-ethnic diversity . Clinically, such phenotypes have a notable impact on CYP2C19 substrates, such as amitriptyline, a tricyclic antidepressant that works as both a serotonin and a norepinephrine reuptake inhibitor . Kirchheiner et al. reported that treatment with antidepressants such as amitriptyline, among others, would benefit from CYP2C19-based dose adjustment, with recommended doses of about 110% of the standard dose for carriers of the homozygous extensive metabolizer (EM) genotype, slightly below 100% for heterozygotes, and around 60% for PMs. Owing to the vast inter-ethnic differences in CYP2C19*17 frequency, individualized dosing would significantly impact the therapeutic response.

5.3. Ethnic Variation of CYP2C9

The two most common variants are the decreased-function CYP2C9*2 and the no-function CYP2C9*3. The CYP2C9*2 allele has frequencies ranging from 11–13% in Middle Eastern, European, and South/Central Asian populations; however, its estimated frequency is 2% in populations of African ancestry and <1% in East Asian populations. CYP2C9*3 ranges in frequency from around 7–11% in European, Middle Eastern, and South/Central Asian populations, but is lower in East Asians (3%) and even lower in African populations . In a study by Zubiaur et al. , 80 participants were profiled for pharmacokinetic parameters through blood sample collection pre-dose and up to 72 h after olanzapine intake, and were subsequently genotyped. The analysis revealed that PMs had a statistically significantly longer half-life (t½) and larger volume of distribution compared with normal metabolizers (NMs) or IMs, indicating that olanzapine accumulated to a greater degree than in the other phenotypes. In addition, polymorphism was related to adverse drug reactions, and the PK variability was congruent with the polymorphism of transporters. Since PM status, for example homozygosity for CYP2C9*3 or the heterozygous CYP2C9*3/*4 genotype, among others, could lead to such consequences, correlating inter-ethnic differences with allelic function emphasizes the potential need for dose adjustment; adverse effects might be prevented if inter-ethnic differences are considered by physicians during prescribing.

5.4. Ethnic Variation of HLA (Human Leukocyte Antigen)

Compared with Japanese (0.002), Korean (0.004), and European (0.01–0.02) populations, carbamazepine-induced Stevens-Johnson syndrome/toxic epidermal necrolysis (SJS/TEN) was comparatively more frequent among Han Chinese (0.057–0.145), Malaysian (0.12–0.157), and Thai (0.085–0.275) populations . More recent genome-wide association studies (GWAS) have revealed that the HLA-A*31:01 allele has a relatively stronger association with carbamazepine-induced hypersensitivity in populations with a lower frequency of HLA-B*15:02, namely Northern Europeans, Japanese, and Koreans . Furthermore, the HLA-B*15:11 allele has also been linked to carbamazepine-induced SJS/TEN in Japanese and Korean populations .
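As a minimal sketch of how the pre-emptive HLA screening discussed in this review could be operationalized, the check below flags a prescription when the patient carries a risk allele reported in the text for that drug (e.g., HLA-B*15:02 or HLA-A*31:01 for carbamazepine). The association table is taken from the examples cited here and is illustrative only; a real implementation would query curated CPIC/FDA biomarker resources.

```python
# Illustrative pre-prescription HLA check based on associations cited in the text.

HLA_RISK_ALLELES = {
    "carbamazepine": {"HLA-B*15:02", "HLA-A*31:01"},
    "oxcarbazepine": {"HLA-B*15:02"},
    "phenytoin": {"HLA-B*15:02", "HLA-B*13:01", "HLA-B*51:01"},
}

def hla_risk_flags(drug: str, patient_hla: set) -> set:
    """Return the subset of the patient's HLA alleles flagged for this drug."""
    return HLA_RISK_ALLELES.get(drug, set()) & patient_hla

patient = {"HLA-A*02:01", "HLA-B*15:02"}
print(hla_risk_flags("carbamazepine", patient))  # {'HLA-B*15:02'}
print(hla_risk_flags("quetiapine", patient))     # set()
```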
Pharmacogenetic studies in East Asians from Taiwan, Thailand, and Japan found that phenytoin-related ADRs are linked with CYP2C9*3 and with HLA-B*13:01, HLA-B*15:02, and HLA-B*51:01 . Moreover, similar phenytoin-related ADRs have been reported to be elevated in Thai and Malay patients with the HLA-B*13:01, HLA-B*56:02/04, and CYP2C19*3 variants, or when omeprazole is co-administered in patients of Chinese descent . The HLA-A*02:01:01, HLA-B*35:01:01, and HLA-C*04:01:01 haplotypes have also been identified as biological markers for lamotrigine-induced ADRs in Mexicans .
Extrinsic factors, such as smoking, pregnancy, age, and the use of concomitant medications, interact with CYP450 enzymes and can affect their catalytic activity. Cigarette smoking can potently induce the expression of the inducible enzyme CYP1A2; however, the induction effect depends mainly on the CYP1A2 genotype . For instance, phenoconversion into a faster metabolizing phenotype has been observed in smokers carrying CYP1A2*1F; consequently, higher olanzapine doses were needed for them to reach an effective olanzapine plasma concentration. Additionally, for other drugs, such as clozapine, polymorphisms of CYP2D6, CYP2C19, and CYP1A2 must be analyzed together with several nongenetic factors, specifically smoking and concomitant medications . Enzyme inhibition by co-administered drugs is a further problem that can confound genetic data by changing the apparent phenotype. For example, receiving carbamazepine and valpromide in combination leads to an increase in the patient's carbamazepine plasma level, since EPHX1 metabolizes carbamazepine and, at the same time, is inhibited by valpromide .
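A minimal sketch of the phenoconversion idea described above is given below, assuming, purely for illustration, that smoking shifts the genotype-predicted CYP1A2 phenotype one step toward faster metabolism; the single-step shift is an assumption of this sketch, not a clinical rule.

```python
# Illustrative CYP1A2 phenoconversion: smoking induces CYP1A2, so a smoker's
# functional phenotype may be faster than the genotype alone predicts.

ACTIVITY_ORDER = ["poor", "intermediate", "normal", "rapid", "ultrarapid"]

def functional_cyp1a2_phenotype(genotype_predicted: str, smoker: bool) -> str:
    idx = ACTIVITY_ORDER.index(genotype_predicted)
    if smoker:
        idx = min(idx + 1, len(ACTIVITY_ORDER) - 1)  # shift one step faster
    return ACTIVITY_ORDER[idx]

print(functional_cyp1a2_phenotype("normal", smoker=True))   # rapid
print(functional_cyp1a2_phenotype("normal", smoker=False))  # normal
```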
Aging is another critical factor that alters CYP450 gene expression and, consequently, the metabolic functionality of the encoded proteins. Notably, age can decrease the biotransformation activity of CYP450 enzymes, leading to a slower metabolizer phenotype. Similarly, genetic variation, in addition to demographic parameters, has been found to contribute to modified CYP2B6 expression and activity . Although hepatic CYP2B6 represents only a small proportion (1–4%) of the human CYP450 content, it is responsible for catalyzing the metabolism of many important drugs, including the atypical antidepressant bupropion . Regarding pregnancy as a nongenetic factor, two different studies of pregnant women receiving paroxetine and dextromethorphan have yielded conflicting results on the impact of pregnancy on CYP2D6 enzymatic activity. In particular, contradictory results were reported for CYP2D6 PMs and CYP2D6 IMs, most probably due to the metabolism of these drugs by alternative CYP450 enzymes other than CYP2D6 that exhibit low enzymatic activity specifically during pregnancy . Overall, single nucleotide polymorphisms (SNPs) play a major role in inter-individual variability in therapeutic drug response. Even within a given genotype group, however, variability is still observed, suggesting the contribution of nongenetic factors that influence CYP450 activity .

In a comprehensive analysis, Jithesh et al. investigated the population of Qatar. A total of 6,045 whole genomes from Qataris revealed 1,320 variants in 703 genes that ultimately affect 299 drugs and were significantly different from those of the other populations (76,156 whole genomes) archived in the gnomAD v3 dataset. Furthermore, 615 of the variants were more frequently found in Qataris. The rs1137101 SNP in the LEPR gene was found at a lower frequency in the Qatari population; the LEPR gene encodes the leptin receptor, and mutations have been associated with obesity and pituitary malfunction . On the other hand, rs2289669 in SLC47A1 and rs11212617 in ATM were both found at higher frequencies. The former gene encodes the multidrug and toxin extrusion protein 1 (MATE1) and has been linked with cellular uptake of, and sensitivity to, the anticancer drug imatinib , while mutations in the latter cause the neurodegenerative disease ataxia-telangiectasia . Moreover, an assessment of 15 pharmacogenes that impact 46 medications revealed that individuals possessed, on average, 3.6 actionable genotypes/diplotypes, with at least one clinically actionable genotype/diplotype seen in 99.5% of the study participants. On average, Qataris were found to carry pharmacogenetic variants that predict actionable phenotypes ultimately influencing 12.9 (28.8%) of the 46 medications. Additionally, a genome-wide association (GWAS) study (n = 182) reported the genetic variations related to weekly warfarin dosing requirements in Middle Eastern and North African (MENA) populations. The results revealed that the variants rs9934438 within Vitamin K Epoxide Reductase Complex Subunit 1 (VKORC1) and rs4086116 in CYP2C9 accounted for 39% and 27% of the variability seen in the Qatari (n = 132) and Egyptian (n = 50) participants, respectively . In the endoplasmic reticulum membrane, VKORC1 produces the catalytic component of the vitamin K epoxide reductase complex, which converts inactive vitamin K 2,3-epoxide to active vitamin K. Therefore, allelic variation could cause increased sensitivity or resistance to warfarin, a vitamin K epoxide reductase inhibitor .
In another study, conducted in association with the SEAPharm consortium project , Al-Mahayri et al. analyzed the landscape of variation among the indigenous citizens of the United Arab Emirates through targeted resequencing. DNA from 100 self-identified Emirati participants was extracted from whole blood samples and resequenced using a targeted sequencing panel (PKSeq). The study emphasizes that, although rare variants (allele frequency <1%) are seldom observed in clinical trials, many of them have been determined to be actionable. Such variants "cluster geographically" or become exclusive to a population, which necessitates the creation of libraries archiving rare variants around the world . In population genetics, a genetic variant is characterized as common if its minor allele frequency (MAF) is higher than 1%, and as rare if its MAF is less than 1%. Of the 1,243 variants identified, a majority (63%) had a MAF of 1% or less (MAF ≤ 1%). Interestingly, when compared with other populations from multiple databases, around 30% of the identified variants in the Emirati participants were unique . Moreover, among the pharmacogenes investigated in this study, the CYP family had the largest number of variants. Within this family, the CYP2D6 gene came second to CYP4F12, which bore the greatest number of variants. Of the CYP2D6 variants, eleven were identified as key markers in clinically actionable haplotypes. Thirty participants possessed CYP2C9*2 or *3 alleles (decreased and no function, respectively), while three were homozygous for one of these alleles. The most frequently detected diplotype was CYP4F2*1/*3 (54%), while CYP2B6*1/*18 and CYP2C9*2/*3 (1%) were the least frequent among the CYP family. Moreover, although the SNP detected in ABCC4 had the highest allele frequency overall (rs1751034, 82.6%), within the CYP family the allele frequencies in descending order were CYP4F2 (rs2108622, 45.92%), CYP2B6 (rs3745274, 31%), CYP2C19 (rs4244285, 15.1%), CYP2C8 (rs10509681, 13.64%), CYP2C9 (rs1057910, 7.07%), CYP2C8 (rs7900194, 2%), and CYP2B6 (rs28399499, 1.01%) . Such studies highlight the significance of pharmacogenetics research, particularly in the Middle East, where the lack of such data and awareness hinders the implementation of pharmacogenomics into practice .

The CPIC guidelines are good tools for recommending the appropriate selection and dosing of a variety of medications, with those pertaining to psychiatry among the most prominent. To adopt a pharmacogenomics-guided approach to clinical practice, genotyping results and the consequent recommendations should be available at the point of care. Different levels of metabolism are assigned to different gene variants (e.g., intermediate metabolizer, poor metabolizer, etc.). However, the development of activity scores can provide a finer-grained classification of metabolic capacity, although complexity arises when medications are metabolized by more than one enzyme. Notably, the activity score may depend on the medication itself (substrate dependency) . There have been several examples in which enzymatic activity markedly varies according to genotype, with potential consequences for drug efficacy and safety.
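Returning to the activity-score idea mentioned above, the sketch below sums per-allele activity values for a CYP2D6 diplotype and maps the total to a phenotype. The allele values and cut-offs approximate published consensus figures but should be read as assumptions of this illustration, and the substrate dependency noted in the text is deliberately ignored.

```python
# Illustrative CYP2D6 activity-score calculation (simplified; values and
# cut-offs approximate published consensus and are assumptions here).

ALLELE_ACTIVITY = {"*1": 1.0, "*2": 1.0, "*9": 0.5, "*41": 0.5, "*10": 0.25,
                   "*4": 0.0, "*5": 0.0}

def cyp2d6_activity_score(allele_a: str, allele_b: str) -> float:
    """Sum the per-allele activity values of a diplotype."""
    return ALLELE_ACTIVITY[allele_a] + ALLELE_ACTIVITY[allele_b]

def phenotype_from_score(score: float) -> str:
    if score == 0:
        return "poor metabolizer"
    if score <= 1.0:
        return "intermediate metabolizer"
    if score <= 2.25:
        return "normal metabolizer"
    return "ultrarapid metabolizer"

for diplotype in [("*4", "*4"), ("*10", "*41"), ("*1", "*41"), ("*1", "*1")]:
    score = cyp2d6_activity_score(*diplotype)
    print(diplotype, score, phenotype_from_score(score))
```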
Previous studies investigated more than 2,000 patients receiving escitalopram who were genotyped for CYP2C19; the *17 allele increased the enzymatic capacity of CYP2C19 by only approximately 20% compared with the wild type . In another example, in more than 1,000 patients receiving venlafaxine and genotyped for CYP2D6, patients with *9 and *10 alleles had a 70% reduction of enzymatic capacity, whereas the presence of the *41 allele reduced the enzymatic capacity by around 85% . To add to this complexity, as previously discussed, enzymatic activity may vary across ethnicities, and in vivo validation studies of such predictive values are therefore required. Moreover, many patients may have comorbidities and receive several medications, with potential drug interactions. Accordingly, mathematical modeling may help integrate the numerous factors affecting drug response, thus optimizing the selection and dosing of medications.

Integrating the results of genotyping into electronic medical records is another challenge to the routine use of pharmacogenomics in clinical practice. The recommendations should also be integrated in order to provide adequate guidance to the physician at the site of care. The cost of genetic testing should be interpreted within the context of the total (direct and indirect) costs of mental health disorders. Notably, indirect costs are high for psychiatric patients , whereas the cost of genetic testing has dropped markedly with the use of new technologies and high-throughput devices. One of the important causes of non-compliance is the development of adverse drug reactions. As previously described, adding HLA genotyping and predictors of obesity and metabolic syndrome may help the precise selection of medications and avoid prescribing those with a high probability of causing adverse drug reactions in patients possessing certain gene variants.

Psychopharmacogenetic studies generally concur that genetic variations affect treatment outcomes for patients on psychotropic medications and can be utilized to optimize patient therapy. However, thus far, many of the pharmacogenetic studies have been contradictory. For instance, while CPIC advises CYP2D6 poor metabolizers to avoid amitriptyline , the FDA simply discusses its use in the same phenotypic population as a warning . This has contributed to hindering the implementation of pharmacogenomics into practice, with the lack of prescriber training and confidence in interpreting the results being additional contributing factors . Moreover, the contradictory results of psychiatric genotyping studies, due to small sample sizes, lack of psychopharmacological expertise, and demographic, clinical, and environmental differences between patient cohorts, have limited the utilization of psychopharmacologic assays to specific psychotic disorders, such as refractory schizophrenia . Furthermore, the complexity of psychiatric diseases and the inter-ethnic differences in drug response are a challenge for pharmacogenomic testing in psychiatric clinics . Shugg et al. investigated inconsistencies among pharmacogenomic recommendations from the major U.S. reporting sources (CPG, CPIC, and FDA), which differed significantly in the following categories: recommendation, addressing routine screening, specific biomarkers, variants, and patient groups. The study concluded that almost half of the recommendations were inconsistent.
Proposed causes included: (1) inconsistencies between the sources in the level of evidence required to deem a drug-gene pair "clinically actionable"; and (2) different mission statements between the sources (i.e., CPIC issues recommendations, whereas the FDA provides information rather than recommendations). Potential solutions, as reported by Shugg et al., include constructing strong evidence in favor of pharmacogenomics through randomized controlled trials, collaboration of expert panels with official organizations and consortia, or a collective agreement among organizations to follow a single source for pharmacogenomic recommendations .

Moreover, as previously described, genetic background differs significantly among ethnicities. Regional disparities in allele frequencies could complicate the interpretation of pharmacogenomic results among the various groups of subjects involved in the studies. Further complexity arises from the ancestral origin of the groups concerned. For example, North Indians, South Indians, and East Indians are all genetically distinct, but are often grouped into a "homogenous" Asian population . Moreover, self-reported ethnicity can cause complications in genetic dosing algorithms. Additionally, doubts have recently been raised regarding the necessity of race-based algorithms , as it is unclear which ethnic and racial groups should be used as a standard in such studies, and there is a paucity of studies in many ethnicities, including those of the Middle East . This calls for the establishment of better-defined categories than the broad Black, White, and Asian groupings, so that ethnicity can be used as a proxy when, for example, no information is available on a patient's genotype, and so that precision in pharmacogenetics research can be improved.

The "trial-and-error" method of selecting psychotropic agents for psychiatric patients underscores the need to find ways to improve current prescribing patterns. In this review, we have demonstrated the recent significant progress in our understanding of individualized therapy. The data show that, given the inter-individual differences in drug response, pharmacogenomic studies in clinical practice can aid in identifying the reasons behind an individual's lack of response and/or the occurrence of an adverse drug reaction. However, disparities in technological advancement and research capacity between regions of the world have created a paucity of data on the prevalence of actionable pharmacogenomic variants in several countries, including those of the Middle East. Moreover, the complex genetics and phenotypes of psychiatric diseases, as well as the apparent inter-ethnic differences, limit the utilization of pharmacogenomic testing in psychiatric clinics. In addressing the above-mentioned challenges, future work should focus on moving the world towards practicing precision medicine based on individualized genetic profiles. For this purpose, it is essential that healthcare professionals have sufficient knowledge of genetic principles. Practically, increased awareness of the significance of allelic variants in clinical practice and the establishment of electronic genetic records can prospectively identify patients who would benefit from pharmacogenetic testing, and thus from appropriate psychiatric treatments.
COVID-19 pandemic—testing times for post graduate medical education
bc2cc45b-e3d3-4ae3-969b-eb69e77aa8ee
7926130
Ophthalmology[mh]
Nil. There are no conflicts of interest.
Immunoexpression of stem cell markers SOX-2, NANOG AND OCT4 in ameloblastoma
27660049-3b24-49b5-abe4-ce7d1df8fa2f
9841912
Anatomy[mh]
Ameloblastoma (AME) is an odontogenic tumour of epithelial origin and, although classified as a benign tumour, it is characterized by a locally invasive growth pattern, which can reach large proportions and promote facial deformities in patients. Surgical removal is the treatment of choice, but when conservative techniques are applied, small islands of tumour are not completely removed, leading to local recurrence in 60–80% of cases of solid AME. In the search for cellular mechanisms that explain the local aggressiveness of this benign lesion, the investigation of the role of stem cells (SCs) and cancer stem cells (CSCs) has gained prominence in tumour biology, with research identifying their participation in growth, angiogenesis, progression, tumour recurrence and self-renewal potential. SOX-2, NANOG and octamer-binding protein 4 (OCT4) are important biomarkers in the analysis of the presence of SCs. They act as critical regulators of embryonic self-renewal and pluripotency capable of mediating tumour proliferation and differentiation. Furthermore, the interplay of these proteins seems to play an oncogenic role, considering studies that point to the presence of these factors in different types of cancer, such as lung adenocarcinoma and breast, colorectal and gastric cancer. The SOX-2 protein (SRY-related HMG-box gene 2) acts as an important transcription factor in maintaining the self-renewal capacity of SCs. Previous studies have shown it to be associated with a pro-oncogenic function in AME and ameloblastic carcinoma, with differential expression between the two. NANOG (a homeodomain protein) is another transcription factor that plays a central role in maintaining cell pluripotency during embryonic development, in addition to being associated with cell proliferation and renewal. High expression of NANOG has also been identified in patients with some types of cancer, but it has not yet been studied in AME. OCT4 also acts in pluripotency and cell self-renewal, and has been associated with cell proliferation and tumour progression. The expression of OCT4 in AME has been related to cell development and differentiation, and an initial study on the expression of this protein in lesions of odontogenic origin showed divergence in expression between AME and ameloblastic carcinoma. SOX-2 and OCT4 are considered essential regulators for the maintenance and early development of SCs. Although they have independent roles in different cell types, they present a synergistic interaction that leads to the transcription of target genes, with NANOG as one of the targets of this interaction. When together, SOX-2, NANOG and OCT4 bind to the promoters of their own genes, forming interconnected self-regulatory loops. It is believed that this self-regulatory network can provide advantages for SCs, such as a reduced response time to environmental stimuli and greater stability of gene expression, thus maintaining cell fate. These characteristics are important for cell survival, stability and tumour progression. It has been reported that odontogenic neoplasms, such as AME, originate from SCs remaining from the dental lamina. However, the true contribution of SCs to the molecular mechanism involved in the pathogenesis of AME still needs clarification. This is the first work to study these three proteins (SOX-2, NANOG and OCT4) in AME.
In this sense, identifying the proteins related to cellular self-renewal and pluripotency may help explain the biological behaviour of AME and is extremely important for the development of better treatments and prognostic assessment. Sample This was an experimental laboratory study. For the in vivo study, 23 AME samples were used, retrieved from the archives of the Department of Oral Pathology, Faculty of Dentistry, University Centre of the State of Pará (CESUPA). The dentigerous cyst (DC), like AME, is derived from the odontogenic epithelium, but has a less aggressive behaviour. Thus, 10 samples of DC were used as a control, together with 10 samples of dental follicle (DF; tissue of odontogenic origin without neoplastic alterations), obtained from the Laboratory of Pathological Anatomy and Immunohistochemistry, Faculty of Dentistry, Federal University of Pará (UFPA). The clinical data of the AME samples were acquired manually from the reports in the medical records, and the samples were histologically classified by two oral pathologists. For the in vitro study, the cell line derived from human AME, called AME-hTERT, established at the Cell Culture Laboratory of the Faculty of Dentistry, Federal University of Pará (UFPA), was used. This study was registered and approved by the Human Research Ethics Committee of the Health Sciences Institute of the Federal University of Pará—CEP/ICS/UFPA (CAAE: 30647720.6.0000.0018). Informed consent was waived by this committee. Immunohistochemistry Immunohistochemical analysis was performed as previously described, with AME, DC and DF tissues incubated with Anti-SOX-2 (1:50; Sigma, St. Louis, MO, USA), Anti-Nanog (1:150; Millipore, Burlington, MA, USA) and Anti-Oct4 (1:25; Millipore, Burlington, MA, USA) antibodies for 1 h. As a positive control, samples of oral squamous cell carcinoma were used, and as a negative control, the primary antibody was replaced by BSA and fetal bovine serum in TRIS buffer. Immunohistochemical evaluation Five brightfield images were randomly acquired from regions with intact epithelium of each AME, DC and DF sample (and, for AME, from regions representative of the lesion) using an AxioScope microscope (Carl Zeiss, Oberkochen, Germany) equipped with an AxioCam HRC colour CCD camera (Carl Zeiss, Oberkochen, Germany). Images were taken at 400× magnification and saved in TIFF format. The areas stained with diaminobenzidine (DAB) were analysed using the "Immunohistochemistry (IHC) Image Analysis Toolbox" of the ImageJ software (National Institute of Mental Health (NIMH), National Institutes of Health (NIH), Bethesda, MD, USA). Semi-automatic image analysis was then performed by detecting DAB staining. The mean percentage of staining in the tumour parenchyma, obtained from five fields per sample, was analysed using GraphPad Prism 8 software (GraphPad Software Inc., San Diego, CA, USA). Cell cultivation The ameloblastoma cell line was cultured and maintained in culture flasks as previously described. Indirect immunofluorescence The AME-hTERT strain was seeded on glass coverslips in 24-well plates and submitted to an indirect immunofluorescence protocol to detect the expression of SOX-2, NANOG and OCT4. This process was initiated by fixing the cells in 2% paraformaldehyde for 10 min, followed by washing with PBS, permeabilization of the membrane with 0.5% Triton X-100 (Sigma, St. Louis, MO, USA) solution for 5 min, a second washing with PBS, and incubation in 1% PBS/BSA (BSA; Sigma, St. Louis, MO, USA) for 30 min.
Subsequently, primary antibodies diluted in 1% PBS/BSA were incubated for a maximum of 18 h in a humid chamber at 4 °C. The primary antibodies used were: Anti-SOX-2 (1:50; Sigma, St. Louis, MO, USA), Anti-Nanog (1:50; Millipore, Burlington, MA, USA), and Anti-Oct4 (1:50; Millipore, Burlington, MA, USA). To detect the primary antibody, incubation in a solution containing the secondary antibody conjugated to Alexa Fluor 488 (Invitrogen, Carlsbad, CA, USA) was performed for 1 h in a dark, humid chamber at room temperature. For better visualisation of the cytoskeleton, Alexa Fluor 568 Phalloidin (Life Technologies, Carlsbad, CA, USA) was used. The nuclei were labelled with DAPI coupled to ProLong Gold antifade reagent (Invitrogen, Carlsbad, CA, USA). After mounting, the coverslips were examined under a fluorescence microscope (Axio Scope.A1; Zeiss, Jena, Germany) equipped with a digital camera (AxioCam MRc; Zeiss, Jena, Germany) to record immunoexpression; five images were randomly obtained from each slide, allowing the acquisition of 50 cells per group, and all images were acquired at the same magnification (40× objective). The immunostaining evaluation was performed using the ImageJ software. Statistical analysis Clinicohistological data were then tabulated and analysed using descriptive statistics. To analyse the expression of the three proteins (SOX-2, NANOG and OCT4), comparing the AME samples with those of DC and DF, analysis of variance (ANOVA; for samples with a parametric distribution) and the Kruskal-Wallis test were used, the latter followed by Dunn's multiple comparisons test (for samples with a non-parametric distribution). To verify correlations, the Pearson correlation test (for samples with a parametric distribution) was used. A significance level of α = 0.05 was adopted.
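A minimal sketch of the quantitative workflow described above (mean DAB-positive percentage per sample from five fields, followed by an ANOVA or Kruskal-Wallis comparison of the AME, DC and DF groups) is shown below. It assumes the per-field percentages have already been exported from ImageJ; the file name, column names and use of SciPy are assumptions of this example, not part of the original workflow, and the authors' analysis was run in GraphPad Prism 8.

```python
# Sketch under stated assumptions: per-field DAB percentages exported from ImageJ
# into a CSV with columns "group" (AME/DC/DF), "sample_id" and "dab_percent".
import pandas as pd
from scipy import stats

fields = pd.read_csv("dab_percentages.csv")  # hypothetical export file

# Mean percentage of DAB staining per sample (five fields per sample).
per_sample = (fields.groupby(["group", "sample_id"])["dab_percent"]
              .mean()
              .reset_index())

groups = [g["dab_percent"].to_numpy()
          for _, g in per_sample.groupby("group")]

# Check normality per group to decide between ANOVA and Kruskal-Wallis,
# mirroring the parametric/non-parametric split described in the text.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)

if normal:
    stat, p = stats.f_oneway(*groups)      # one-way ANOVA
    test = "ANOVA"
else:
    stat, p = stats.kruskal(*groups)       # Kruskal-Wallis
    test = "Kruskal-Wallis"

print(f"{test}: statistic={stat:.3f}, p={p:.4f} (alpha = 0.05)")
# A post hoc Dunn's test for the non-parametric case could follow, e.g. with
# scikit-posthocs' posthoc_dunn, if that package is available.
```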
Clinicohistological data of patients with AME In the sample studied, the average age was 39 years, with 61% of individuals below this average and the remaining 39% above. Males accounted for 56% of the cases and females for 44%. The site of greatest involvement was the mandible, accounting for 91% of the cases. Regarding the histological type, 10 cases were of the follicular type, eight plexiform, three acanthomatous and two granular cell. AME presents a greater expression of stem cell markers when compared to dentigerous cyst and dental follicle AME samples showed higher expression of SOX-2, NANOG and OCT4 proteins when compared to dentigerous cyst (DC) and dental follicle (DF) (p < 0.001). There was no statistical difference between DC and DF (p > 0.05). Variations in the immunomarking of SOX-2, NANOG and OCT4 in the neoplastic cell compartment of AME Immunohistochemical staining for SOX-2, NANOG and OCT4 was mainly located in the cords and islands of the odontogenic tumour epithelium. SOX-2 labelling was present predominantly in the nucleus, while NANOG and OCT4 were found in the cell nucleus and diffusely in the cytoplasm of tumour parenchymal cells. Subtle nuclear staining for SOX-2, OCT4 and NANOG was observed in the DC cystic epithelium, and the same occurred in the DF epithelial islands. NANOG was diffusely labelled in the connective tissue of the DF. AME-hTERT lineage presents immunoexpression of stem cell markers The AME-hTERT strain was verified to express the SOX-2, NANOG and OCT4 proteins. Predominantly nuclear expression of SOX-2, NANOG and OCT4 was observed in the neoplastic cells.
This study verified the expression of the SC biomarkers SOX-2, NANOG and OCT4 in the AME parenchyma and their subtle expression in the odontogenic epithelium of DC and DF; immunoexpression of the studied proteins was also observed in the AME-hTERT cells. The clinical and histological data of the studied samples corroborate the data found in the literature, in which the tumour shows a predominance during the third to seventh decades of life and the mandible is the most frequently involved site. In our study, we observed a high expression of the transcription factors SOX-2, NANOG and OCT4 in the parenchyma of AME samples. On the other hand, there was a weak expression of these factors in DC and DF. Immunohistochemical analyses of the expression of SC transcription factors (OCT4, SOX-2, NANOG and Stat-3) in tooth germs have suggested a wide potential for development and differentiation. In our immunohistochemical analysis, the expression of SOX-2, NANOG and OCT4 was observed both in the peripheral and central cells of the epithelial nests and cords, which may be related to the broad expression of these factors in the development of AME, considering that the SCs of the dental lamina are possible targets of carcinogenic agents. The expression of SOX-2 in this study was predominantly nuclear in the AME tissue samples and the AME-hTERT lineage, as observed in nasopharyngeal carcinoma and in gastric, colorectal, lung and breast cancer. The presence of SOX-2 in AME has also been reported and related to the proliferation and embryonic origin of this tumour. In our work, a greater expression of SOX-2 was observed in the AME parenchyma in relation to the DC lining epithelium and DF epithelial nests. Although weaker, the nuclear expression of SOX-2 in the odontogenic epithelium of DC and DF is interesting. SOX-2 has been indicated to play an essential role in tumour genesis, with a regulatory role in the initiation and progression of squamous cell carcinoma. It has been suggested that the expression of this protein in cells originating from the odontogenic epithelium of the dental lamina could lead to the development of AME. It can also be speculated whether the same could happen with DC and DF, since their epithelia express SOX-2. High OCT4 expression was detected in lung adenocarcinoma CSCs and tumour-initiating cells in a p53−/− mouse tumour model, indicating that OCT4 expression plays a critical role in tumour cell survival. One previous study did not detect OCT4 expression in 20 cases of aggressive multicystic solid AME, stating that this protein could be a useful indicator to histologically distinguish ameloblastic carcinoma from aggressive multicystic solid AME, whereas in our study we observed high OCT4 expression in 23 cases of solid AME. However, the methodological differences between the studies must be considered. OCT4 has also been observed in epithelial and mesenchymal components during tooth development and is believed to participate in the ameloblast differentiation process. In both the AME samples and the AME-hTERT strain, OCT4 staining was predominantly nuclear. It has been reported that the presence of OCT4 in the nucleus is linked to cellular "stemness", which indicates that in ameloblastoma the nuclear expression of this protein may be related to tumour cell survival. In this study, NANOG was found to be expressed in the tumour parenchyma of the AME tissue samples, with cytoplasmic and nuclear staining, as well as in the AME-hTERT lineage.
The epithelial expression of NANOG has been described in head and neck cancers. Its expression was recently investigated in the stroma of odontogenic lesions, including AME, in which the presence of mesenchymal cells positive for NANOG was verified. Thus far, the present study is the only one to observe the expression of this protein in the AME parenchyma. In general, the expression pattern identified for SOX-2, NANOG and OCT4 is in accordance with their biological functions as transcription factors. The literature indicates that these markers are crucial transcription factors capable of allowing cancer cells to acquire properties similar to those of SCs. CSCs, in turn, manifest properties similar to those of SCs. These properties include the oncogenic reprogramming of different self-renewal genes and characteristics of immortality, which persist in tumours, usually in nests, representing the source of expansion of growth, tumour maintenance, metastasis formation and tumour recurrence. There are difficulties in the literature regarding how to name the AME neoplastic cells that express these proteins. It seems inappropriate to call them SCs, given the various genetic and proteomic alterations described in AME neoplastic cells. Even more misleading would be to call them CSCs. The term "tumour stem cells" seems to be the most appropriate, since describing them as cells that have characteristics of SCs or cells that express SC markers seems somewhat vague. Chromatin immunoprecipitation has shown that OCT4 and SOX-2 interact synergistically and bind to NANOG in human and mouse embryonic stem cells, driving the expression of target genes related to pluripotency. It has also been reported that these transcription factors together bind to the promoters of their own genes, forming interconnected autoregulatory loops and an autoregulatory network capable of providing advantages for SCs that are important for cell survival, stability and tumour progression. In comparison with the non-tumour epithelium, we observed that the expression levels of SOX-2, NANOG and OCT4 were increased in the AME parenchyma, suggesting that these molecules may be involved in the pathogenesis of AME. It is noteworthy that there are no studies in the literature that jointly assess the expression of these three markers both in the parenchyma of this neoplasm and in cell lines originating from AME. It is worth highlighting the need for further studies, such as mechanistic assays that suppress the expression of these stem cell biomarkers and assess the influence of this blockade, given the limitations of an immunohistochemical study and of the sample size. Furthermore, some steps of the method used (IHC), such as blocking with BSA and using a positive control, aim to increase the specificity of the primary antibody; however, this does not guarantee the full specificity of the antibody used. Based on the results obtained, high expression of the SOX-2, NANOG and OCT4 markers was verified in AME neoplastic cells by immunohistochemistry and in the AME-hTERT cell line by immunofluorescence. The methods used confirm the presence and probable participation of these proteins in the origin and progression of AME. It is suggested that this tumour has cells with characteristics of SCs that could be related to the progression and recurrence of this odontogenic tumour.
METTL14 promotes glomerular endothelial cell injury and diabetic nephropathy via m6A modification of α-klotho
3b7e9e47-bdf8-4aec-8c27-1dcd30d5fa10
8427885
Anatomy[mh]
Diabetic nephropathy (DN) is the most common microvascular complication of diabetes and a chronic progressive kidney disease that causes end-stage renal disease (Vasanth Rao et al. ). The occurrence and development of DN may be caused by an interaction between inflammation, metabolism, and hemodynamics, which leads to increased glomerular injury and molecular modifications under hyperglycemic conditions (Warren et al. ; Rayego-Mateos et al. ). The pathogenesis of DN is complex, and the effect of existing treatment methods is limited. Therefore, further exploration of the molecular mechanisms of DN will help to find potential therapeutic targets and provide new treatment options. N6-methyladenosine (m6A) modification is the most abundant and conserved reversible post-transcriptional modification in the mRNA of bacteria and eukaryotic cells (Wei et al. ; Bi et al. ). m6A is installed by a methyltransferase complex, which contains the methyltransferase-like enzymes METTL3 and METTL14 (Liu et al. , ). In contrast, m6A on RNA can be removed by demethylases such as FTO and ALKBH5 (Gao et al. ; Zou et al. ). The m6A modification regulates the splicing, transport, stability and translation efficiency of RNA, and further participates in the biological processes of metabolic diseases such as obesity and diabetes (Wu et al. ). Yang et al. found that the m6A content was decreased in patients with type 2 diabetes; interestingly, the mRNA expression levels of METTL3, METTL14, FTO, and WTAP were increased (Yang et al. ; Shen et al. ). METTL3/METTL14-deleted mice developed hyperglycemia and hypoinsulinemia through effects on β-cell development and glycemic control (Wang et al. ). METTL14 deficiency enhanced AKT signaling activation and decreased gluconeogenesis, playing a key role in β-cell survival, insulin secretion and glucose homeostasis (Liu et al. ). Similarly, in diabetic complications such as diabetic cataract (Yang et al. ), METTL3 was upregulated in diabetic cataract tissue specimens and high glucose-induced lens epithelial cells, and METTL3 knockdown promoted the proliferation and repressed the apoptosis of lens epithelial cells. Zha and colleagues ( ) found that METTL3 rescued cell viability in high glucose-treated retinal pigment epithelium cells by targeting the miR-25-3p/PTEN/Akt signaling cascade in diabetic retinopathy. However, the role of m6A in the pathogenesis of DN is still unknown. Klotho was identified as an anti-aging gene and is involved in human health and disease (Kuro ). The protein encoded by the klotho gene has many biological effects, such as anti-inflammatory, anti-oxidative stress, anti-apoptotic, and anti-fibrotic activities. In recent years, it has been reported that α-klotho plays a protective role in DN (Xiong and Zhou ). In our previous study, we also found that α-klotho prevented renal tubular and glomerular injury and attenuated diabetic nephropathy in diabetic mice (Kang and Xu ; Li et al. ; Wang et al. ). It has become clear that epigenetic processes such as DNA methylation, histone modifications and non-coding RNA regulation are essentially involved in Klotho gene expression (Li et al. ; Zhu et al. ; Han and Sun ). Chen et al. ( ) have shown that α-klotho mRNA is hypermethylated by METTL14, resulting in reduced α-klotho mRNA expression. In the current study, we investigated the functions of METTL14 in glomerular endothelial cell injury in vitro and diabetic nephropathy in vivo, and explored whether METTL14 acts by mediating m6A modification of α-klotho.
Clinical samples This study was approved by the Ethics Committee of the Second Affiliated Hospital of Nanchang University. The renal samples of 20 DN patients were collected from the Second Affiliated Hospital of Nanchang University. Twenty control samples were obtained from normal adjacent tissues of renal carcinoma patients without diabetes or other renal diseases who underwent tumor nephrectomies. Animals All animal experiments were approved by the Animal Ethics Committee of Nanchang University. A total of 30 db/db mice were purchased from the Model Animal Research Center of Nanjing University. After adaptive feeding for 1 week, db/db mice were randomly divided into five groups (n = 6): db/db group, db/db + rAAV group, db/db + rAAV-METTL14 group, db/db + rAAV-klotho group, and db/db + rAAV-METTL14 + rAAV-klotho group. Except for the db/db group, the four other groups were injected via the tail vein with recombinant adeno-associated virus (rAAV) control, rAAV-mediated delivery of METTL14 (rAAV-METTL14), and/or rAAV-mediated delivery of klotho (rAAV-klotho), respectively. Six db/m mice were chosen as the normal control. Eight weeks after injection, the mice were sacrificed, and blood and kidney tissues were collected. The 24-h urine protein, kidney weight (KW) and body weight (BW) were assayed, and the kidney hypertrophy index (KHI) was calculated according to the formula KHI = KW/BW. Histopathological analysis For the assessment of kidney injury, renal sections were stained with hematoxylin and eosin (H&E) and Masson's trichrome. Briefly, renal tissues of each mouse were fixed in 4% paraformaldehyde, embedded in paraffin, and then sectioned at 4 μm thickness. H&E staining (Abcam, UK) was performed to detect general morphological changes, and Masson staining (Sigma-Aldrich, USA) was used to examine matrix deposition within the interstitium, according to standard protocols. Cell culture Human renal glomerular endothelial cells (HRGECs) purchased from ScienCell Research Laboratories were maintained in Endothelial Cell Medium (ECM, Carlsbad, CA, USA) containing 5.6 mmol/L glucose and 10% fetal bovine serum (FBS; Gibco). To induce the disease model in vitro, HRGECs were exposed to 20 mmol/L d-glucose (the high glucose, HG, group). In addition, HRGECs exposed to 5.6 mmol/L d-glucose served as the normal glucose (NG) group, and HRGECs exposed to 5.6 mmol/L d-glucose + 14.4 mmol/L mannitol served as an osmotic pressure control, the high mannitol (HM) group. Plasmid construction and cell transfection To construct expression plasmids for METTL14 and α-klotho, the wild-type METTL14 or klotho sequences were amplified and cloned into the pcDNA 3.1 vector (Invitrogen, USA). The specific siRNAs against METTL14 were designed and synthesized by GenePharma (Shanghai, China). When HRGECs reached 80% confluence, they were transfected with plasmids or siRNAs using Lipofectamine (Invitrogen) according to the manufacturer's instructions. The siRNA with the highest knockdown efficiency was selected for further research. Cell counting kit-8 (CCK8) assay Cell proliferation was tested with CCK8 (Beyotime, China). HRGECs with different treatments were cultured in a 96-well plate for 24 h, 48 h or 72 h and then incubated with CCK-8 reagent. Proliferation was assessed via the absorbance at 450 nm using a microplate reader (Thermo Fisher Scientific).
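As a small illustration of the kidney hypertrophy index defined in the Animals subsection above (KHI = KW/BW), the sketch below computes KHI per animal and compares groups with a one-way ANOVA, the multi-group test named in the statistical analysis further below. The numbers are invented placeholders rather than data from this study, and the use of SciPy is an assumption of this example (the original analysis used SPSS).

```python
# Illustrative sketch with made-up values; KHI = kidney weight / body weight,
# as defined in the Animals section above.
from scipy import stats

# Hypothetical (kidney weight mg, body weight g) pairs for three groups.
groups = {
    "db/m":                 [(180, 28), (175, 27), (185, 29)],
    "db/db":                [(260, 45), (255, 47), (270, 46)],
    "db/db + rAAV-METTL14": [(300, 42), (310, 41), (295, 43)],
}

# KHI per animal, in mg of kidney per g of body weight.
khi = {name: [kw / bw for kw, bw in pairs] for name, pairs in groups.items()}

for name, values in khi.items():
    mean = sum(values) / len(values)
    print(f"{name}: mean KHI = {mean:.2f} mg/g")

# One-way ANOVA across the groups, mirroring the multi-group comparison
# described in the statistical analysis.
f_stat, p_value = stats.f_oneway(*khi.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```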
Terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL) staining The apoptosis of HRGECs was examined with a TUNEL assay kit (Beyotime, China) according to the manufacturer's directions. DAPI was used to locate the nuclei of the cells. The number of TUNEL-positive cells was counted, and the ratio of TUNEL-positive cells to total cells was taken as the apoptosis index. Enzyme-linked immunosorbent assay (ELISA) The ROS (#DRE901Mu, ShangHaiLianShuo Biological Co., Ltd.), SOD (R&D Systems, #DYC3419-2) and MDA (#EU2577, Wuhan Fine Biotech Co., Ltd.) levels in mouse renal tissues and the α-klotho (R&D Systems, #AF1819), TNF-α (R&D Systems, #MTA00B) and IL-6 (Sigma, #RAB0308-1KT) levels in mouse serum were detected with ELISA kits. In addition, ROS (#NG-EA691, ShangHaiYuanmin Biotechnology Co., Ltd.), TNF-α (R&D Systems, #DTA00D) and IL-6 (Sigma, #RAB0306-1KT) levels in HRGEC culture supernatants were detected with ELISA kits. Western blot Kidney tissues and HRGECs were lysed in RIPA lysis buffer (Roche, Germany), and the total protein content was determined with a BCA kit (Beyotime, China). The primary antibodies used were as follows: Anti-METTL14 (1:1000; Abcam, #ab252562, USA), Anti-α-Klotho (1:1000; Abcam, #ab181373, USA) and Anti-β-actin (1:10,000; Proteintech Group, USA). After incubation with HRP-conjugated secondary antibodies, the protein bands were visualized using an ECL kit (Pierce, USA) and analyzed using the Tanon-4500 Gel Imaging System (Tanon, China). Quantitative real-time PCR (qRT-PCR) The total RNA of renal tissues and HRGECs was extracted with RNAiso Reagent (TaKaRa, China) and then reverse-transcribed with M-MLV Reverse Transcriptase (TaKaRa, China). qRT-PCR was performed with METTL14- and Klotho-specific primers and SYBR Green PCR Master Mix (Applied Biosystems, USA). GAPDH was used as the endogenous control. m6A RNA immunoprecipitation PCR (RIP-qPCR) The m6A-RIP assay was carried out with the Magna MeRIP m6A Kit (Merck, Germany) according to the manufacturer's instructions. Total RNA was extracted and fragmented by ultrasound. The RNA fragments were incubated with magnetic beads coupled to an anti-m6A antibody. After washing with m6A salt solution, the bound RNA was eluted and purified with an RNA purification kit (Qiagen, USA) for qRT-PCR detection. The relative fold enrichment was calculated using the 2^(-ΔΔCt) method. Statistical analysis All statistical analyses were performed using SPSS software (SPSS Inc., USA). Data are presented as mean ± SD. When the data were normally distributed, two groups were compared by unpaired two-tailed Student's t tests and multiple groups were compared by one-way analysis of variance (ANOVA). When the data were not normally distributed, nonparametric tests were used. A P-value < 0.05 was considered significant.
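The relative fold enrichment from the RIP-qPCR described above is obtained with the 2^(-ΔΔCt) method; a minimal sketch of that arithmetic is shown below. The Ct values are invented placeholders, and the input-normalized layout is one common way of applying the method, not necessarily the exact scheme used in the original study.

```python
# Minimal 2^(-ΔΔCt) sketch with made-up Ct values (not data from this study).
# dCt   = Ct(m6A IP) - Ct(input)         per condition
# ddCt  = dCt(treated) - dCt(control)
# fold  = 2 ** (-ddCt)

def delta_ct(ct_ip: float, ct_input: float) -> float:
    """Input-normalized Ct for one condition."""
    return ct_ip - ct_input

def fold_enrichment(ct_ip_ctrl, ct_input_ctrl, ct_ip_trt, ct_input_trt) -> float:
    """Relative m6A enrichment of the treated condition over the control."""
    ddct = delta_ct(ct_ip_trt, ct_input_trt) - delta_ct(ct_ip_ctrl, ct_input_ctrl)
    return 2 ** (-ddct)

if __name__ == "__main__":
    # Hypothetical example: klotho m6A enrichment in HG- vs NG-treated HRGECs.
    fold = fold_enrichment(ct_ip_ctrl=28.0, ct_input_ctrl=24.0,   # NG
                           ct_ip_trt=26.5, ct_input_trt=24.2)     # HG
    print(f"Relative m6A fold enrichment (HG vs NG): {fold:.2f}")
```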
METTL14 was highly expressed in DN patients and high glucose-induced HRGECs As shown in Fig. , METTL14 was significantly increased in the kidney tissues of DN patients at both the mRNA and protein levels (Fig. A–C). In high glucose (HG)-induced HRGECs, the expression of METTL14 was also increased compared with the NG group (Fig. D–F). We also found that the m6A content, examined by a colorimetric method with an m6A RNA methylation quantitative detection kit, was higher in the HG group (Fig. G). These results indicated that METTL14 may be involved in the progression of diabetic nephropathy. METTL14 promoted high glucose-induced glomerular endothelial cell injury To explore the biological role of METTL14, the vector control, a METTL14 overexpression plasmid (METTL14), a negative control siRNA (si-Ctrl) or METTL14 siRNAs (si-METTL14-1, si-METTL14-2, si-METTL14-3) were transfected into HRGECs to overexpress or knock down METTL14 (Fig. A, B). ELISA revealed (Fig. C–E) that the concentrations of ROS, TNF-α and IL-6 were increased in high glucose-induced HRGECs and reached even higher levels after METTL14 overexpression, whereas they decreased after knockdown of METTL14. In addition, overexpression of METTL14 markedly suppressed cell proliferation (Fig. F) and promoted cell apoptosis (Fig. G, H) in high glucose-induced HRGECs. Conversely, METTL14 siRNA significantly promoted cell proliferation (Fig. F) but inhibited cell apoptosis (Fig. G, H). Together, these data suggested that METTL14 promoted high glucose-induced glomerular endothelial cell injury. METTL14 down-regulated α-klotho expression Consistent with our previous study (Wang et al. ), the mRNA and protein expression of α-klotho was down-regulated in DN patients and high glucose-induced HRGECs (Fig. A–F). Overexpression of METTL14 significantly decreased, whereas knockdown of METTL14 significantly increased, the expression of α-klotho (Fig. G–I), indicating that METTL14 negatively regulates α-klotho. Moreover, the increased ROS, TNF-α and IL-6 levels (Fig. J–L) and cell apoptosis (Fig. N, O) and the decreased cell proliferation (Fig. M) in METTL14-overexpressing cells were significantly rescued after co-transfection with the Klotho expression plasmid. These findings suggested that α-klotho is down-regulated by METTL14 and partially counteracts the function of METTL14 in glomerular endothelial cell injury. METTL14 regulated α-klotho m6A modification METTL14 regulates gene expression through m6A modification; here, we detected the m6A content of Klotho mRNA by RIP-qPCR. As displayed in Fig. A, the α-klotho m6A content was increased under high glucose conditions. Overexpression of METTL14 increased the m6A level of α-klotho mRNA, while METTL14 silencing reduced it (Fig. B), indicating that METTL14 may mediate the m6A modification of α-klotho. METTL14 aggravated renal injury and inflammation in db/db mice, which was rescued by Klotho To further confirm the role of METTL14 in vivo, db/db mice were treated with METTL14-expressing rAAV or Klotho-expressing rAAV. After administration of rAAV for 8 weeks, the 24-h urine protein, kidney weight (KW), body weight (BW) and renal injury were evaluated. As shown in Fig. A–D, rAAV-mediated overexpression of METTL14 significantly increased the levels of 24-h urine protein, KW and KHI but reduced the BW of db/db mice. By contrast, injection of rAAV-Klotho decreased the levels of 24-h urine protein, KW and KHI but increased the BW of db/db mice, and even rescued the effects of rAAV-METTL14. Moreover, H&E (Fig. E) and Masson staining (Fig. F) revealed that METTL14 overexpression exacerbated renal pathological alterations and collagen accumulation, which could be partially rescued by Klotho overexpression. The concentrations of ROS and MDA in the kidney tissues of db/db mice were notably increased, and the level of SOD was significantly decreased, after injection of rAAV-METTL14. Overexpression of Klotho, in contrast, decreased the ROS and MDA levels but increased the SOD level in db/db mice and db/db + rAAV-METTL14 mice (Fig. A–C). In addition, the serum levels of TNF-α and IL-6 were increased while the α-klotho level was decreased after injection of rAAV-METTL14, and these changes were also reversed by rAAV-Klotho (Fig. D–F). Taken together, these results suggested that METTL14 aggravated the renal injury of db/db mice, which could be rescued by overexpression of α-klotho. M6A modification has been proposed to participate in many physiological and pathological processes. However, its roles in DN are unknown.
In this work, we discovered a high level of m6A modification in DN and identified the roles of METTL14 in glomerular endothelial cell injury in vitro and renal injury in vivo, uncovering a novel function of METTL14-mediated m6A modification in DN. In the present study, we found that METTL14 is up-regulated in the renal tissues of DN patients and in high glucose-incubated glomerular endothelial cells. Overexpression of METTL14 promoted glomerular endothelial cell apoptosis and inflammation and aggravated the renal injury of DN mice. The roles of METTL14 in human diseases have been widely elucidated. For example, in cisplatin-induced acute kidney injury, overexpression of METTL14 promoted apoptosis of kidney proximal tubular cells (Zhou et al. ). In renal carcinoma, METTL14 was down-regulated and abrogated P2RX6 expression via m6A modification to suppress renal cancer cell migration and invasion (Gong et al. ). Xu et al. ( ) found that METTL14 knockdown protected the kidney against renal ischemia-reperfusion injury. However, the mechanism by which METTL14 is increased in DN and in high glucose-induced glomerular endothelial cells is unclear; thus, more studies are needed in the future. Mechanistically, we identified Klotho as a downstream target of METTL14 in DN. METTL14 overexpression significantly decreased α-klotho expression, while silencing of METTL14 significantly increased α-klotho expression. The m6A level of α-klotho mRNA was increased both under high glucose conditions and after METTL14 overexpression, but was reduced after METTL14 knockdown. Finally, overexpression of Klotho effectively abrogated the effects of METTL14 on glomerular endothelial cells and DN both in vitro and in vivo. These data demonstrate that the contribution of METTL14 to DN progression relies on α-klotho. According to the literature, the expression of α-klotho in serum and the urine klotho-to-creatinine ratio are down-regulated in patients with kidney disease (Yi et al. ), such as renal fibrosis and podocyte injury (Cho et al. ), acute kidney injury (Qian et al. ), and DN (Kacso et al. ). α-Klotho was found to be decreased during DN progression, and overexpression of α-klotho prevented renal injury in diabetic mice. It has been reported that the expression of α-klotho is regulated by promoter methylation and histone deacetylation. For example, Masahiro et al. ( ) found that promoter methylation restricts klotho gene expression in renal tubular cells. In renal fibrosis, α-klotho was downregulated by histone deacetylation and restored by genistein through inhibition of histone 3 deacetylation at the α-klotho promoter (Li et al. ). Our study has shown that m6A RNA methylation also contributes to the dysregulation of α-klotho. With the development of m6A detection techniques and the emergence of other novel technologies, a variety of m6A modification-related regulatory enzymes have been identified, facilitating the interpretation of their potential biological functions. M6A modification is regulated by methyltransferases (Writers), demethylases (Erasers) and binding proteins (Readers) (Yang et al. ). The main components of the methyltransferase complexes discovered so far include METTL3, METTL14, WTAP and KIAA1429, while the demethylases FTO and ALKBH5 act as "Erasers" to remove the methylation. The biological function of m6A is mainly mediated by the "Reader" proteins through the selective recognition of m6A sites.
The currently known "Reader" proteins include the YTH domain proteins (YTHDF1, YTHDF2, YTHDF3, YTHDC1 and YTHDC2) and the heterogeneous nuclear ribonucleoprotein (HNRNP) family proteins (HNRNPA2B1, HNRNPC and HNRNPG). METTL3 was up-regulated in patients with type 2 diabetes and in mice fed a high-fat diet, and it inhibited insulin sensitivity and promoted fatty acid metabolism (Xie et al. ). Proteomics analysis showed that high glucose induced high expression of WTAP in retinal pigmented epithelium cells (Chen et al. ). FTO has been reported to be associated with obesity and diabetes (Zhou et al. ; Mizuno ). Makiko et al. ( ) found that a variant in FTO was significantly associated with susceptibility to DN. YTHDC2 was found to be markedly down-regulated in obese mice, and its overexpression improved liver steatosis and insulin resistance through binding to the mRNAs of lipogenic genes (Zhou et al. ). In addition, whole-exome sequencing has identified a variant of YTHDC2 that may contribute to type 2 diabetes susceptibility in Northeast India (Lalrohlui et al. ). Until now, the expression and roles in DN of the m6A methylation regulatory factors mentioned above have remained unknown. Further studies will be required to investigate the functions of these other regulatory factors to advance our understanding of m6A methylation in DN. In conclusion, our study found that METTL14 can aggravate high glucose-induced glomerular endothelial cell injury and diabetic nephropathy through m6A modification of α-klotho. The discovery of the METTL14-α-klotho pathway provides a new perspective for understanding m6A modification in DN and may help reveal new therapeutic targets.
Influences: Childhood, boyhood, and youth
4fc4a0d0-8da3-4b7d-8c3a-6def34b2e998
5940256
Physiology[mh]
My graduate education was a slow process of picking my way out of Ling’s intricate system of thought. Early on, designing what I imagined would be a suite of crucial experiments to decide between the membrane theory and Ling’s ideas, I plunged into several years of thesis work on sugar transport in mouse muscle. (I once brought home for dinner some mouse livers from my day’s dissections, to my wife’s disgust and my dog’s delight.) As so often in research, collaboration was essential; I survived Ling’s no-membrane nonsense thanks to two other students in the laboratory, Jeff Freedman and Larry Palmer, also physics-trained Ling acolytes, ignorant of matters biological. We rescued a discarded blackboard off the streets of West Philly and met weekly in our apartments for subversive nighttime seminars to read the literature (something our adviser had advised us against, as it would only confuse us). Slowly, slowly, we emerged from Ling’s worldview to see that, despite its self-consistency, it just didn’t mesh with the facts outside our laboratory bubble. Ling refused to sign my thesis, which claimed to refute his ideas, and he boycotted my public defense, precipitating a hilarious last-minute slapstick scene in which my depleted committee frantically commandeered a hapless passerby to “just sit there” to make up a quorum so they could get rid of me, PhD in hand. I hasten to add that, despite the personal pain Ling suffered at my “disloyalty,” he never used his power to kick me out of his laboratory (as was his right; academic science retains the very best elements of feudalism). I remain grateful to him for tolerating what must have been emotionally taxing: the daily presence in his laboratory of a traitor. Ling had a tremendous, abiding influence on me. He was a broad intellectual who wove music, literature, and Chinese cooking into the laboratory’s buzz, and who taught us how to critically dismember research papers (of his opponents). He was a kind man with a fine sense of humor, high integrity, and an infectious passion for research, and he was a skilled experimentalist who set a lasting example by working in the laboratory side by side with his students. But he was a tragic figure, his wealth of professional virtues nullified by rigid attachment to theory, a violation of the first commandment of science: when Nature speaks, you’d better listen. He became the scientific analogue of a religious fanatic and continues today in his mid-’90s, a self-proclaimed revolutionary ( http://www.gilbertling.org ). I reckon that his greatest influence on me was to instill, along with a bizarre fascination with small inorganic ions, a profound aversion to becoming emotionally attached to my own ideas. Throughout this time, I had grown fascinated with ion channels from reading papers on the single-molecule stochastic behavior observed with certain bacterial peptides added to “planar bilayer” membranes ( , ), whose teraohm leak resistance made such measurements feasible. Paul Mueller, a master of electronics tinkering, and whose nearby laboratory I’d also visited while in that undergraduate biophysics class, had invented planar bilayers in the early ’60s ( ). During my last few months at Penn, I asked Paul to teach me the technique, and he let me tinker along with him. There, I fell in love with those “artificial” membranes whose existence nobody, not even Ling, could deny. 
That summer was a golden time; I’d ride my motorbike up to Paul’s laboratory to play with bilayers in the morning and then return by noon to watch, spellbound, Sam Ervin’s Watergate hearings on TV, and in the evening write up thesis chapters and manuscripts (single author, because they argued against Ling’s theory ). As summer became winter and spring, I wrote a postdoctoral grant proposal to work with Efraim Racker at Cornell. In a brilliant experimental flash of Gordian knot cutting ( ), Ef had engineered reconstituted membranes to disprove the reigning idea of chemical coupling in mitochondrial ATP synthesis, thereby ensuring Peter Mitchell’s Nobel Prize for his heresy that proton gradients thermodynamically drive oxidative phosphorylation. My postdoc interview had been unpromising; after I’d explained my odd situation as a born-again membrane researcher, Ef probed me with an arcane question about nucleotide metabolism, a subject about which I understood little. Downhearted, I confessed that I had no clue what his question even meant. For what seemed like minutes, Ef silently contemplated the carpet in his office with his characteristic frown, and then looked up and said: “Well, if I don’t accept you, you are lost forever.” To this day, I am sure he was right about that. Though substantively idiotic in retrospect, my postdoctoral grant proposal to reconstitute the Ca 2+ ATPase of SR into planar bilayers, where I’d measure its electrical properties, was funded. I was elated to move to Cornell (and not at all unhappy at the rise in yearly stipend from $2,400 to $12,000, a $6,000 check appearing biannually in the mail). Ef was an amazing adviser who, while doing his own benchwork, somehow kept himself deeply informed of the diverse membrane reconstitution projects of his 15 postdocs. My close companion was Baruch Kanner, whose work there led to his later breakthroughs identifying neuronal glutamate and GABA transporters. I spent two exceedingly happy years learning to handle defined proteins in defined membranes, entirely free of ideologies associated with scientific orthodoxy or heresy, which were daily fare in Ling’s laboratory. Here I was hypothesis free, just exploring dark territory with liposomes and planar bilayers, doing a completely different kind of research: discovery rather than epistemology. I stumbled on something unexpected: an ion channel. It was known that SR membranes are chock full of Ca 2+ pumps and that they must also harbor some sort of Ca 2+ release channel to trigger muscle contraction. But in fusing SR membranes into planar bilayers, I recorded only an unknown voltage-dependent, K + -selective channel ( ). As soon as I saw its single-channel fluctuations—a “real protein” rather than a bacterial peptide—I lost all interest in Ca 2+ pumps. Of the scores of job applications I sent out toward the end of my postdoc, I scored just one interview, at Brandeis’s Biochemistry Department. (In those days, when unsuccessful paper applicants would sometimes receive form letters of rejection, I received the same rejection letter from the same university on four successive Fridays: a case of either a copier gone psychotic, or a department that really, really didn’t want me on their faculty.) Arriving at Brandeis as a 29-year-old assistant professor of biochemistry, I excitedly set up my own laboratory to continue working on SR K + channels in planar bilayers. I knew nothing about mechanistic enzymology, my department’s widely regarded strength. 
Two giants of that field—Bill Jencks, a deep scholar, and Bob Abeles, a true genius—had laboratories just upstairs from me. Sergei Timasheff, a highly respected physical biochemist, and Bob Schlief, a young, creative geneticist in the early days of DNA manipulation, were also close by. Al Redfield, a brilliant pioneer in nuclear magnetic resonance relaxation theory (and arguably the worst undergraduate teacher ever), was just down the hall. Andrew Szent-Györgyi shared floor space with David DeRosier, Don Caspar, and Carolyn Cohen, structural biology gurus, while Michael Rosbash, a brash assistant professor working on something called RNA, and John Lisman, a biophysically minded neuro geek, lived in the adjacent Biology Department. This small university was literally crawling with terrific scientists. Our department was small, so we collided in the halls often and discussed each other’s research in monthly informal lunch presentations. They hired me, I surmised, as the “membrane guy,” thinking that membrane proteins should be brought into the biochemical fold. They knew I would be incompetent at teaching standard biochemistry classes, so I was asked to design a course on biochemical thermodynamics, which in one form or another I’ve been teaching for over 40 years. For a lowly assistant professor, this was a heady time, in part because our undergrad research students were so talented—my first was a shy transfer student from UMass Boston named Rod MacKinnon—and in part because of a complete absence of departmental hierarchy. These jaw-droppingly eminent scientists treated me like a peer, seeming to want to learn from me about ion channels and the power of single-molecule kinetics, subjects with which I was comfortable by that point. In my second year at Brandeis, I stumbled upon yet another channel that wasn’t supposed to be there, a strange Cl − channel in an electric fish, but I’ve told that story already ( ). Suffice it to say that Brandeis biochemistry provided an almost effortless, learning-by-osmosis immersion in a foreign subject that deeply informed the “enzymological” approach to channels championed in the ’70s by Bertil Hille, one that I applied experimentally to ion channels in chemically defined membranes. By the early ’80s, thanks also to my friendship with Ramon Latorre, at Harvard on what Boston’s vibrant Chilean expat community called a “Pinochet Fellowship,” I had learned enough about and produced enough work on ion channels to have been noticed, to my amazement, by my electrophysiological heroes, Clay Armstrong, Chuck Stevens, Knox Chandler, Alan Finkelstein, and Peter Läuger, as well as by colleagues just out of the postdoctoral hatchery: Rick Aldrich, David Clapham, David Corey, and Fred Sigworth. These young stars made me realize that my own scientific youth was over, that I was now an adult embedded in an effervescent, blooming field—an unusual one where your uncompromising competitors are also generous collaborators and helpers, to the untainted benefit of the collective progress of our science.
Navigating diagnostic challenges in
9126ec64-5420-4696-8c9f-00f8dc02469f
11869542
Thoracic Surgery[mh]
Infective endocarditis (IE) is a life-threatening, systemic infectious disease. Its high morbidity and mortality rates make it a significant public health concern. Antibiotic resistance is a major factor in the increase in the population at risk for IE. In addition, a significant factor in the increase in IE incidence is the emergence of new diagnostic tools and multimodal imaging for IE diagnosis. Several predisposing conditions place patients at risk, particularly congenital heart disease, prosthetic valves, or any intracardiac material. The diagnosis of IE is established according to the modified Duke criteria . Identification of microorganisms by blood culture is initially a cornerstone for diagnosis and treatment. In some cases, if blood culture is negative, empirical therapy is started while further investigation proceeds; blood culture-negative infective endocarditis (BCNIE) accounts for 5–10% of all cases of endocarditis. BCNIE is often severe and difficult to diagnose . Three primary categories of BCNIE are recognized: endocarditis caused by fastidious microorganisms requiring extended incubation, bacterial endocarditis with blood cultures sterilized by prior antibiotic therapy, and true blood culture-negative endocarditis as a result of intracellular bacteria that are not routinely cultured in blood . The main causes of BCNIE are Brucella spp., Coxiella burnetii , Bartonella spp., Legionella spp., Mycoplasma spp., and Tropheryma whipplei . A team approach is needed for the diagnosis and treatment of blood culture-negative endocarditis because it requires sophisticated and innovative molecular analysis, histology, and vital epidemiological information. Molecular and serological techniques have emerged as crucial tools for Bartonella species detection. The most common species among the 14 species associated with Bartonella endocarditis is B. henselae . Cat-scratch disease is also recognized to be mostly caused by this species . Bartonella henselae endocarditis is a rare but serious condition, with a limited number of cases reported in the literature. This rarity makes diagnosis challenging and often leads to delays and complications. The atypical presentation of Bartonella endocarditis, particularly in populations without common risk factors, further complicates its diagnosis. There is a specific knowledge gap regarding the diverse clinical manifestations and optimal management strategies for Bartonella endocarditis, especially in patients without a clear epidemiological history of animal exposure. Addressing these knowledge gaps is crucial for improving diagnostic accuracy and treatment outcomes in affected patients. The aim of this report was to present a case of B. henselae endocarditis associated with Bartonella -infected domestic animals in Tunisia, highlighting the diagnostic challenges and therapeutic strategies involved. A 65-year-old Tunisian woman presented to our department in July 2023 with general weakness, weight loss, arthralgia, and symmetrical petechial and purpuric rashes on her feet. The patient reported a 2-month history of fever. Her medical history included type 2 diabetes for the past 5 years, which was effectively managed. Additionally, the patient had hypothyroidism and was currently receiving levothyroxine. She also had dyslipidemia. In 2021, the patient underwent coronary stenting for a non-ST-elevation myocardial infarction.
Echocardiography revealed a preserved left ventricular ejection fraction and no valvular heart disease. Additionally, 3 months before the present admission, the patient underwent evaluation by a gastroenterologist for thrombocytopenia and anicteric cholestasis. Clinical manifestations indicative of portal hypertension, such as moderate ascites and splenomegaly, were also observed. Serological tests for hepatitis B and hepatitis C yielded negative results. Abdominal ultrasound results supported the diagnosis, demonstrating characteristics aligned with portal hypertension and cirrhosis classified as stage F3–F4 according to FibroScan analysis. On admission, the patient presented with a body temperature of 38.1 °C. Vascular ecchymotic purpura was observed in the lower limbs. Cardiovascular examination revealed no abnormalities, and palpation revealed splenomegaly. Echocardiography revealed a preserved left ventricular ejection fraction with no wall motion abnormalities. The aortic valve was tricuspid and showed a mobile image measuring 4 mm × 9 mm on the non-coronary cusp and prolapsing into the left ventricular outflow tract, causing grade 2 aortic insufficiency, with a vena contracta of 4 mm (Figs. , ). The initial biochemical profile of the patient revealed anemia with a hemoglobin level of 9 g/dL, platelet count of 175,000/mm 3 , and white blood cell count of 6030/mm 3 . Renal function parameters were as follows: urea, 8.48 mmol/L and creatinine, 115 µmol/L. Inflammatory marker levels were elevated, with a C-reactive protein level of 41 mg/L. Liver function tests revealed mild abnormalities, including aspartate aminotransferase (ASAT) at 43 U/L, alanine aminotransferase (ALAT) at 14 U/L, and gamma-glutamyl transferase (GGT) at 175 U/L. Alkaline phosphatase (PAL) was notably elevated at 348 U/L. The immunological profile was negative for antineutrophil cytoplasmic antibodies (ANCA), anti-extractable nuclear antigens (ENA), and anti-mitochondrial antibodies (anti-ML). Additionally, the rheumatoid factor (RF) level was elevated, exceeding 200 IU/mL, whereas anti-cyclic citrullinated peptide (anti-CCP) and anti-fibrillarin antibodies (anti-FI) were both negative. Urinary analysis revealed a 24-hour protein excretion of 0.2 g/24 hours, with no detectable hematuria or red blood cell casts. A skin biopsy revealed characteristics indicative of leukocytoclastic vasculitis, with medium-intensity C3 and IgM deposits detected in the vascular structures. In the presence of signs of infective endocarditis, despite negative blood cultures and a thoracoabdominopelvic computed tomography (CT) scan showing no abnormalities, empirical antibiotic therapy was initiated. The chosen treatment regimen included ampicillin, oxacillin, and gentamicin, with dosages tailored to the results of blood analyses. After 10 days of antibiotic therapy, there was an initial improvement, with apyrexia and a decrease in the levels of biological inflammatory markers. However, febrile episodes recurred and C-reactive protein levels increased, indicating an incomplete response to the initial treatment. Antibiotic therapy was modified without improvement, and investigation of the rare causes of culture-negative endocarditis was initiated. Serological tests for Coxiella burnetii , legionellosis, Aspergillus , Mycoplasma , Tropheryma whipplei , and brucellosis all returned negative results. In addition, a positron emission tomography (PET) scan was performed and the results were negative.
On day 14, serological tests for Bartonella henselae were positive, with both IgM and IgG detected. Antibiotic treatment with rifampin and doxycycline was initiated; however, aminoglycosides were not incorporated into the treatment regimen because of the patient’s borderline kidney function. At this point, the patient recalled being scratched by a cat several months prior to her current admission. The patient showed significant improvement with this treatment, with complete resolution of fever, normalization of inflammatory markers, stability of cardiac ultrasound findings, and restoration of liver function values to normal levels. One month later, the patient presented with acute pulmonary edema without evident etiology. Coronary angiography was performed to assess the coronary status, revealing triple-vessel disease with involvement of the left main coronary artery. Echocardiography revealed the same appearance of vegetation responsible for grade 2 aortic valve regurgitation (Figs. , ). Given this condition and the low operative risk in our patient, we decided to proceed with aortic valve replacement using a mechanical prosthesis and coronary artery bypass grafting. The patient underwent surgery with good postoperative recovery, and the culture of the aortic valve was positive for Bartonella henselae , which further supports our diagnosis. After the surgery, the patient was closely monitored with monthly follow-up visits. Throughout this period, she demonstrated a favorable clinical course, remaining apyretic with a preserved general condition. Monthly transthoracic echocardiograms consistently showed a good hemodynamic profile of the aortic prosthesis with no evidence of vegetation. Now, 1 year post-surgery, she continues to undergo regular follow-ups, maintaining a stable recovery and showing no signs of complications. Bartonella henselae infective endocarditis poses a diagnostic challenge in predisposed patients and is associated with a high mortality rate. A substantial level of suspicion is needed for early identification because atypical presentations and lack of usual signs and symptoms of infection can delay diagnosis. When blood cultures are negative after 72–96 hours, individuals with epidemiological risk factors should be evaluated for the disease . Bartonella henselae is an intracellular pathogen that primarily infects the endothelial cells and macrophages. After entering the host, often through a scratch or bite from an infected cat, the bacteria disseminate through the bloodstream and adhere to the damaged endocardial surfaces. The bacteria’s ability to invade endothelial cells and evade the host immune response is a key factor in its pathogenicity. Chronic infection of endothelial cells leads to the formation of vegetations on heart valves, which are aggregates of platelets, fibrin, and bacteria. These vegetations can cause valve dysfunction, embolic phenomena, and systemic inflammatory responses. The fastidious nature of Bartonella henselae , which requires specific culture conditions and extended incubation periods, complicates its detection in blood cultures, often necessitating the use of serological and molecular diagnostic techniques. Culture-negative endocarditis is associated with several Bartonella species. Up to 95% of cases are accounted for by two common species: B. henselae and B. quintana . In patients with negative blood cultures and risk factors for these infections, early serological testing should be considered.
Polymerase chain reaction (PCR) has a 100% specificity rate but only a 58% sensitivity, and the test is not always accessible. In addition, the fastidious nature of the organism limits its growth in blood cultures, and the incubation period can extend up to 21 days, which further delays prompt identification and treatment. Therefore, the initial diagnosis is reliant on the evaluation of serum IgM and IgG titers . In our case report, the classification of endocarditis according to the Duke criteria underscores the diagnostic uncertainty initially encountered . Given the urgency of the situation, empirical antibiotic therapy was promptly initiated while awaiting specific test results. This proactive approach facilitates early treatment initiation and contributes to the initial stabilization of the patient. Furthermore, comprehensive patient interrogation revealed crucial information regarding exposure to kittens, prompting suspicion of Bartonella henselae infection. This highlights the importance of thorough history-taking to guide the diagnosis of specific pathogens and prevent treatment delays. The timely recognition of Bartonella henselae infection allowed for swift adjustment of antibiotic therapy, replacing empirical treatment with a tailored combination targeting the identified pathogen. This not only enhanced treatment efficacy but also reduced the risk of complications, leading to improved patient outcomes. The long-term prognosis of patients with Bartonella endocarditis who undergo successful surgical intervention and appropriate antibiotic therapy is generally favorable. In our case, the patient’s condition improved significantly after valve replacement surgery and targeted antibiotic treatment, and no recurrence of infection was observed during follow-up. Our case underscores the critical importance of an integrated approach in managing patients with infective endocarditis, emphasizing the significance of accurate classification, prompt initiation of empirical therapy, and thorough interrogation to guide diagnosis and optimize therapeutic outcomes. However, a major challenge is the time-consuming nature of these diagnostic methods, and the difficulty of determining the appropriate treatment for individuals with infective endocarditis that is culture-negative . It is essential to establish a balance between the necessity of empirical antibiotic therapy and the possible toxicity of certain drugs, including aminoglycosides . A more thorough investigation into the causes of culture-negative endocarditis, incorporating history, physical examination, and additional diagnostic tools, is necessary in cases of negative blood cultures, which pose a diagnostic challenge. Health professionals should be alert to atypical presentations of infective endocarditis, especially when patients have weight loss, manifestations of liver damage, and an epidemiological history of contact with domestic animals. The limitations of this case report include its limited generalizability. While the case provides valuable insights into the diagnosis and management of Bartonella endocarditis, the specific clinical presentation and treatment approach may not be applicable to all patients. Each case of endocarditis can present uniquely, especially in the context of culture-negative infections, which underscores the need for individualized patient assessment and management. Future investigations should balance empirical therapy with potential drug toxicities, particularly with aminoglycosides.
Bartonella endocarditis requires a high index of clinical suspicion, especially in patients with weight loss, liver damage, and contact with domestic animals.
Technological dental sealants: in vitro evaluation of material properties and antibiofilm potential
c22637c3-6718-4f66-983f-93aeba8cabac
11786408
Dentistry[mh]
Dental caries remains a pervasive global health issue, affecting approximately 35% of individuals across all age groups . Notably, a significant proportion (50%) of caries cases occur in occlusal pits and fissures, which constitute only 15% of the total tooth surface area . Permanent first and second molars are particularly susceptible to caries initiation . The complex morphology of occlusal surfaces predisposes them to bacterial plaque accumulation and biofilm formation . The oral microbiome, comprising over 700 bacterial species, including anaerobic and aerobic bacteria, forms a diverse microbial community. Streptococcus mutans is a key cariogenic bacterium that metabolizes sucrose, produces acid, and facilitates biofilm development, leading to tooth demineralization . Effective caries prevention strategies include water fluoridation, dietary sugar control, oral hygiene practices, and professional fluoride applications . Antimicrobial peptides (AMPs) have emerged as promising agents for dental applications, exhibiting potential to inhibit bacterial adhesion and disrupt biofilm formation . However, fissure sealants remain a well-established preventive measure, demonstrating a significant reduction in caries incidence on occlusal surfaces . Sealants are resinous materials designed to seal pits and fissures, preventing bacterial colonization and subsequent carbohydrate fermentation. Additionally, fluoride-releasing sealants can promote remineralization and inhibit bacterial growth . Several factors influence sealant longevity, including microhardness, surface roughness, and retention . A well-sealed surface can effectively resist wear and tear, minimize bacterial adhesion, and maintain a long-lasting protective barrier . The incorporation of antimicrobial agents, such as those found in glass ionomer-based sealants, can further enhance their efficacy . Despite advancements in sealant technology, a comprehensive comparison of different materials, particularly those incorporating novel technologies, is lacking. This study aims to address this gap by evaluating the physical, mechanical, and antimicrobial properties of various sealants. The null hypothesis is that the tested materials have no significant differences, regardless of their composition. By investigating these parameters, this study seeks to provide valuable insights into the performance of contemporary sealants and aid clinicians in making informed decisions for optimal patient care. Tested materials and sample size calculation To evaluate the performance of four dental sealant materials, an in vitro study was conducted. Sample sizes for each material group (Table ) were determined using G*Power, considering a significance level of α = 0.05, power of 80%, and an effect size of 0.8. A 10–20% increase in sample size was incorporated to account for potential losses. Consequently, quantitative tests required 5–10 specimens per group, while qualitative tests necessitated one specimen per group. Specimen preparation and experimental design Cylindrical specimens (6 mm diameter × 2 mm depth) were prepared using an acrylic matrix and handled according to the manufacturer's instructions . The materials were applied in a single step, covered with a polyester strip, and light-cured using an LED curing light (CV-218, wavelength: 430–485 nm, intensity ≥ 1800 mW/cm 2 ) for the recommended duration. 
After 24 h of storage in distilled water at room temperature, the specimens were polished sequentially with sandpaper discs (#400, #600, #1200, #1500, and #2000) and cleaned with an air/water spray. The specimens were randomly assigned to four groups: Self-etching: Beautisealant® (Shofu, Kyoto, Japan) Control: Fluroshield® (Dentsply, Bogotá, Colombia) Self-adhesive and self-etching: Constic® (DMG, Hamburg, Germany) Conventional: Beautiful Flow Plus® F03 (Shofu, Kyoto, Japan) with 37% phosphoric acid etching Each group was subjected to a series of tests: mechanical (Surface Roughness—RS and Vickers Microhardness—VM), compositional (Energy Dispersive Spectroscopy—EDS), qualitative (Scanning Electron Microscopy—SEM), and microbiological analyses. Mechanical analysis Surface roughness measurement The surface roughness measurement was performed using a rugosimeter (Mitutoyo Corporation, Japan) following the ISO 1997 standard. Each sample ( N = 8) was carefully dried with absorbent paper before readings. The value of the initial reading (Ra; µm) was obtained through the arithmetic mean of 5 consecutive readings in each specimen in different regions, thus obtaining the mean and standard deviation as well . Surface microhardness measurement Surface microhardness was evaluated using a digital microhardness meter (FM-700, Poland) coupled with software standardized to a Vickers-type pyramidal diamond indenter (Vickers Microhardness – VM). The measurement was performed using five readings for each specimen ( N = 8) in different regions with an analysis force of 100 gF/mm 2 for 15 s. The values were obtained in gF/mm 2 . For this measurement, we followed ISO 6507. Compositional analysis Energy Dispersive Spectroscopy Analysis (EDS) To analyze the chemical composition, the samples ( N = 8) were metalized with gold alloys and subjected to EDS (Oxford INCA X-ACT, 51-ADD0048, Abingdon-on-Thames, UK) with measurements at the center of each sample. The samples were fixed in stubs, metalized with gold (MED 010, Balzers, USA). Qualitative analysis Scanning Electron Microscopy Analysis (SEM) The sample ( N = 1) was fixed in stubs, metalized with gold (MED 010, Balzers, USA), and analyzed in a scanning electron microscope (SEM, JEOL-JMS-T33A Scanning Microscope, JEOL – USA Inc., Peabody, MA, USA). Analysis of the surface of the samples by scanning electron microscopy (SEM) was performed under a microscope with a readout using a qualitative surface analysis method of resin dental materials developed by ZHANG et al. . The presence of inorganic nanoparticles dispersed by larger fillers and irregularly shaped fillers distributed in the resin matrix was analyzed. Microbiological analysis The bacterial inhibition capacity of specimens made in BHI agar culture medium and the ability of bacterial adhesion using Streptococcus mutans biofilm were tested. All tests were performed in triplicate. Group 2: Control (Fluroshield® – Dentsply, Bogotá, Colombia) was used as a control group in the microbiological tests due to the manufacturer's claims regarding its anti-cariogenic capacity and fluoride release. Inhibition halo analysis Streptococcus mutans strain #25,175 was reactivated on BHI agar plates under microaerophilic conditions at 37 °C for 48 h. Bacterial cells were Gram-stained to confirm their identity. A bacterial suspension was prepared in phosphate-buffered saline (PBS) to a concentration of 1 × 10^8 CFU/mL. BHI agar plates were inoculated with the S.
mutans suspension and samples ( N = 5) incubated under microaerophilic conditions at 37 °C for 48 h. Control plates containing only BHI medium and S. mutans inoculum were included to monitor sterility and bacterial growth . Superficial bacterial adherence analysis Streptococcus mutans strain #25,175 was reactivated and cultured as described previously. A portion of the culture was cryopreserved in BHI broth containing 10% DMSO at −20 °C. A bacterial suspension was prepared to a concentration of 1 × 10^8 CFU/mL. Sterilized specimens ( N = 4) were inoculated with S. mutans and incubated in BHI broth supplemented with 5% sucrose for 24 h to allow initial biofilm formation. This process was repeated daily for five days, with fresh medium and inoculum added each day. Control groups included BHI broth alone and BHI broth with S. mutans . After five days, the biofilm was collected by scraping the specimen surface with a sterile loop. The biofilm suspension was serially diluted and plated on BHI agar plates. Colony-forming units (CFU) were counted after 48 h of incubation at 37 °C. Microshear bond strength analysis Specimen preparation Twenty-five healthy bovine incisors were selected for the study. The teeth were cleaned with a pumice stone slurry and water using a low-speed motor and then stored in distilled water. To ensure sterilization without compromising enamel properties, the teeth were disinfected with a 0.1% thymol solution for five days. The teeth were divided into five groups: Self-etching: Beautisealant® (Shofu, Kyoto, Japan) Control: 37% phosphoric acid etching + Fluroshield® (Dentsply, Bogotá, Colombia) Self-adhesive and self-etching: Constic® (DMG, Hamburg, Germany) Conventional: 37% phosphoric acid etching + Single Bond Universal® + Beautiful Flow Plus® F03 (Shofu, Kyoto, Japan) Conventional: 37% phosphoric acid etching + Single Bond Universal® + FluroShield® (Dentsply) All materials were applied according to the manufacturer's instructions. The roots of the teeth were sectioned 1 mm below the cementoenamel junction using a low-speed diamond saw under water coolant. The crown portions were cleaned with a pumice stone slurry and water using a low-speed micromotor. The teeth were then embedded in acrylic resin blocks, exposing the buccal surface. The embedded teeth were sectioned using a low-speed diamond saw to create standardized specimens measuring 10 mm in length and 6 mm in width. After cleaning and ultrasonic debridement, the specimens were stored in distilled water at 37 °C for 24 h. Two 2-mm diameter cavities were prepared on the exposed enamel surface using a rubber dam and a standardized bur. For groups 2, 4, and 5, the enamel surface was etched with 37% phosphoric acid (Condac®, FGM, Joinville, SC, Brazil) for 30 s, followed by a 30-s water rinse and 15-s air-drying. Groups 4 and 5 received an additional step of adhesive application. 3 M™ Single Bond Universal® adhesive was applied with a microbrush for 10 s, air-dried for 5 s, and light-cured for 20 s using a Bluephase® curing unit (Ivoclar, Switzerland, wavelength: 380–515 nm, intensity: 1200 mW/cm 2 ). After material application, the specimens ( N = 10) were embedded in a testing device. A universal testing machine (Triax Digital 50, Controls, Milan, Italy) was used to apply a shear load at a crosshead speed of 0.5 mm/min until failure. The shear bond strength (MPa) was calculated based on the load at failure and the diameter of the detached composite cylinder. 
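To make the final conversion explicit: the bond strength is simply the failure load divided by the bonded cross-sectional area, and because 1 N/mm2 equals 1 MPa no further unit conversion is needed. Below is a minimal sketch of that arithmetic in Python; the 2-mm cylinder diameter follows the cavity dimension described above, while the 60 N failure load is an invented illustrative value, not a result from this study.

```python
import math

def microshear_bond_strength_mpa(failure_load_n: float, cylinder_diameter_mm: float = 2.0) -> float:
    """Bond strength (MPa) = failure load (N) / bonded cross-sectional area (mm^2).

    Since 1 N/mm^2 = 1 MPa, dividing newtons by square millimetres gives MPa directly.
    """
    radius_mm = cylinder_diameter_mm / 2.0
    area_mm2 = math.pi * radius_mm ** 2   # area of the detached composite cylinder
    return failure_load_n / area_mm2

# Hypothetical example: a specimen failing at 60 N over a 2-mm diameter cylinder
# gives 60 / 3.14 ≈ 19.1 MPa.
print(round(microshear_bond_strength_mpa(60.0), 1))
```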
Statistical analysis Before the inferential analysis of the data, the normality of data distribution was verified using the statistical program (SPSS), version 20.0, which was performed for all variables (Roughness, Microhardness, EDS, Inhibition Halo, Superficial Bacterial Adherence, and Microshear Bond Strength). After verification of normality, the ANOVA parametric test (one way) was used to compare all variables followed by the Tukey, considering the value of p < 0.05.
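To illustrate the statistical workflow just described (normality check, one-way ANOVA, Tukey post hoc comparison at p < 0.05), here is a minimal sketch in Python using SciPy and statsmodels. The group labels echo the study groups, but all numerical values are invented placeholders, and Shapiro-Wilk is used only as one plausible normality test; the paper does not state which test SPSS applied.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder microhardness readings (gF/mm^2) for four material groups;
# in practice these would be the per-specimen measurements.
data = {
    "G1_BeautiSealant": [18.2, 17.9, 19.1, 18.5, 18.8],
    "G2_FluroShield":   [16.5, 15.9, 17.1, 16.2, 16.8],
    "G3_Constic":       [25.4, 26.8, 27.0, 25.9, 26.3],
    "G4_BeautifilFlow": [37.1, 38.4, 36.9, 38.0, 39.2],
}

# Per-group normality check (Shapiro-Wilk) before the parametric tests.
for group, values in data.items():
    _, p_norm = stats.shapiro(values)
    print(f"{group}: Shapiro-Wilk p = {p_norm:.3f}")

# One-way ANOVA across the four groups.
f_stat, p_anova = stats.f_oneway(*data.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Tukey HSD post hoc test (alpha = 0.05) to locate pairwise differences.
values = np.concatenate(list(data.values()))
labels = np.repeat(list(data.keys()), [len(v) for v in data.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```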
The results revealed no statistically significant difference between groups ( p > 0.05) regarding surface roughness, with all groups showing values smaller than 0.2 µm. The one-way ANOVA revealed a statistically significant difference among the groups ( p < 0.001) for the microhardness measurement. The Tukey test revealed no difference between G1 and G2 ( p = 0.99). However, G3 and G4 showed the highest values, differing from the other groups and from each other ( p < 0.01), with Beautiful Flow Plus F03® showing the highest value (Fig. ). Carbon (C) and oxygen (O), the primary components of the organic matrix, were detected in all materials. Sodium (Na) was present in BeautiSealant® (G1) and Beautifil Flow Plus F03® (G4). Fluoride (F) was detected only in Beautifil Flow Plus F03® (G4), absent from biointeractive materials like BeautiSealant® (G1) and FluroShield® (G2). Aluminum (Al) was present in all materials. Silicon (Si) was detected in all groups, as expected due to its role as a nucleating agent in the inorganic matrix. Strontium (Sr) was present in groups G1 and G4, both containing GIOMER technology. Tungsten (W) was detected only in BeautiSealant® (G1). Barium (Ba) was found in FluroShield® (G2), Constic® (G3), and in trace amounts in Beautifil Flow Plus F03® (G4) (Table and Fig. ). Grooves and furrows are more clearly visible in G1—BeautiSealant® than in G4—Beautifil Flow Plus F03®. However, G1 – BeautiSealant® and G4 – Beautifil Flow Plus F03® are materials that present agglomerates of well-dispersed filler particles on their surface. This organization is also seen in the G3 – Constic® group. However, it is not observed in the G2 – Fluroshield®. The G2 – FluroShield® group has a qualitatively more significant number of grooves/pits compared to the other compounds studied. Even though each group presents singularities on its surface, surface roughness rates were similar across all materials (Fig. ). All groups, including Constic® (G3), which lacks antibacterial properties, exhibited an inhibition halo. However, no statistically significant differences in inhibition halo size were observed among the groups ( p > 0.05). Similarly, no significant differences in superficial bacterial adhesion were found among the groups, despite variations in material composition and the presence of antibacterial agents (Fig. ). However, Group G4 (Beautiful Flow Plus® F03) showed the lowest level of bacterial adhesion (5.06 ± 0.24 CFU/mL).
BeautiSealant® (G1) and Constic® (G3) exhibited the lowest microshear bond strength, significantly lower than all other groups ( p < 0.05). However, no significant difference was observed between these two groups (Tukey's test). The conventional sealant FluroShield® (G2) demonstrated similar microshear bond strength to the traditional resin Beautifil Flow Plus® F03 (G4), with no significant difference ( p > 0.05). Additionally, the conventional sealant FluroShield® + Adhesive (G5) exhibited the highest microshear bond strength, but no significant difference was observed compared to FluroShield® (G2) ( p > 0.05) (Fig. ). Pit and fissure sealants have been shown to effectively prevent caries lesions on occlusal surfaces . Newer materials, such as self-adhesive and self-etching sealants, offer potential advantages in clinical practice, especially for pediatric patients, due to their reduced sensitivity to moisture contamination and easier application . This laboratory study aimed to investigate the physical, compositional, and antibacterial properties of various dental sealants. Our findings demonstrate significant differences among the tested materials, rejecting the null hypothesis. All groups exhibited comparable surface roughness values ( p = 0.61), suggesting that the similar particle size distribution among the materials contributed to the comparable surface roughness. Similar findings were reported by Hernández-Mendieta et al . for BeautiSealant® and Beautiful Flow Plus®. Regarding FluroShield® surface roughness, Kantovitz et al. reported a value of 0.15 ± 0.02, consistent with our findings. Leal et al . found a similar value for Constic® (0.11 ± 0.019). It is important to note that surface roughness can be influenced by factors such as the degree of polymerization, material hardness, and filler composition . Beautifil Flow Plus® exhibited the highest microhardness (37.9 ± 4.87), significantly differing from other groups. This higher microhardness may be attributed to its higher inorganic filler content and potential water sorption resistance. Hernández-Mendieta et al . reported similar findings, with Beautifil Flow Plus® showing higher microhardness than BeautiSealant®. However, our study revealed a more significant difference. This discrepancy may be due to variations in testing conditions or material batches. FluroShield® exhibited a lower microhardness (16.28 ± 4.91), comparable to findings by Alexandre et al . . Constic® showed intermediate microhardness (26.11 ± 3.46). The differing microhardness values can be explained by variations in filler type, concentration, and the nature of the organic matrix. For instance, GIOMER materials, containing UDMA, TEGDMA, and Bis-GMA, may be more susceptible to water sorption and degradation. In contrast, materials with higher inorganic filler content, like Beautiful Flow Plus®, may exhibit improved mechanical properties. Still, FluroShield® and Constic® incorporate Bis-GMA and silicon dioxide, while Beautiful Flow Plus® has a higher inorganic filler content, contributing to its superior microhardness. The higher silicate filler concentration in Constic® may enhance its bond strength and resistance to wear. SEM analysis revealed distinct surface morphologies. BeautiSealant® and Beautiful Flow Plus® exhibited smooth surfaces with dispersed nanoparticle fillers, aligning with previous studies .
In contrast, FluroShield® showed larger surface grooves, which could potentially serve as niches for bacterial growth . This observation is consistent with earlier research by Cooley et al . , who noted the presence of voids and air bubbles within FluroShield®. These findings highlight the importance of evenly distributed nanoparticle clusters for optimal bond strength and surface properties. The materials analyzed contained carbon and oxygen, forming the organic matrix. Additionally, filler elements such as calcium, aluminum, strontium, and fluoride were incorporated to enhance the material's properties. A higher inorganic filler content, as seen in Beautiful Flow Plus®, is associated with improved mechanical properties like microhardness . While fluoride, a key remineralizing agent, was detected in Beautiful Flow Plus®, it was absent in BeautiSealant® and FluroShield®. This suggests that fluoride release may occur during storage in aqueous environments . Aluminum, present in all materials, can contribute to desensitizing effects and remineralization . Silicon, a significant component of Constic®, acts as a nucleating agent, potentially contributing to its higher microhardness. Strontium, found in GIOMER materials, can release Sr 2 ⁺ ions to form strontium apatite, supporting remineralization . Barium, present in FluroShield®, complements the inorganic matrix, while boron, a potential antibacterial agent, may be released from GIOMER materials . All materials exhibited similar halo inhibition zones, suggesting that the inherent antimicrobial properties of the resin-based materials, such as the presence of TEGDMA and Bis-GMA, may have contributed to bacterial growth inhibition . While bioactive ions released from materials like GIOMERS can exhibit antimicrobial effects, the static nature of the halo inhibition test may not have fully captured these properties . Factors such as pH, temperature, and the presence of organic matter can influence ion release and subsequent antimicrobial activity. Like the halo inhibition results, no significant differences were observed in biofilm formation among the materials. While GIOMERS have been shown to exhibit lower bacterial adhesion , the similar surface roughness of the tested materials may have mitigated this effect . The limited antibiofilm activity of the S-PRG particles may be attributed to the static nature of the biofilm model used in this study. Dynamic biofilm models, incorporating factors like sucrose challenge and pH fluctuations, may better simulate oral conditions and reveal differences in antimicrobial activity . Clinical studies have demonstrated similar results for bioactive sealants, with BeautiSealant® showing lower retention and marginal adaptation compared to FluroShield® . Further research is needed to optimize the formulation and application of S-PRG sealants to improve their clinical performance. Recent advancements in dental materials have introduced self-etching and self-adhesive technologies, aiming to simplify clinical procedures. However, our findings suggest that conventional etching techniques may still offer superior bond strength. Self-etching materials, such as BeautiSealant® and Constic®, demonstrated lower microshear bond strength compared to conventional techniques. This may be attributed to their limited ability to demineralize enamel and create optimal micromechanical retention . A systematic review by Botton et al . further supports this, indicating that self-etch systems may have lower retention rates over time. 
In contrast, conventional techniques, including the use of adhesive systems with enamel etching, exhibited higher bond strengths. The combination of phosphoric acid etching and a universal adhesive, such as Single Bond Universal, can enhance micromechanical retention and chemical bonding . The compatibility between FluroShield® and Single Bond Universal, facilitated by the presence of MDP and HEMA, may contribute to the improved bond strength . Clinical studies have consistently demonstrated the superior performance of conventional etch-and-rinse techniques over self-etching systems . The removal of the enamel smear layer through acid etching is crucial for optimal bond strength. Self-etching systems may struggle to effectively remove this layer, leading to weaker bonds. Our in vitro study provides valuable insights into the bond strength of various sealant materials. While self-etching and self-adhesive systems offer convenience, they may compromise bond strength compared to conventional techniques. The limited demineralization and micromechanical retention provided by self-etching systems can impact long-term clinical performance . Additionally, Beautifil Flow Plus® demonstrated superior performance in terms of microhardness and shear bond strength, making it a suitable option for high-risk caries patients and those with Molar-Incisor-Hypomineralization (MIH) . For simpler applications, FluroShield® can be used without an adhesive system, reducing clinical time. The selection of a sealant technique depends on various factors, including patient cooperation, saliva control, and the need for remineralization. While self-etching systems offer convenience, conventional techniques often provide superior bond strength and long-term performance. Clinical experience and operator skills are crucial for successful sealant application. Our findings indicate that materials with higher inorganic filler content, such as Beautiful Flow Plus®, exhibit superior microhardness, potentially enhancing their durability under occlusal stress. However, surface roughness remained consistent across all materials, suggesting that this factor may not significantly influence clinical performance. Microbiologically, all materials demonstrated similar behavior, indicating that the inherent antimicrobial properties of the resin-based materials may play a significant role in inhibiting bacterial growth. While conventional adhesive techniques continue to offer superior bond strength and long-term clinical performance, self-etching and self-adhesive systems may be suitable for specific clinical scenarios. Dentists should carefully consider the patient's needs when selecting a sealant material. For patients at high risk of caries or sensitivity, bioactive materials like Beautiful Flow Plus® may be advantageous. In situations where time efficiency is a priority, FluroShield® can be applied without an adhesive system, simplifying the clinical procedure. Ultimately, the choice of material should be based on the clinician's expertise and the specific requirements of each patient.
Postmortem metabolomics: influence of time since death on the level of endogenous compounds in human femoral blood. Necessary to be considered in metabolome study planning?
85422229-52f8-4f8b-9d33-dea42d4ca331
11081988
Forensic Medicine[mh]
Metabolomics (metabolic profiling) aims to comprehensively analyze endogenous low molecular weight compounds within biological systems (e.g., amino acids and lipids). It represents the downstream output of the -omics cascade (genomics, transcriptomics, proteomics/peptidomics) and is also highly influenced by environmental factors, such as lifestyle habits, diseases, and drug intake. Over the years, a number of metabolomics techniques have been established in a variety of disciplines for biomarker search or for generating hypotheses, as different environmental stimuli may lead to particular changes within the metabolome (Castillo-Peinado & Luque de Castro, ; Johnson et al., ; Patti et al., ; Steuer et al., , ; Wishart, ; Zeki et al., ). In this regard, untargeted metabolome acquisition approaches, which theoretically measure all compounds simultaneously, are used. Data acquisition is followed by sophisticated data evaluation strategies and statistical methods to identify compounds of interest. Depending on the data set and underlying question, simple univariate statistics, i.e., (non)parametric significance testing in combination with fold-change analysis, as well as multivariate statistics for multifactorial phenomena, or holistic models applying machine learning algorithms can be applied (Anwardeen et al., ; Chen et al., ; Pomyen et al., ; Procopio et al., ). Recently, the (un)targeted analysis of endogenous compounds has also gained interest in the field of forensic postmortem investigations, e.g., for assessment of biomarkers of the postmortem interval (PMI) (Bonicelli et al., ; Chighine et al., ; Donaldson & Lamont, , ; Locci et al., , ; Mora-Ortiz et al., ; Pesko et al., ; Peyron et al., ), postmortem redistribution (PMR) (Brockbals et al., , ), or the improved interpretation of the cause of death (COD) (Cao et al., ; Elmsjo et al., , ; Nariai et al., ; Ward et al., ). However, the highly dynamic nature of the metabolome needs to be considered during the study design to allow observed effects to be attributable to the research question. Postmortem specimens are considered even more challenging as death is a dynamic process in itself, which introduces other, unpredictable variations. From numerous investigations and routine experience, the phenomenon of PMR is well recognized in forensic toxicology. PMR refers to all artificial changes in the postmortem concentrations of drugs after death (Pelissier-Alicot et al., ; Skopp, , ). While certainly not fully understood, passive diffusion, degradation, or drug neo-formation represent the most common underlying mechanisms, as do factors such as the drug properties (lipophilicity, protein binding affinity, volume of distribution, basicity). Both ante- and postmortem biochemical processes also play a role (Drummer & Gerostamoulos, ; Peters & Steuer, ). Recent studies suggest that the COD, manner of death (Elmsjo et al., , ; Ward et al., ), and the PMI between death and sample collection (Bonicelli et al., ; Chighine et al., ; Donaldson & Lamont, , ; Locci et al., , ; Mora-Ortiz et al., ; Pesko et al., ; Peyron et al., ) contribute to the postmortem metabolome composition. For instance, decreased levels of short-, medium- and long-chain acylcarnitines in human blood were observed to be related to oxycodone intoxication (Elmsjo et al., ).
Elmsjo et al. reported that higher concentrations of cortisol, phenylacetylglutamine, valerylcarnitine, and phenylalanine, and decreased concentrations of palmitoylcarnitine and various lysophosphatidylcholines in blood samples were associated with deaths attributed to pneumonia relative to a control group (Elmsjo et al., ). Previous studies also investigated the potential of using PMI-dependent concentration changes of endogenous molecules for biochemical estimation of the time of death. A variety of endogenous compounds were shown to increase with time, including different amino acids (hydroxyproline, tyrosine, phenylalanine), creatinine, citrate cycle intermediates (α-ketoglutarate, succinate), lactate, niacinamide, taurine, and uracil (Donaldson & Lamont, ; Du et al., ; Mora-Ortiz et al., ; Pesko et al., ). In contrast to human forensic investigations, where femoral blood or serum are the most commonly used matrices, most studies on PMI estimation were performed in either animal models and/or specimens other than femoral blood (Chighine et al., ; Donaldson & Lamont, ; Du et al., ; Locci et al., , ; Mora-Ortiz et al., ; Pesko et al., ); that said, there is still a lack of comprehensive studies with sufficient case numbers (Zelentsova et al., ). According to a recent publication, PMI can be considered the main driving force of postmortem metabolome changes, highlighting the need for more data and standardization for postmortem metabolomics studies that aim to answer research questions other than assessing the PMI (Chighine et al., ). Our current study aimed to comprehensively investigate the influence of the time since death on the endogenous compound composition of human femoral blood samples. To this end, we have compiled a unique, exceptionally extensive postmortem data set consisting of 427 cases, each with paired blood samples (854 in total) collected at two different time points after death. This dataset should allow the systematic investigation of blood collection time after death and its relevance in future postmortem metabolome study designs. Chemical and reagents Acetylcarnitine (C2), adenine, adenosine, alanine, arginine, carnitine (C0), cholic acid, cortisol, cortisone, creatinine, decanoylcarnitine (C10), dodecanoylcarnitine (lauroylcarnitine, C12), glycocholic acid, hexadecanoylcarnitine (palmitoylcarnitine, C16), hippuric acid, histidine, inosine, isoleucine, kynurenine, leucine, levothyroxine, lysine, methionine, octadecanoylcarnitine (stearoylcarnitine, C18), octanoylcarnitine (C8), ornithine, phenylalanine, proline, propionylcarnitine (C3), reserpine, riboflavin, serine, taurine, taurocholic acid, tetradecanoylcarnitine (myristoylcarnitine, C14), threonine, tryptophane, tyrosine, uracil, uric acid, valine, and 5,10,15,20-tetrakis-(pentafluorphenyl)-porphyrin were purchased from Sigma-Aldrich (Buchs, Switzerland). The lipids 1-palmitoyl-2-hydroxy-sn-glycero-3-phosphocholine (lyso PC 16:0), 1-oleoyl-2-hydroxy-sn-glycero-3-phosphocholine (lyso PC 18:1), 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (PC 34:1), 1-stearoyl-2-linoleoyl-sn-glycero-3-phosphocholine (PC 36:2), 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoethanolamine (PE 34:1) and 1-palmitoyl-2-arachidonoyl-sn-glycero-3-phosphoethanolamine (PE 36:4) were purchased from Avanti Polar Lipids and were delivered by LuBio Science (Zurich, Switzerland).
Deuterated and heavy-labeled internal standards (IS), arginine-13C6, creatinine N-methyl-D3, and phenylalanine-D1 (purity > 98%), were purchased from Cambridge Isotope Laboratories and were delivered by ReseaChem Life Science (Burgdorf, Switzerland) or Sigma-Aldrich (Buchs, Switzerland). Water, acetonitrile (ACN) and methanol (MeOH) of HPLC grade were obtained from Fluka (Buchs, Switzerland). All other chemicals used were from Merck (Zug, Switzerland) and of the highest grade available. Postmortem femoral blood samples Femoral blood samples from authentic forensic cases were collected at two time points after death during the routine toxicological investigation at the Victorian Institute of Forensic Medicine (VIFM), Melbourne, Australia. Upon mortuary admission of a deceased, approximately 2 to 5 mL of postmortem femoral blood was collected by leg puncture (blind stick) as soon as practicable, as per provisions of the Coroners Act 2008 (Victoria) (t1). A second femoral blood sample was collected during the medico-legal autopsy (t2) after preparation of the femoral vein. For cases where the time of death (ToD) could only be narrowed down to a specific day but not an exact time ( n = 163), and admission to the VIFM occurred on a later day, ToD was defined as 12 pm of the estimated day of death. If admission to the VIFM was on the same day as the estimated day of death, ToD was specified to be between 12 am and mortuary admission of the body (t1). These timings were used to calculate the pre-admission and pre-autopsy intervals per case [defined as the time between death and sample collection at mortuary admission (t1) / autopsy (t2)]. All postmortem blood samples were preserved in 1% w/v sodium fluoride and potassium oxalate and stored at 4 °C until shipment. Samples were transported to the Zurich Institute of Forensic Medicine (ZIFM, Switzerland; exempt specimens, no import/export permission required) in a temperature-controlled environment at 4 °C and immediately frozen at − 80 °C upon receipt until re-analysis for drug and metabolome changes. Anonymized information on the estimated ToD and sampling time points was provided for further data analysis. From an initial 477 cases (Brockbals et al., ), 427 were included in the current study. Case selection was based on the detectability of one or more drugs (of abuse) independent of the cause of death. Re-analysis of the samples in an anonymized format for the specific research project was approved by the Ethics Committee of the VIFM (EC 20-2019; EC 23-1275). The individuals cannot be identified from the information provided. Hence, no written informed consent from the individuals or their relatives was needed for this study. Additionally, the study was conducted in full conformance with the Swiss ethical laws, particularly those covering the use of human material in research. Sample preparation To 150 μL of postmortem blood, 15 μL IS solution (0.025 mM creatinine-d3, 0.03 mM L-arginine (13C6), and 0.04 mM L-phenylalanine-D1) were added, followed by the addition of 450 μL of a MeOH/acetone mixture (90:10 v/v) for protein precipitation. The samples were shaken and stored at − 20 °C overnight. Subsequently, the samples were resuspended, centrifuged at 14′000 rpm for 15 min, and one aliquot (50 μL) of the supernatant was transferred to an autosampler vial for analysis by reversed-phase chromatography (RP) as detailed below.
A second aliquot (50 μL) was stored at − 80 °C for analysis by hydrophilic interaction chromatography (HILIC) approximately 1 month later. Before analysis, all samples were centrifuged again (14′000 rpm for 15 min). In addition, a femoral blood pool sample was prepared from 11 authentic postmortem blood samples collected at the ZIFM, stored in aliquots at − 80 °C, thawed, and extracted identically to the study samples each day for quality control purposes. HR-MS analysis Analysis was performed on a Thermo Fisher Ultimate 3000 UHPLC system (Thermo Fisher Scientific, San Jose, CA, USA) coupled with a high-resolution (HR) time of flight (TOF) instrument system (TripleTOF 6600 Sciex, Turbo V ion source, Concord, Ontario, Canada) as described in detail elsewhere (Boxler et al., ; Steuer et al., ). Briefly, two chromatographic columns were applied, (a) a RP column (XSelect HSST RP-C18 column; 150 mm × 2.1 mm i.d; 2.5 µm particle size; Waters, Baden, Daettwil, Switzerland) with 10 mM ammonium formate and 0.1% (v/v) formic acid in water or 0.1% (v/v) formic acid in methanol as mobile phases A and B, respectively; gradient elution starting at 100% A with a flow rate of 0.5 mL/min, increase to 100% B between 1 and 15 min, held for 3 min and re-equilibrated for 2 min (0.7 mL/min flow rate after 15 min); 20 min total run time; (b) a Merck SeQuant ZIC HILIC column (150 mm × 2.1 mm i.d; 3.5 µm particle size) with 25 mM ammonium acetate and 0.1% (v/v) acetic acid in water and 0.1% (v/v) acetic acid in ACN as mobile phases C and D, respectively; gradient elution at a flow rate of 0.5 mL/min over 15 min; starting conditions were 95% D, decreased to 40% D between 1 and 10 min, further decreased to 10% D until 12 min, hold for 1 min and re-equilibrated for 4 min. HR-MS (resolving power (full width at half-maximum, FWHM at 400 m/z) of 30,000) and MS/MS (resolving power 15,000 in MS 2 ) data were acquired by data-dependent acquisition (DDA) after electrospray ionization (ESI) in positive mode for RP chromatography and negative mode for HILIC chromatography, respectively. The following settings were applied: full scan over a mass range from m/z 50 to m/z 1000 (accumulation time 50 ms, CE 5 eV) and MS2 scan (accumulation time for each DDA experiment 100 ms, CE 35 eV with a CE spread of 15 eV) after dynamic background subtraction on the five most intense ions with an intensity threshold above 100 cps and exclusion time of 5 s (half peak width) after two occurrences in high sensitivity mode. Data acquisition was controlled by Analyst TF software (version 1.7, Sciex). All sample extracts were divided into 17 batches, with samples t1 and t2 from one case assigned to the same batch. Per batch, samples were measured in randomized order (total time period 1 month per chromatographic method). A system suitability test (SST) containing arginine, cortisol, cortisone, creatinine, glycocholic acid, hippuric acid, leucine, raffinose, riboflavin, and tryptophan (concentration 10 μg/ml each) was measured at the beginning of each measurement batch to check the general instrument performance via retention time and peak area comparison after peak integration in MultiQuant V 2.1 (Sciex). 
Automatic MS and MS/MS calibration was performed every 10 sample injections using a pooled blood sample (450 μL supernatant) fortified with 45 μL of a self-prepared calibration solution (creatinine, leucine, arginine, hippuric acid, tryptophane, inosine, cortisol, cortisone, riboflavin, glycocholic acid, taurocholic acid, reserpine, levothyroxine and 5,10,15,20-tetrakis-(pentafluorphenyl)-porphyrin, 7.1 μg/ml per analyte). Additionally, a pooled blood sample was repeatedly injected following each calibration and evaluated for intra- and inter-batch differences in retention time and peak area. Data processing and data analysis Data analysis was done in a targeted approach through peak integration of 38 analytes (given in Table ) in MultiQuant V 2.1 (Sciex). After raw data export to Microsoft Excel, further data analysis was performed using Microsoft Excel, GraphPad Prism 10.0.2, and R (R_Core_Team, ) in RStudio (R version 4.3.1 “Beagle Scouts”; RStudio version 2023.03.0 + 386) with the following R packages: tidyverse (Wickham et al., ), gridExtra (Auguie, ), trelliscopejs (Hafen & Schloerke, ), flextable (Gohel & Skintzos, ), ggforce (Pedersen, ), readxl (Wickham & Bryan, ), and lubridate (Grolemund & Wickham, ). Quality control ISs were monitored to identify outliers (Grubbs test (GraphPad Prism 10.0.2) on batch-normalized IS peak areas, p < 0.05) and for quality control purposes, considering a variation (relative standard deviation, RSD) of < 30% as sufficiently robust among the authentic samples. The mean and range of retention times and peak areas of all 38 analytes were determined in the pool samples. Intra- and inter-batch differences were calculated using Microsoft Excel. Deviations (standard deviation) of a maximum of 0.05 min or 0.2 min in retention time, and 20% or 30% in peak areas, were considered acceptable within and between batches, respectively. Evaluation of normalization procedures To account for inter-batch differences originating from technical variation, all analyte peak areas were normalized to the mean ( n = 5) of the batch’s pool-sample analyte peak area (batch correction). Two different sample normalization strategies were evaluated: normalization to heavy-labeled ISs (IS-normalization) and probabilistic quotient normalization (PQN). IS-normalization was performed by dividing the analyte’s peak area by the IS peak area. Metaboanalyst 6.0 (Pang et al., ) was used for PQN normalization of the whole data set (38 analytes, 854 samples). Postmortem changes between two time points of the same case (paired) Percent differences of raw peak areas between t2 and t1 were calculated for each case ( n = 427) and analyte. Subgroups, in terms of increasing time intervals, were formed according to the time difference (Δ t ) between t2 and t1 as follows: 0–12 h, 12–24 h, 24–36 h, 36–48 h, 48–72 h, 72–96 h, 96–120 h, 120–144 h, and > 144 h. A paired Wilcoxon signed-rank test ( p < 0.05; ns > 0.05, * < 0.05, ** < 0.01, *** < 0.001; p-values adjusted for multiple testing according to “holm”) was applied between t2 and t1 peak areas for all cases and in Δ t subgroups. Postmortem changes over time (unpaired analysis) Subgroups were formed according to the time difference of each individual blood sample (tx_ToD, n = 854) to the known or estimated ToD as follows: 0–6 h (group 1), 6–12 h (group 2), 12–24 h (group 3), 24–36 h (group 4), 36–48 h (group 5), 48–72 h (group 6), 72–96 h (group 7), 96–120 h (group 8), 120–144 h (group 9), > 144 h (group 10).
Statistical differences between groups were assessed by application of a Kruskal–Wallis test ( p < 0.05; ns > 0.05, * < 0.05, ** < 0.01, *** < 0.001) followed by Dunn’s multiple comparison test ( p < 0.05; ns > 0.05, * < 0.05, ** < 0.01, *** < 0.001) after correction for multiple comparisons using the Holm method. Percent differences of the median normalized peak area of each group to group 1 (0–6 h) were calculated. Correlations The percent changes (paired and unpaired analysis) were correlated with the possible influencing factors logP (lipophilicity), molecular weight (MW), and retention time in two different chromatographic settings by Spearman correlation analysis in GraphPad Prism 10.0.2. The corresponding characteristics and references used for correlations are summarized in Table .
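To make the statistical evaluation described above more concrete, the following is a minimal, hypothetical Python sketch of the paired Wilcoxon signed-rank comparison with Holm adjustment and the group-wise Kruskal–Wallis test with Dunn’s post-hoc test; the study itself used Microsoft Excel, GraphPad Prism, and R, so this is only an illustration, and the file and column names are assumptions.

import numpy as np
import pandas as pd
from scipy.stats import wilcoxon, kruskal
from statsmodels.stats.multitest import multipletests
import scikit_posthocs as sp

# Hypothetical long-format table: one row per sample and analyte with columns
# case_id, analyte, timepoint ('t1'/'t2'), time_since_death_h, peak_area
df = pd.read_csv("batch_corrected_peak_areas.csv")

# Paired analysis: t2 vs. t1 per analyte (Wilcoxon signed-rank, Holm-adjusted)
wide = df.pivot_table(index=["case_id", "analyte"], columns="timepoint",
                      values="peak_area").dropna().reset_index()
rows = []
for analyte, sub in wide.groupby("analyte"):
    stat, p = wilcoxon(sub["t2"], sub["t1"])
    pct = 100 * (sub["t2"].median() - sub["t1"].median()) / sub["t1"].median()
    rows.append({"analyte": analyte, "p_raw": p, "median_pct_change": pct})
paired = pd.DataFrame(rows)
paired["p_holm"] = multipletests(paired["p_raw"], method="holm")[1]

# Unpaired analysis: bin each sample by its time since death (groups 1-10),
# Kruskal-Wallis across groups, Dunn's post-hoc test for significant analytes
bins = [0, 6, 12, 24, 36, 48, 72, 96, 120, 144, np.inf]
df["time_group"] = pd.cut(df["time_since_death_h"], bins=bins, labels=False) + 1
for analyte, sub in df.groupby("analyte"):
    groups = [g["peak_area"].to_numpy() for _, g in sub.groupby("time_group")]
    h_stat, p_kw = kruskal(*groups)
    if p_kw < 0.05:
        dunn = sp.posthoc_dunn(sub, val_col="peak_area",
                               group_col="time_group", p_adjust="holm")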
Sample cohort and analysis The sample cohort consisted of blood samples from 427 authentic forensic cases with two collection time points after death (t1 and t2) per case ( n = 854 blood samples). Median (and range) collection times after death were 8 h (1.3–290 h) for t1, and 88 h (11–478 h) for t2, respectively, resulting in a median Δt between t1 and t2 samples of 71 h (6.4–434 h). The different sampling procedures for t1 and t2 revealed no statistically significant differences for 24 analytes when comparing, as an example, t1 and t2 samples collected between 24 and 36 h (best-balanced time-group with n = 33 t1 vs. n = 45 t2 samples, nonparametric Mann–Whitney test, p < 0.05).
The 12 analytes with significant findings all pointed towards lower concentrations at t2 (median difference − 31%). Manner of death was natural in 195 cases, accidental in 69 cases, suicide in 57 cases, and remained unknown in 106 cases. The age of the deceased at the ToD ranged from 15 to 98 years (mean/median 59 years). No correlation could be observed between age of the deceased and the time between death and t1 (Spearman rank correlation coefficient: 0.23, linear model R² = 0.04; data not shown). Additionally, no trend was found that would indicate longer/shorter time intervals until first sample collection or Δt with different manners of death (data not shown in detail). All cases included in the current study tested positive for at least one drug or alcohol during a comprehensive routine drug screening (Di Rago et al., ); 210 for opioids, 216 for benzodiazepines, 216 for antidepressants, 97 for antipsychotics, 43 for cannabis, and 36 for stimulants (amphetamines, cocaine). Significant influences of storage and shipping conditions were considered negligible, as shown in a preceding study (Brockbals et al., ). QTOF analysis allowed for sufficient targeted processing of 38 endogenous compounds following separation by standard RP chromatography. For analytes with a low RP retention time, trends in time-dependent changes were confirmed by HILIC chromatography mode before inclusion in the results ( n = 18 analytes as detailed in Table S2). Quality control and evaluation of normalization strategies IS were used for quality assessment throughout the analytical batch. A Grubbs test indicated three samples as potential outliers based on one out of the three IS, but no sample was classified as a potential outlier for all three IS. IS RSDs as well as RSDs of all analytes within the QC pools and the authentic samples are provided in the supplementary information (Table S1). The set criteria of ± 30% for pool samples and IS were fulfilled for all analytes in RP mode except for (lyso)phospholipids, serine, and threonine. All samples were batch-corrected to account for instrument variation over a 1-month measuring period. In addition, two common normalization strategies were evaluated. In targeted (semi-quantitative) analysis, using the respective isotopically labelled IS of an analyte represents the gold standard to account for variation resulting from the laboratory handling, while PQN is a common normalization method in untargeted analysis accounting for many features (Dieterle et al., ). Effects of the different procedures are exemplified in Fig. for two compounds with matching heavy-labelled IS (creatinine, phenylalanine), and two additional compounds with high time-dependent effects in the current study (C0 and taurine), for paired (A) and unpaired analysis (B). General trends of increasing concentrations over time and observed significant differences remain comparable between the two normalization approaches compared to only batch-corrected data, while the magnitude of change is lower for PQN normalization. Postmortem changes in endogenous compounds between two time points of the same case (paired; t2 vs. t1) An overview of median percent differences for the chosen 38 analytes over all analyzed cases is provided in Table . Exemplified for (acyl)carnitines with increasing carbon side chain length and different amino acids, the extent and distribution of time-dependent changes are depicted in Fig.
; for all other analytes, visual representations can be found in the Supplementary information in Fig. S1. Except for serine, threonine, and PC 34:1, all compounds revealed significant differences between t2 and t1 ( p < 0.05). For octanoylcarnitine (C8), decanoylcarnitine (C10), lauroylcarnitine (C12), arginine, ornithine, proline, valine, cortisol, lyso PC 16:0, and lyso PC 18:1, median decreases were observed, while all other analytes showed significant median increases from t1 to t2. Overall, changes mainly ranged from − 50% to + 100% (corresponding to a fold change of two) when considering their interquartile range. Exceptions were carnitine (C0), acetylcarnitine (C2), decanoylcarnitine (C10), lauroylcarnitine (C12), alanine, taurine, cholic acid, uracil, and lyso PE 18:0. Here, maximum median differences between t2 and t1 ranged from − 63% [decanoylcarnitine (C10)] to + 166% (cholic acid) and + 141% (taurine). Still, large inter-individual variations were observed for all samples and also subgroups with increasing Δt intervals (< 6 h to > 144 h) (see supplementary information Table S2). Endogenous compounds were categorized into four patterns of median changes depending on the length of the Δt: Steady increase: alanine, creatinine, proline, tryptophane, taurine, uracil, valine, carnitine (C0), acetylcarnitine (C2), propionylcarnitine (C3), stearoylcarnitine (C18), lysoPE 18:0, PE 34:1, PE 36:2. Constant median change for the time intervals of approximately 24 to 36 h followed by an increase with longer Δt: histidine, leucine/isoleucine, lysine, methionine, phenylalanine, tyrosine, uric acid, palmitoylcarnitine (C16), cholic acid. Decrease: cortisol, decanoylcarnitine (C10), lauroylcarnitine (C12). No or < 30% change over time: arginine, inosine, ornithine, serine, threonine, octanoylcarnitine (C8), myristoylcarnitine (C14), kynurenine, lysoPC 16:0, lysoPC 18:1, PC 34:1, PC 36:2. Representative examples are depicted in Fig. for taurine (a), tyrosine (b), decanoylcarnitine (C10) and cortisol (c), and octanoylcarnitine (C8) (d). Postmortem changes in endogenous compounds according to their time since death (unpaired; tx_ToD) To determine whether the actual time after death plays a decisive role, or is even more important than the time interval between t2 and t1, all samples ( n = 854) were binned into groups according to the individual samples’ time since death (ToD to t1 and ToD to t2, tx_ToD) and were statistically compared for differences in an analyte’s normalized peak area. In nine cases, t1 and t2 blood samples were binned within the same group; five of these cases had a sampling time > 144 h. A Kruskal–Wallis test revealed significant changes between the above-mentioned groups for all tested endogenous compounds except for arginine, octanoylcarnitine (C8), cortisol, and PC 34:1 (Table ). Median percent differences of each group (1–10) to group 1 (0–6 h, earliest) are summarized in Table S2 of the Supplementary information. The highest median changes were observed for carnitine (C0) (+ 274%), taurine (+ 361%), and cholic acid (+ 1190%), as well as decanoylcarnitine (C10) (− 80%) and lauroylcarnitine (C12) (− 42%). As shown in Fig. , correlating, per analyte, the median Δt change from the paired analysis with the highest median %change of each group (2–10) relative to group 1 (unpaired analysis) indicated good agreement (Spearman correlation, R² = 0.91) despite other expected influencing factors in the unpaired analysis.
Figure b exemplifies box plots of the normalized peak area (left y-axis) and the median percentage change relative to time group 1 (right y-axis). Boxplots of all other analytes are presented in Fig. S2. Also, for individual compounds, the time-dependent changes of the unpaired samples matched well with the paired Δt data (Fig. a,b). The exception was cortisol, which decreased significantly between t2 and t1 (paired analysis) but showed no trend in the normalized peak areas with respect to the respective time since death of the blood samples. In contrast, myristoylcarnitine (C14) and lyso PC 18:1 showed no trend as a function of Δt length but an increase or decrease as a function of time since death (Table S2). For statistical analysis between all groups, Dunn’s post-hoc test was applied on Kruskal–Wallis significant analytes. Significant differences are given as a so-called p-value heat map for the chosen examples in Fig. c and for all remaining compounds in the Supplementary information in Fig. S3. Significant changes between groups most often appeared with increasing time since death, while the initial 36 or even 48 h indicated relatively stable normalized peak areas. Few exceptions were observed for alanine, taurine, tryptophan, valine, carnitine (C0), and acetylcarnitine (C2), in line with findings from the paired analysis between t2 and t1. Correlations To find the underlying causes for the varying behavior between different compounds, the percent changes (paired and unpaired analysis) were correlated with the possible influencing factors. No correlations existed between percent change and lipophilicity or molecular weight (supplementary information Fig. S4A, B). In RP chromatography, the highest percent changes occurred for compounds eluting in the first two minutes and around 15 min of the chromatogram (Fig. S4C). HILIC chromatography (ESI negative), used for selected compounds, indicated a similar extent of percentage change despite compound elution around five to ten minutes. Further direct comparison of the observed postmortem changes for 18 compounds between RP and HILIC chromatography (Tables S2 and S3) also did not find any differences caused by the applied chromatography, with the exception of proline and valine. Both amino acids revealed increases in HILIC chromatography, in line with other amino acids, and appeared to be stable or slightly decreased when analyzed in RP mode. Examples from lysine (no difference, retention time RP 0.8 min, HILIC 9.6 min, respectively) and proline (different postmortem behavior, retention time RP 0.9 min, HILIC 6.2 min, respectively) are given in Fig. .
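The group-wise tabulation underlying Table S2 (the median percent difference of each time-since-death group relative to the earliest 0–6 h group) can be expressed, purely for illustration and with hypothetical file and column names, as a short pandas sketch:

import pandas as pd

# Hypothetical table of batch-normalized peak areas with the time-since-death
# group (1 = 0-6 h ... 10 = > 144 h) already assigned to each of the 854 samples
df = pd.read_csv("normalized_peak_areas_with_groups.csv")

# Median normalized peak area per analyte and time group
medians = df.pivot_table(index="analyte", columns="time_group",
                         values="peak_area", aggfunc="median")

# Percent difference of each group's median relative to group 1 (0-6 h)
pct_vs_group1 = 100 * (medians.div(medians[1], axis=0) - 1)
print(pct_vs_group1.round(1))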
(Un)targeted metabolome approaches have gained significant interest in forensic toxicology analysis, including postmortem cases (Bonicelli et al., ; Brockbals et al., , ; Chighine et al., ; Donaldson & Lamont, , ; Elmsjo et al., , ; Locci et al., , ; Mora-Ortiz et al., ; Pesko et al., ; Peyron et al., ). Due to study design and ethical restrictions in controlled human studies, postmortem research typically involves random routine cases. However, the metabolome is highly dynamic and, even in living people, susceptible to many environmental factors influencing the metabolic profile or particular biomarkers. Postmortem specimens such as blood represent an even greater challenge given the well-recognized issues of postmortem changes or PMR, seen with drugs (Butzbach, ; Drummer & Gerostamoulos, ; Mantinieks et al., ; McIntyre & Escott, ; Pelissier-Alicot et al., ; Peters & Steuer, ; Skopp, ). So far, little is known about such (additional) confounding factors originating from death itself, but severe influences are expected, particularly from the time since death. A better understanding of these factors will significantly improve the experimental design of future postmortem metabolome studies. Our current study comprised one of the most extensive data sets in the context of postmortem studies and is characterized by two blood collection time points per case.
Despite the non-controlled sample collection, the study cohort can be considered representative of typical forensic postmortem cases, as different manners of death, a large age range, and a wide variety of collection time points were included. No systematic differences or correlations in age of the deceased or manner of death in relation to the PMI were found. Thirty-eight endogenous compounds were chosen for detailed, time-dependent evaluation of postmortem changes from an untargeted acquired data set. These included metabolites of different compound classes with significantly different physicochemical properties, such as amino acids, acylcarnitines, (lyso)phospholipids, bile acids, steroids, etc. The targeted processing method originally (during method development) included more endogenous compounds; of these, we focused on analytes that could be measured with sufficient analytical precision. Of those, some could not reliably be detected in the postmortem sample cohort and were dropped subsequently, e.g. the nucleobase adenine and the nucleoside adenosine (Boxler et al., ). Of course, it is only a small selection of analytes and not representative of the complete metabolome. However, the chosen targeted compounds were previously described in (postmortem) or generally forensic metabolome studies. They were proposed as predictive biomarkers, e.g., as intoxication markers for oxycodone poisoning (Elmsjo et al., ) or the postmortem interval (Donaldson & Lamont, ; Mora-Ortiz et al., ). Batch normalization was performed based on pooled sample peak areas measured within the same batch to account for analytical bias. In addition, individual sample normalization to account for, e.g., extraction effects is commonly applied. For targeted (semi-/quantitative) analysis, a matching isotopically labelled IS per analyte represents the gold standard for normalization. In (targeted) metabolomics, most often, such IS are not available for all compounds of interest. If a general IS (isotopically labelled, but not matching the analyte of interest) is used to normalize another analyte, the analyte of interest needs careful evaluation during method development and validation, and in the worst case the use of a general IS can increase variation rather than compensate for it (Boxler et al., ). In untargeted metabolomics, where compounds of interest are a priori unknown, specific or general IS-use is therefore unfeasible. PQN was demonstrated as a versatile sample normalization strategy of untargeted datasets of thousands of features, where a quotient for each feature is calculated in relation to a reference sample (pool), and the median of all feature quotients is used as a sample’s individual normalization/dilution factor (Dieterle et al., ). However, PQN can be biased if, e.g., a large proportion of the features are changed because of a systematic rather than a dilution/extraction variation effect (Correia et al., ). In this current semi-targeted analysis, only 38 compounds were evaluated, several of which were already described to show PMI-dependent changes (Donaldson & Lamont, ; Mora-Ortiz et al., ). It is therefore possible that PQN might attribute actual effects of the PMI to dilution effects, consequently underestimating the real time-dependent effect (Fig. ). Given the descriptive nature of the current study, the final data evaluation was based on batch-normalized data only, to avoid overfitting effects of PQN normalization.
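For readers unfamiliar with PQN, the procedure outlined above (feature-wise quotients relative to a reference spectrum, with the median quotient serving as a per-sample dilution factor) can be sketched in a few lines of Python. This is a generic illustration, not the MetaboAnalyst 6.0 implementation used in this study, and the data layout is an assumption.

import pandas as pd

def pqn_normalize(X, reference=None):
    # Probabilistic quotient normalization (after Dieterle et al.).
    # X: samples x features DataFrame of batch-corrected peak areas.
    # reference: reference spectrum (e.g., a pooled QC sample); defaults to the
    # median spectrum across all samples.
    if reference is None:
        reference = X.median(axis=0)
    quotients = X.div(reference, axis=1)   # feature-wise quotients per sample
    dilution = quotients.median(axis=1)    # median quotient = dilution factor
    return X.div(dilution, axis=0)         # rescale each sample

# Example usage (hypothetical input matrix): X_norm = pqn_normalize(peak_area_matrix)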
When performing a paired analysis of two blood samples from the same individual, it could be proposed that all variables other than time are excluded. All case-specific parameters, like cause or manner of death, age, etc., remain identical. However, looking only at the %Δt change between t2 and t1 leaves out the actual time effect, i.e., the time since the death occurred. For instance, specific forensic postmortem cases can have a time difference between t1 and t2 of 12 h, but whether death occurred one, two, or more days before blood collection is not considered. We therefore additionally compared metabolite changes according to the time since death, although higher variability can be expected. Individual analysis increased the total number of samples to 854, as t1 and t2 were evaluated separately. Both data evaluation strategies (time intervals between paired t1 and t2 samples vs. unpaired analysis in groups according to the time since death of individual samples) returned very well-matching results (Figs. , ). Cortisol is one crucial example, where paired data evaluation was able to indicate time-dependent changes, but evaluation of random (non-paired) samples showed no trend. Generally, increases outweighed decreases over time (Table ). Concerning the median and interquartile changes, almost all analytes ranged between a fold change of plus/minus two, but inter-individual variation was high (Figs. , and S1). Taurine and uracil, as two compounds exceeding the described range and showing time-dependent concentration increases, were already described as potential biomarkers of the PMI. However, in contrast to the current results, concentration decreases over time were described in mice (Mora-Ortiz et al., ). If univariate statistics are employed for metabolome data evaluation in controlled studies of living people, fold-changes of 1.5 or 2 are often used as one of several filter criteria for interesting features. Considering our (paired) results, the time factor alone can already introduce such variation for some analytes (Fig. ). Depending on the research question and sample selection, higher fold-changes might be advisable in postmortem metabolome analysis to improve biological significance and avoid random findings. In line with former works (Chighine et al., ), PMI is one of the main influencing factors on the metabolome; controlling or accounting for different PMIs within the study cohort is highly important for future postmortem metabolome studies. Our data suggest that the influence of PMI is most homogeneous within the first 48 h after death. As such, the most reliable results would be obtained if a sufficiently high number of blood samples taken within the first 48 h after death can be used, ensuring the least influence of the PMI on concentration changes of endogenous analytes. Alternatively, PMI among study groups should be as balanced as possible. Using correlation analysis, we attempted to find causes for the observed differences in postmortem behavior depending on the substance. Based on existing knowledge that, for exogenous compounds, e.g., lipophilicity, the volume of distribution (Vd), or the ratio of cardiac to peripheral blood (C/P ratio) can help predict PMR (Han et al., ; Skopp, ), we aimed to compare different chemical properties of the endogenous compounds.
However, Vd is not available for endogenous metabolites, as they are typically not administered in known amounts to calculate their expected blood/plasma concentration in relation to the dose. C/P ratios or the general distribution of endogenous metabolites would be interesting for further investigation of underlying PMR mechanisms but were out of scope for the current study, which focused on femoral blood samples only. No correlations between logP or molecular weight and the extent of postmortem change could be observed (Fig. S4). Comparison of retention time and %change pointed towards more severe postmortem changes for those analytes eluting within the first two minutes of the RP chromatography. This could be due to similar physicochemical properties of these substances but also due to matrix effects. Typically, the first three minutes, as well as the end of an RP chromatogram, are prone to matrix effects, given salts and extremely polar vs. highly lipophilic compounds (phospholipids), respectively (Van Eeckhaut et al., ). Further, it is well known that postmortem samples are more susceptible to matrix effects than samples of living persons (Drummer, ; Saar et al., ). Eighteen analytes were additionally evaluated in a different chromatographic system (HILIC) and ESI negative ionization, a typically complementary method, to exclude matrix effects as the leading cause of the observed time-dependent changes. Only for proline and valine, a different time-dependent behavior was observed when changing the analytical methodology, which points towards a matrix effect for these two compounds in RP chromatography (Fig. , Tables S2, S3). Apart from that, HILIC led to the same results as RP (Figs. S4 and , Tables S2, S3), but with overall higher variation. So far, no common physicochemical properties could be deduced that would allow a likelihood prediction of postmortem changes in endogenous metabolites. The water content of postmortem blood samples demonstrates high variation ranging from 60 to 90% (Skopp, ), possibly contributing to a certain (minor) extent to the observed concentration changes. PQN normalization could compensate for these effects; however, as discussed above, more compounds/features in an untargeted data processing workflow will be necessary for a conclusion. The changes in endogenous substances probably occur due to the lack of energy after death, accompanied by the cessation of aerobic and partial continuation of anaerobic metabolic pathways. Other studies in animal models found modulations in metabolites associated with anaerobic metabolism, such as lactate (Mora-Ortiz et al., ). In the broadest sense, our results also confirm previous studies with limited numbers of animals showing significant increases in amino acid levels. Different underlying mechanisms were discussed, among others, protein catabolism in postmortem cells leading to accumulation in blood after cell lysis or decreased protein synthesis (Donaldson & Lamont, ). A higher number of endogenous compounds or an untargeted data evaluation will be necessary to uncover biological mechanisms, e.g., through pathway analysis. Ideally, future research should apply an adapted experimental setup to include/extend the analysis to macromolecules (carbohydrates, proteins, lipids, DNA or RNA) and different (blood-surrounding) tissues that might influence concentrations of small endogenous molecules through distribution, changes in protein binding, or general postmortem degradation.
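The property screening discussed in this section amounts to simple rank correlations; a minimal R sketch, assuming a per-analyte summary table props with placeholder columns median_pct_change, logP, mw, and rt_rp, could be:

    for (v in c("logP", "mw", "rt_rp")) {
      ct <- cor.test(props$median_pct_change, props[[v]], method = "spearman")
      cat(v, ": rho =", round(unname(ct$estimate), 2), ", p =", signif(ct$p.value, 2), "\n")
    }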
The current study comprises one of the most extensive data sets in the context of time-dependent postmortem studies focusing on endogenous compounds. Comparable to drugs, we observed changes in blood levels of nearly all endogenous compounds in a time-dependent manner after death. Our paired analysis of two blood samples collected from the same individual proved highly valuable, as time since death represents the only variable. Additional unpaired sample evaluation, purely based on the time since death, generally indicated similar results to those of the matched time intervals, despite more confounders and higher variation. As PMI is one of the main influencing factors on postmortem metabolome changes, controlling or accounting for different PMIs within the study cohort is highly important for future postmortem metabolome studies. The most reliable results can be expected if blood samples collected within the first 48 h after death can be used and/or PMI among study groups is balanced. Below is the link to the electronic supplementary material. Supplementary file1 (PDF 2767 KB)
Assembly, network and functional compensation of specialists and generalists in poplar rhizosphere under salt stress
d241dc71-dc62-400d-bc18-1142192e685b
11825717
Microbiology[mh]
Soil salinization significantly hampers plant growth and crop productivity worldwide . The rhizosphere microbiome, pivotal in aiding plant tolerance to salt stress, is often considered the plant’s ‘second genome’ , . Numerous studies have shown that, when faced with high salinity conditions, plants can attract specific beneficial soil bacteria to their rhizosphere, fostering growth – . Populus euphratica , renowned for its ability to thrive in saline environments, possesses a distinct soil microbiome that may underpin its stress resistance . Despite the critical interplay between P. euphratica and its rhizosphere microorganisms, research in this area remains sparse. Traditional studies have generally classified microorganisms as either abundant or rare taxa, neglecting their ecological niche. In response to diverse environmental conditions, species are often identified as specialists, generalists, or neutral taxa based on their niche breadth, which reflects the range of resources, habitats, or environments a species utilizes . This approach offers a novel perspective in microbial classification. Previous research indicated that microbial generalists had a stronger ability to evolve toward specialists , although the underlying reasons for this evolutionary process remain unclear. Xu et al. observed that the impact of generalists and specialists on soil microbial diversity in farmland varies, depending on network perspective, community assembly, and biogeographic patterns . Specialists are more governed by deterministic processes, whereas generalists are swayed by stochastic processes. Liao et al. found that while stochastic processes predominantly influenced the distribution of generalists in plateau lakes of China, deterministic processes played a more significant role in the assembly of specialists . However, a study on Tibetan lake sediment microorganisms found that stochastic processes significantly affected both generalists and specialists , with specialists maintaining robust connections within the network and exhibiting high modularity. Nevertheless, there is a lack of research on the assembly mechanisms and networks of specialists and generalists, underscoring the need for more exploration into their assembly and network characterization in the rhizosphere under salt stress. This will help uncover the various adaptive dynamics and evolutionary reasons behind the existence of specialists and generalists. Plants can modify the composition of the rhizosphere microbiome to better adapt to various soil conditions. There is evidence that plants may recruit beneficial microorganisms to aid growth in stressful and nutrient-limited environments . Ren et al. explored rhizosphere function across different soil environments, uncovering evidence of functional compensation . The availability of nutrients likely influenced plant rhizosphere microbial communities, triggering functional compensation to boost host fitness. For instance, in nutrient-rich soil, nutrient cycling functions in the rhizosphere bacterial community might be downregulated, whereas nutrient cycling might become more crucial in nutrient-poor soil . The question of whether functional compensation is a widespread phenomenon remains open for exploration, particularly regarding the roles of microbial generalists and specialists. This study aimed to (i) examine the composition and characteristics of specialist and generalist microbiomes in the rhizosphere of P.
euphratica ; (ii) understand the community assembly mechanisms and network characterization of specialists and generalists under salt stress; (iii) decipher the functional potential of rhizosphere microorganisms in P. euphratica under salt stress. By elucidating the assembly patterns, network characterization, and functional roles of specialists and generalists in P. euphratica , we aspire to deepen our understanding of the ecological processes of rhizosphere microorganisms under salt stress. This knowledge may pave the way for novel strategies to manipulate microorganisms and enhance ecosystem functions in the P. euphratica rhizosphere community. Diversity of bacterial specialists exhibited a positive correlation with salinity We collected 251 rhizosphere soil samples from P. euphratica trees (Fig. ) and proceeded to analyze the soil using amplicon sequencing. Following the filtering process, our analysis yielded 5524 bacterial ASVs and 1298 fungal ASVs. Examination of bacterial species categorized 1979 (35.8%) as specialist species and 901 (16.3%) as generalists (Fig. ). Among fungi, 847 (65.8%) were identified as specialists and 31 (2.4%) as generalists. At the phylum level, bacterial specialists were dominated by Firmicutes, Proteobacteria, and Actinomycetes, whereas generalists were predominantly Proteobacteria, Actinomycetes, and Bacteroidota (Fig. ). As for fungi, the specialists were primarily composed of Sordariomycetes and Dothideomycetes , with generalists largely from Tremellomycetes , Sordariomycetes , Dothideomycetes , and Eurotiomycetes . Bacterial generalists exhibited higher α diversity than specialists (Fig. and Supplementary Fig. ). Correlation analysis between microbial diversity and environmental factors (pH, AK, AP, salt, and OM) showed a significant positive correlation of bacterial specialists’ α diversity with salinity, unlike the negative correlation observed for generalists (Fig. ). This trend was exclusive to bacteria, as only the fungal generalists showed a negative correlation with salinity (Fig. and Supplementary Fig. ). By dividing bacterial communities according to salinity levels, we can observe the trends described above more clearly (Supplementary Fig. ). Additionally, species composition analysis under salinity gradients showed an increased abundance of Planococcus and Planomicrobium among bacterial specialists (Fig. and Supplementary Fig. ). Furthermore, specialists, rather than generalists ( F = 1.313, P = 0.3), showed significant structural variations ( F = 7.289, P = 0.01) across salinity, as indicated by PCoA analysis (Fig. ). Further analysis revealed that salinity had the most significant impact on bacterial specialists (Supplementary Fig. , F = 9.119, P = 0.01). Assembly mechanisms and environmental adaptation of bacterial specialists and generalists In this study, we integrated the neutral model with the null model , which revealed a high degree of fit for both specialists and generalists (Supplementary Fig. ). The null model indicated that stochastic processes predominantly governed the assembly of both groups, accounting for 80.52 and 81.95% for specialists and generalists, respectively (Fig. ). Specialists were more controlled by deterministic assembly compared to generalists. A regression analysis of the Euclidean distance of salinity and βNTI demonstrated a strong correlation ( P < 0.001) between the pairwise βNTI values for bacterial specialists and generalists and salinity changes, as shown in Fig. .
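How such process fractions follow from the pairwise βNTI and RC Bray values is summarized in the minimal R sketch below; bnti and rc are assumed to be precomputed vectors for one sub-community, and the thresholds are those detailed in the Methods.

    process <- ifelse(bnti < -2, "homogeneous selection",
               ifelse(bnti >  2, "variable (heterogeneous) selection",
               ifelse(rc   >  0.95, "dispersal limitation",
               ifelse(rc   < -0.95, "homogenizing dispersal", "undominated"))))
    # percentage of each process; the stochastic share is the sum of the non-selection categories
    round(100 * prop.table(table(process)), 2)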
Bacterial specialists and generalists exhibited greater determinism (mainly homogeneous selection) under high salinity stress and extremely low salinity conditions (Fig. ), while heterogeneous selection showed the opposite pattern in these situations. The assembly process of fungal specialists was less impacted by salinity (Supplementary Fig. ). While fungal generalists might be somewhat affected by salinity, they were primarily controlled by undominated processes (Supplementary Fig. ), and the proportion of undominated processes decreased under high salt conditions. Undominated processes had a greater impact on fungal generalists compared to bacterial generalists. Homogeneous dispersal only accounted for a relatively high proportion in bacterial specialists with low salinity (0–2 g/kg), while its proportion elsewhere was very small. Furthermore, dispersal limitation had a greater impact on specialists than generalists, and undominated processes had a greater impact on generalists than specialists. An increase in dispersal limitation, mainly for bacterial specialists, was significantly correlated with rising salinity levels (Fig. ), which was opposite to the situation of fungal specialists (Supplementary Fig. ). Therefore, specialists, especially bacterial specialists, played a pivotal role in shaping community diversity through deterministic processes and dispersal limitation. To identify the environmental thresholds for the specialists and generalists in response to various variables, we evaluated the cumulative z− and z+ change points using threshold indicator taxa analysis (Supplementary Fig. ). The results showed that, apart from AP, specialists possess broader environmental thresholds than generalists for AK, OM, pH, and salt (Fig. ). Positive correlation between key microbes and network complexity under salt stress We constructed five bacterial co-occurrence networks under varying salinity gradients and analyzed the evolution of these networks as salinity increased (Fig. ). The analysis revealed that both the number of nodes and links, as well as the degree and weighted degree of the networks, increased up to a salinity concentration of 20 g/kg (Fig. ). Moreover, an analysis of the eigenvalues related to the networks’ closeness centrality suggested an increase in network complexity with rising salinity levels (Fig. ). Further investigation confirmed a significant positive correlation between network complexity and salinity (Fig. ). Within these networks, 31 module hubs and one connector were identified, with the absence of generalists being noteworthy (Fig. and Supplementary Fig. ). Keystone species, comprising bacterial specialists and neutral taxa, demonstrated a strong positive correlation with closeness centrality (Fig. ). Phylogenetic analysis of these keystone species (module hubs and connectors) identified them predominantly within the Proteobacteria, Chloroflexi, and Actinobacteria phyla (Fig. ). Correlation analysis between keystone species and five physicochemical properties (pH, AK, AP, salt, and OM) identified a positive relationship between keystone species abundance and rising salinity. The key microbes are those keystone species that are correlated with salinity and rank in the top 1% of IVI values. A significant positive correlation was also observed between key microbes and network complexity (Fig. ).
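One simple way to quantify the reported association between network complexity and salinity is sketched below in R; networks (a list of the five igraph objects) and salinity_levels (numeric midpoints of the gradients) are placeholder names, and mean closeness centrality is used here only as an illustrative complexity measure, not necessarily the exact metric applied in this study.

    library(igraph)

    complexity <- sapply(networks, function(g) mean(closeness(g), na.rm = TRUE))  # centrality-based complexity proxy
    cor.test(complexity, salinity_levels, method = "spearman")                    # positive association expected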
Node-level topological features across different sub-communities were examined, revealing that specialists exhibited higher values in degree, weighted degree, closeness, harmonic closeness, clustering, and eigenvector centrality compared to generalists (Supplementary Fig. ). The most dominant phylum within the network was Proteobacteria, followed by Actinobacteria, Chloroflexi, Firmicutes, Acidobacteriota, Gemmatimonadota, and Ascomycota (Supplementary Fig. ). Proteobacteria, Actinomycetes, Chloroflexi, and Firmicutes were found to make up a large proportion of the bacteria in the network, while Ascomycota and Myxomycetes were the most prominent fungi. Network analysis showed that most nodes were organized into nine major modules, representing 55.47% of the total nodes (Supplementary Fig. ). Examination of these modules revealed that eight predominantly consisted of specialists and neutral groups, whereas generalists were less represented. The analysis underscored the prominence of bacterial specialists and neutral taxa within the network (Supplementary Fig. ). Salinity stress triggers functional compensation in the P. euphratica rhizosphere To elucidate the functional characteristics within different salinity gradients, we constructed co-occurrence networks for ASVs across these different salinity gradients and ranked them based on node degree. This allowed us to categorize the ASVs into clusters for functional prediction. By comparing the predicted functions with theoretical expectations, we identified functions that were either enriched or depleted across varying salinity levels. The analysis of the rhizosphere bacterial community’s functional profiles revealed distinct variations across different salinity conditions (Fig. ). Notably, the shift in functional enrichment between varying levels of salinity was significant. Under conditions of low salinity (2–5 g/kg, Fig. ), ASVs that were less dominant exhibited an enrichment in a broader range of metabolic functions. Conversely, in conditions of higher salinity (5–10 g/kg, 10–20 g/kg, Fig. ), more dominant ASVs demonstrated enrichment in specific metabolic functions including “Ascorbate and aldarate metabolism”, “Arachidonic acid metabolism”, “Ubiquinone and other terpenoid−quinone biosynthesis”, “Terpenoid backbone biosynthesis”, and “Fructose and mannose metabolism”. Functional changes did not exhibit any specific pattern under extremely low salinity (0–2 g/kg, Fig. ) and extremely high salinity stress (>20 g/kg, Fig. ). Hence, within certain boundaries, metabolic functions were more strongly suppressed in low salt conditions, while they were enhanced in high salt conditions. This suggests that certain metabolic functions become more critical under high salinity conditions, possibly as a response to the increased stress, thereby enhancing the microbial community’s requirement for metabolic functions associated with stress resistance. A noteworthy observation is that 50% of the dominant ASVs with enriched functions are specialists, while only 20% are generalists. In the context of bacterial communities, specialists exhibited a higher proficiency in metabolic functions including those related to ascorbate, arachidonic acid, terpenoids, carbon, nitrogen, and methane, while generalists played a more significant role in pathways such as phosphonate and phosphinate metabolism (Supplementary Fig. 8). The insights suggest that specialists contribute more significantly to functional compensation within the rhizosphere of P.
euphratica under salt stress conditions, potentially explaining the observed increase in their abundance and diversity. Specific root-associated bacteria can be recruited by plants when confronted with salinity stress . This adaptive strategy is notably evident in halophytes, which leverage root-associated microorganisms to enhance their resilience to salt stress . Our research indicated that the rhizosphere microbiome of P. euphratica demonstrated functional compensation, with specialists in the rhizosphere adapting and performing necessary functions to aid the plant in surviving in salty conditions. We postulated that the harsh saline environment might have driven the evolution of these specialists, uniquely adapted to their specific ecological niche , .
Conversely, it might also be interpreted as a strategic response by the plant, a “cry for help” to attract these specialist microorganisms, thereby fortifying its defense against the challenges posed by salinity , . Our findings revealed a positive correlation between soil salinity levels and the α diversity of bacterial specialists within the rhizosphere. The influence of salinity on the differentiation between specialist and generalist bacteria is notable. This distinction underscores the critical role of salt resistance among bacterial specialists. Our results showed that bacteria had a stronger reaction to changes in salinity during the salt stress period than fungi. Bacterial specialists demonstrated a positive response to salt stress, resulting in increased diversity. Although the community structure of fungi changed, there was no notable increase in fungal diversity. It has been reported that bacterial generalists are vital for maintaining community and functional stability in dynamic environments due to their broad ecological resistance and diversification . However, in more static environments, specialists are key contributors to community diversity and function. Our research supports this paradigm within the relatively stable rhizosphere of P. euphratica . Further supporting our findings, we observed an increased abundance of bacterial specialists, particularly from the genera Planococcus and Planomicrobium , in environments with elevated salinity. This aligns with existing research indicating these microorganisms’ potential for salt tolerance , . For instance, Planococcus rifietoensis , known for its moderate halotolerance , has been shown to facilitate wheat growth under salinity conditions by converting ammonia into nitrogen, thereby enhancing soil fertilization. Additionally, this bacterium’s ability to metabolize potassium contributes to maintaining ion balance within plant cells . Similarly, the discovery of Planomicrobium iranicum sp. nov . highlights the emergence of slightly halophilic bacteria adapted to saline environments . Moreover, Li et al. suggest that soil bacteria have a broader capability to mitigate salt stress in plants, extending beyond the microorganisms’ own salinity tolerance levels . This trend suggests that these salt-tolerant species may colonize the rhizosphere to compensate for functional deficits induced by salt stress. We found that specialists exhibit a more deterministic assembly process, which is consistent with the results of previous studies , , . Heterogeneous selection was found to play a more substantial role in the assembly of specialists compared to generalists . Bacterial specialists and generalists exhibited greater determinism under high salinity stress and extremely low salinity conditions. A previous report had shown that environmental filters are more pronounced under extreme conditions, such as highly variable soil pH , indicating that challenging environmental conditions may amplify the deterministic processes governing microbial community assembly mechanisms. With increasing salinity, bacterial specialists in particular experienced enhanced dispersal limitation. This finding suggests that salinity may act as a deterministic force influencing the dispersal of microorganisms. This is consistent with the results of previous studies, in which stochastic processes, particularly dispersal limitation, played critical roles even under high-stress conditions , .
Previous results have also found that bacteria are more affected by dispersal limitation than fungi . However, the stronger stochasticity exhibited by fungi may be due to the better stability of fungal communities under stress , . The assembly mechanism of fungi seemed to be more influenced by randomness, indicating that salinity factors had a smaller impact on fungi in comparison to bacteria. In summary, within the rhizosphere of P. euphratica , specialists play a pivotal role in shaping community diversity through deterministic processes and dispersal limitation. Our study revealed that specialist organisms in the rhizosphere of P. euphratica exhibited broader environmental thresholds compared to their generalist counterparts. Despite their narrower ecological niche, specialists demonstrated an ability to thrive across a broader spectrum of environmental factors within specific habitats. This finding aligned with the ecological principles of categorizing species as specialists or generalists based on Levins’ niche breadth. Generalists, despite their adaptability to a wide range of environments, are at a disadvantage in specialized habitats. On the other hand, specialists, with their narrow ecological niche, demonstrate superior competitiveness and adaptability in specialized environments when compared to generalists. The rhizosphere of P. euphratica was characterized by reduced fluctuation, providing specialists with a survival advantage. While generalists were capable of adapting to diverse ecological settings, they tended to be outcompeted by specialists within certain niche ranges. It has been noted that in stable environments, specialists are more likely to contribute to enhancing community diversity than generalists . Furthermore, an increase in salinity has been observed to complicate the network dynamics within rhizosphere communities, which exhibited distinct network interactions in response to external disturbances . Despite these complexities, specialists constituted a significant portion of the network across various salinity levels. Our analysis of the microbial co-occurrence network indicated a closer association between bacterial specialists and neutral groups, with specialists and neutral groups dominating in eight observed cases. Network eigenvalues revealed that specialists generally had higher values than generalists, except for betweenness centrality and eccentricity. This could be attributed to the prominent centrality and eccentricity of specific ASV mediators among generalists, suggesting that specialists occupied central roles within the network and maintained strong connections with neutral groups, thereby exhibiting high modularity . The composition of most modules primarily included specialists, neutral taxa, and a few generalists, apart from one module where generalists predominated, implying a significant interconnection and functional exchange among specialists and neutral taxa . Keystone species predominantly consisted of bacterial specialists and neutral taxa. Fungi had only two keystone species. This underscored the pivotal role of bacterial specialists over generalists within the network. In a previous study, researchers identified the influential microbial players in a network using IVI and some other centrality measures . In our research, we also used the IVI to identify key microbes. We found significant positive correlations between the key microbes and both network complexity and salinity.
Many of these key microbes are bacterial specialists and neutral groups, highlighting the significance of bacterial specialists, rather than fungi, under high salt stress conditions. These microorganisms play a crucial role in regulating microbial interactions under high salt stress, potentially aiding in functional compensation. Salt stress can significantly alter the metabolic and ecological functions of root-associated bacteria . Rhizosphere microbes play a crucial role in enhancing plant salt stress tolerance by re-establishing ion and osmotic homeostasis, thereby preventing damage to plant cells and facilitating the resumption of plant growth under salt stress conditions . Our research demonstrated that in high salinity environments, the functions of the rhizosphere microorganisms, particularly those that bolster plant tolerance to abiotic and biotic stress, are increasingly valued, leading to functional compensation in the P. euphratica rhizosphere. Specifically, metabolic pathways such as “Ascorbate and aldarate metabolism”, “Arachidonic acid metabolism”, “Ubiquinone and other terpenoid−quinone biosynthesis”, “Terpenoid backbone biosynthesis”, and glycan biosynthesis are generally involved in enhancing plant stress tolerance . For instance, l -ascorbic acid (AsA) is a plentiful metabolite in plants, playing crucial roles in stress physiology as well as growth and development . Similarly, arachidonic acid has been identified as a signaling molecule that can attract beneficial microbiota to the rhizosphere, thus promoting plant growth and facilitating nutrient turnover in the soil . Additionally, the triterpenoid compound cucurbitacin has been found to improve plant disease resistance by regulating the rhizosphere flora . However, there are numerous metabolic functions related to stress resistance that remain unexplored. Our observations indicated that while many of these functions were marginalized in low salt conditions, they gained prominence in high salt soil environments. Further investigation into the roles of these metabolic pathways revealed significant implications for plant health. These findings highlight the multifaceted roles of bacterial specialists in supporting plant resilience and health in saline conditions. In summary, our study elucidates the significant impact of salinity on the formation and function of specialist versus generalist bacteria within the rhizosphere, highlighting the adaptive strategies that enable certain bacteria to thrive under saline stress. Our findings also confirmed that the P. euphratica rhizosphere microbiome employed functional compensation in response to salt stress, highlighting the pivotal role of bacterial specialists in this process. This adaptive response may be attributed to the recruitment of more salt-resistant microbial specialists by P. euphratica as salinity levels increase. Previous studies have also found that generalist-to-specialist transformations occur three times more frequently than the reverse transformations . It is hypothesized that in the rhizosphere of P. euphratica , the increase in salinity may trigger functional compensation, leading to the shift from generalists to specialists. This insight not only advances our understanding of microbial ecology in saline environments but also points to potential avenues for leveraging these microbial adaptations under salinity stress, exploring the causes behind the formation of specialists and generalists, and enhancing crop resilience to salinity stress.
Moving forward, we aim to identify and further investigate salt-tolerant strains among these specialists, exploring their function and the potential for creating synthetic microbial communities (SynCom). The development of artificially selected microbiomes that confer salt tolerance represents a promising strategy to enhance agricultural productivity . The engineering of the desert microbiome into SynCom capable of protecting plants in natural soils from abiotic stress opens new avenues for agricultural innovation . Quite a few studies have demonstrated that root endophytes also have the ability to help plants withstand stress – . By integrating the findings on endophytes with the importance of rhizosphere microorganisms in salt tolerance, we anticipate uncovering novel and intriguing insights in further studies. Our findings not only shed light on the dynamics of the P. euphratica rhizosphere microbiome under salt stress but also provide a valuable framework for the selection of salt-resistant strains. This research lays the foundation for future studies on the interplay between specialist and generalist microorganisms in the P. euphratica rhizosphere, offering insights that could lead to the development of resilient agricultural systems in arid and saline environments. Research on the rhizosphere microorganisms of P. euphratica has revealed that an increase in salinity leads to an increase in the α diversity of bacterial specialists and to alterations in community structure. Changes in salinity levels have an effect on the assembly of bacterial specialists and generalists, with the former being more characterized by deterministic processes and exhibiting wider adaptation. Furthermore, bacterial specialists are found to play a more significant role in the microbial community. The relationship between key microbes, particularly bacterial specialists, and network complexity is strongly positive. As salinity levels increase, the metabolic function of microorganisms becomes more crucial, shaping the assembly of plant rhizosphere microbial communities under stress. This stress prompts a functional compensation that enhances plant health, as P. euphratica recruits specialized rhizosphere microorganisms. This research highlights the importance of the plant-microbe interaction in promoting resilience and adaptability in the face of environmental challenges, shedding light on the diversity, assembly, network characterization, and functions of bacterial specialists and generalists in the rhizosphere of P. euphratica . Sample collection The 251 rhizosphere soil samples were collected from P. euphratica trees located in the Tarim River Basin of Yuli County, Xinjiang Uygur Autonomous Region, China (41°00–41°20N, 86°00–86°20E) in September 2021. Using a five-point sampling method, we collected rhizosphere microorganisms around each tree, maintaining a distance of 0.5 m from the trunk. By using a soil drill, we obtained fine roots with a diameter of ≤2 mm. Each fine root was shaken carefully to remove the bulk soil. The soil still adhering to the fine roots was defined as rhizosphere soil. The rhizosphere soil was separated from the fine roots by agitating it in 50 ml of sterile 0.9% NaCl solution for 5 min and then centrifuging it at 8000× g for 10 min. Soil physicochemical measurement According to the Environmental Monitoring Method Standards of the Ministry of Ecology and Environment of the People’s Republic of China, the pH, AK, AP, salinity, and OM of the P.
euphratica rhizosphere soil were determined. DNA extraction and sequencing We utilized the E.Z.N.A.® Soil DNA Kit (Omega BioTek, Norcross, GA, USA), following the kit instructions, to extract total microbial DNA from rhizosphere samples. The structure of rhizosphere bacterial communities was analyzed using the V4-V5 region-targeting primers 515 F (5′-GTGCCAGCMGCCGCGGTAA-3′) and 907R (5′-CCGTCAATTCMTTTRAGTTT-3′) . Rhizosphere fungi were assessed using the ITS1F-ITS2R primers ITS1F (5′-barcode CTTGGTCATTTAGAGGAAGTAA-3′) and ITS2R (5′-GCTGGTTCTTCATCGATGC-3′) . The PCR reaction was set up as follows: an initial 5 min at 95 °C, 30 cycles of 30 s at 95 °C, 30 s at 55 °C and 30 s at 72 °C, followed by a final 5 min extension at 72 °C at the end of amplification. Purified PCR products were sequenced on an Illumina MiSeq platform at Shanghai Biozeron Biological Technology. The raw sequencing data were demultiplexed according to the barcodes and primers at the beginning and end of each sequence, and the sequence direction was adjusted. After data splitting, data impurities were removed. Analysis of the sequencing data was performed with an ASV-based approach using the DADA2 pipeline. Finally, the ASV table and the taxonomic assignment of each ASV at various levels were obtained, and the microbial community composition of each sample at each taxonomic level was statistically analyzed. Community structure and diversity analysis To characterize diversity patterns of the microbiome in the rhizosphere of Populus euphratica , we evaluated α diversity indices of bacteria and fungi with the vegan package in R 4.3.0. Principal coordinates analysis (PCoA) based on Bray–Curtis dissimilarity was applied to explore the pattern of the community. Statistical associations between variables were inferred using the Pearson correlation test. Statistical differences between groups were inferred using ANOVA. These statistical analyses were performed in the R environment. Analysis of the habitat specialists and generalists To determine the habitat specialization of microorganisms in the P. euphratica rhizosphere, we employed Levins’ niche breadth . Levins’ niche breadth is a statistical concept utilized in ecology for assessing a species’ ecological niche breadth. Niche width indicates the variety of environmental conditions in which a species can thrive and reproduce. Species with higher Levins’ niche breadth are typically more generalized and tend to be generalists. On the contrary, species with lower Levins’ niche breadth exhibit stronger specialization and are more inclined towards specialists. The EcolUtils package is an R tool that includes functions for calculating Levins’ niche breadth. The EcolUtils package was used to evaluate the statistical significance of each specialist index, with 1000 permutations. If the habitat specialization values surpass the upper 95% confidence interval or fall below the lower 95% confidence interval of the 1000 permutations, they are labeled as generalists or specialists . Quantification of community assembly We employed null model-based β diversity metrics (βNTI and RC Bray ) to evaluate various community assembly processes . To estimate the relative influences of stochastic and deterministic processes, we calculated the βNTI and RC Bray values. In brief, βNTI <−2 or >+2 indicates that βMNTD obs deviates from the mean βMNTD null by more than two standard deviations.
Thus, the model considers βNTI <−2 or >+2 to indicate significantly less than or greater than expected phylogenetic turnover, respectively, for a given pairwise comparison. In this case, βNTI <−2 indicates the dominance of deterministic processes and low turnover (i.e., homogeneous selection); βNTI >2 indicates the dominance of deterministic processes and high turnover (i.e., variable selection); and −2 < βNTI <2 indicates the lack of deviation and the dominance of stochastic processes. In addition, we calculated the Bray–Curtis-based Raup–Crick metric (RC Bray ) to further partition the relative influences of non-selection processes, i.e., dispersal limitation (RC Bray >0.95), homogenizing dispersal (RC Bray <−0.95) and undominated processes (−0.95 < RC Bray < 0.95). Dispersal limitation constrains the movement of species and leads to higher levels of community dissimilarity; on the contrary, homogenizing dispersal, defined as high levels of species movement, leads to a decrease in community dissimilarities . We also used the neutral model to evaluate the mechanism of community assembly , . Phylogenetic distance, environmental breadth, and phylogenetic signal analysis To determine the threshold value of habitat specialists and generalists in response to each environmental variable, threshold indicator taxa analysis (TITAN) was carried out using the “TITAN2” package of R . Briefly, we used the sums of taxa scores for ASVs to determine upper and lower thresholds of difference in the habitat specialists and generalists based on environmental variables . TITAN categorizes the community into two groups: Z − taxa that negatively respond to an increased environmental gradient, and Z + taxa that positively respond to an increased gradient. Taxa without any response to the environmental gradient were not considered. TITAN then tracks the cumulative responses of declining taxa (sum(Z − )) and increasing taxa (sum(Z + )) in the community. Ecological thresholds are defined as the points where the maximum aggregate change in the frequency and relative abundance of responding taxa occurs. When the environmental values reach and exceed the ecological thresholds, the abundance and occurrence frequency of species will decrease in the Z − group while increasing in the Z + group. Therefore, the range of niche optima for the community is defined as the gradient below sum(Z − ) and above sum(Z + ). Co-occurrence network construction The network analysis was conducted to identify co-occurrence patterns of generalist and specialist species. Correlations with Spearman’s correlation coefficients (ρ) greater than 0.6 and corresponding P values less than 0.01 were considered significant . The “rcorr” function from the Hmisc package was used to perform pairwise comparisons based on ASVs, with p values adjusted accordingly . Co-occurrence networks were constructed using the igraph package, where each node represented one ASV and each edge represented a strong and significant correlation , . The resulting networks were visualized using the interactive platform Gephi (0.9.2). Furthermore, using the Gephi software, we calculated node-level topological features, such as degree, betweenness, closeness, and eigenvector centrality. To identify statistical differences in these features, we conducted the Wilcoxon test. High values of the topological features suggest a core position of a node in the network, while low values suggest a peripheral position , .
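A minimal R sketch of this network construction, placed here before the node-role classification described next, is given below; asv is assumed to be the samples-by-ASV abundance table, and the Benjamini–Hochberg correction is one possible choice for the unspecified p-value adjustment.

    library(Hmisc)
    library(igraph)

    cc   <- rcorr(as.matrix(asv), type = "spearman")
    padj <- matrix(p.adjust(cc$P, method = "BH"), nrow = nrow(cc$P), dimnames = dimnames(cc$P))
    adj  <- ifelse(!is.na(padj) & abs(cc$r) > 0.6 & padj < 0.01, 1, 0)   # keep |rho| > 0.6 with adjusted P < 0.01
    diag(adj) <- 0

    g   <- graph_from_adjacency_matrix(adj, mode = "undirected", diag = FALSE)
    deg <- degree(g); btw <- betweenness(g); cls <- closeness(g)         # node-level topological features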
Subsequently, we categorized the nodes into four groups based on their within-module connectivity (Zi) and among-module connectivity (Pi) to assess the topological roles of taxa in the networks. These groups consisted of module hubs (Zi >2.5), network hubs (Zi >2.5 and Pi >0.62), connectors (Pi >0.62), and peripherals (Zi <2.5 and Pi <0.62). All statistical analyses were performed using R version 4.3.0. The integrated value of influence (IVI) is a novel influential node detection method, and the IVI algorithm is the synergistic product of Hubness and spreading values . We used the influential package to calculate the IVI value of each node in the network to evaluate its importance using R version 4.3.0. Functional prediction and compensation effect The composition of the rhizosphere bacterial community was measured by high-throughput DNA sequencing of the 16S rRNA gene, and the rhizosphere functional traits were predicted using PICRUSt2 software. The PICRUSt2 method consists of phylogenetic placement, hidden-state prediction, and sample-wise gene and pathway abundance tabulation. ASV sequences and abundances are taken as input, and gene family and pathway abundances are output. All necessary reference trees and trait databases for the default workflow are included in the PICRUSt2 implementation . The PICRUSt2 software was applied to predict KEGG ortholog (KO) functional profiles of microbial communities using the 16S rRNA gene sequences. To analyze the segmented functions, we assessed the importance of each ASV in the co-occurrence network by examining its degree of correlation with other ASVs. The ASVs were then organized into functional clusters, with the most important ASVs forming the first cluster and the least important forming the last. The PICRUSt2 software was used to predict the “segmented predicted function” of each cluster, while the “segmented theoretical function” was calculated using the relative abundance of each segmented ASV cluster .
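As a sketch of the enrichment/depletion comparison, the snippet below contrasts the predicted functional profile of each degree-ranked ASV cluster with an expectation scaled by cluster relative abundance; the object names ko_pred (KO-by-cluster matrix of PICRUSt2-predicted abundances) and cluster_relabund (relative abundance of each cluster) are placeholders, and scaling by relative abundance is our assumption of how the "segmented theoretical function" is obtained.

    ko_total   <- rowSums(ko_pred)                              # total predicted abundance per KO
    ko_theory  <- outer(ko_total, cluster_relabund)             # theoretical expectation proportional to cluster abundance
    enrichment <- log2((ko_pred + 1e-6) / (ko_theory + 1e-6))   # > 0 enriched, < 0 depleted per KO and cluster
    head(sort(enrichment[, 1], decreasing = TRUE))              # most enriched functions in the most connected cluster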
The PCR reaction was set up as follows: initial 5 min at 95 °C, 30 cycles of 30 s at 95 °C, 30 s at 55 °C and 30 s at 72 °C, then followed by 5 min extension at 72 °C at the end of amplification. Purified PCR products were sequenced on an Illumina MiSeq platform at Shanghai Biozeron Biological Technology. The raw data obtained from sequencing is distinguished by barcodes and primers at the beginning and end of the sequence, and the sequence direction is adjusted. After data splitting, data impurities are removed. Analysis of the sequencing data using the ASV-based pipeline was performed using the DADA2 pipeline. Finally, the table of ASV and the species information of each ASV at various taxonomic levels is obtained, and the microbial community composition of each sample at each taxonomic level is statistically analyzed. To test whether diversity patterns of the microbiome in the rhizosphere of Populus euphratica , we evaluated α diversity indices of bacteria and fungi with the vegan package in R.4.3.0. Principal coordinates analysis (PCoA) based on Bray–Curtis dissimilarity was applied to explore the pattern of the community. Statistical associations between variables were inferred using the Pearson correlation test. Statistical differences between groups were inferred using ANOVA followed. These statistical analyses were operated in an R environment. To determine the habitat specialization of microorganisms in the P. euphratica rhizosphere, we employed Levin’s niche breadth . Levin’s niche breadth is a statistical concept utilized in ecology for assessing a species’ ecological niche breadth. Niche width indicates the variety of environmental conditions in which a species can thrive and reproduce. Species with higher Levins’ niche breadth are typically more generalized and tend to be generalists. On the contrary, species with lower Levins’ niche breadth exhibit stronger specialization and are more inclined towards specialists. The EcolUtils package is a tool in the R including functions that can be used to calculate Levins’ niche breadth. The EcolUtils package was used to evaluate the statistical significance of each specialist index, with 1000 permutations. If the habitat specialization values surpass the upper 95% confidence interval or fall below the lower 95% confidence interval of the 1000 permutations, they are labeled as generalists or specialists . We employed Null model-based β diversity metrics (βNTI and RC Bray ) to value various community assembly processes . To estimate the relative influences of stochastic and deterministic processes, we calculate the βNTI and RC Bray values. In brief, βNTI <−2 or >+2 indicates that βMNTD obs deviates from the mean βMNTD null by more than two standard deviations. Thus, the model considers βNTI <−2 or >+2 to indicate significantly less than or greater than expected phylogenetic turnover, respectively, for a given pairwise comparison. In this case, βNTI <−2 indicates the dominance of deterministic processes and low turnover (i.e., homogeneous selection); βNTI >2 indicates the dominance of deterministic processes and high turnover (i.e., variable selection); and −2 < βNTI <2 indicates the lack of deviation and the dominance of stochastic processes. In addition, we calculated the Bray–Curtis-based Raup–Crick metric (RC Bray ) to further partition the relative influences of non-selection processes, i.e., dispersal limitation (RC Bray >0.95), homogenizing dispersal (RC Bray <−0.95) and undominated processes (−0.95 >RC Bray >0.95). 
Dispersal limitation constrains the movement of species and leads to higher levels of community dissimilarity; in contrast, homogenizing dispersal, defined as high levels of species movement, leads to a decrease in community dissimilarities . We also used the neutral community model to evaluate the mechanisms of community assembly , . To determine the threshold values of habitat specialists and generalists in response to each environmental variable, threshold indicator taxa analysis (TITAN) was carried out using the "TITAN2" package of R . Briefly, we used the sums of taxa scores for ASVs to determine upper and lower thresholds of difference in the habitat specialists and generalists based on environmental variables . TITAN categorizes the community into two groups: Z− taxa, which respond negatively to an increasing environmental gradient, and Z+ taxa, which respond positively to an increasing gradient. Taxa without any response to the environmental gradient were not considered. TITAN then tracks the cumulative responses of declining taxa (sum(Z−)) and increasing taxa (sum(Z+)) in the community. Ecological thresholds are defined as the points where the maximum aggregate change in the frequency and relative abundance of responding taxa occurs. When the environmental values reach and exceed the ecological thresholds, the abundance and occurrence frequency of species decrease in the Z− group while increasing in the Z+ group. Therefore, the range of niche optima for the community is defined as the gradient below sum(Z−) and above sum(Z+). Network analysis was conducted to identify co-occurrence patterns of generalist and specialist species. Correlations with Spearman's correlation coefficients (ρ) greater than 0.6 and corresponding P values less than 0.01 were considered significant . The "rcorr" function from the Hmisc package was used to perform pairwise comparisons based on ASVs, with p values adjusted accordingly . Co-occurrence networks were constructed using the igraph package, where each node represented one ASV and each edge represented a strong and significant correlation , . The resulting networks were visualized using the interactive platform Gephi (0.9.2). Furthermore, using the Gephi software, we calculated node-level topological features, such as degree, betweenness, closeness, and eigenvector centrality, and used the Wilcoxon test to identify statistical differences in these features. High values of the topological features suggest a core position of a node in the network, while low values suggest a peripheral position , .
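The co-occurrence network construction and the classification of nodes into module hubs, network hubs, connectors and peripherals described earlier in this section can be outlined in R roughly as follows; this is a hedged, assumption-laden sketch rather than the authors' code, and `asv` is again a simulated placeholder table with built-in correlation structure so that the resulting network is not empty.

# Hedged sketch: Spearman co-occurrence network (|rho| > 0.6, adjusted P < 0.01)
# and Zi/Pi topological roles for each node.
library(Hmisc)
library(igraph)
set.seed(1)
latent <- matrix(rnorm(30 * 5), nrow = 30)                    # 5 latent gradients
asv    <- round(exp(latent[, rep(1:5, each = 8)] + rnorm(30 * 40, sd = 0.3)) * 5)
dimnames(asv) <- list(paste0("S", 1:30), paste0("ASV", 1:40))

cor_res <- rcorr(asv, type = "spearman")                      # pairwise rho and P
P_adj   <- matrix(p.adjust(cor_res$P, "BH"),
                  nrow = nrow(cor_res$P), dimnames = dimnames(cor_res$P))
adj <- (abs(cor_res$r) > 0.6 & P_adj < 0.01) * 1              # strong, significant edges
adj[is.na(adj)] <- 0
diag(adj) <- 0

g <- graph_from_adjacency_matrix(adj, mode = "undirected", diag = FALSE)
g <- delete_vertices(g, V(g)[degree(g) == 0])                 # drop unconnected ASVs

mods <- as.integer(membership(cluster_louvain(g)))            # module assignment
A    <- as_adjacency_matrix(g, sparse = FALSE)
k    <- rowSums(A)                                            # total degree
ki   <- sapply(seq_len(vcount(g)), function(v) sum(A[v, mods == mods[v]]))
Zi   <- ave(ki, mods, FUN = function(x) (x - mean(x)) / sd(x))
Pi   <- 1 - sapply(seq_len(vcount(g)),
                   function(v) sum((tapply(A[v, ], mods, sum) / k[v])^2))
Zi[is.na(Zi)] <- 0                                            # guard for tiny modules

role <- ifelse(Zi > 2.5 & Pi > 0.62, "network hub",
        ifelse(Zi > 2.5,             "module hub",
        ifelse(Pi > 0.62,            "connector", "peripheral")))
table(role)
# IVI <- influential::ivi(graph = g)  # integrated value of influence (assumes the
#                                     # 'influential' package is installed)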
Supplementary Information
Lifestyle and dietary factors, iron status and one-carbon metabolism polymorphisms in a sample of Italian women and men attending a Transfusion Medicine Unit: a cross-sectional study
09df49ea-8b33-4569-a565-10f2c1555093
10244012
Internal Medicine[mh]
Ethical standards
The study was approved by the Ethical Committee of the Verona University Hospital. Eligible subjects were informed about the objectives and procedures of the study. All participants provided written informed consent before enrolment. The procedures used were in accordance with the ethical standards of the responsible institutional or regional committee on human experimentation or in accordance with the Helsinki Declaration of 1975 as revised in 1983.
Study design and participants
The study design has already been published elsewhere ( ) . Briefly, from April 2016 to May 2018, 551 healthy blood donors consecutively attending the Transfusion Medicine Unit of the Verona University Hospital (Italy) were invited to take part in this cross-sectional study, and 538 of them (97·6 %) accepted. Overall, 499 subjects were finally included in the study (255 men, 244 women, of whom 155 were of childbearing age, 18–44 years). During the visit to be enrolled for the blood donation, each eligible subject, after a detailed explanation of the study design, was invited to participate. After giving written informed consent, each participant was interviewed about his/her general characteristics, medical history and current therapy. Lifestyle factors were also recorded, including education (as a recognised parameter of lifestyle and dietary behaviour), diet and dietary habits, alcohol and smoking habits, and fruit and vegetable consumption. For fruits and vegetables, one portion was defined as 150 and 100 g, respectively.
Laboratory parameters
Venous whole blood samples were collected after an overnight fast into Vacutainer® tubes containing either EDTA or lithium/heparin as anticoagulants, to measure biochemical variables and to extract DNA from peripheral blood mononuclear cells. After centrifugation at 1500 g for 10 min at room temperature, lithium-heparin plasma was separated, stored in aliquots and kept frozen at −70°C until measurement. Fe was determined by the routine method used in the local laboratory (Roche Diagnostics). Ferritin concentration was measured with an automated chemiluminescence method on a Roche Cobas e801 analyser (Roche Diagnostics). DNA was extracted from peripheral blood mononuclear cells with the Wizard Genomic DNA Purification Kit (Promega Corporation). Genotyping for one-carbon-related polymorphisms ( MTHFR 677C > T, cSHMT 1420C > T, DHFR 19bp ins/del, RFC1 80G > A) was performed by different methods as previously described ( ) . A plasma Fe concentration > 10·74 μmol/l and ferritin values in the range 20–200 µg/l were considered adequate concentrations ( ) .
Statistical analysis
Data were collected in a specific database after a review for completeness, consistency, and plausibility. This study included 499 subjects. The original sample size was computed for the study's primary objective, namely the frequency of adequate plasma folate concentrations (> 15 nmol/l) ( ) . Considering the end point of the present analysis (i.e. the frequency of adequate Fe status), a post hoc computation showed that we were able to obtain estimates of adequate plasma Fe concentrations with narrow 95 % CI. For example, considering men and women separately, the expected 95 % CI were, respectively, 74·3–84·8 % if the adequate plasma Fe concentrations were 80 % and 85·7–93·6 % if the values were 90 % (sample size 244 for each group).
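The expected confidence intervals quoted above can be reproduced with a one-line computation; the sketch below assumes exact (Clopper-Pearson) binomial intervals, which match the reported figures.

# Hedged illustration: exact binomial 95 % CI for the expected proportion of donors
# with adequate plasma Fe, given n = 244 per sex group.
n <- 244
binom.test(round(0.80 * n), n)$conf.int   # roughly 0.743-0.848 if the true value is 80 %
binom.test(round(0.90 * n), n)$conf.int   # roughly 0.857-0.936 if the true value is 90 %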
Continuous variables were summarised as means and standard deviations; logarithmic transformation was used for non-normally distributed variables, for which geometric means and CI were reported, as appropriate. Categorical variables were presented as absolute frequencies and percentages. The 95 % CI of the means and proportions were provided to assess the precision of the estimates. Genetic data were analysed to evaluate the frequency of each genotype in the population studied after evaluating Hardy–Weinberg equilibrium. Categorical variables were compared using the Pearson or Mantel–Haenszel χ 2 test, as appropriate. Continuous variables were analysed using ANOVA, after logarithmic transformation if needed, or the Kruskal–Wallis test when appropriate. We considered a two-tailed P value of < 0·05 to be significant. ORs for inadequate Fe and ferritin status according to socio-demographic and general characteristics were computed. To take potential confounding factors into account, we used unconditional multiple logistic regression with maximum likelihood fitting, including in the model terms for sex, age, education, BMI, smoking, alcohol drinking, fruit and vegetable consumption and physical activity. All the analyses were performed using the SAS software, version 9.4 (SAS Institute, Inc).
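The models were fitted in SAS 9.4; purely as an illustrative sketch, an equivalent unconditional logistic regression could be written in R as below, with simulated data and hypothetical variable names standing in for the study dataset.

# Hedged R analogue of the logistic regression described above; the data frame,
# variable names and coding are illustrative assumptions, not the study data.
set.seed(1)
donors <- data.frame(
  inadequate_fe = rbinom(499, 1, 0.16),
  sex           = factor(sample(c("F", "M"), 499, replace = TRUE)),
  age           = sample(18:65, 499, replace = TRUE),
  bmi           = rnorm(499, 24, 3),
  smoking       = factor(sample(c("never", "current"), 499, replace = TRUE)),
  alcohol       = factor(sample(c("no", "yes"), 499, replace = TRUE)),
  fruit_veg     = factor(sample(c("<3/d", ">=3/d"), 499, replace = TRUE))
)
fit <- glm(inadequate_fe ~ sex + age + bmi + smoking + alcohol + fruit_veg,
           family = binomial, data = donors)
exp(cbind(OR = coef(fit), confint.default(fit)))   # adjusted OR with Wald 95 % CI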
Characteristics of the study population
Socio-demographic and general characteristics of the study population according to age and sex are shown in .
Biochemical parameters
The mean plasma Fe and ferritin concentrations were 16·6 (95 % CI 16·0, 17·2) µmol/l and 33·8 (95 % CI 31·5, 36·2) µg/l, respectively. Males had significantly higher plasma concentrations of Fe and ferritin than females, while women < 45 years had lower ferritin concentrations compared with those aged 45 years or older ( ). Adequate Fe concentrations, defined as plasma levels > 10·74 µmol/l, were observed in 84·3 % of total blood donors, while adequate ferritin concentrations, defined as values ranging between 20 and 200 µg/l, were found in 72·5 % of total blood donors; 80·7 % females v . 87·8 % males displayed adequate Fe concentrations, while 65·1 % females v . 79·6 % males displayed adequate ferritin concentrations, with significant difference between sexes ( ). In females < 45 years of age, the prevalence of subjects with adequate Fe concentrations was significantly lower compared with females aged ≥ 45 years ( ).
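As an aside, the reported sex difference in Fe adequacy can be checked against the quoted percentages with a two-sample proportion test; the counts below are back-calculated from the percentages and group sizes and are therefore approximate.

# Hedged check of the sex difference in adequate plasma Fe (80.7 % of 244 women
# vs 87.8 % of 255 men); counts are reconstructed from the reported percentages.
women_ok <- round(0.807 * 244)   # about 197
men_ok   <- round(0.878 * 255)   # about 224
prop.test(c(women_ok, men_ok), c(244, 255))   # Pearson chi-squared test with continuity correction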
In , intake of ≥ 3 portion/d of fruits and vegetables was associated with inadequate plasma Fe levels, while alcohol intake was associated with adequate plasma Fe concentrations. No relation was found between these determinants and ferritin concentrations. No significant differences emerged in Fe status according to other general characteristics and lifestyle factors.
One-carbon metabolism-related polymorphisms
The homozygous mutant allele frequencies were 0·21 for the MTHFR 677TT, 0·11 for the cSHMT 1420TT, 0·18 for the DHFR 19bp del/del and 0·20 for the RFC1 80AA. As for a possible association among plasma Fe and ferritin concentrations and MTHFR 677C > T, cSHMT 1420C > T and RFC1 80G > A polymorphisms, no relationship was detected. Carriers of the DHFR 19bp del/del genotype showed lower ferritin concentrations compared with the DHFR 19bp ins/del genotypes (29·5 (95 % CI 24·4, 35·6) µg/l v . 37·3 (95 % CI 33·5, 41·5) µg/l, P = 0·02). When data were analysed for the comparison of carriers of the mutant allele DHFR 19bp ins/del plus del/del genotypes v . the DHFR 19bp ins/ins, no differences were observed for plasma ferritin concentrations (34·9 (95 % CI 31·7, 38·3) µg/l v . 31·8 (95 % CI 28·5, 35·6) µg/l, respectively, P = 0·22).
The study provides information on lifestyle, dietary factors, Fe status and MTHFR 677C > T, cSHMT 1420C > T, DHFR 19bp ins/del, RFC1 80G > A genotypes in a sample of healthy Italian women and men aged 18–65 years, attending a Transfusion Medicine Unit in Northern Italy. The study shows that adequate Fe status in terms of both plasma Fe and ferritin concentrations could be reached in a large proportion of this Italian sample of healthy blood donors. Moreover, intake of ≥ 3 portion/d of fruits and vegetables was found to be associated with inadequate plasma Fe concentrations, unlike ferritin. Fe available in foodstuffs can be of heme and non-heme type. In animal-origin products, 40 % of the existing Fe is of heme type, and 60 % is non-heme, whereas plant-origin foodstuffs only contain non-heme Fe. Heme Fe absorption in the gastrointestinal tract is 15–35 %, whereas non-heme Fe presents lower absorption, between 2 and 20 % ( ) . Non-heme-Fe absorption is strongly influenced by many inhibitory and enhancing factors in the diet, whereas heme-Fe absorption is very little affected by other dietary components. Furthermore, early studies with radioisotope-labelled foods found that Fe from animal foods was better absorbed than that from plant foods ( ) . While fruit and vegetables remain essential components of the diet, this varying bioavailability may help explain our results. Regarding the association between alcohol intake and adequate status of Fe but not ferritin, we should consider that Fe bioavailability is influenced by various dietary components that either enhance or inhibit its absorption. Alcohol-induced disorders of Fe metabolism have been investigated in animal models and in clinical and epidemiological studies ( , ) . Furthermore, increases in indices of Fe stores, such as serum ferritin, have also been described in subjects drinking small amounts of alcohol compared with teetotallers ( , ) , and there is evidence that both Fe and alcohol can initiate the formation of free radicals and produce oxidative stress within the liver ( – ) . The relationship between alcohol intake and Fe stores is therefore of interest also among the general population; however, in evaluating this association it is important to consider the whole picture rather than relying on a single test result. Regarding Fe status in the Italian population, Salvaggio et al. ( ) studied 400 subjects, 200 men and 200 women, aged between 20 and 60 years, reporting that the frequency of Fe deficiency was increased in women of childbearing age. Overall, 13 % of women in the three younger age groups had low serum ferritin levels. By contrast, only 6 % of women aged over 50 years were found to be Fe deficient, consistent with other published studies ( , ) . Most recently, a population-based study in primary care ( ) showed that the incidence rate of Fe-deficiency anaemia increased by 51·4 % over a nearly 10-year period in Italy, from 5·9 per 1000 person-years in 2002 to 8·93 per 1000 person-years in 2013, with an incidence rate in females that was almost 4-fold higher than in males. As for women of childbearing age, the present study reveals adequate levels of Fe and ferritin in 76·1 % and 61·9 % of women aged 18–44 years, respectively. Considering the determinants of Fe status, our results confirm that alcohol intake was associated with adequate Fe status, as already shown in earlier investigations ( , ) , but not with ferritin ( ) . Moreover, higher consumption of fruits and vegetables (i.e.
≥ 3 portion/d) was associated with lower levels of Fe but not of ferritin, with evidence for the latter biomarker already reported by other studies ( ) . Scarce and controversial information is currently available on this association ( – ) . As for a possible association between markers of Fe status and cSHMT 1420C > T and the other common one-carbon metabolism polymorphisms analysed, no relationship was observed. Interestingly, however, carriers of the DHFR 19bp del/del genotype showed lower ferritin concentrations when compared with the DHFR 19bp ins/del genotypes. Although there is no evident explanation for this finding, a possible hypothesis relates to the loss of enzymatic function induced by the DHFR 19bp polymorphism, as occurs in other models in which DHFR silencing reduces the development of liver fibrosis by altering the crosstalk between hepatic stellate cells and macrophages. This is a crucial event underlying inflammation ( ) , where ferritin is a marker of macrophage activation ( ) . Another hypothesis may involve the mechanisms regulating ferritin-mediated folate catabolism and the turnover of different folate compounds ( ) , in which the DHFR enzyme may be a key factor in balancing the degradation of labile forms of folate ( ) . Nonetheless, further specific studies would be needed to better explore this topic. The study has some weaknesses and strengths. It is based on a relatively small and specific population, perhaps not representative of the Italian healthy adult population. We evaluated blood donors: these subjects are considered a healthier group, so the results expected in the general population could be different, although we limited this potential bias by balancing the age and sex groups. Moreover, elderly people are not eligible for blood donation and were therefore excluded. Finally, regarding the determinants of Fe status, the specific fruit and vegetable components of the diet and the level of alcohol intake were not investigated. Among the strengths of this study, we consider the overall availability of data on lifestyle and dietary factors along with blood concentrations of Fe and ferritin in an Italian population, which may be helpful for designing public health interventions in the target population, especially in subgroups with special needs. Furthermore, the evaluation of the potential impact of common one-carbon-related genetic variants on Fe status is of interest.
Conclusion
In conclusion, adequate concentrations of Fe and ferritin were reached in a large proportion of an Italian sample of healthy blood donors. The relation of Fe status with lifestyle factors and the one-carbon polymorphisms investigated requires further research to better clarify possible further gene–nutrient interactions involved in folate and Fe metabolism.
Advanced undergraduate medical students’ perceptions of basic medical competences and specific competences for different medical specialties – a qualitative study
10b6c6bc-1847-4450-a479-a4e95268ed06
9341094
Internal Medicine[mh]
After having completed their undergraduate medical studies, graduates should have acquired basic competences that enable them to work independently as physicians . Competences represent the individually developed repertoire of abilities, skills, personality traits, and motivational aspects necessary for successful performance within the medical context . Many countries have defined basic learning objectives for undergraduate education so that their students can achieve this goal [ – ]. Having acquired these basic competences, students should be able to start their postgraduate training in any specialty they choose . For work as a resident, competences like prioritizing work according to clinical urgency or responding to individual patients' health needs are of particular importance in order to accomplish the various medical roles according to the CanMEDS framework for postgraduate education . During postgraduate training, a physician builds on the basic competences acquired in medical school to obtain and develop the specialty-specific competences required for practice in the respective specialty . Medical specialties are characterized by a great diversity in their work requirements, which are associated with different specialty-specific competence profiles as defined by the Requirement-Tracking questionnaire (R-Track) . Very detailed profiles have been described with the R-Track for anaesthesiology and nephrology . Psychomotor and multitasking abilities are particularly needed for specialties with surgical activities, while social interactive competences are of prominent importance for specialties with an intense level of patient-physician interaction, for example, psychiatry or internal medicine . With respect to the specific competence requirements and the great variety of medical specialties, choosing a medical specialty for residency training seems to be a difficult task for medical students, because the choice usually represents a lifelong career decision . The final year of undergraduate medical education or internship, where students get to know different specialties more intensely, provides a good opportunity to explore career options . These experiences of working in a particular medical specialty or learning from role models can help students in their decision to choose a specialty for residency [ – ]. Algorithm-based matching programs are also employed to bring applicants and vacancies together . Their aim is to provide realistic information about the specialties and to identify applicants who would be a particularly good fit . Besides interviews with the candidates, the selection process is mainly based on objective criteria such as assessment scores and academic performance . Other aspects like personality or the assessment of psychomotor skills for surgical activities [ – ] have also been used for applicant selection. The students' reasons for choosing a medical specialty are complex and diverse. They can be based on the students' personality , on specialty-related anticipations such as prestige and income , or on gender-specific career and lifestyle ideas and the anticipated work-life balance of different specialties [ – ]. When applying for a residency position, graduates should have a solid understanding of the competences that are needed in the different medical specialties. Whether medical students have a realistic perspective on the competences required for different medical specialties is not known.
This study aims to identify final-year students' perceptions of basic medical and specialty-specific competences. Comparing the students' perspectives on medical competences with physicians' assessments of the competences required for different medical specialties will provide information on whether medical students have a realistic perception of the competences required for postgraduate training in different specialties. This study's findings will provide insights into whether further competence-based guidance for medical students' choice of specialty for postgraduate education is needed.
Study design and participants
In December 2020, sixty-four advanced medical students from years 4 and 5 of a 6-year undergraduate medical curriculum, 65.6% female and 34.4% male, participated in a competence-based training simulating the first day of residency under pandemic conditions . This training included a telemedicine-based consulting hour with four simulated patients, documentation and management with electronic patient charts, and one case presentation per participant in a virtual round with an attending physician. Participation was voluntary and on a first-come, first-served basis. Eight focus group interviews with eight participants each were conducted directly after the training using a semi-structured interview guideline to identify students' perceptions of basic medical competences and specific competences needed in different medical disciplines. The study was performed in accordance with the Declaration of Helsinki; the Ethics Committee of the Chamber of Physicians, Hamburg, approved this study and confirmed its innocuousness (PV3649). All participants provided informed written consent for participation in this study. All data were anonymized.
Interview guideline and interview conduction
The semi-structured interview guideline was developed based on catalogues of basic medical competences, studies regarding basic medical competences [ , , ] and competence profiles of medical specialties . The interview guideline included a brief introduction about the context of the competence-based training the participants had just completed, questions on skills and abilities needed for general medical tasks, for example, patient consultations, diagnostics, and case presentations, and specific abilities needed in different medical specialties, i.e., anaesthesiology, internal medicine, psychiatry, radiology, and surgery. These specialties were selected as prototypes because they showed significant differences in their competence profiles . With this study, we wished to elucidate whether medical students have a realistic perception of and perspective on the different competence profiles of these specialties. All focus group interviews were conducted by E.Z., videotaped, and transcribed verbatim following simple transcription rules which slightly smoothen speech to focus on content . Interviews were anonymized during transcription. The forward translation of exemplary quotes was performed by SH, a physician who holds a C2 level certificate in English and worked for several years in a hospital in the United States. The back translation was carried out by EZ, a sociologist who has been working in the field of medicine for three years. The translations were checked by VO, a psychologist who has been working in the field of medicine for more than a decade.
Data analysis
We analysed the transcripts with MAXQDA 2020 (Verbi GmbH) using Braun & Clarke's thematic analysis, a qualitative method for identifying, analysing, and reporting patterns and themes within data . We developed a detailed overall description of the dataset and used the semantic approach focusing on the identification of explicit meanings of the data, following a realistic paradigm. The thematic analysis included the following six steps: 1) familiarization with the data, 2) generating initial codes, 3) searching for themes, 4) reviewing themes, 5) defining and naming themes, and 6) producing the report . We inductively generated initial codes and searched for themes and deductively assigned themes with respect to competences and competence areas of the Requirement-Tracking questionnaire (R-Track). It includes 63 items that can be assigned to six competence areas: 1) ' Personality traits ', which includes factors that influence the way someone thinks and acts, 2) ' Social interactive competences ', which consists of skills that involve the way someone communicates, interacts and collaborates with others in a team, 3) ' Mental abilities ', consisting of factors that make up cognitive performance, 4) ' Sensory abilities ', which includes factors influencing the perception of the environment, 5) ' Psychomotor & multitasking abilities ', including factors influencing performance in manual control tasks, and 6) ' Motivation ', consisting of factors measuring goal directed effort that leads to performance and expertise . We chose the R-Track for analysis because it is based on the Fleishman Job Analysis Survey (F-JAS) which can be used to assess skills and abilities for different professions . The R-Track was originally adapted from the F-JAS to identify competence profiles of airline pilots and eventually further adapted for health care professionals . This allows for classification of physician competence profiles in specific, as well as larger, professional contexts.
Basic medical competences
A total of 220 codes were assigned as aspects of basic medical competences.
They could be allocated to 21 Requirement-Tracking questionnaire items, i.e. 33.3% of the 63 R-Track items. These items were represented in four of the six R-Track competence areas (Table ). Skills and abilities belonging to the area ‘ Social interactive competences ’ were mentioned most frequently ( n = 113), followed by ‘ Mental abilities ’ ( n = 39), ‘ Personality traits ’ ( n = 37), and ‘ Motivation ’ ( n = 31). No aspects were mentioned from the competence areas ‘ Sensory abilities’ and ‘ Psychomotor & multitasking abilities ’. The four R-Track competence areas, their identified items and sub-themes are presented in Table and illustrated with examples for an extended overview. Social interactive competences The aspects assigned to the competence area ‘ Social interactive competences ’ covered 47.6% of its 21 R-Track items. Within the item ‘ Structuring information ’, 16 aspects could be directly assigned, and six sub-themes were discovered: ‘ Self-organisation ’, ‘ Selection information’ , ‘ Prioritising information ’, ‘ Weighting information ’, ‘ Time management ’, and ‘ Summarizing information ’. The item ‘ Tactfulness ’ included only the sub-theme ‘ Change of perspective’ . ‘ Staying calm ’ was discovered as a sub-theme of the item ‘ Stress resistance ’. Further aspects mentioned by the students could be assigned to the items ‘ Norms & rule orientation ’, ‘ Orientation toward patients ’, ‘ Coordination & decision making ’, ‘ Delegation / Delegating ’, ‘ Persuasiveness ’, ‘ Sovereignty ’, and ‘ Resistance to monotony ’. Mental abilities The aspects assigned to the competence area ‘ Mental abilities ’ covered 21.4% of its 14 R-Track items. Within the item ‘ Concentration ’, three sub-themes were discovered: ‘ Focusing ’, ‘ Attentiveness ’, and ‘ Mindfulness ’. Further aspects mentioned by the students could be assigned to the items ‘ Clarity of speech ’ and ‘ Memory capacity ’. Personality traits The aspects assigned to the competence area ‘ Personality traits ’ covered 41.7% of its 12 R-Track items. The item ‘ Honesty ’ included five sub-themes: ‘ Being unprejudiced ’, ‘ Self-reflection ’, ‘ Dealing with ignorance ’, ‘ Asking for help ’, and ‘ Transparency ’. Further reported items were ‘ Openness to novelty ’, ‘ Flexibility ’, ‘ Prudence ’, and ‘ Cooperation / Agreeableness ’. Motivation The aspects assigned to the competence area ‘ Motivation ’ covered 60% of its 5 R-Track items. The item ‘ Expertise ’ approached with four sub-themes: ‘ Communication techniques ’, ‘ Pattern recognition ’, ‘ Technical skills ’, and ‘ ’. Further aspects mentioned by the students could be assigned to the items ‘ Thoroughness ’ and ‘ Endurance ’. Specialty-specific competences A total of 231 codes were assigned to five different medical specialties: anaesthesiology ( n = 55), internal medicine ( n = 42), psychiatry ( n = 52), radiology ( n = 39) and surgery ( n = 43). These included basic competences that were mentioned per specialty and specialty-specific competences. Table shows the newly mentioned specialty-specific aspects at individual item level that were not already discussed as basic competences. While for anaesthesiology only one item from the area ‘ Psychomotor & multitasking abilities ’ was mentioned as being specialty-specific and for internal medicine only one item each from the areas ‘ Mental abilities ’ and ‘ Personality traits ’, many more items from different competence areas were identified as being specialty-specific for psychiatry, radiology, and surgery. 
Figure shows the percentage of total R-Track items mentioned per specialty versus basic items from the six competence areas, which were also mentioned for the respective specialty as being specialty-specific. In the competence area ' Social interactive competences ', specialty-specific competences occurred only for psychiatry (4.8%). In the area ' Mental abilities ', new aspects were mentioned for internal medicine (7.1%), psychiatry (7.1%), radiology (21.4%), and surgery (14.6%). In the area ' Personality traits ', specialty-specific competences occurred for internal medicine (8.3%), psychiatry (25%) and surgery (16.7%). No additional aspects were mentioned for any specialty regarding the area ' Motivation '. Specialty-specific aspects from the area ' Sensory abilities ' included only new aspects and occurred only for radiology (44.4%) and surgery (33.3%). With regard to the area ' Psychomotor & multitasking abilities ', specialty-specific aspects were only mentioned for anaesthesiology (50%) and surgery (50%).
Medical students recognized many essential aspects related to basic competences needed by physicians. The highest number of aspects was found in the competence area of ' Social interactive competences ', which represents a core component of undergraduate medical education . The students mentioned, for instance, ' Structuring information ', ' Tactfulness ' and ' Stress resistance ' from this competence area. Structuring information about patients is a central aspect of clinical reasoning that constitutes a basic competence for all specialties, as shown, for example, for internal medicine or orthopaedics . Tactfulness is of great importance in patient-physician interaction and an essential component of medical professionalism . Stress resistance is an important aspect for health professionals because their work is often associated with high levels of stress, which can have a negative impact on professional performance and quality of patient care . As a basic mental skill, students emphasized the ability to concentrate, which has been shown to be closely linked to clinical decision-making and, in surgery, to be needed to execute difficult manual work . ' Honesty ' was a particularly important personality trait for physicians in general from the students' perspective. The patient-physician relationship constitutes a special interpersonal relationship based on honest information about the diagnosis and the outcome .
Students were also aware that medical expertise, an aspect of the area ' Motivation ', is particularly important. Competence profiles of different medical specialties showed that ' Motivation ' was the highest rated competence area in almost all specialties . Motivation is a fundamental aspect of the medical profession and is also decisive for the choice of a specialty. In the context of choosing a specialty, it is particularly interesting to see whether the students' ideas match those of the specialties. For the five investigated specialties, the students mentioned at least one aspect from a competence area that had not been mentioned for the respective specialty as a basic competence. Surgery showed the greatest differences between basic and specialty-specific competences. Aspects from the two competence areas ' Sensory abilities ' and ' Psychomotor & multitasking abilities ' were only mentioned as being specialty-specific for surgery. ' Psychomotor coordination ' is acquired in postgraduate training, for example, with laparoscopic or arthroscopic simulators . As an additional surgery-specific aspect from the competence area ' Mental abilities ', ' Problem comprehension ' was mentioned, which is required when selecting patients for surgical treatment . ' Emotional stability ' was additionally addressed as an exemplary aspect of ' Personality traits ', which has been shown to be higher in surgeons than in the population norms . The competence area ' Sensory abilities ' was newly added by our participating students as being specialty-specific for radiology and included the aspects ' Perceptual range ' and ' Perceptual speed ', which can be measured with radiology-specific tests . Several aspects from the competence area ' Mental abilities ' were mentioned as specialty-specific aspects of radiology, for example, ' Written expression '. Indeed, the written radiology report is a key component in the communication between radiologists and referring physicians . ' Mental abilities ' emerged as a new specialty-specific competence area for psychiatry, with ' Problem comprehension ' being a relevant aspect. In residency training, problem-based conferences were a successful teaching method for psychiatry residents to acquire psychiatric patient management . From the competence area ' Personality traits ', students particularly mentioned ' Emotional stability ' and ' Openness to other people ' as specialty-specific for psychiatry. In personality analyses of physicians from different specialties, psychiatrists have been found to reach high scores for ' Emotional stability ' and ' Openness ' . ' Mental ability ' was also a newly mentioned competence area for the specialty of internal medicine, with the aspect ' Written expression ', which can be trained during internal medicine residency by the scholarly activity of writing case reports . ' Emotional stability ' was identified as an internal medicine-specific aspect in the competence area ' Personality traits ' and seems to be highly necessary, since 76% of internal medicine residents met the criteria for burnout . ' Psychomotor coordination ' was mentioned by the students as the only new specialty-specific aspect, from the additional specialty-specific competence area ' Psychomotor & multitasking abilities ', for anaesthesiologists. Indeed, good manual movement and hand–eye coordination is necessary for anaesthesiologists to perform complex psychomotor tasks such as nasotracheal intubation .
Overall, the students had a good perception of the competences needed for different specialties as assessed by physicians from the respective specialties . The best match was found for psychiatry. For surgery and radiology, the students overestimated the relevance of ' Sensory abilities ', and they underestimated it for anaesthesiology. They also overestimated ' Social interactive competences ' for anaesthesiology, while they underestimated these for internal medicine. The students somewhat underestimated ' Motivation ' for surgery and seem to have slightly overestimated ' Personality traits ' for this specialty. A limitation of our study was that the respondents came from only one medical school. Since their participation in the simulation was voluntary, self-selection could have led to particularly interested and engaged participants. Furthermore, we did not distinguish between male and female participants in the focus groups, which could have led to somewhat distorted results. However, the distribution of male and female participants resembled the distribution among medical students in general at our medical school. Interestingly, 42 of the 63 competences were not mentioned by the students. These include mostly general competences like 'Comprehension', 'Memory capacity', or 'Sociability'. Since we did not specifically discuss the competences that were not mentioned with the students with respect to their relevance for physicians or specific specialties, it remains unknown whether students took them for granted or regarded them as irrelevant. This needs to be addressed in further studies. A strength of this study is the semi-structured interviews conducted immediately after the training. The simulation experience made it easier for students to visualize the competences they needed for independent medical practice rather than thinking of their abstract definitions. The data collection in association with the training allowed participants to talk openly about their experiences while being guided thematically by the interviewer. With this qualitative approach, we have provided a first insight into the perceptions of advanced medical students on required basic and specialty-specific competences. A closer look at the specialties of anaesthesiology, internal medicine, psychiatry, radiology, and surgery showed that the students already had quite good perceptions of basic competences, but there were still some inconsistencies with regard to the specialty-specific competences. Students should compare their ideas about a specialty they would like to choose for postgraduate training with the competence profile suggested by physicians from the respective specialty. This could lead to a more realistic picture of specialty-specific competence requirements and eventually prevent dropouts from postgraduate training. Additionally, medical educators could provide specialty-specific training for undergraduate students in clerkships for competence areas which are specifically required by a specific specialty. The medical students in this study seem to have developed a good perception of the necessary basic competences for clinical practice. With regard to the specific competence requirements of different disciplines, a high degree of agreement on specialty-specific competences between students and physicians was only found for psychiatry, while a lack of consensus with regard to specialty-specific competences remained for anaesthesiology, internal medicine, surgery, and radiology.
Incorrect perceptions of specialty-specific competences could lead to wrong concepts about what to expect of residency training in the respective specialty. Students should be invited to compare their ideas of specialty-specific competence profiles with the competence requirements as assessed by physicians from a respective specialty to get a realistic impression of specialty–specific postgraduate training. Courses during undergraduate education in specialty-specific competences could also prepare students to develop a realistic impression of the different competence profiles of medical specialties and support their choice of specialty for residency training.
Helpfulness of Question Prompt Sheet for Patient-Physician Communication Among Patients With Advanced Cancer
24096be9-0481-4c4a-b45a-ffc19182dc73
10155065
Internal Medicine[mh]
A question prompt sheet (QPS) is a structured list of potential questions that are available for patients to ask physicians during a clinical encounter. It may allow practitioners to meet patients’ desired information needs, assist with decision-making, and improve the overall communication process. This is vital because patients sometimes are unsure about the questions to ask their physicians, forget to ask the relevant questions, or feel uncomfortable to ask certain questions. , A QPS may also prevent physicians from conveying unsolicited and potentially distressing information to patients. Studies have demonstrated the value of a QPS in patient-physician interactions in diverse fields of medicine. , , , , , , , However, there is insufficient data regarding the utility of a QPS among patients with advanced cancer. , Moreover, very few methodologically robust evaluations of a QPS in a head-to-head comparison with an attention control group have been conducted. The main objective of this study was to compare patients’ perceptions about the helpfulness, overall global evaluation, and preference for a systematically developed QPS vs a standard general information sheet (GIS) during patient-physician encounters. We also examined the effect of the QPS on participants’ anxiety, participants’ speaking time, the number of questions asked, and the length of the clinical encounter. Study Design, Participants, Procedures This randomized clinical trial was approved by the institutional review board of the University of Texas MD Anderson Cancer Center, Houston. All participants provided written informed consent. The trial protocol and statistical analysis plan are available in . The study followed the Consolidated Standards of Reporting Trials ( CONSORT ) reporting guideline. This trial was conducted among patients seen at the outpatient Palliative and Supportive Care Clinic at the University of Texas MD Anderson Cancer Center from September 1, 2017, to May 31, 2019. This clinic sees patients with advanced cancer who are referred by their primary oncologists for the management of complex physical, psychosocial, and spiritual needs, as well as assistance with medical decision-making and overall goals of care. Eligible patients were aged at least 18 years, had a cancer diagnosis, were undergoing their initial outpatient consultation visit with 1 of 10 palliative care physicians, and could read and communicate in English. After providing written informed consent, patients completed baseline questionnaires and were then randomly assigned in a 1:1 fashion to receive either the QPS or the GIS 30 minutes prior to their physician consultation. Randomization was conducted by the biostatistician via the institution’s clinical trial conduct website using the Pocock-Simon method. Patients were stratified by physician to carefully control for physicians’ impact on the primary end point. Both interventions were concealed in identical opaque envelopes. Patients, research staff who enrolled the patients, and physicians were blinded to the study assignments. Patients were encouraged to read the information material before the visit. Physicians were asked to endorse the use of the information material during the encounter by asking the patient if they had any questions, and either explaining why it was important to ask questions or inviting the patient more than once to ask questions. , Conversations were audiotaped and later transcribed. 
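For illustration, the covariate-adaptive allocation described above can be sketched in code. The snippet below is a simplified, single-factor version of Pocock-Simon minimization (stratified by physician only); the physician identifiers, the 0.8 assignment probability, and the function name are illustrative assumptions rather than details reported for this trial.

```python
import random
from collections import defaultdict

def minimization_assign(counts, physician, p_favor=0.8, rng=random.Random(0)):
    """Assign 'QPS' or 'GIS' for a new patient seen by `physician`,
    favoring the arm that reduces the within-physician imbalance.

    counts: dict mapping physician -> {'QPS': n, 'GIS': n}, updated in place.
    p_favor: probability of choosing the balance-restoring arm (assumed value;
             the trial text does not report this parameter).
    """
    stratum = counts[physician]
    # Hypothetical imbalance if the next patient were placed in each arm.
    imbalance = {
        arm: abs((stratum['QPS'] + (arm == 'QPS')) - (stratum['GIS'] + (arm == 'GIS')))
        for arm in ('QPS', 'GIS')
    }
    if imbalance['QPS'] == imbalance['GIS']:
        arm = rng.choice(['QPS', 'GIS'])          # tie: simple 1:1 coin flip
    else:
        preferred = min(imbalance, key=imbalance.get)
        other = 'GIS' if preferred == 'QPS' else 'QPS'
        arm = preferred if rng.random() < p_favor else other
    stratum[arm] += 1
    return arm

counts = defaultdict(lambda: {'QPS': 0, 'GIS': 0})
for i, physician in enumerate(['MD01', 'MD02', 'MD01', 'MD03', 'MD01']):
    print(i, physician, minimization_assign(counts, physician))
```

With more than one stratification factor, the imbalance term would be summed over all factors before choosing the favored arm.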
At the end of the consultation, patients completed questionnaires assessing their views about the information material they received, their overall satisfaction with the consultation, and their anxiety level. The participating physicians also completed a physician assessment questionnaire. In an exploratory open-label format, patients who returned for follow-up at 4 weeks (±7 days) openly received both the QPS and the GIS 30 minutes prior to seeing their physician and were encouraged to use the materials in preparation for their visit. After the visit, they indicated which of the materials they preferred. Data Collection Patients’ demographic and clinical characteristics were obtained from their medical records. Race and ethnicity were categorized as Asian, Black, Hispanic or Latino, White, and other (including American Indian or Alaskan Native, refused to answer, and unknown). Race and ethnicity were included in analyses because we wanted to explore any potential association between the use of the communication aids and those variables. The deidentified audio recordings were transcribed by a professional medical transcription company. The number and types of questions that patients asked were carefully and independently extracted from the transcribed data by one experienced investigator (V.P.) and then verified by a second investigator (J.A.); any discrepancies were discussed in detail until a mutual agreement was reached. Study Interventions The QPS (eAppendix 1 in ) is a single-page list of 25 questions that was developed by an expert panel of clinicians using a Delphi process and later tested for its content validity among a group of patients and caregivers attending an ambulatory palliative medicine clinic. The GIS (eAppendix 2 in ) is a single page of generic informational material that was created by our group and is routinely provided to patients who are seen at the clinic. It contains general patient information about palliative care and other related information felt to be relevant to new patients. Questionnaires and Outcome Measures The primary outcome, patients’ perception of helpfulness, and other views about the information materials were assessed immediately after the consultation using the Patient Assessment Questionnaire. This is a 7-item, 0- to 10-point scale that assessed the extent to which patients felt the material helped them to communicate with their physician, was clear or easily understandable, had the right amount of information, would be recommended to other patients, did not make them anxious, helped them to think of questions or concerns they had not previously thought of, and would be used in the future. The mean score across all 7 individual patient ratings was calculated to obtain the global perception score, with higher score indicating more positive perception. The questionnaire has been used in several previous studies. , , Patients’ satisfaction with the consultation was assessed using the Patient Satisfaction Questionnaire, , , a 5-item visual analogue scale ranging from 0 to 100, with an internal reliability (Cronbach α) of 0.90 and higher score indicating more satisfaction. Patient anxiety was measured by the Spielberger State Anxiety Inventory, a 20-item self-report scale with high reliability ( r = 0.93), internal consistency, and validity. Scores range from 20 to 80, with higher score indicating greater anxiety.
Baseline patient preferences for information were measured using 2 items from the Cassileth Information Styles Questionnaire, with 1 item consisting of a 5-point Likert scale that assessed the amount of detail a patient preferred (1 indicates very little; 5, as much as possible) and the other item a multiple choice question asking what kind of information a patient preferred, with options “I want only the information needed to care for myself properly,” “I want additional information only if it is good news,” and “I want as much information as possible, good and bad.” Baseline patient preferences for level of involvement in decision-making were assessed with the validated Control Preferences Scale. , , Overall preference for the QPS or GIS was assessed using a single multiple-choice question: “Now that you have had the opportunity to use the two different information materials, overall, which of them would you prefer to use in communicating with your doctor?” Patients could select whether they preferred either material a little or a lot more, or whether they had no preference. The Physician Assessment Form asked physicians to indicate on a scale of 0 to 10 points their perception about the helpfulness of the information material to the patient, its effect on the visit duration, and their overall satisfaction with the consultation, with higher score indicating more positive perception. Other outcome measures included the total number and types of participant questions, speaking times, and overall consultation duration. Statistical Analysis The primary outcome was patients’ perception of helpfulness (0-10 scale) of the informational material. A 2-sample t test was applied to examine the outcome difference between the QPS and the GIS groups. With 136 enrolled patients and a 5% attrition rate, we estimated 80% power to detect a difference in means of 2 on a 0- to 10-point scale of the primary outcome, assuming an SD of 4 and using the 2-sample t test with a 2-sided significance level of P = .05. Summary statistics, such as means and SDs, were used to describe continuous variables, while frequencies and percentages were used to describe categorical variables. Similar 2-sample t and χ2 tests or the Fisher exact test were used to examine group differences for selected secondary outcomes. A χ2 goodness-of-fit test was used to assess patients’ overall preference after using both information materials concurrently. Associations of demographic or clinical factors with the primary outcome were assessed using ordinary least-squares regression. The analysis was modified intention-to-treat because 5 randomized patients (3.7%) who did not receive the allocated intervention and had missing data were excluded. P = .05 was used to determine statistical significance for all secondary outcome analyses, given that this portion of the analyses was exploratory and for hypothesis-generating purposes. Data were analyzed using Stata/SE version 16.1 (StataCorp). Data were analyzed from May 18 to June 27, 2022.
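As a rough check of the sample-size assumptions stated above (mean difference of 2 points, SD of 4, 80% power, 2-sided α = .05, and roughly 5% attrition), the calculation can be reproduced with standard software. The sketch below uses Python and statsmodels for illustration; it is not the Stata code used for the actual analyses.

```python
# Rough check of the reported sample-size calculation: detecting a mean
# difference of 2 points with SD 4 (standardized effect size d = 2/4 = 0.5)
# at 80% power and a 2-sided alpha of .05, then inflating for ~5% attrition.
from math import ceil
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=2 / 4,
                                           alpha=0.05,
                                           power=0.80,
                                           ratio=1.0,
                                           alternative='two-sided')
n_analyzable = 2 * ceil(n_per_group)       # ~64 per group -> 128 total
n_enrolled = ceil(n_analyzable / 0.95)     # allow for ~5% attrition
print(round(n_per_group, 1), n_analyzable, n_enrolled)   # ~63.8, 128, 135
```

The result (about 135 patients to enroll) is consistent with the stated target of 136 enrolled patients.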
A total of 135 eligible patients were randomly assigned to receive either the QPS or GIS. After excluding the 5 randomized patients (3.7%) who did not receive the allocated intervention, data were available for 130 patients (mean [SD] age, 58.6 [13.3] years; 79 [60.8%] female), including 67 patients (51.5%) randomized to the QPS and 63 patients (48.5%) randomized to the GIS. There were no significant differences in the baseline demographic and clinical characteristics between the 2 groups. Perception of helpfulness was equally high, with no statistically significant difference between the QPS and the GIS groups (mean [SD] helpfulness score, 7.2 [2.3] points vs 7.1 [2.7] points; P = .79). The QPS prompted participants to think of new questions more than the GIS did (mean [SD] score, 7.0 [2.9] vs 5.3 [3.5]; P = .005). Participants had a higher global perception score for the QPS than the GIS (mean [SD] score, 7.1 [1.3] vs 6.5 [1.7]; P = .03). All 47 participants who returned for their 4-week follow-up appointment participated in the open-label phase. The demographic and clinical characteristics of patients who returned and those who did not were not significantly different, including age, race, cancer type, type of intervention received at the initial visit, and Edmonton Symptom Assessment System (ESAS) total Symptom Distress Score. Therefore, the informative missingness of the data was largely ignorable. After using both informational materials concurrently, more participants preferred the QPS to the GIS in communicating with their physicians (24 patients [51.1%] vs 7 patients [14.9%]; no preference: 16 patients [34.0%]; P = .01). In a separate analysis, there were no differences in the effects of the QPS and GIS on physicians’ perceptions of the helpfulness (mean [SD] score, 6.79 [2.74] vs 6.27 [2.96]; P = .32), the consultation length (mean [SD] score, 8.33 [2.53] vs 8.52 [2.14]; P = .67), or overall satisfaction (mean [SD] score, 8.74 [1.38] vs 8.72 [2.06]; P = .95). The mean physician speaking time was not significantly different between the 2 groups (eTable in ). Participants in the QPS group spoke less than those in the GIS group (mean [SD] time, 8.0 [5.3] minutes vs 10.0 [5.3] minutes; P = .06).
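The preference comparison reported above can also be checked arithmetically. The sketch below assumes the χ2 goodness-of-fit test compared the three response categories against an equal-split null; that assumption is not stated explicitly in the text, but it reproduces the reported P value of .01.

```python
# Arithmetic check of the reported preference comparison (QPS preferred: 24,
# GIS preferred: 7, no preference: 16). The expected counts default to a
# uniform split across the three categories (an assumption, since the paper
# does not state the null proportions).
from scipy.stats import chisquare

observed = [24, 7, 16]                    # QPS, GIS, no preference (n = 47)
stat, p_value = chisquare(observed)       # goodness-of-fit against equal split
print(f"chi2 = {stat:.2f}, p = {p_value:.3f}")   # chi2 ≈ 9.23, p ≈ 0.010
```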
Both groups asked more treatment-related questions and fewer prognosis- and end-of-life–related questions. No significant difference was observed between the QPS and the GIS groups regarding the number and types of questions asked. Overall, both groups were equally satisfied with the consultation (mean [SD] score, QPS: 95.01 [10.51] vs GIS: 93.90 [14.18]; P = .63). Patients’ changes in anxiety scores from baseline were also similar in both groups (mean [SD] anxiety rating, 2.3 [3.7] vs 1.6 [2.7]; P = .19). Factors associated with participants’ perception of the helpfulness of the information material they received were also examined. Compared with White patients, Black and Hispanic patients were significantly more likely to perceive either of the informational materials they received as helpful (coefficient, 1.95; 95% CI, 0.72 to 3.18; P = .002). In addition, older age (coefficient, 0.04; 95% CI, 0.01 to 0.07; P = .02) and lower ESAS depression (coefficient, −0.20; 95% CI, −0.38 to −0.01; P = .04) were associated with greater perceived helpfulness of the informational material. In this randomized clinical trial, patients perceived both the QPS and GIS as helpful when communicating with their physician, with no significant difference between groups. However, patients felt the QPS facilitated the generation of new questions. They also had a better overall global view of the QPS, and after using both materials concurrently during a follow-up visit, patients preferred the QPS to the GIS for communicating with their physicians. Previous studies by our group have reported the perceived helpfulness of the QPS during patient-physician communication. , In a randomized clinical trial comparing a disease-specific QPS with a GIS among 60 women with breast cancer consulting with their medical oncologists, we found that patients perceived the QPS as more helpful than the GIS. Although participants in this study perceived both materials as helpful, their better global view of and relative preference for the QPS validate its value in routine clinical care and further underscore the need for its integration in clinical guidelines and health policies. The use of a GIS as an attention control group in this study allowed for a more rigorous and robust evaluation of the QPS. Only a few studies have compared the QPS with another communication aid. Moreover, data on the focal evaluation of patients’ perceptions about the QPS’ utility are limited. The QPS did not increase patient anxiety during the clinical encounter. This should reassure health care practitioners who may be concerned that the QPS questions will be emotionally upsetting and negatively impact patients’ psychological outcomes. Several studies have examined the association between the use of a QPS and patient anxiety. , , , , , , , , Many did not find any significant association with anxiety, , , , while a few studies showed a decrease in patient anxiety levels immediately after, 6 weeks after, and 4 months after the initial consultation. A study by Brown et al randomized 318 patients with cancer consulting with their oncologists to either receive or not receive a QPS and found that QPS patients whose physicians passively responded to questions from the QPS had higher anxiety than those whose physicians proactively addressed questions from the QPS and than controls. We found that the QPS neither prolonged the duration of the visit nor increased the physician or patient speaking time.
In fact, participants in the QPS group spoke less than did those in the GIS group, suggesting that the QPS may improve the efficiency of communication without prolonging clinical encounters. Previous studies by our group and others also observed no association of the QPS with consultation length. , , , , , In a randomized clinical trial of 174 patients with advanced cancer who were assigned to receive either the QPS or standard consultation without QPS, Clayton et al found that QPS consultations were longer than controls, probably because a longer 20-page QPS brochure consisting of 112 items was used in that study. It is conceivable that such observation was not found in this study because we used a disease-specific, single-page 25-item QPS. Future studies are needed to investigate the effect of QPS length on consultation duration. Although patients felt the QPS facilitated generation of new questions, it did not result in an increase in the number of questions asked. The goal with the use of a QPS is to empower patients to generate and ask essential questions that meet their information needs. The QPS may effectively improve communication quality without necessarily increasing the number of questions that patients ask. Patients may be able to ask their most meaningful questions rather than simply asking more questions. In that regard, patient self-report of the helpfulness of the material might be a highly reliable indicator of benefit from the information material. Further studies are needed to ascertain the best means of measuring the true utility of the QPS. Compared with previous findings, , , patients in this study asked more treatment- and symptom-related questions and fewer prognosis and end-of-life–related questions. This may be because a considerable number of them were still receiving disease-directed therapy and therefore had a particular interest in treatment- and symptom-related questions and concerns. In clinical settings, such as the inpatient palliative care units where patients have more advanced disease, prognosis and end-of-life questions might be more relevant. Moreover, patients might have preferred to first focus on their acute issues and would eventually discuss the more sensitive prognosis and end-of-life issues once their acute physical symptoms were addressed and they had the opportunity to build a closer therapeutic relationship with their physicians. The reason why Black and Hispanic patients were more likely to perceive the information material as helpful is unclear, but it suggests that a written material that aids in patient communication might be particularly valued by members of racial and ethnic minority groups, including Black and Hispanic patients. In a different study, the QPS was found very acceptable by Black patients with cancer and effectively increased their active participation in racially discordant interactions. Similarly, our findings also suggest that an informational material may be particularly useful to older patients in guiding them to navigate important conversations with their physicians. Major medical organizations, such as the National Cancer Institute, the National Academy of Medicine, and the American Society for Clinical Oncology, have alluded to the benefits of good communication in quality of care and emphasized the need for improved patient-physician communication among patients with advanced illnesses. , , , The QPS is a simple, inexpensive tool that might help in achieving this goal. 
Despite increasing evidence regarding the utility of QPS in physician-patient consultations, it has not been fully adopted and implemented in oncologic settings. Some barriers to its full implementation include a feeling among patients of being overwhelmed by the sheer amount of written information. , It is challenging to develop a universal QPS that suits all patients’ needs in view of the vast diversity within the population of patients with cancer and the dynamic nature of patient-physician communications. Wide variations in patient learning styles, communication goals, degrees of knowledge, and emotional capabilities may present real challenges in using a standardized QPS for all. One potential solution is to ensure that the development of a QPS is distinctively tailored to specific patient populations to enhance its efficacy. An electronic health system that integrates an interactive QPS that allows patients to generate their own list of questions based on their individual preferences and information needs would be ideal. Limitations This study has some limitations. One limitation is that it was conducted at a single tertiary academic center. Therefore, the results might not be generalizable to other clinical settings. In addition, we were unable to record the specific QPS questions that participants eventually asked during the visit. A better understanding of how participants used the material in real time and which questions were the most useful should be a focus in future research. Another limitation is that hypothesis testing for the secondary end points is considered exploratory when the primary end point does not show statistical significance, which was the case for this study. Additionally, the study was conducted among ambulatory patients with relatively good functional status. Future studies should include patients in acute inpatient settings, since they might have different symptom severity and therefore different outcomes. This randomized clinical trial found that patients perceived both QPS and GIS as equally helpful in communicating with their physician during consultation. However, they had a more positive global evaluation of the QPS and preferred it to the GIS. The QPS reportedly facilitated the generation of new questions without increasing patient anxiety or prolonging the consultation visit. The findings support the adoption, integration, and implementation of QPS in routine oncologic care.
Interfaces between oncology and psychiatry
09c1d3e7-9c5f-4f82-8986-2d36cfe3f7d6
11164258
Internal Medicine[mh]
Cancer caused nearly 10 million deaths worldwide in 2020, or one-sixth of all deaths. Cancer is a difficult disease with many physical and psychological effects. Pain, weariness, and changes in appearance may occur, and patients may struggle with depression, anxiety, and hopelessness. The illness and its treatment raise difficult life situations as well as existential and spiritual questions. The patient and their family and friends suffer during this long and difficult journey , . Mental health problems are 30–60% more common in cancer patients. In addition, 29–43% of these individuals fulfill diagnostic criteria for a mental illness – , with a 1.28-fold greater likelihood than controls . Depression, adjustment difficulties, anxiety disorders, and delirium are common in these patients. Patients with advanced disease have a higher frequency of these disorders and a worse prognosis – . Unfortunately, the occurrence of these common disorders, which have the potential for successful treatment, is underestimated and undertreated in cancer patients. Only 10% of these persons are referred for mental health services, according to empirical evidence – . The problems of stigma and discrimination, loss of dignity, poorer health behavior, and lack of integration in health-care services for people with severe mental disorders need to be addressed and solved in cancer care – . Because of the complexity of cancer, interdisciplinary collaboration across medical disciplines is needed to advance cancer research and improve clinical care. Allowing cancer patients to communicate their fears can support the management of psychological distress. Cognitive-behavioral therapy, crisis intervention, problem-solving, supportive, and group psychotherapy have been shown to reduce distress and improve the quality of life in cancer patients . Psychotropic drugs and a psychiatrist are needed for severe and long-lasting symptoms. The differential diagnosis of mental conditions requires a thorough and specialized examination to distinguish between primary and secondary causes . Despite the importance of recognizing and correctly managing mental disorders in cancer patients, there is still little information in the literature on the subject. Therefore, this article aims to present a narrative review regarding the interfaces between oncology and psychiatry, in addition to discussing how the psychiatrist can assist the oncologist and other professionals who deal with oncological diseases in the correct management of mental disorders, with a focus on improving prognosis and quality of life. A narrative review was carried out using the following keywords according to MeSH: oncology AND mental disorders. There was no restriction by language or date. The following articles were included: meta-analyses, systematic and non-systematic reviews, guidelines, clinical trials, cohort, case-control, and cross-sectional studies. The following were excluded: case reports, case series, editorials, letters to the editor, and abstracts in event annals. Based on their technical knowledge and experience, the authors selected articles for inclusion in the final text by convenience. Many cancer patients experience psychological anguish after their diagnosis and during treatment, regardless of stage. Distress includes unfavorable experiences influenced by cognitive, behavioral, emotional, social, spiritual, and physical variables. It can impair cancer management, including symptoms and therapy. Distress ranges from vulnerability, sadness, and anxiety to severe suffering and psychological and social impairment, which may indicate a mental disorder – .
Stress can result from the cancer diagnosis and the many changes that occur during treatment and afterward. Despite advances in cancer detection and treatment, the prevalence of long-term side effects outweighs the efficacy of cancer treatments in improving survival rates across all age groups. Patients' everyday activities are hindered by weariness, discomfort, worry, and sadness – . Those with a history of mental illness, depression, or substance abuse are more likely to experience moderate or severe distress. Cognitive impairment, major concomitant diseases, uncontrolled symptoms, communication issues, and social barriers increase risk. Younger age, living alone, having young dependents, and earlier trauma and abuse—physical, sexual, emotional, and verbal—are social challenges and risk factors. Understanding cancer genetics is linked to emotional and cognitive distress. Distress has been linked to non-adherence to oncological treatment, increased difficulty making treatment decisions, increased medical appointment frequency, prolonged hospital stays, decreased quality of life, decreased surveillance examination participation, reduced physical activity, and limited smoking cessation progress – . Support from a psychiatrist and early differential diagnosis contribute to a better prognosis and improved quality of life and can prevent the emergence of a mental disorder or its worsening when one already exists. Management of mental disorders in oncology Delirium Neurocognitive impairment caused by brain dysfunction is sometimes called delirium. Changes in consciousness occur suddenly in this state. Patients may develop this neurocognitive and behavioral syndrome at any stage of cancer development, including at diagnosis. The condition may result from cancer, medications, surgery, or nonmalignant diseases including myocardial infarction , . Advanced-stage cancer patients have up to a 90% chance of developing delirium in their final hours, days, and weeks. The most used screening tool is the Confusion Assessment Method (CAM). Delirium's four main features—(1) sudden onset and fluctuating course, (2) inattention, (3) disorganized thinking or decreased cognitive functioning, and (4) altered consciousness—form the basis for CAM diagnosis. The CAM requires features (1) and (2), plus either (3) or (4), to diagnose delirium , .
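For illustration, the CAM decision rule described above can be written as a simple check. This is a schematic sketch of the published rule (features 1 and 2 plus either 3 or 4), not a validated screening implementation; the parameter names are paraphrases of the CAM features.

```python
# Schematic sketch of the CAM decision rule: delirium is flagged when
# features (1) and (2) are present together with either (3) or (4).
def cam_positive(acute_onset_fluctuating: bool,
                 inattention: bool,
                 disorganized_thinking: bool,
                 altered_consciousness: bool) -> bool:
    return (acute_onset_fluctuating and inattention
            and (disorganized_thinking or altered_consciousness))

# Example: acute fluctuating confusion with inattention and drowsiness.
print(cam_positive(True, True, False, True))   # True -> consistent with delirium
```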
Delirium is treated with pharmacological and nonpharmacological methods. Doctors, nurses, and caregivers must collaborate on nonpharmacological treatments. Healthcare practitioners try to alleviate patient stress while guaranteeing patient safety and integrity. Patient and staff safety must always come first. To prevent patient, caregiver, and staff harm, lines and catheters must be repaired immediately , . A recommended routine includes bed exercises and walking. Physical restraints can worsen symptoms and cause psychological distress; thus, they should be minimized. Patients' needs, including toilet access, must be met immediately. Superfluous procedures and annoying inputs such as light, noise, and bustle should be reduced. Eyeglasses and hearing aids can remedy visual or auditory impairments. To ensure comfort and familiarity, a familiar person should be positioned near the patient. Family and carers should be informed about delirium and its progression , . This effort educates caregivers and family members on patient support and agitation management. Medical experts should deliver this instructional intervention. Before starting medication, delirium's multiple causes must be identified and treated. Opioids and other risky drugs should be avoided. Infections must be treated and hydration maintained to help eliminate renally excreted metabolites. Antipsychotics including olanzapine, quetiapine, and aripiprazole may help cancer patients with delirium by promoting calm , . Anxiety disorders Threats cause psychological and bodily anxiety. Cancer is a life-threatening condition that can cause worry in many individuals. In one study, 77% of 913 patients experienced anxiety within 2 years of treatment. Anxiety disorders have several symptoms. Quantitatively excessive reactions, such as anxious adjustment disorder, often occur within a month of stress , . Generalized anxiety disorder (GAD) requires more symptoms than anxious adjustment disorder and symptom persistence for 6 months. In these conditions, anxiety often seems free-floating, without a precipitant or intensification pattern. Panic disorder causes anxiety to build to a peak. Phobic anxiety only responds to certain triggers, causing anticipatory avoidance. Medical facilities and therapies can cause phobias, and animal and social phobias may precede cancer. A descriptive classification of anxiety disorders is common. Regardless of its qualities, aberrant anxiety caused by an organic stimulus is called organic anxiety. Drugs like interferon can cause organic anxiety in cancer patients. Depression and anxiety might arise together. Cancer specialists are responsible for diagnosing cancer patients' anxiety, yet they are still poor at recognizing and treating patients with mental disorders. Many questionnaires have been used to measure psychological discomfort and depression in cancer patients. All these procedures perform poorly when compared with standardized psychiatric interviews, and their use does not improve depression or anxiety outcomes. The explanation for these poor results remains unclear. Several self-report surveys measure anxiety specifically , . However, their relative effectiveness in detecting elevated anxiety levels is unclear. Identifying high-risk populations may help discover anxiety disorders. Younger people, women, and the disadvantaged are more likely to worry. Anxiety symptoms rise following a cancer diagnosis but decrease with time. Several contextual variables affect cancer patients' anxiety. Cancer research has traditionally examined anxiety as a continuum rather than at pathological levels, making it unclear how cancer-related conditions affect anxiety disorders or adaptive normal anxiety. People with such symptoms must consult a doctor. Scales aid identification. The nosological diagnosis should guide treatment, which includes psychotherapy, with cognitive-behavioral psychotherapy being the most common, and psychotropic medicines of various durations , . Mood disorders Mood disorders pose a substantial health and economic burden across the globe . Due to their chronic, often recurrent nature and common pathophysiological pathways, mood disorders have been associated with a host of physical conditions and illnesses, including cardiovascular disease, diabetes, gastroesophageal reflux disease, asthma, arthritis, and bone fracture. Moreover, mortality rates among those with mood disorders have been estimated to be 35% greater than in the general population, with most of these deaths due to comorbid chronic physical conditions. In one case-control study (n=807), mood disorder was documented for 18 of the 75 (9.3%) cancer cases and among 288 controls (24.0% vs. 39.3%) .
Suspicion should arise not only in the presence of mood symptoms (e.g., hypothymia, euphoria, or mixed state) but also in a previous history of mood disorder. Reduced pleasure, difficulties with sleep, changes in appetite, reduced expectations about the future, and ideas of death (with or without planning) may suggest the presence of depression. Increased energy, reduced need for sleep, accelerated thinking, and grandiosity may suggest mania. When managing the case, it is important to share with the psychiatrist the investigation of possible primary or secondary causes. Examples of the latter include medications, the inflammatory process itself, and hormonal changes (such as the euphoria caused by increased serotonin production in carcinoid tumors or the effects of thyroid hormone supply in preventing recurrence in thyroid neoplasms) – . Treatment will depend on the diagnosis: depressive disorder (psychotherapy and antidepressants), mania, or mixed state (mood stabilizers and/or atypical antipsychotics) , , . Psychotic disorders Studies have noted that people with schizophrenia or other mental disorders are most often diagnosed at advanced stages of cancer , , . Some symptoms of schizophrenia can emerge secondary to brain tumors and chemotherapy and can be confused with symptoms of delirium , , . Preexisting or recent-onset psychosis can have a negative impact on quality of care, continuity of care, and the chance of reaching remission, as a significant number of these patients are lost to follow-up within 1 year , , . Quality of care is even poorer among homeless and institutionalized psychiatric patients. Treatment involves the use of antipsychotics; however, it is important that the team is aware of the limitations of these patients, who have a distorted sense of reality . Such patients will need support from their family members to make decisions. Delusional symptoms should not be confronted directly. At the same time, careful guidance is necessary regarding the state of health and the steps of the entire treatment , , , . Suicide behavior Mental problems such as mood, substance use, psychotic, personality, and anxiety disorders can lead to suicidal behavior. Suicide risk among cancer patients who have mood disorders or anxiety and somatoform disorders is higher than for those without mental disorders . A unified framework for describing suicidal behavior must include ideation, planning, and attempts – . This improves situational management. Risk factors for suicidal behavior must be categorized by the individual's condition, genetic predisposition, demographics, psychological variables, physical well-being, and health status, including chronic diseases. Also, the person's history of suicidal conduct, including non-suicidal self-harm, should be evaluated – . The examination of a mental disorder requires a safety plan that includes counseling, investigation, and monitoring to protect the individual. Persons at risk of suicide must be monitored. Hospitalization may be necessary to protect their health in high-risk scenarios such as repeated attempts or a defined plan – .
Increased energy, reduced need for sleep, accelerated thinking, and grandiosity may suggest mania. When managing a case, it is important to work with the psychiatrist to investigate possible primary or secondary causes. Examples of the latter include medications, the inflammatory process itself, and hormonal changes (such as the euphoria caused by increased serotonin production in carcinoid tumors or the effects of thyroid hormone supplementation used to prevent recurrence in thyroid neoplasms) – . Treatment will depend on the diagnosis: depressive disorder (psychotherapy and antidepressants) or mania or a mixed state (mood stabilizers and/or atypical antipsychotics) , , . Studies have noted that people with schizophrenia or other severe mental disorders are most often diagnosed at advanced stages of cancer , , . Some symptoms of schizophrenia can emerge secondary to brain tumors and chemotherapy and can be confused with symptoms of delirium , , . Preexisting or recent-onset psychosis can negatively affect the quality of care, continuity of care, and the likelihood of reaching remission, as a significant number of these patients are lost to follow-up within 1 year , , . The quality of care is even poorer among homeless and institutionalized psychiatric patients. Treatment involves the use of antipsychotics; however, it is important that the team is aware of the limitations of these patients, who have a distorted sense of reality . Such patients will need support from their family members to make decisions, and delusional symptoms should not be confronted directly. As previously discussed, the prevalence of mental illnesses in cancer patients is significant, underscoring how critical it is to manage these conditions effectively. Illness-related stress is a risk factor that requires attention, as it has been identified as a contributing element in the onset of mental disorders. There is a limited body of literature on this subject, and current knowledge on differential diagnosis and therapy draws on the same information used for other patient populations. When formulating a treatment plan, it is crucial to carefully evaluate any secondary causes (such as drugs and clinical disorders) to determine whether they can be reversed before starting any psychiatric medication.
Management of these conditions typically involves psychotherapy or pharmacotherapy and requires a psychiatrist working in collaboration with the oncology team. Because mental illness is prevalent among individuals diagnosed with cancer, psychiatric involvement in their treatment is essential. These topics also offer considerable research potential, as specialized studies in this population remain scarce.
Practical considerations for optimising homologous recombination repair mutation testing in patients with metastatic prostate cancer
55560f2e-6aa0-4669-b2c7-56e6d965d117
8185363
Pathology[mh]
Prostate cancer is a heterogeneous disease with a variable prognosis depending on the stage at diagnosis, as well as other clinical and biological factors. Most patients are diagnosed with curable disease, but approximately 15% of patients will present with, or eventually develop, metastatic disease and resistance to androgen‐based therapies; for this group of patients, there has been a significant improvement in treatment approaches with the development of targeted agents . One novel class of targeted agents, poly(ADP‐ribose) polymerase (PARP) inhibitors, is beneficial for selected patients with metastatic castration‐resistant prostate cancer (mCRPC). PARP enzymes have a key role in DNA repair, but when PARP inhibitors catalytically inhibit PARylation and physically ‘trap’ PARP on DNA at sites of single‐strand breaks, they prevent DNA repair via the base‐excision repair pathway . This leads to the generation of double‐strand breaks which cannot be efficiently repaired in tumour cells that have defects in the homologous recombination repair (HRR) pathway, causing accumulation of DNA damage and tumour cell death (Figure ) . This mechanism of action is known as synthetic lethality, where deleterious (i.e. pathogenic or likely pathogenic) HRR gene alterations can confer sensitivity to PARP inhibition, and has been demonstrated in prostate cancer, as well as ovarian, pancreatic, and breast cancer . Commonly reported genomic alterations in mCRPC include mutations and copy number alterations in genes such as TP53 , AR , RB1 , PTEN , and those involved in repairing DNA damage, predominantly those with a role in HRR . Table details HRR genes where genomic alterations have been reported across different tumour types in the literature. Recent studies have shown that approximately 25% of patients with mCRPC harbour deleterious alterations in genes directly or indirectly involved in HRR that may act as biomarkers of response to PARP inhibitors (Table ) . With the introduction of targeted agents into clinical practice, molecular diagnostic profiling is required to identify patients who may benefit from these therapies. One commonly used method for HRR assessment in mCRPC is the sequencing of DNA extracted from tumour tissue specimens as it captures patients with both germline and somatic alterations. If necessary, subsequent germline testing can be used to resolve whether an alteration is germline or somatic as tumour tissue tests cannot distinguish between these. Tumour material for testing is obtained from archival tissue biopsy specimens. Given that the majority of HRR alterations in prostate cancer are either germline or appear to occur early in the disease and prior to metastatic spread , evaluation of dominant tumour focus (high volume/grade) in archival diagnostic specimens is appropriate for molecular diagnostics even after mCRPC progression . Indeed, the molecular selection of patients with metastatic disease based on testing of primary tumours has been the main strategy for patient enrolment in the pivotal PARP inhibitor trials for patients with mCRPC . mCRPC Real‐world data on the testing success of prostate tumour samples are limited as clinical next‐generation sequencing (NGS) has only recently been implemented for this tumour type outside of the context of clinical trials. However, in clinical trials to date, attrition rates of approximately 30–40% have been reported for strategies relying on tumour tissue testing in patients with mCRPC . 
Consequently, there is an urgent need to significantly improve testing approaches. The main reasons for test failures appear to be: (1) the limited amount of tumour tissue collected during diagnostic biopsies, (2) exhaustion of diagnostic material during the histological diagnosis, (3) insufficient tumour content for genomic analysis, and (4) suboptimal DNA yield/quality due to DNA degradation during fixation and/or storage of diagnostic material . The aim of this review is to provide practical considerations and recommendations for molecular diagnostic testing of specimens collected from patients with mCRPC in clinical practice with a focus on optimizing the success rates for multigene NGS assays. For the purpose of this manuscript, HRR genes refer generically to BRCA1 and BRCA2 , at a minimum, and to a larger variety of genes known to be involved directly or indirectly in the HRR pathway (Tables and ). PARP inhibitor studies in mCRPC Several PARP inhibitors have been evaluated in studies of patients with mCRPC, many of which have included prospective selection for HRR alterations prior to treatment . The phase II PARP inhibitor monotherapy studies TOPARP‐B (olaparib), TRITON2 (rucaparib), TALAPRO‐1 (talazoparib), and GALAHAD (niraparib) identified responses in patients with germline or somatic HRR alterations, although higher response rates and longer duration of responses were generally observed in those with BRCA1 and BRCA2 alterations (Table ) . The PROfound study was the first randomised phase III study demonstrating the efficacy of a PARP inhibitor, olaparib, in patients with mCRPC . In PROfound, treatment with olaparib was associated with significantly longer progression‐free survival and overall survival than enzalutamide or abiraterone (control) in patients who had at least one alteration in BRCA1 , BRCA2 , or ATM (cohort A) and had disease progression while receiving enzalutamide or abiraterone (see Table for details) . Based on the findings of the PROfound trial, the Food and Drug Administration (FDA) approved olaparib for adult patients with deleterious or suspected deleterious germline or somatic HRR gene‐mutated mCRPC who have progressed following prior treatment with enzalutamide or abiraterone . In addition, the European Medicines Agency approved olaparib as monotherapy for the treatment of adult patients with mCRPC and BRCA1 or BRCA2 mutations (germline and/or somatic) whose disease progressed following prior therapy that included a next‐generation hormonal agent . Rucaparib was also approved by the FDA for patients with deleterious BRCA1 or BRCA2 mutation (germline and/or somatic)‐associated mCRPC who have been treated with androgen receptor‐directed therapy and a taxane‐based chemotherapy based on the tumour testing findings of the TRITON2 study . A phase III study (TRITON3) of rucaparib in patients with mCRPC and a deleterious germline or somatic BRCA1 , BRCA2 , or ATM mutation is ongoing . Breakthrough therapy designation has also been granted by the FDA for niraparib based on the findings of the GALAHAD study , and other approvals are anticipated. Beyond differences in the PARP inhibitors being evaluated, these trials differed in the patient selection strategy and also used different assays, including tissue and liquid biopsy‐based testing of slightly different panels of HRR genes. However, these studies support the importance of genomic profiling and the implementation of molecular analysis in the clinical pathway. 
The US National Comprehensive Cancer Network (NCCN) guidelines were updated in 2019 to recommend tumour testing for HRR gene alterations and consider microsatellite instability (MSI)/mismatch repair testing in all patients with regional or metastatic prostate cancer . This information may be used for genetic counselling, eligibility for PARP inhibitor treatment, or enrolment in clinical trials. If pathogenic or likely pathogenic alterations in BRCA1 , BRCA2 , ATM , PALB2 , and CHEK2 are found, and/or there is a strong family history of cancer, then patients should be referred for genetic counselling and confirmatory germline testing. The Advanced Prostate Cancer Consensus Conference held in 2019 supported consideration of BRCA1 and BRCA2 testing in screening, management, and informing prognosis/treatment, with germline testing recommended in patients with a tumour BRCA1 , BRCA2 , or ATM mutation . Similar recommendations for germline testing were published by the 2019 Philadelphia International Prostate Cancer Consensus that supported the use of prostate cancer gene‐testing panels . The American Urological Association/American Society for Radiation Oncology/Society of Urologic Oncology (AUA/ASTRO/SUO) guidelines published in June 2020 state that patients with mCRPC should be offered tumour and/or germline HRR gene testing and MSI status . More recently, the European Society of Medical Oncology (ESMO) clinical practice guidelines for diagnosis, treatment, and follow up of prostate cancer were updated to provide guidance for precision medicine . The ESMO Precision Medicine Working Group recommends that multigene NGS panel testing replace single‐gene assays and be considered for patients with metastatic prostate cancer, and those with pathogenic or likely pathogenic mutations in cancer‐risk genes should be referred for genetic counselling and germline testing for BRCA1/BRCA2 and other HRR alterations . While there may be variations in testing recommendations, access to testing, and reimbursement issues between countries, analyses of somatic and germline BRCA1 and BRCA2 alterations are likely to become the minimum requirement in many countries for patients with mCRPC. Tumour tissue collection in prostate cancer is predominantly driven by diagnostic need, particularly as pathological tumour typing is directly related to clinical management and, ultimately, patient outcome. In current practice, tissue‐based molecular diagnostic testing (that identifies mutations that could be of somatic or germline origin) is most likely to be requested at the point when a patient develops metastatic disease, aligned to access to biomarker‐targeted therapies. However, for patients with a strong family history of cancer, germline screening for cancer predisposition genes may be requested even when only local/regional disease is present. Understanding regional differences in diagnostic policies and capabilities will be important to provide appropriate guidance for the successful introduction of molecular diagnostic testing in the community setting. Pathologists, radiologists, and urologists have clear protocols for the collection of prostate tissue samples for diagnosis and Gleason scoring , and guidelines for best practice in biospecimen collection and processing are available . There are not yet, however, international standard protocols or specific guidance to obtain prostate tumour samples to aid the implementation of molecular diagnostic testing in routine clinical practice. 
Figure provides a schematic representation of the tissue collection methodology, addressing the factors to be considered to improve testing success rates. Table lists the factors and recommendations for formalin‐fixed and paraffin‐embedded (FFPE) sample collection, processing, and storage. During histopathological diagnosis and staging, the diagnostic pathologist should preserve and label a ‘molecular diagnostic’ FFPE block where haematoxylin and eosin (H&E) staining shows sufficient cellularity and tumour content for genomic analysis. For HRR alteration testing in mCRPC, suitable specimens should contain enough cellularity to yield the required DNA amount for the local test and a minimum neoplastic cell content (e.g. 10–30%, depending on the test used and local validation data and whether sequence variants only or copy number variants are being screened for) to ensure variants can be easily detected and distinguished from deamination or oxidation artefacts and other sequencing background noise . Low tumour content not only impedes detection of low allele frequency somatic mutations but also affects the correct assessment of copy number variations as these may be diluted into the normal copy number profile of non‐tumour cells in the samples; this is particularly relevant to identify patients with intragenic or homozygous BRCA2 deletions. Practical recommendations to assess cellularity and neoplastic content for different genomic applications are available online . New, more accurate methods to obtain tumour tissue, such as targeted prostate biopsies using multi‐parametric magnetic resonance imaging, where available, can also help increase tumour content . Selection of the optimal sample type is dependent on factors such as size, age, collection method, and organ site. Tumour size may be critical in ensuring that the required quantity and quality of DNA are available for analysis, although this is highly dependent on cellularity. Surgical specimens, such as radical prostatectomies (available in approximately 10–15% of mCRPC patients), may provide a large amount of material, but this does not always translate to sufficient quantity/quality of DNA for testing if the tumour area is small and tumour cellularity is low. Conversely, smaller biopsy samples (i.e. core needle biopsy), typically used at initial diagnosis, may have limited tumour tissue for molecular testing after pathology diagnosis and grading, although they can provide good‐quality DNA as processing and fixation steps can be carefully controlled. For example, a small core needle biopsy of 1 mm × 10 mm may contain thousands of neoplastic cells with >80% tumour content and a yield >100 ng of DNA (such as shown in the example in Figure ), while another biopsy of similar size could be mostly non‐neoplastic cells, rendering it unsuitable for molecular analysis (such as in Figure ). Pooling of multiple cores from more than one biopsy may increase the yield of DNA, while macro‐dissection of the tumour area is recommended to increase the neoplastic content of the sample. Although there is a small risk that this practice may dilute the inter‐lesion heterogeneity of multifocal tumours, there is currently insufficient data regarding heterogeneity of HRR alterations in prostate cancer. Sample age is also known to influence testing success; DNA extracted from newly collected FFPE samples is generally of adequate quality, although there is a gradual decline over time due to degradation and chemical modification. 
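As a rough illustration of the suitability factors discussed above (tumour cellularity, expected DNA yield, macro-dissection and pooling of cores), the sketch below triages a hypothetical FFPE block. The field names and numeric cut-offs are placeholders only; real thresholds depend on the assay used and on local validation data.

```python
# Illustrative pre-analytical triage of an FFPE block for HRR panel testing.
# Thresholds (minimum tumour content, minimum expected DNA yield) are
# placeholders; real cut-offs depend on the assay and local validation data.
from dataclasses import dataclass

@dataclass
class FfpeBlock:
    tumour_content_pct: float   # % neoplastic cells estimated on the H&E slide
    est_dna_yield_ng: float     # expected DNA yield from the marked tumour area

def triage_block(block: FfpeBlock,
                 min_tumour_pct: float = 30.0,
                 min_yield_ng: float = 100.0) -> str:
    if block.est_dna_yield_ng < min_yield_ng:
        return "low yield: consider pooling cores from additional biopsies"
    if block.tumour_content_pct < min_tumour_pct:
        return "low tumour content: macro-dissect the tumour area before extraction"
    return "suitable for NGS panel testing"

print(triage_block(FfpeBlock(tumour_content_pct=80.0, est_dna_yield_ng=150.0)))
```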
In the absence of newly collected FFPE samples, archived samples can provide successful test results, indicating that the preservation of DNA is achievable with optimisation of fixation and storage conditions. Findings from the PROfound study identified a decrease in test success rate with increased age of archived samples; however, successful tests were obtained in a proportion of samples that had been archived for >10 years . Collection and processing of samples from metastatic biopsies are associated with challenges. Osteoblastic bone lesions are the most common metastases in patients with prostate cancer , and collection from this site presents issues for patients and the clinical team, including toxicity, invasiveness of the procedure, requirement for anaesthetic, and costs, such that clinicians may not pursue collection. Furthermore, processing of bone biopsy samples that require decalcification may lead to a reduction in the quantity and quality of DNA, and therefore, if required, EDTA must be used instead of harsher decalcification . While there may be concern about whether a sample from an archived primary tumour is representative of distant metastatic disease at the time of consideration of PARP inhibitor treatment, evidence from the PROfound study showed that successful testing was undertaken with both primary and metastatic tumour samples, with the overall prevalence of HRR alterations being similar (27.2 and 31.8%, respectively) . Beyond germline mutations, findings from a small series of longitudinal samples from the same patient suggest that, at least for BRCA1 and BRCA2 , somatic HRR mutations are usually detectable in primary tumours in comparison with other genomic events, such as AR alterations, that emerge later in response to treatment‐selective pressure . Although there are challenges associated with sample collection and processing, clinical studies have shown that approximately 60–70% of primary and metastatic samples from patients with prostate cancer have successful test results . These findings highlight that the optimisation of diagnostic tissue collection and processing to provide an adequate quantity of high‐quality tumour samples is crucial for the testing process as primary specimens are currently the preferred source of material for HRR analysis . Increased understanding of the link between molecular diagnostics and access to novel targeted therapies are likely to be significant motivating factors in implementing changes in the practice of tumour sample collection and processing. Involvement of the entire multidisciplinary team at the different stages of the patient's journey is critical to ensure that testing has a patient‐centric approach (Figure ). Here, we provide a series of specific recommendations for different stages of the diagnostic pathway. Collection and handling of biopsy samples in pathology laboratories Proactive identification of the most suitable sample for future molecular diagnostic testing should be championed by the diagnostic pathologist. Specific key recommendations for biopsy specimen handling are listed in Table . At diagnosis, adherence to pathology protocols can ensure rapid access to archived primary samples when the need for testing is identified and could significantly reduce the incidence of archived blocks being retrieved and found unsuitable for molecular diagnostics. The decision of whether to archive tissue samples as an FFPE block or extracted DNA may vary depending on the available facilities and institutional policies. 
Currently, long‐term storage of FFPE blocks is standard practice in many countries, including the European Union, Canada, and the USA, which are frequently archived at off‐site facilities, potentially leading to increased costs associated with sample retrieval and increased turnaround times. If no suitable sample is available, germline testing using blood samples or liquid biopsy with analysis of circulating cell‐free DNA (cfDNA) could be undertaken, or alternatively, re‐biopsy of a metastatic lesion could be considered. Processing specimens in molecular pathology laboratories Guidance should be sought from appropriate laboratory technicians and scientists regarding the suitability of DNA samples for testing and DNA extraction procedures. Table provides some specific key recommendations. Pre‐analytical quality control (QC) of DNA samples, including quantification of double‐stranded DNA yield and confirmation of the ability to amplify the DNA from sample or mean fragment size assessment, should be undertaken to minimise post‐library test failures . This should include evaluation of DNA amount (total), library QC, and quality of nucleic acids. Due to the need to sequence the entire coding regions of very large genes, NGS is the method currently used for HRR alteration testing. The panel of gene alterations to be evaluated should include BRCA1 and BRCA2 at a minimum, with other HRR genes being assessed depending on country‐specific approval. Evidence from breast and ovarian cancer studies has shown that an integrative NGS‐based approach is efficient to detect germline and somatic mutations in BRCA genes while simultaneously targeting a large spectrum of genetic alterations using FFPE tissue samples . The chosen NGS approach should also be considered due to DNA requirements as some amplicon‐based NGS approaches (i.e. those using multiplexed primer pairs specific to the regions analysed to produce the required amplicons) only require approximately 10 ng of DNA, while targeted capture‐based NGS approaches (i.e. those using DNA or RNA probes to hybridise and capture the required genomic regions for downstream NGS) generally require more DNA (30–200 ng of DNA, depending on methodology used and local validation data) . Ideally, laboratories performing capture‐based NGS approaches should aim for a minimum mean coverage of 500 unique reads (although less coverage is acceptable in cases with high tumour content), with at least 99% of coding regions being covered at >100×. For laboratories using amplicon‐based NGS approaches without de‐duplication strategies (e.g. unique molecular identifiers), local validation of required coverage is needed for different input DNA quantities and qualities. In addition to considering ways to improve tissue testing success rates, the time and cost consequences for test failures should be considered. A pathologist can identify samples likely to fail based on an existing H&E‐stained slide within minutes at a minimal cost, whereas retrieving and shipping a sample to a laboratory, annotation, macro‐dissection, DNA extraction, and QC checks take significantly more resource in terms of both time and cost. More importantly, a test failure, or the need to obtain a re‐biopsy, may mean a delay in a patient receiving the appropriate targeted treatment, which can be critical given the poor prognosis for patients with mCRPC. Overall, the turnaround time from receiving the sample in the laboratory to final report should be within 2–3 weeks. 
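As a worked illustration of the coverage acceptance criteria quoted above for capture-based panels (mean unique coverage of at least 500 reads and at least 99% of coding bases covered at more than 100×), the sketch below checks per-base depths from a single run. The function name and data layout are assumptions; laboratories should apply their own locally validated thresholds.

```python
# Minimal sketch of a coverage acceptance check for a capture-based NGS run;
# the numbers mirror the text (mean unique coverage >= 500x, >= 99% of coding
# bases covered at > 100x) but should follow local validation in practice.
def capture_run_passes_qc(per_base_unique_depth: list[int],
                          min_mean_depth: float = 500.0,
                          per_base_threshold: int = 100,
                          min_fraction_above: float = 0.99) -> bool:
    n = len(per_base_unique_depth)
    mean_depth = sum(per_base_unique_depth) / n
    frac_above = sum(d > per_base_threshold for d in per_base_unique_depth) / n
    return mean_depth >= min_mean_depth and frac_above >= min_fraction_above

# Toy example: 1,000 coding positions, most deeply covered, a few below 100x
depths = [600] * 995 + [80] * 5
print(capture_run_passes_qc(depths))  # True (mean ~597x, 99.5% of bases > 100x)
```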
However, the time from request of the test to the sample being received in the laboratory can vary significantly and delay the whole process; this needs to be taken into consideration when designing efficient local sample pathways. Reporting tumour HRR alterations for treatment eligibility Table provides some key specific recommendations for reporting HRR alterations for treatment eligibility. In the mCRPC setting, only pathogenic or likely pathogenic mutations should be reported in the context of PARP inhibitor eligibility. Reporting of variants of uncertain significance (VUS) is not recommended for treatment eligibility, although some laboratory policies may require these to be included in the report. If VUS are reported, this must be reported separately to the main body of the report to avoid confusion and potential over‐treatment and unnecessary referrals to clinical genetics. The assignment of clinical relevance to findings using standardised scales, such as OncoKB Levels of Evidence scale or the ESMO Scale for Clinical Actionability of molecular Targets (ESCAT), can help to improve clinical interpretation of additional NGS findings and facilitate patient–physician discussion . As tumour testing is routinely carried out using FFPE samples, there is a risk of artefacts of fixation/storage being considered bona fide mutations, particularly due to the deamination and oxidation of DNA. This problem can be ameliorated by using methods incorporating unique molecular identifiers or similar approaches. In addition, it is critical to only report variants found at variant allele frequencies higher than the validated limit of detection of the method used (approximately 5% when using FFPE) to avoid the reporting of false‐positive, artefactual results. A joint consensus recommendation for the interpretation and reporting of sequence variants in cancer compiled by the Association for Molecular Pathology, American Society of Clinical Oncology, and College of American Pathologists provides further details . When should molecular testing be requested in the patient pathway? Currently, among prostate cancer patients, only those with mCRPC are eligible for PARP inhibitor treatment, and so, molecular testing should be prioritised for these patients in routine clinical practice (Figure ). Molecular testing of all men with newly diagnosed prostate cancer would currently involve a significant resource with very limited outcome in terms of targeted treatment as most patients with prostate cancer do not progress to metastatic disease. However, this situation may change in the future if targeted treatments became approved in earlier settings or if there is evidence that certain biomarker‐defined subgroups have a different prognosis, which may impact selection of the initial therapeutic approach. Given the potential delays in retrieving archival tissue, as well as the potential failure rates in up to 30–40% of specimens, consideration could also be given to retrieving diagnostic specimens for molecular testing at the time of metastatic disease and prior to progression to mCRPC, even though most patients will not progress on hormone therapy for 2–2.5 years. In addition, some centres may also consider HRR alteration testing in a wider patient population based on family history and/or aggressiveness of the tumour at diagnosis. 
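The reporting rules described above (only pathogenic or likely pathogenic variants for PARP inhibitor eligibility, VUS kept out of the main report body, and calls below the validated limit of detection suppressed) can be sketched as a simple filter. The 5% VAF cut-off for FFPE is taken as stated in the text, but the record layout and function and field names are illustrative only.

```python
# Sketch of the reporting filter described above. Classification labels and the
# ~5% VAF limit of detection follow the text; the record layout is hypothetical.
def partition_variants(variants, vaf_lod=0.05):
    """Split called variants into 'report' (eligibility-relevant) and
    'appendix' (VUS) lists; drop calls below the validated limit of detection."""
    report, appendix = [], []
    for v in variants:
        if v["vaf"] < vaf_lod:
            continue  # likely fixation artefact or below validated sensitivity
        if v["classification"] in ("pathogenic", "likely_pathogenic"):
            report.append(v)
        elif v["classification"] == "VUS":
            appendix.append(v)  # reported separately, not for treatment eligibility
    return report, appendix

calls = [
    {"gene": "BRCA2", "classification": "pathogenic", "vaf": 0.41},
    {"gene": "ATM",   "classification": "VUS",        "vaf": 0.22},
    {"gene": "CHEK2", "classification": "pathogenic", "vaf": 0.02},  # below LOD
]
main_report, vus_appendix = partition_variants(calls)
print([v["gene"] for v in main_report], [v["gene"] for v in vus_appendix])
```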
Recent recommendations from ESMO also endorse academic centres and university hospitals in pursuing testing in wider populations, in the setting of clinical research programmes and after obtaining patient consent, in order to generate data to assess the value of testing in different disease settings that can help shape the optimal use of NGS testing in the near future and optimise the development of drugs currently in clinical trials . Informed consent and germline implications of tumour testing The possibility of any deleterious or likely deleterious HRR alteration detected by tumour testing being of germline origin varies across populations but can potentially be more than 50% of all HRR gene alterations . Many of the current guidelines advise that patients should be informed that tumour testing has the potential to uncover germline findings, which may warrant further investigation. NCCN guidelines recommend follow up for germline testing if tumour alterations, including BRCA1 and BRCA2 , are detected and/or if there is a strong family history of cancer , and ESMO guidelines recommend that patients with pathogenic mutations in cancer‐risk genes, identified through tumour testing, should be referred for germline testing and genetic counselling . As the implications of a germline test result will have a significant impact not only on patients but also on their families, discussion of test results is highly recommended for patients who are referred for tissue testing.
This may be undertaken by a urologist/oncologist before tissue testing or by medical geneticists after a relevant deleterious or likely deleterious HRR variant is identified on tissue testing. HRR alteration diagnostic tests Tissue testing using FFPE specimens is currently the most widely used and standard approach for molecular diagnostic testing in most cancer types, including in mCRPC clinical trials ; however, there may be instances when this may not be an option. One alternative test that is under investigation uses a liquid biopsy or cfDNA . Studies have shown that primary tissue and cfDNA share relevant somatic alterations, suggesting that cfDNA analysis may be a suitable surrogate for molecular subtyping in prostate cancer . Some studies have included cfDNA assessments so that matched tissue and plasma samples, along with associated data on patient responses to treatment, can be compared to assess the relative benefits of both approaches . Genomic profiling of both cfDNA and FFPE tumour tissue samples using NGS from patients with mCRPC enrolled in the TRITON2 and TRITON3 studies successfully identified those with an HRR gene alteration for the evaluation of rucaparib . Gene alterations in BRCA1 , BRCA2 , and ATM were detected in 2.0, 10.7, and 8.8%, respectively, of cfDNA samples and in 1.6, 8.2, and 5.8%, respectively, of tumour tissue samples . Based on the findings of TRITON2 and PROfound, the FDA has approved the FoundationOne Liquid CDx test, a comprehensive pan‐tumour liquid biopsy test, for use as a companion diagnostic for rucaparib and olaparib, respectively . Data from other studies in mCRPC are limited, although a retrospective study that evaluated gene alterations including HRR showed good concordance in BRCA alterations from cfDNA and FFPE tumour tissue samples . Furthermore, good concordance in gene alterations between cfDNA and tumour tissue has been reported in other tumours such as non‐small cell lung and metastatic breast cancers . It is important to highlight that the gene alterations in cfDNA and FFPE samples can reflect germline alterations from normal cells as the DNA samples are derived from a combination of malignant and normal cells. In addition, there is a risk of clonal haematopoiesis of indeterminate potential (CHIP) interference in DNA repair genes. A recent study evaluating plasma cfDNA from 69 patients with advanced prostate cancer found that up to 10% of patients can have CHIP involving HRR genes (primarily ATM but also BRCA2 and CHEK2 ), suggesting a need for paired whole‐blood samples as a control to avoid misdiagnosis . Several guidelines and recommendations have been published for the handling and analysis of cfDNA samples in the clinical setting . Molecular diagnostic testing of patients with prostate cancer requires a multidisciplinary team approach in the era of precision medicine. As molecular profiling is a rapidly evolving field, education for pathologists and laboratory staff, in collaboration with radiologists, urologists, and oncologists, is needed for all aspects of collection, processing, storage, and availability of tumour tissue samples for molecular diagnostic testing, as well as an understanding of the NGS technology and diagnostic assays and the consequence of detection of germline variants for patients and families. The cancer geneticist/geneticist will be involved if the tumour testing suggests that there may be a germline mutation as this, if validated, could then involve testing family members. 
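The CHIP caveat noted above implies a simple interpretive rule when a paired whole-blood sample is available. The heuristic below is purely illustrative and not a validated algorithm: the variant allele frequency (VAF) cut-offs are arbitrary placeholders, and real interpretation requires the paired-control sequencing strategy described in the cited study.

```python
# Illustrative (not validated) heuristic for interpreting a cfDNA HRR variant
# against a paired whole-blood (white cell) control, as suggested above.
# The VAF cut-offs are arbitrary placeholders used only for this example.
def classify_cfdna_variant(cfdna_vaf: float,
                           wbc_vaf: float,
                           germline_band=(0.4, 1.0),
                           chip_floor=0.01) -> str:
    if germline_band[0] <= wbc_vaf <= germline_band[1]:
        return "possible germline variant - refer for confirmatory germline testing"
    if wbc_vaf >= chip_floor:
        return "possible CHIP - interpret with caution, not clearly tumour-derived"
    return "likely tumour-derived somatic variant"

print(classify_cfdna_variant(cfdna_vaf=0.08, wbc_vaf=0.03))   # possible CHIP
print(classify_cfdna_variant(cfdna_vaf=0.12, wbc_vaf=0.00))   # likely tumour-derived
print(classify_cfdna_variant(cfdna_vaf=0.45, wbc_vaf=0.48))   # possible germline
```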
We recommend that considerations for molecular analysis be implemented in the diagnostic pathway of patients with prostate cancer to ensure that appropriate specimens are collected at diagnosis of metastatic disease and are suitable for genomic testing at the point of clinical decision-making. With increased knowledge of the requirements for molecular profiling, greater adoption of best practices for genomic testing can be achieved in both local and reference centres. Optimisation of molecular diagnostic testing is not only feasible but also critical to ensure that patients with mCRPC, who would most likely benefit from targeted therapies such as PARP inhibitors, are identified. This work includes contributions from, and was reviewed by, individuals who are employed by AstraZeneca and Merck Sharp & Dohme Corp., a subsidiary of Merck & Co., Inc. The content is solely the responsibility of the authors and does not necessarily represent the official views of AstraZeneca or Merck Sharp & Dohme Corp., a subsidiary of Merck & Co., Inc. All authors contributed to the development and drafting of the manuscript and approved the final version for submission.
High quality of SARS-CoV-2 molecular diagnostics in a diverse laboratory landscape through supported benchmark testing and External Quality Assessment
c4233ef9-378c-45e2-ba93-59b75d7d1c0f
10792020
Pathology[mh]
High quality pathogen detection systems, with both high sensitivity and specificity, are of paramount importance for public health and individual patient diagnostics – . In The Netherlands, diagnostic laboratories have the option to choose their own experimental workflows in contrast to many other countries where one or only a few central testing facilities for the whole country are used (e.g. Denmark ) or a single workflow type is implemented in multiple decentralized laboratories (e.g. USA ). At the start of the SARS-CoV-2 pandemic, no laboratory diagnostic tests for specific SARS-CoV-2 detection were available. Various initiatives were taken to develop specific SARS-CoV-2 tests, including ours at the national reference laboratories for public health action in emerging situations (Dutch National Institute for Public Health and the Environment (RIVM) and Erasmus Medical Centre (Erasmus MC)) , , . We were involved in the validation of real-time reverse transcription PCR (rRT-PCR) assays for the detection of the novel SARS-CoV-2 virus . This initial assay was based on limited genomic information and developed by Corman et al. and implemented for Dutch national SARS-CoV-2 testing. In an emerging pathogen situation, like SARS-CoV-2, reference and clinical materials of confirmed positive and negative specimens are largely lacking and procedures for at least verification of the assays with standardized controls is needed. A complicating element was the evolution of the virus resulting in potential mismatched primers leading to false-negative results – . A widely applied method to evaluate the quality of the complete workflows in diagnostic laboratories (from extraction of nucleic acid to specific virus target detection) is through an External Quality Assessment (EQA) – . If the test results are unsatisfactory, additional in-depth analyses of the individual components of the workflow can be carried out. In addition, sharing detailed (anonymised) information about workflows and procedures from other laboratories might suggest alternatives and possible solutions. Here, we describe the application of the combination of an initial benchmark testing (entry-control) procedure using simulated clinical specimens, provision of positive control material and confirmatory testing of patient clinical specimens at the reference laboratory, in which feedback and assistance are offered, followed by periodic EQAs for SARS-CoV-2 molecular diagnostic testing using Nucleic Acid Amplification Tests (NAAT) in 71 diagnostic laboratories in The Netherlands in 2020 and 2021. Passing benchmark testing was necessary for a laboratory to be able to start diagnostic testing or high throughput testing for the general population. We demonstrate that the introduction of the benchmark testing phase before an EQA was highly effective and efficient, and resulted in high quality diagnostic testing. An important aspect of this study is the exploration of additional analysis methods of some steps of/in the workflows. We applied Bayesian statistical modelling to estimate the contribution of the choice of target gene on the Cq values and composed a model that incorporates the effect of individual laboratories. These strategies can identify sensitive steps in the workflows and be helpful to uncover valuable information for the laboratories to improve their performance. 
Furthermore, the abundant information on Cq values resulting from a high number of different workflows at different laboratories for the same viral concentration specimens, in combination with metadata on strategies and techniques, provided valuable information in the use of Cq values as absolute proxy for viral load. We suggest applying the two-stage strategy and the associated analysis strategy as components of diagnostic preparedness plans for a much wider range of (re-) emerging pathogens of public health concern. Benchmark testing Blinded simulated clinical specimen panels (benchmark panel) for sensitivity and specificity analyses were prepared and distributed by the RIVM in collaboration with Erasmus MC. Preparation was performed as previously described , . Briefly, specimens were prepared in Minimal Essential Medium with Hanks’ salts and Hep2 cells to simulate clinical specimens. The panels contained a randomized dilution series of cultured SARS-CoV-2 and specimens with other related or different viruses were included as analytical specificity controls. A detailed description of the composition of the specimens is given in Supplementary Table . Initially, SARS-CoV-1 and also SARS-CoV-2 were included as RNA. As soon as they were available, inactivated Dutch SARS-CoV-2 isolates were included to assess the extraction component in the workflows. Laboratories were asked to report test panel results, as well as information about specimen input volume, extraction volume, elution volume, PCR/NAAT-reaction volume, devices and kits/reagents implemented, and target gene (sequences) for their assays. Alongside the benchmark panels, a positive control specimen initially containing SARS-CoV-1, rapidly replaced by SARS-CoV-2 when available, and validated primers and probes and/or their nucleotide sequence were supplied for implementing laboratory developed tests in the phase when no commercial detection kits were available. Laboratories implementing solely sample-to-result assays were given the option to test a reduced benchmark panel of four specimens to reduce costly and scarce testing cartridges. In addition, the participating laboratories were requested to supply a minimum of five SARS-CoV-2 positive and 10 SARS-CoV-2 negative tested clinical specimens derived from their own COVID-19 diagnostic pipeline for confirmatory testing at the reference laboratories. Together, these procedures were considered an entry benchmark test. In the event the results returned by a laboratory were unsatisfactory, the laboratory could request another benchmark panel after taking corrective actions. In exceptional cases multiple rounds of benchmark testing were performed. Furthermore, advice was offered by the reference laboratories to improve the technical procedures including the handling of the specimen, the execution of the testing methods, and data analysis. A laboratory’s performance was considered satisfactory during the benchmark testing phase when it was able to test the full panel and the confirmation specimens without false-positive or false-negative results. After a laboratory’s performance was satisfactory it had the freedom to implement other SARS-CoV-2 diagnostic assays, so the new test would be cross-referenced to their primary verified workflow. Laboratories were encouraged to request additional benchmark panels and apply for additional confirmatory testing of clinical specimens to verify novel SARS-CoV-2 diagnostic workflows. 
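The pass criterion for the entry benchmark described above (the full panel plus confirmatory clinical specimens, with no false-positive or false-negative results) can be summarised as a simple check. The sketch below is illustrative only: the record layout and function name are assumptions, not the procedure actually used by the reference laboratories.

```python
# Sketch of the benchmark (entry-control) pass criterion described above:
# a laboratory passes only if every panel specimen and every confirmatory
# clinical specimen is called correctly (no false positives or negatives).
def benchmark_passed(results):
    """results: list of dicts holding the expected and reported qualitative call."""
    for r in results:
        expected, reported = r["expected"], r["reported"]
        if expected == "positive" and reported != "positive":
            return False  # false-negative result
        if expected == "negative" and reported != "negative":
            return False  # false-positive result
    return True

panel = [
    {"specimen": "SARS-CoV-2 dilution 1", "expected": "positive", "reported": "positive"},
    {"specimen": "seasonal coronavirus",  "expected": "negative", "reported": "negative"},
    {"specimen": "SARS-CoV-2 dilution 4", "expected": "positive", "reported": "negative"},
]
print(benchmark_passed(panel))  # False -> corrective action and a new panel needed
```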
External Quality Assessment Three rounds of EQA were performed, in November 2020, February 2021, and May 2021. The EQA panels, consisting of 10 specimens each, were produced in a similar fashion to the benchmark panels and their components are described in Table . Copies of SARS-CoV-2 E-gene RNA per mL were determined by digital droplet PCR as described previously , . For sensitivity analyses of SARS-CoV-2, the specimens containing 1.28 × 10^3 and 1.28 × 10^5 copies of SARS-CoV-2/mL (referred to as SARS2_L and SARS2_H, respectively) are fundamental as these mimic clinical samples most realistically. The specimen with the lowest virus concentration (1.28 × 10^2 copies/mL; indicated as SARS2_Edu) was included to gain insight into the detection limits of the various workflows. The SARS-CoV-1 containing specimen was included to gain insight into both assay specificity and target gene specificity for pathogens highly similar to SARS-CoV-2, especially as primers and probes specific for SARS-Betacoronaviruses (Sarbecoviruses) are being used . This educational specimen and the SARS-CoV-1 containing specimen were not included in the judgment of the performance of a specific assay regarding applicability for diagnostic testing. Before shipping, the prepared panel was validated at the reference laboratories to confirm expected results. Laboratories were also asked to submit the same metadata as for the benchmark testing phase. The performance per workflow was divided into three performance categories based on the number of false-negative results for SARS-CoV-2, false-positive results for SARS-CoV-2, or inconclusive test results for the non-educational specimens: 'Excellent' (100% correct), 'Mediocre' (at most one false positive or false negative, or up to two inconclusive results) and 'Unsatisfactory' (more than one false positive or false negative and/or more than two inconclusive results). An inconclusive or incorrect result can arise from inadequate specimen preparation or processing or from a suboptimal limit of detection of the NAAT. Specifically, an inconclusive result can be the consequence of discordant individual target results in multi-target tests, leading to no clear conclusion concerning the presence of the pathogen in the tested specimen; neither negative nor positive.

Statistical analyses Statistical analyses were based on a Bayesian model using R (version 4.2.2) and Rstan (R package version 2.21.7), in which the measured Cq-value $Cq_j$ was assumed to be linearly dependent on the true Cq-value $\mu$. Errors were assumed to be normally distributed with standard deviation $\sigma$, so that for data point $j$ we have

(1) $Cq_j \sim N(\mu_j, \sigma)$

The Cq-value $\mu_j$ is modelled as a sum of components:

(2) $\mu_j = \mu_0 + \mu_{d[j]}^{\mathrm{dilution}} + \mu_{t[j]}^{\mathrm{target}} + \mu_{l[j]}^{\mathrm{laboratory}}$

The component $\mu_0$ is the baseline Cq-value in the specimen labelled 'SARS2_H', containing 1.28 × 10^5 copies of SARS-CoV-2 per mL, with prior set to $\mu_0 \sim N(30, 3)$. The component $\mu_{d[j]}^{\mathrm{dilution}}$ is the contribution of the dilution factor at dilution $d[j]$ of data point $j$ (which takes the values 0 = 'SARS2_H', 1 = 'SARS2_L', and 2 = 'SARS2_Edu').
The dilution labelled 'SARS2_H' (lowest dilution factor; containing 1.28 × 10^5 copies of SARS-CoV-2 per mL) is defined as the baseline Cq-value contribution to the dilution-specific term of the model, hence we set $\mu_{0}^{\mathrm{dilution}} = 0$. For the other dilutions, labelled 'SARS2_L' (medium dilution factor; containing 1.28 × 10^3 copies of SARS-CoV-2 per mL) and 'SARS2_Edu' (highest dilution factor; containing 1.28 × 10^2 copies of SARS-CoV-2 per mL), we expect corrections of $2 \times \log(10)/\log(2)$ and $3 \times \log(10)/\log(2)$, respectively, since these dilutions represent 2 and 3 log10 decreases and, theoretically, each halving of the number of genomic copies increases the Cq-value by one. Hence we set priors $\mu_{1}^{\mathrm{dilution}} \sim N(2\log(10)/\log(2),\ 0.5)$ and $\mu_{2}^{\mathrm{dilution}} \sim N(3\log(10)/\log(2),\ 0.5)$.

The components $\mu_{t[j]}^{\mathrm{target}}$ and $\mu_{l[j]}^{\mathrm{laboratory}}$ are the gene-target and laboratory-specific contributions to the Cq-value. We model these as random effects, i.e. the values they take are assumed to stem from a common distribution:

(3) $\mu_{t[j]}^{\mathrm{target}} \sim N(0, \sigma^{\mathrm{target}})$ and $\mu_{l[j]}^{\mathrm{laboratory}} \sim N(0, \sigma^{\mathrm{laboratory}})$

The parameters $\sigma^{\mathrm{target}}$ and $\sigma^{\mathrm{laboratory}}$ measure how similar the gene-target- and laboratory-specific Cq-value contributions are. These are also estimated from the data. We set priors that encode our belief that more than two log10 units of difference is unlikely:

(4) $\sigma^{\mathrm{target}} \sim N(0, 2)$

(5) $\sigma^{\mathrm{laboratory}} \sim N(0, 2)$

Additionally, we enforce sum-to-zero constraints on $\mu_{t[j]}^{\mathrm{target}}$ and $\mu_{l[j]}^{\mathrm{laboratory}}$. Results that were marked 'no detection' were treated differently. For those values the Cq-value was not reported, either because no Cq value was generated at all or because it was above some Cq threshold; this assessment by the laboratory is unknown to us and could vary between gene target and dilution. As a substitute, we recorded the highest Cq-value found for each combination of 'dilution' and 'gene target' and used this value (denoted $c[j]$) as the censoring level of sample $j$. The censoring is then implemented by using, instead of the probability density function (PDF) for modelling Eq. (1), the complementary cumulative distribution function. This models that the Cq-value of non-detects in SARS-CoV-2-containing specimens lies somewhere above the censoring level $c[j]$, in the tail of the normal distribution of Eq. (1).
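To make the structure of Eqs. (1)–(5) concrete, the following sketch evaluates the log-likelihood of a small set of reported Cq values under fixed parameter values, including the right-censoring of 'no detection' results via the complementary cumulative distribution. It is an illustrative re-expression in Python rather than the authors' R/Rstan implementation; the example data and parameter values are invented, and the priors and sum-to-zero constraints of the full Bayesian model are not shown.

```python
import numpy as np
from scipy.stats import norm

LOG2_10 = np.log(10) / np.log(2)  # ~3.32 cycles per 10-fold dilution step

def cq_loglik(cq, detected, dilution, target, lab, params):
    """Log-likelihood of reported Cq values under the additive model of
    Eqs. (1)-(2); 'no detection' results are right-censored at the value
    stored in cq, using the complementary cumulative distribution."""
    mu = (params["mu0"]
          + params["dil_eff"][dilution]      # 0 for SARS2_H by construction
          + params["target_eff"][target]
          + params["lab_eff"][lab])
    sigma = params["sigma"]
    ll = np.where(detected,
                  norm.logpdf(cq, loc=mu, scale=sigma),
                  norm.logsf(cq, loc=mu, scale=sigma))  # censored observations
    return ll.sum()

# Hypothetical example: two laboratories, two gene targets, three dilutions
params = {
    "mu0": 24.0,                                    # baseline Cq for SARS2_H
    "dil_eff": np.array([0.0, 2 * LOG2_10, 3 * LOG2_10]),
    "target_eff": np.array([-0.3, 0.3]),            # sum-to-zero gene-target effects
    "lab_eff": np.array([0.5, -0.5]),               # sum-to-zero laboratory effects
    "sigma": 1.0,
}
cq       = np.array([24.2, 31.0, 34.5, 25.1, 31.6, 38.0])  # last entry = censoring level
detected = np.array([True, True, True, True, True, False])
dilution = np.array([0, 1, 2, 0, 1, 2])   # 0 = SARS2_H, 1 = SARS2_L, 2 = SARS2_Edu
target   = np.array([0, 0, 0, 1, 1, 1])
lab      = np.array([0, 0, 0, 1, 1, 1])
print(cq_loglik(cq, detected, dilution, target, lab, params))
```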
Blinded simulated clinical specimen panels (benchmark panel) for sensitivity and specificity analyses were prepared and distributed by the RIVM in collaboration with Erasmus MC. Preparation was performed as previously described , . Briefly, specimens were prepared in Minimal Essential Medium with Hanks’ salts and Hep2 cells to simulate clinical specimens. The panels contained a randomized dilution series of cultured SARS-CoV-2, and specimens with other related or different viruses were included as analytical specificity controls. A detailed description of the composition of the specimens is given in Supplementary Table . Initially, SARS-CoV-1 and also SARS-CoV-2 were included as RNA. As soon as they were available, inactivated Dutch SARS-CoV-2 isolates were included to assess the extraction component of the workflows. Laboratories were asked to report test panel results, as well as information about specimen input volume, extraction volume, elution volume, PCR/NAAT-reaction volume, devices and kits/reagents implemented, and target gene (sequences) for their assays. Alongside the benchmark panels, a positive control specimen initially containing SARS-CoV-1, rapidly replaced by SARS-CoV-2 when available, and validated primers and probes and/or their nucleotide sequences were supplied for implementing laboratory developed tests in the phase when no commercial detection kits were available. Laboratories implementing solely sample-to-result assays were given the option to test a reduced benchmark panel of four specimens, to conserve costly and scarce testing cartridges. In addition, the participating laboratories were requested to supply a minimum of five SARS-CoV-2 positive and 10 SARS-CoV-2 negative tested clinical specimens derived from their own COVID-19 diagnostic pipeline for confirmatory testing at the reference laboratories. Together, these procedures were considered an entry benchmark test. In the event that the results returned by a laboratory were unsatisfactory, the laboratory could request another benchmark panel after taking corrective actions. In exceptional cases multiple rounds of benchmark testing were performed. Furthermore, advice was offered by the reference laboratories to improve the technical procedures, including the handling of the specimen, the execution of the testing methods, and data analysis. A laboratory’s performance was considered satisfactory during the benchmark testing phase when it was able to test the full panel and the confirmation specimens without false-positive or false-negative results. After a laboratory’s performance was satisfactory, it had the freedom to implement other SARS-CoV-2 diagnostic assays, whereby each new test would be cross-referenced against its primary verified workflow. Laboratories were encouraged to request additional benchmark panels and to apply for additional confirmatory testing of clinical specimens to verify novel SARS-CoV-2 diagnostic workflows.
Supporting laboratories to validate and improve their SARS-CoV-2 testing, the benchmark test As part of the response to the spread of the novel SARS-CoV-2 virus, the reference laboratories assessed and helped to improve the quality of newly introduced workflows for the testing of SARS-CoV-2 in diagnostic laboratories during the early stages of the pandemic. The procedure consisted of two stages, a benchmark testing phase, consisting of the combination of a benchmark panel and a series of confirmation samples, and three rounds of confirmatory EQAs (Fig. ). An important aspect of this arrangement was the support offered by the reference laboratories, to assist in the introduction of the workflows and subsequent evaluation thereof. In the benchmark phase multiple technical issues were encountered by some of the laboratories based on the results of the benchmark panel and the confirmation samples that were sent to RIVM. Sensitivity issues were experienced by 15/71 laboratories (21.1%). Also, specificity issues were identified, as 2/71 laboratories (2.8%) were unable to differentiate between SARS-CoV-2 and other (seasonal) coronaviruses. In both cases, RNA isolation and/or amplification techniques were adjusted or substituted which solved the issues. One manufacturer was contacted to improve the performance of three of their kits since laboratories using these kits were experiencing both specificity and sensitivity issues. Contamination issues either during inter-facility specimen transport within the testing laboratory or during testing were experienced in 9/71 laboratories (12.7%).
Overall, 56/71 laboratories (78.9%) immediately reached the ‘Excellent’ score, whereas 15/71 laboratories (21.1%) needed to implement several adaptations to reach the desired quality level, confirmed by testing and passing another panel. The type of adjustments ranged from fine-tuning the workflow, by changing the volumes used during RNA amplification, to changing the RNA isolation and/or RNA amplification technique entirely before performance became ‘Excellent’ and the benchmark phase was passed. Performance of diagnostic laboratories over three EQA rounds After successfully passing the benchmark test, laboratories took part in up to three EQA rounds, which were performed over the course of 7 months. Some laboratories were added to the SARS-CoV-2 testing laboratory network and only started and finished the benchmark test after one or two EQA rounds had already been completed, and therefore could not partake in all three EQA rounds. Other laboratories did not submit data for all EQA rounds despite finishing the benchmark test. In total 53 laboratories participated in EQA1, 60 in EQA2 and 68 in EQA3. The composition of the EQA panels was adapted each round to reflect the occurrence of novel SARS-CoV-2 variants of concern. A schematic overview of the performance of all 277 individual workflows submitted by the 71 laboratories spread over the three EQA rounds is given in Fig. . Many laboratories submitted datasets for multiple workflows, culminating in a total of 489 datasets. The composition of the various workflows was subject to considerable change over time (Supplementary Figs. , and ). An overview of the various target genes applied by the laboratories is given in Supplementary Fig. A. Some workflows were deployed in all three EQA rounds while others were used only in one or two rounds (Fig. ). Remarkably, the overall performance of the workflows did not improve in subsequent rounds. The quality of assays was consistent over the three EQA rounds, with approximately 85% of the implemented assays having a 100% score (performance category ‘Excellent’) (Fig. ). A cumulative overview of the performance on all tested specimens is given in Table . As expected, a virus concentration as low as 1.28 × 10^2 digital copies of E-gene/mL (the educational specimen SARS2_Edu) is a challenge for multiple workflows and resulted in a high proportion (40.1%) of false-negative test results. The various SARS-CoV-2 variants were detected with high accuracy (specificity 99.7%). The specificity of the testing procedures was high: 99.6% of the non-SARS-CoV-2 containing specimens were not mistaken for SARS-CoV-2, except for SARS-CoV-1, which was included in the panels as an educational specimen. Most workflows (53.3%) failed to distinguish SARS-CoV-2 from the closely related SARS-CoV-1, resulting in false-positive results, because some workflows solely implemented the E-gene based primers as described by Corman et al. , which cannot discriminate between the two pathogens. However, due to the absence of circulating SARS-CoV-1 since its elimination in 2003, this was not considered a problem. Remarkably, newly developed and implemented assays showed the same high level of quality as pre-existing ones (Fig. ). Overall, the quality of the implemented workflows was high and stable over time during the study period, in which new Variants of Concern emerged. The spread of the reported Cq values by the laboratories over the three sensitivity SARS-CoV-2 specimens is visualized in Fig.
, in which a subdivision over the target genes is given. Whereas the data for most target genes are produced from multiple assays, the Cq values from the multiplex E-gene/N2-gene are all derived from a single type of cartridge-based sample-to-result assay (Cepheid, Xpert® Xpress SARS-CoV-2/Flu/RSV assay). We observed the least spread of Cq values with this last assay (24.2–30.5), whereas the spread overall for the other assays was 18.11–39.02 for specimen SARS2_H. For each of the targets a lower concentration resulted in a higher Cq value (connectors between specimens for individual workflows not shown in Fig. ). The reported Cq values are systematically higher than theoretically expected based on the dilutions (Supplementary Fig. ). Supplementary Fig. shows the predicted versus the reported Cq values of the individual target genes when plotted against each other. These data demonstrate that there is no strict linear correlation between Cq value and viral concentration in the studied concentration range, and that this is independent of the choice of target gene. Quantification of the contribution of some parameters to workflow performance To infer the quantitative effect of the choice of target genes on the assay read-out parameter Cq, we applied Bayesian statistical modelling. This method estimates the likelihood of a Cq value as a distribution while correcting for confounding factors (for details see the “Statistical analyses” section). To determine the effect of the chosen target genes on the reported Cq values, we modelled this effect for the SARS-CoV-2 specimens SARS2_H, SARS2_L and SARS2_Edu. Figure shows the (mean) effect of target gene selection on the derived Cq values for an assay. The width of the predicted range depends on the number of data points; over all individual target genes, the mean values are distributed over a range of about 3.5 Cq values. Such data could be useful for selecting a new target gene for an assay if necessary. As for the target genes, we modelled the contribution of ‘Laboratory’ to the reported Cq values (for details see the Methods section). This ‘Laboratory’ effect on the predicted Cq values for all types of assays ranged from -2.3 to 3.2 from the mean (Supplementary Fig. , panel B). The laboratories CS and CT occupy a relatively separate position, which can possibly be attributed to the relatively high number of specimens incorrectly reported as SARS-CoV-2 negative due to sensitivity issues. Thus, even when adjusting for the ‘Laboratory’ effect, the difference in Cq values between laboratories for specimens of the same concentration remained considerable, further illustrating that taking Cq values as an absolute proxy for viral load between laboratories and assays has limited value.
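Purely as an illustrative continuation of the earlier rstan sketch (again not the authors' code, and assuming the hypothetical fit object and parameter names defined there), the per-target and per-laboratory contributions discussed here could be summarised as:

```r
# Posterior summaries of the gene-target and laboratory contributions to the Cq value
# (centred around zero by the sum-to-zero constraint in the sketch above).
eff <- summary(fit, pars = c("mu_tgt", "mu_lab"))$summary
round(eff[, c("mean", "2.5%", "97.5%")], 2)
```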
This study describes a successful strategy for assessment, increasing and maintaining the quality of molecular diagnostics for SARS-CoV-2 in a very heterogeneous laboratory landscape by combining a benchmark testing phase and an EQA phase. These establishment and evaluation procedures were of great importance for setting up diagnostic testing facilities throughout The Netherlands in an early phase of the pandemic. The Netherlands chose to implement decentralised testing with a wide variety of SARS-CoV-2 assays, the same approach that was also chosen for The Netherlands during the 2009 influenza pandemic . This strategy has challenges as it is potentially difficult to maintain a homogenous high-quality level in a heterogenous testing landscape. This issue can be resolved by a well-designed test-implementation system with regular EQA and inter-laboratory comparisons as shown in this study. Importantly, a laboratory network implementing a multitude of assays essentially reduces the risk of collapse of the complete testing landscape (don’t put all of your eggs in one basket) . During the COVID-19 pandemic, multiple issues were encountered including manufacturing problems, contamination of primers/probes and drop-outs because of genomic mutations in target genes – , . In contrast to The Netherlands, the USA took the approach of decentralised testing with one assay type, similar to what they did during the 2009 influenza pandemic , . Although this method generally allows for quick and relatively simple upscaling of diagnostic capacity, when this strategy was implemented for SARS-CoV-2 in 2020 in the USA it had its challenges, namely contamination of primers/probes with synthetic template and improper primer/probe design , – which impaired the testing system. The CDC had a similar experience when implementing an mpox assay in their laboratory network in 2022 , . While we acknowledge that this topic is too complex to be discussed thoroughly in our paper, we feel it is worth briefly mentioning in this discussion as a way of starting or adding to pandemic preparedness systems discourse. An important characteristic of the strategy of implementing heterogenous assays in The Netherlands was the presence of a ‘preparation phase’ (benchmark testing). In this phase laboratories could already make use of readily available blinded panels of simulated clinical specimens containing SARS-CoV-2 and other viruses during the early stage of the pandemic, and in addition receive advice and support from the reference laboratories. We observed that during this preparation phase the performance of several laboratories improved considerably, resulting in high quality testing in these laboratories and meeting set requirements for inclusion in the list of qualified SARS-CoV-2 diagnostics laboratories . Most issues were found in high volume laboratories that previously did not perform diagnostics on human-derived specimens, which included veterinary laboratories and newly set up laboratories specific for SARS-CoV-2 testing, among others. After finishing the benchmark phase, 84.5% of all submitted workflows performed up to the desired level in the subsequent individual EQA rounds. This is remarkable, as according to the benchmark inclusion criteria, all workflows were expected to perform ‘Excellent’ in the EQA rounds. It is possible that not all new workflows implemented in laboratories were pre-tested in the benchmark phase. However, our data does not provide a clear explanation for this observation. 
The first published SARS-CoV-2 EQA was performed, primarily focused on frontline diagnostic laboratories, in April/May of 2020 . In this first EQA, 365 of 406 laboratories from 36 countries submitted 521 datasets. All core samples from the EQA were correctly reported by 86.3% of participating laboratories and 83.1% of the datasets , similar to our study. In another early SARS-CoV-2 EQA (which focused more on “expert” and reference laboratories, rather than frontline diagnostic laboratories) among 68 diagnostic laboratories spread over 35 European countries , the test performances were of significantly lower quality than in our study (39.7% versus 84.5% of workflows scored all core specimens correct). The percentage of false positives or negatives in our study were 3.2% false negative, 0.1% false positive, whereas Fischer and colleagues found 8.6% false negative and 1.1% false positive results in their European study . The Fischer et al. EQA was performed in June and July 2020 while our EQAs started in December that same year . As laboratories had more time to set up their assays before the start of our study compared to the laboratories partaking in the Fischer et al. study , the difference in diagnostic quality between the two studies might be partly due to more experience in COVID-19 diagnostics at the partaking laboratories. A major difference with our study is that the Fischer et al. or Matheeussen et al. study did not involve a benchmark testing procedure in advance of the EQAs , . The benchmark testing phase in our study started as early as March 2020 and could be considered an individual EQA with strict targets to be met by the laboratories. Nevertheless, this actually shows the benefit of our systematically applied entry benchmark testing approach that was (largely) lacking in other approaches. Based on our results, we expect that the availability of blinded testing panels to validate assays during implementation and compare performance with that of other laboratories, in combination with technical support, could improve the quality of the diagnostic testing performance in laboratories elsewhere. It is of note that this strategy is widely applicable and can cover other (novel) pathogens as well. Other national SARS-CoV-2 diagnostic testing EQAs were performed and documented in Japan (94.1% correct reporting) , South Korea (93.2% correct reporting) , and Austria (93% correct reporting) , with mostly similar results. Comparing these EQAs, or the original EQA from Matheeussen et al. , with our EQA program is challenging, as sample quantification and preparation were done differently (in our study, using Minimal Essential Medium with Hanks’ salts and Hep2 cells instead of transport medium for sample preparation, varying methods for virus concentration determination, and using inactivated SARS-CoV-2 virus instead of RNA or pseudovirus constructs). It is of additional value when, in addition to the test results of the panel specimens, detailed information about the technical and procedural aspects, the so-called metadata, are shared with the organizer of an EQA. Communicating an overview of these anonymized and aggregated data, which cannot be collected within individual laboratories, among all EQA participants might be informative for an individual laboratory to compare its own quality level with its peers and especially, for getting suggestions for alternatives in case of suboptimal performance. 
In this report, we have taken this analysis a step further and demonstrate the possibility of gaining insight into specific aspects of the workflows. Such information can hint at steps in the procedures that are critical, and it provides a quantitative estimate of their impact. Here we show a comparison of the consequences of the various target genes used by the laboratories on the workflows. This provided a direct comparison of target genes and suggests validated alternatives in case a gene target is no longer available because of mutations. Such analyses can also be performed for other elements of the workflows, or even as a comparison between laboratories, as we demonstrate. A much debated topic is the use of Cq values as a measure for the absolute amount of virus in a clinical specimen. Differences between the various targeted genes in the amount of mRNA produced for protein production, besides the presence of viral genomic and subgenomic RNA, and differences in the stability of the various RNAs will influence the amount of substrate for the RT-PCR reaction , . This is presumably also in part reflected in the results of our analyses of the influence of target genes on Cq values. These factors limit a direct use of Cq values for virus concentration, apart from the difficulty of collecting a specimen from a host in a reproducible way. Even standardised sample-to-result assays, which exclude some technical variation by making use of cartridges, showed substantial spread in Cq values in our study. Therefore, Cq values without calibration to international standards cannot be used to determine the amount of virus reliably and can at most provide a rough estimate, as for all workflows analyzed a decrease in concentration resulted in an increase in Cq value. The N2 target region seemed to be the most sensitive target, with the highest percentage of workflows returning a positive result for the SARS2_Edu specimen, although its Cq values were the highest compared to the other targets for this specimen. Differential generation of subgenomic mRNAs and differences in reaction efficacy at low target concentration in a specimen (we show that amplification is no longer exponential; Supplementary Figs. and ) could explain this phenomenon. We consider the system of initial provision of validated primers, probes and protocols for laboratory developed tests, followed by entry benchmark testing, an excellent way to develop and improve molecular diagnostic testing of pathogens in emerging situations requiring rapid availability of validated assays and high testing capacity. Technical and logistic assistance from a public health institute and/or expert laboratory is an important component. As this report demonstrates, the program is highly flexible and fast, allowing laboratories to design or purchase their own preferred assays and workflows while verifying and maintaining high quality testing. The addition of timely follow-up EQA rounds is necessary to maintain the overall quality of the diagnostic network. The collection and exchange of metadata is a valuable component, and sophisticated statistical analyses can provide the laboratories with informative insight into components of their workflows. Importantly, the strategies described here are applicable to other pathogens as well and can be of great value in improving preparedness for novel pathogen detection, contributing to the advance of public health in a continuously developing and changing diagnostic field.
A patient-centered evaluation of a novel medical student-based patient navigation program
c9cb28cb-9b4c-4fe6-861b-eeea71e17137
10947789
Patient-Centered Care[mh]
Introduction The first patient navigator (PN) program was implemented in Harlem Hospital in 1990 and focused on breast cancer screening in Black patients . Since then, PN programs have become increasingly popular, however there is considerable heterogeneity in models. These programs can go by a number of names including patient assistance programs, system navigators, patient advocates, case coordinators, and health coaches. Each describes someone working to coordinate patient care and address barriers to healthcare . Some specific examples include facilitating communication between patient and provider, appointment reminders, assisting with medication refills, providing educational resources, and providing emotional support . These services, among others, have been shown to increase patient understanding of their disease and lead to improved patient outcomes [ – ]. The most robust data demonstrating the efficacy of PN programs comes from improving cancer screening and treatment , various primary care interventions , and HIV medication adherence . Despite immense disparities in systemic lupus erythematosus (SLE) care for racial and ethnic minorities , there remains a paucity of PN programs serving those suffering from rheumatological diseases. Some studies, however, have shown efficacy of PN interventions in this population. For example, one study implemented a rheumatology-specific PN program to address disease modifying anti-rheumatic drugs (DMARD) medication adherence. Another study demonstrated that only PN, compared to peer-to-peer support and patient support groups, had a significant increase in measured self-efficacy and patient activation among Black patients with SLE. Similar studies also show increased self-efficacy in SLE when patients are paired with a PN . Nonetheless, there are few rheumatology-specific patient navigation programs despite the known disparities in care and complex management issues in this population. The role of PN can be filled by a number of people including lay people , case managers , social workers , nurses and former patients with the same disease . The vast majority of PNs are nurses or lay/community healthcare workers , however a few programs have successfully used medical students as PNs. One such program at Case Western Reserve University offered first year medical students the opportunity to serve as PNs as part of their Health Systems Science curriculum . Focus groups of students from this cohort demonstrated remarkable impact on learners across multiple domains. Others have focused on the utilization of medical student navigators to teach empathy in medical school . However, each of these approaches fail to assess the patient perception of these programs. Furthermore, there are very few studies that are centered on the patient’s experience of navigation outside of medical outcomes. One systematic review of PN programs for patients with chronic disease explicitly called for more studies that analyzed patients’ experiences . One of the few studies that did examine patients’ navigator preferences demonstrated that patients valued not only the logistical role that their PNs could play, but the emotional support as well . Further, this study highlighted the ways in which the PN was able to bridge the gap between the complex healthcare system and the biopsychosocial needs of the patient. However, as with much of the research on patient navigation, this study was conducted specifically in the setting of cancer care and utilized lay professionals as PNs. 
Our novel PN program is built on the foundation and proven efficacy of prior models. This program was created in response to a 2021 grant from Aurinia Pharmaceuticals with a goal of "...increasing access to equitable healthcare for people living with lupus nephritis in underserved communities and create meaningful impact to patients." Five sites were awarded $50,000 to implement a PN program at their institution. Four sites hired a part-time nurse or social worker to serve as PN. Ours was the only site to specifically hire medical students to act as PNs. Second-year medical students were asked to apply for the program, underwent training, and were provided a modest stipend for their time. In line with the goals of the grant, patients with known barriers to healthcare, a history of missed appointments, and/or low health literacy were offered the opportunity to be connected with a student navigator. The novelty of our program lies in the fact that PNs were medical students who maintained relationships with their assigned patients over the course of two or more years. This confers a number of benefits for patients and students alike. For example, medical students are granted access to the electronic health record through their school, which allows for easy onboarding. Additionally, hiring outside PNs requires overcoming tremendous bureaucratic hurdles, from access to clinic sites to handling protected health information. This became a major obstacle for the other grant sites, delaying their start by months. In contrast, since our medical students already operate within the hospital system, they were able to begin helping patients from day one. Further, students at this level of training have already begun generating a medical vocabulary that allows them to interpret medical records in a way that navigators lacking this training cannot. Moreover, students were eager to learn and forge relationships with patients, as this served as an opportunity to experience early clinical encounters. Given the complex nature of rheumatological care and the urgent need for interventions addressing health disparities in this population, our study aimed to (1) assess the patient experience of using students as patient navigators and (2) identify areas for improvement of the program. This intervention is novel both in its use of medical students as navigators and in its focused assessment of the patient experience of navigation in a disease population in which PNs have been less frequently utilized. Based on our results, we also identified future areas of improvement and possible expansion into different clinics.
PN involvement The first cohort of PNs included seven second-year medical students of various backgrounds and languages spoken. Over the course of a few months, each PN was connected with 8–10 patients. In many cases, patients whose native language was not English were able to be paired with navigators who spoke their native tongue. Students were able to access their patients' medical records, attended rheumatology clinic bimonthly, and had support from clinic staff. PNs were given training regarding their role and were instructed to bring medical concerns to the attention of the fellows and attendings. PNs followed their patients for one year (sometimes more), often attending medical appointments, calling patients to check in, and helping connect patients with existing resources. 2.3. Data collection We conducted a cross-sectional study of patients enrolled in the first cohort of the program. One to four months after completing their first year in the program, patients were contacted by phone to fill out a questionnaire ( ) to better understand their experience. During this time, many of the patients continued to benefit from the program; however, they were asked to reflect on the first year specifically. Patients were contacted by a member of the team who was not their assigned PN to limit response bias. The survey items were compiled based on common tasks PNs were able to assist patients with. There was also a section designed for open-ended patient responses. A preliminary survey was shared with select patients for review to incorporate patient feedback in the study design. 2.4. Data analysis Survey results were downloaded from Google Forms and imported into Google Sheets. Responses were tabulated and the percentage of responses corresponding to the desired value was calculated using formulas and functions in Google Sheets. Results were then imported into GraphPad Prism v9 to be formatted visually ( ). For open-ended questions, data were analyzed using inductive thematic analysis as outlined by Braun and Clarke . Two members of the research team (JW, DL) served as independent reviewers of the data. Reviewers began by familiarizing themselves with the survey responses. They then independently open-coded responses, which were subsequently grouped into initial themes. These themes were reviewed to ensure that they accurately depicted the codes and the data as a whole. Strong consensus between reviewers regarding themes was reached and representative theme titles were generated. Representative quotations for each theme were selected and agreed upon by members of the research team. One reviewer (DL) utilized Delve in vivo qualitative analysis software to ensure accuracy of coding and that no key themes were missed. Data saturation was achieved with 39 of 44 patients providing free-response answers .
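For illustration only (the study used Google Forms, Google Sheets and GraphPad Prism rather than R, and the variable names and values below are hypothetical), the kind of tabulation described above could look like this:

```r
# Hypothetical survey tabulation: share of respondents who were satisfied
# (Likert rating of 4 or 5) and share helped among those who requested help.
satisfaction <- c(5, 4, 5, 3, 4, 5, 2, 4)            # made-up Likert ratings, 1-5
mean(satisfaction >= 4)                              # proportion satisfied or very satisfied

requested_help <- c(TRUE, TRUE, FALSE, TRUE, TRUE)   # e.g. help with scheduling appointments
received_help  <- c(TRUE, TRUE, FALSE, FALSE, TRUE)
mean(received_help[requested_help])                  # proportion helped, among requesters only
```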
Results 3.1. Quantitative results Out of the 71 patients contacted, 44 completed the questionnaire (62% response rate), with relatively equal participation between navigator assignments (ranging between 5–7 patients per navigator). When asked about their satisfaction with the program, 84% reported a satisfaction of ≥ 4 on a 5 point Likert scale (where 4 is satisfied and 5 is very satisfied).
Analysis of patient responses to which areas they wanted/received help showed that, for those interested in assistance, 94% of navigators were able to schedule appointments, 85% were able to get in touch with the doctor, 87% could assist with filling prescriptions, 85% were able to provide additional clarification from clinic visits, 84% were able to answer medical questions, 81% could remind patients of appointments, 76% were able to provide emotional support, and 59% were able to assist with filling out forms. The category with lowest percentage of assistance was arranging transportation to clinic visits where 50% felt that the navigators met their needs. These data reflect the percentage of patients who initially indicated interest in receiving help and ended up receiving that assistance. Patients who did not ask for help in a specific area, but nonetheless received that assistance were excluded from this analysis. These results are summarized in . The responses for each patient are individually tabulated in ( ) to show which patients requested and received help in specific areas. When asked about the lasting impact our program had, 91% of patients agreed or strongly agreed they felt more cared for by their healthcare team, 84% agreed or strongly agreed they were more motivated to better care for their health as a result of the program, and 84% agreed or strongly agreed they felt their healthcare team now better understands the challenges they face in daily life (where agreed or strongly agreed was a 4 or 5 on a 5-point Likert scale, respectively). 3.2. Qualitative results 3.2.1. Positive feedback When asked about the most helpful aspect of having a PN, patients better elucidated some of the benefits of the program. The following themes emerged from our qualitative analysis of the free response data. 3.2.1.1. Theme 1. Ease of contacting their doctor. A number of patients specifically highlighted that they benefited tremendously from having a direct link to their doctor through their PN. Patients emphasized that they did not have their doctor’s phone number or email and that contacting them through the hospital phones takes too long. Having a person to advocate for them directly made getting treatment much easier. One patient emphasized this point, “There were times when I had pain and could not get in touch with my doctor, but I was always able to get in touch with [PN] and he helped advise me what to do.” Many patients took note of how quickly they were able to have responses to questions or concerns they had. One patient described it as follows: “When I had an issue, she was able to get in touch with my doctor and find out the answer in such a quick way that I never would have been able to do on my own. For example, I’ve been having headaches and lately they have been affecting my eyes. I texted [PN] and she answered in 5 minutes and was giving me answers really quickly. She asked me medical questions before speaking with the doctor and resolved my issue within the hour. I was very impressed with her medical knowledge and responsiveness.” 3.2.1.2. Theme 2. Arranging appointments and prescription refill assistance. Other patients pointed to the aid they received with previously cumbersome tasks, such as filling prescriptions and setting up appointments, as a particularly strong benefit of the program. 
Patients noted that sometimes their prescription would run out and rather than having to make an appointment or call the office (which can have delays), their PN was able to contact their doctor to have the prescription filled in a timely manner. This often led to alleviation of pain or other symptoms much quicker than in the past. Furthermore, PNs in the program were given direct access to the office manager who could schedule appointments for patients in a timely manner. Instead of having to call the appointment call center and waiting for the next available appointment (which may not be for months), patients could contact their PN who had authority to help schedule their appointment even if the clinic was booked. As one patient put it, “When I need something she (my PN) was there and got stuff done in a timely manner. She was most helpful in filling prescriptions and setting up appointments for me.” Another said, “I needed help getting an appointment made and I reached out to her (my PN) and she took care of it. She helped me with something I couldn’t do myself. She has made things a lot easier for me which I really am thankful for.” 3.2.1.3. Theme 3. Addressing life stressors. Additionally, patients appreciated having someone on their medical team who could fill in some of the gaps that serve as barriers to their care. For example, two patients described how their PN was able to help them fill out housing application forms. Another patient who was dealing with food insecurity shared how their PN was “able to send [her] details and locations for food pantries.” Other patients shared experiences of times in which their PN went above and beyond what they expected. “One time I was having a really bad flare up and didn’t have anyone to watch my son while I visited the doctor. [My PN] volunteered to take care of him for a few hours so I could see the doctor.” 3.2.1.4. Theme 4. Strong relationships and empathy. Patients repeatedly mentioned the relationships they were grateful to forge with their PNs. Many PNs were able to help simply provide support to patients in times of need. As one patient summarized, “She (my PN) gave me hope. She always listens to me. She gives me advice. When she speaks with me she makes me feel happy.” Another patient shared, “.the program has been great. [My PN] is very nice and she calls to check in on me which I really appreciate to know someone is looking out for me.” Yet another patient made a comment that she feels like she can relate to her PN “.like a daughter. She gives me a lot of hope when I’m feeling down. She encourages me to take care of myself. She always calls to check in on me and makes me feel better when I’m feeling lonely.” 3.2.2. Areas for improvement The overwhelming response from patients was positive. However, when prompted for constructive feedback, eight patients provided examples of ways their navigator was not able to meet their needs. Seven of these responses made reference to inadequate communication from their PN wishing the student had contacted them more regularly to check-in or remind them of appointments. Two patients said they had wanted help with filling out housing forms that their PN was not able to assist with, however recognized that this may be out of the scope of their role. Similarly, one patient felt she was unaware of how the navigator could be of assistance. 
Finally, one patient wished their navigator was able to accompany her to appointments more regularly and felt that her PN was not “competent enough in terms of medical knowledge.”
Discussion and conclusion 4.1. Discussion This cross-sectional study focused on the patient experience of a medical student patient navigation program in Brooklyn, New York. While prior studies have demonstrated the efficacy of patient navigation programs in health outcomes on a number of different metrics, few have explored the patient perspective on these programs.
Furthermore, the use of medical students as PNs has not been widely implemented despite calls from academic organizations to expand their use . Our findings demonstrate high levels of patient satisfaction with employing medical students as PNs for patients with significant barriers to care. Students were able to be most helpful to patients in scheduling appointments, contacting patients’ doctors, filling prescriptions, answering medical questions, addressing life stressors, and providing emotional assistance. As a result of the program, many patients expressed feeling more cared for, feeling more motivated to care for their health, and feeling better understood by their healthcare team. The importance of these results is bolstered by the strong association between a number of these psychosocial factors and clinical outcomes (including medication adherence) [ – ]. Some have focused on the benefits of student navigation programs on medical education by providing students with longitudinal, health systems-based, value-added patient experience [ – ]. However, we argue that the use of medical students as navigators confers a number of unique benefits to patients as well. First, by implementing a PN program using a number of medical students, the case-load per navigator can be diffused. Second, medical students are eager to gain early access to patient experience. Thus, compared to other types of navigators, students may be more proactive in reacting to patient needs. Additionally, clinics wishing to implement a PN program need not hire an additional staff member who requires training, salary, and clearance. As students already exist within the university medical system, they have a basic understanding of medical jargon and systems and may require less training than other potential PNs. While these factors may not hold true everywhere, compared to other recipients of our PN grant, our students were able to begin seeing patients months before any hired navigators at the four other sites. Furthermore, students’ basic medical knowledge allows them to serve a more expansive role than non-clinical navigators . It should be noted that no head-to-head data exist comparing student navigators to others. Nonetheless, in conjunction with other similar models that have improved health outcomes with the use of student navigators , our patient-centered data indicate high levels of satisfaction from patients with this model. 4.2. Strengths and limitations The present study is not without limitations. First, given that this was a pilot program, there was variation in quality and duration of follow up based on patient and PN factors. We did not seek to quantify the amount of time spent on each patient, instead choosing to analyze the program as a whole. Some patients had weekly contact with their navigators, while others only interacted with them a number of times whether due to lack of need or lack of follow up by either party. This was reported as feedback from a number of patients as noted above. Furthermore, as the present program enlisted medical students of varying backgrounds and interests, naturally, some pairings developed more therapeutic relationships than others and we did not conduct individual evaluations by navigator. Second, while our response rate was acceptable at over 63%, our results may be confounded by nonresponse bias. Next, it is important to note that this study focused solely on the patient’s experience of the program rather than looking at direct disease outcome measures. 
Numerous studies have demonstrated that PNs can be quite efficacious. We, therefore, chose to examine patient perspectives specifically. It should also be noted that patient experience responses tend to be positively skewed and should be used as a starting point for further improvements in patient care . Finally, this study was limited in scope to a single outpatient setting in a minority urban population. The patients served have many health problems and significantly lack health resources. By design, this may contribute to selection bias as our program chose to intervene for our most at-risk patients rather than a random sampling. These are patients that have a clear need for a program like this, while other patient populations may not demonstrate the same robust results. At the same time, PN programs have been shown to be most beneficial in low income populations, especially in communities that have been historically disadvantaged . Our study corroborates this data and serves to strengthen the support for employing PNs for underserved patients. 4.3. Conclusion In this study of 44 patients enrolled in a novel medical student patient navigation program, patients expressed high levels of satisfaction across a number of different domains. Patients’ specific needs were met in the vast majority of cases spanning appointment scheduling to emotional support. As a result, most patients reported feeling more cared for by their healthcare team, felt their healthcare team better understood their challenges, and were more motivated to better care for their health. Our study builds on existing data that demonstrates that PN programs can be mutually beneficial for students and patients. While additional longitudinal data is needed to better assess follow up and disease-specific outcomes, our initial data suggest that patients, at least subjectively, benefit greatly from the use of students as care coordinators. With this model’s strong educational benefit, further research is warranted to better assess the efficacy of students compared to lay navigators in achieving patient-centered disease outcomes. 4.4. Practice implications Patients from lower SES backgrounds continue to struggle to navigate America’s increasingly complex healthcare system. PNs have been proven to be an effective, low cost, and personal way to address this growing issue. Medical student patient navigation is an easily scalable approach that can be used in medical schools throughout the country in a variety of departments. Medical schools throughout the country have been moving towards earlier clinical experience for their students. Additionally, medical students often lack longitudinal care opportunities early in their training. Many medical students have the opportunity to work with at-risk patients in a student-run free clinic (as is the case in our institution). However, these encounters are limited to a single visit and lack the critical longitudinal follow up that allows for a more robust understanding of the issues these patients face in accessing care . Choosing students as navigators not only provides tremendous educational benefit, but fills a crucial gap in care for underserved patients. In addition to the stated roles outlined above, students may have the opportunity to identify contextual errors (errors that result from overlooking patient contextual factors) and learn to avoid these errors in their future practice. 
This program’s preliminary success has led to its expansion into other departments in our hospital and is being considered for inclusion into the medical student curriculum at our institution. Future directions for this model are vast and promising. These may include: (1) providing medical students with additional training and clarifying communication expectations; (2) a social worker to whom students can refer patients for more complex issues; (3) reflection exercises for students to process their patient experiences and share resources; (4) tailored matching of student-patient pairs based on both patient and student characteristics and/or interests; (5) patient-led teaching sessions about barriers they face to care. Our hope is that with further expansion and research, medical student-based patient navigation can become a pillar of both medical student education and patient care.
Medtronic’s Hugo
fed5b799-fd1b-4865-b1c0-1ed8ebadac07
11438614
Robotic Surgical Procedures[mh]
The field of urology has been at the forefront of surgical innovations, adapting novel techniques and technologies to benefit patient outcomes. This is well illustrated by historical milestones such as the use of cystoscopy in the late nineteenth century , emergence of laparoscopic surgery in the early twentieth century , and most recently, the advent of robotic surgery in the twenty-first century. Since U.S. Food and Drug Administration (FDA) approval in 2000, the da Vinci ® surgical system developed by Intuitive Surgical, has ushered in a new age of minimally invasive surgery; with its robotic-assisted surgical platform boasting superior dexterity and precision as well as visualisation compared to prior laparoscopic techniques . This system has dominated the robotic landscape due to its high-quality and long-standing intellectual patents . However, the expiration of these patents in 2019 has allowed new competitors to emerge. Medtronic's Hugo ™ Robotic-Assisted Surgery (RAS) system stands out as a particularly promising alternative, displaying advanced robotic technology, artificial intelligence, and cutting-edge imaging capabilities . Notably, it enhances the existing robotic platforms by introducing an open console, fostering seamless communication amongst the surgical team . Additionally, the system features a modular set of patient arm carts, amplifying versatility in surgical approaches. These comparisons can be viewed in Figs. and , respectively. Robotic-Assisted Radical Prostatectomies (RARP) took over from laparoscopic techniques in the early 2000s at centres capable of implementing the new robotic platforms . This transition was credited to the successful mitigation of laparoscopic limitations and downsides to open surgery, with enhancements in operative, functional, and oncological outcomes . Currently, reported cases at centres using the Hugo ™ RAS are still limited due to the novelty, but with more regulatory boards granting approval for its use, the numbers are growing. The team at Guy’s and St Thomas’ NHS Trust, a well-established high-volume robotic surgery centre, is conducting an ongoing comparative study using the IDEAL framework to evaluate the Hugo ™ RAS system in urology. Preliminary findings suggest that this system offers peri-operative and oncological outcomes comparable to those of the da Vinci ® system for RARP, indicating that Hugo ™ RAS is a safe and viable alternative . As surgical teams gain experience on the platform and confidence grows, its implementation in a wider array of procedures will follow. RARPs are the most commonly conducted procedure with robotic assistance within urology , and therefore, a comprehensive understanding of outcomes and morbidities of this procedure has been well established. This creates practical areas for analysis of new systems to ensure comparable efficacy and safety. Data suggesting equivalence between platforms as a minimum can lead to the advantages brought by updated systems to optimise procedures. Greater affordability of Medtronic’s Hugo ™ RAS could improve access to robotics at many centres. With the Hugo ™ RAS platform becoming integrated into an expanding number of centres worldwide for use in urology and particularly RARP procedures, we set out to systematically review experiences of this new system, comparing its safety and feasibility to other clinically available robotic systems. 
This study used the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines for protocol creation, and was registered on the Prospective Register of Systematic Reviews with the reference CRD42024504844. We used a comprehensive search strategy to identify relevant articles comparing the safety and efficacy of the Hugo ™ RAS system with other robotic surgical systems for prostate cancer (Appendix 1). The population was defined as patients aged 18 years or older with prostate cancer (clinical stage T1 to T3, N0, M0) who underwent RARP; the intervention was the Medtronic Hugo ™ RAS system; the comparator, where possible or published, was existing robotic surgical outcome data from other robotic-assisted surgical systems; and the outcomes were operative, functional, and oncological. Following a scoping search, terms were adapted to include relevant synonyms. The following databases were used to carry out the search: MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, Scopus, Web of Science, clinicaltrials.gov, and the World Health Organization (WHO) international clinical trials registry platform (ICTRP). Manual forward and backward citation searching was conducted for eligible studies, as well as hand searching of the five most cited urological/robotic journals (European Urology, BJUI, World Journal of Urology, The International Journal of Medical Robotics and Computer Assisted Surgery, and Journal of Endourology [2018–2024]). Studies were collated using Endnote Clarivate, where duplicates were removed. The initial search was conducted on 22/01/2024, with a top-up search carried out on 09/09/2024 that identified additional articles for inclusion. Two authors (MST, AS) independently conducted each stage of screening, with any differences resolved in consensus meetings or, if needed, through consulting a third reviewer (BC). An initial screening tool was applied to the titles and abstracts to include studies with direct comparisons between data from robotic platforms or descriptive comparisons between Hugo ™ RAS experiences and current practice (other robotic platforms, laparoscopy, or open surgery). We included case series, cohort studies, case–control studies, randomised controlled trials, systematic reviews, and meta-analyses meeting our inclusion criteria. We then conducted full-text review of included papers according to our predetermined PICO criteria . We did not exclude relevant studies that failed to report the outcomes of interest to this review, but rather summarised the key findings of any such studies. Reasons for exclusion were recorded and can be viewed in Fig. . Whilst there were no limitations based on language, logistical constraints meant that non-English papers were retrieved only if they had an accessible English abstract or full-text translation. The same reviewers (MST, AS) then extracted data on study characteristics, participant characteristics, intervention characteristics and outcomes, as well as any study funding sources, for analysis. Risk-of-bias assessment implemented the ROBINS-I (Risk of Bias in Non-randomised Studies of Interventions) approach to assess methodological quality, appraising the following domains: confounding, participant selection, intervention classification, intervention deviations, missing data, measurement of outcomes, and reporting of results.
Data analysis for quantitative findings included risk ratios (RR) for dichotomous outcomes and mean differences (MD) for continuous outcomes, accompanied by 95% confidence intervals (CI). In the case of observational studies such as case–control studies, non-randomised trials and cohort studies, we intended to highlight effect estimates as adjusted RR or odds ratios (OR), with corresponding 95% CI. If such data were unavailable, we used unadjusted RR or OR with 95% CI, P values only, or percentages in tabular format. Figure documents the search results and exclusions conducted at various stages of screening. The initial search screening tool found 36 potentially relevant studies, with 14 assessed as eligible following full-text review. The top-up search then found an additional 8 studies. One of these was the full-text update to a conference abstract included initially (Brime Menendez et al. 2024 paper for 2023 abstract ). A further two were studies from the same centre with more recent data, thus replacing the initially included paper (Gandi et al. 2024 update over Totaro et al. 2024 and 2022 [ – ]). Table presents an overview of the key characteristics of the included articles. Of the 19 included studies, 9 were comparative, almost exclusively between Hugo ™ and da Vinci ® , with a single paper by Rocco et al. including the CMR Versius ™ system. Notably, two of the studies lack quantitative values for their results, yet they offer valuable insights through narrative explanations. These studies, conducted by Rocco et al. (2023) and Sarchi et al. (2022) , contribute diverse perspectives. Sarchi et al. conducted a cadaveric study, furnishing a comprehensive guide for the setup and docking of the novel Hugo ™ robotic platform for RARP. The conference abstract by Rocco et al., meanwhile, evaluated safety aspects, analysing system errors across three platforms: da Vinci ® , Hugo ™ , and Versius ™ . The study concluded that Hugo ™ experienced three non-critical alarms and an instrument change, yet these events had no discernible adverse clinical or surgical impact (Fig. ). The quality of eligible studies was assessed using the ROBINS-I tool , showing serious risk of bias for four studies and moderate risk in the remaining studies. Figure demonstrates the proportions of studies at each level of bias risk for the domains alongside the overall risk-of-bias performance. The main contributing domain to bias was confounding, attributable to a majority of eligible studies lacking randomisation. Bias in measurement of outcomes was a factor for moderate risk in a high proportion of studies, as assessors could not be blinded to the intervention and outcomes could be affected by their knowledge, such as a more cautious approach to the procedure in early cases of the implementation of a novel robotic system. One paper had serious risk in the missing-data domain owing to its exclusion of cases converted to open surgery. Also of note, the study by Antonelli et al. was an open-label, non-randomised clinical trial . Due to the high heterogeneity between the included studies, a wholly narrative synthesis was undertaken, forming a thematic analysis through tabulation of the studies' results and their assignment into a framework of structured themes. This consisted of the following factors: operative time and its breakdown, safety of the Hugo ™ RAS, participant demographics, and transfer of skills.
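As a worked illustration of the effect-estimate arithmetic named above (risk ratios for dichotomous outcomes and mean differences for continuous outcomes, each with a 95% CI), a minimal Python sketch of the standard formulas follows. It is not code from any included study, and the counts and timings in the usage lines are hypothetical placeholders.

import math

def risk_ratio(events_a, total_a, events_b, total_b):
    # risk ratio with a 95% CI computed on the log scale
    rr = (events_a / total_a) / (events_b / total_b)
    se_log = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

def mean_difference(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    # unpaired mean difference with a normal-approximation 95% CI
    md = mean_a - mean_b
    se = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
    return md, md - 1.96 * se, md + 1.96 * se

# hypothetical example: complication events of 3/50 (Hugo) versus 5/48 (comparator)
print(risk_ratio(3, 50, 5, 48))
# hypothetical example: console time 160 +/- 30 min (n=50) versus 150 +/- 25 min (n=50)
print(mean_difference(160, 30, 50, 150, 25, 50))

In practice no pooled analysis was attempted here because of the heterogeneity noted above; the sketch only makes the per-study arithmetic concrete.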
The Hugo ™ RAS employs a modular arm setup with individual carts to allow for greater versatility of docking setups based on various procedures, laterality, and location , in addition to providing a more adaptable platform whose configuration can be optimised to surgeon or operating room requirements. This does, however, bring forth the issue of implementation within surgical teams experienced with the da Vinci ® platform. Patient positioning and trocar placements are mostly unchanged, but cart distances and angulation differ to achieve correct triangulation. Studies found that docking times for the initial cases greatly exceeded da Vinci ® times but improved quickly, from 15 min down to 7 min, as explained by Alfano et al. . Median times nonetheless remained longer than with da Vinci ® , with Olsen et al. highlighting a statistically significant difference ( P = 0.04) of 8 min with Hugo ™ against 3 min. Total operative times are comparable between eligible studies for Hugo ™ , whilst console times contain more variation based on procedure approach and experience with robotic systems . Gandi et al. even show comparable total operative times between systems in a matched cohort, with Shepherd et al. illustrating the same for console times . When compared with da Vinci ® , findings conflict between studies: Bravi et al. found significantly ( P < 0.05) raised operative times for Hugo ™ , yet Ragavan et al. concluded significantly ( P < 0.05) reduced Hugo ™ times . Meanwhile, Olsen et al. determined no clinically relevant difference between systems . Menendez et al. have produced the same conclusion, even with statistically significant reductions in the Hugo ™ cohort when steps of the procedure are broken down . This could possibly be attributable to confounders such as patient disease stage and surgical team experience. Antonelli et al. remark that the early longer timings can also be attributed to more meticulous care taken due to the novelty of the device . Table collates the findings of these 12 studies for operative timings in Hugo ™ alongside any comparison. Current understanding is that operation times are, on average, longer for Hugo ™ than for da Vinci ® , due to challenges with the docking setup of the modular arm carts. With experience, however, docking improves and isolated console times shorten, even becoming faster than on da Vinci ® . One of the key components for transitioning to a novel robotic platform is the transfer of skills from previously used systems for both the surgeon and surgical team. Eligible studies have highlighted that centres which will be implementing the Hugo ™ RAS undergo adequate dry and wet lab training provided by Medtronic, which is vital in assisting the team with adapting to the new platform . The new system designs, namely the pistol-like hand controls and modular arms, were the aspects requiring the longest learning curve in the studies. Nevertheless, surgical experience with a robotic platform has been shown to transfer to Hugo ™ rapidly, with Ng et al. and Ou et al. showing transitions in under ten cases . Antonelli et al. present a comprehensive longitudinal analysis of procedure timings, concluding that due to the novelty of the Hugo ™ system, both operative time and setup duration are initially longer . However, with increased experience, these times improve, reflecting the expected learning curve associated with the device.
Table aggregates the five relevant studies assessing transfer of skills and learning curves for Hugo ™ , highlighting short curves for early adoption of the new system, which are shorter still for surgeons with prior robotic experience, allowing for unhindered transfer from da Vinci ® . Studies into robot-naïve surgeons will be the next step to understanding learning progress for the Hugo ™ . Understanding the safety, feasibility, and operative outcomes from research into new robotic platforms requires the use of representative patient cohorts. Being selective about the cases performed with Hugo ™ is important for maintaining patient safety while teams are still unacquainted with its capabilities and limitations. Even so, selection criteria should be adjusted with increasing surgeon confidence to enable comparable patient demographics with previously used platforms. Olsen et al. found that patients for whom Hugo ™ was used had a larger BMI range and higher cancer staging . The other comparative studies also lacked any statistically significant difference between their patient cohorts, indicating a level of reliability to their findings [ , , , ], overall conveying the unrestrictive nature of patient selection and the range of case complexity in the Hugo ™ cohorts. Therefore, the comparisons made with da Vinci ® are between similar oncological and patient-specific demographics, ensuring robust internal validity. In order for a recently launched intervention, particularly a robotic platform, to gain widespread acceptance from surgeons and their respective centres, it has to show safety in its early cases that parallels its precursor. In the instance of Hugo ™ , rates of complications and positive surgical margins are key variables to consider as comparable to da Vinci ® . Following data synthesis, it is evident that although complications (Clavien–Dindo ≥ 2) may arise with either system, it needs to be differentiated whether the device was responsible or linked, or whether an inherent risk of the procedure itself, such as wound closure, was the cause. Occurrences of conversion to laparoscopic, alternative robotic platforms, or open surgery are another crucial aspect of safety assessment. Olsen et al. demonstrated that an experienced robotic surgeon can successfully switch from da Vinci ® to Hugo ™ without a clinically relevant dip in performance; although no improvements were observed for operative outcomes, there were also no compromises due to complications . The complications that did arise were due to the quality of the closure and had been previously described for da Vinci ® . This study also showed that positive surgical margin rates were comparable between the systems, even though marginally higher in Hugo ™ . Bravi et al., Shepherd et al., and Gandi et al. also recognised no statistically significant difference between the platforms for complications and positive margin rates; however, an issue with the latter is its exclusion of cases converted to open surgery [ , , ]. Moreover, Dell’Oglio et al. found non-negligible rates of positive surgical margins when comparing with other related papers. Alfano et al. illustrated safety with the use of Hugo ™ at their centre owing to a lack of intra-operative complications, a single non-robotic post-operative complication, and positive margin rates in accordance with the other studies. This result was supported by Ou et al., Marques-Monteiro et al., and Territo et al.,
with the latter two studies also indicating that no mechanical failures emerged . With regard to malfunctioning and technological issues, four studies directly concluded that mechanical failures were absent. These were the two mentioned prior as well as Sarchi et al. and Takahara et al., whilst other included papers made no mention of such faults. Andrade et al. also mention the recurring issue with arm collisions . Arm and monitor failures causing disruptions to operative flow have, however, been reported, with Antonelli et al. presenting a multitude of malfunctioning/troubleshooting events that did not lead to conversion or complication . These are to be expected as Hugo ™ RAS is currently in its first edition; therefore, recognition of these failures is key to improving safety. Bringing them to light will ensure that software issues are resolved through scheduled updates and that more major mechanical compromises are acted upon and prevented, with failures functioning as learning points for future generations of the device. March 2024 saw the implementation of a significant software update that solved prior issues with alarms and arm clash errors. This demonstrates how Medtronic is actively listening to feedback from Hugo ™ RAS users to implement changes that address faults and enhance the system. Table summarises the findings of 17 studies evaluating the safety and feasibility of Hugo ™ . It presents peri-operative outcomes, including estimated blood loss, positive surgical margins, and the presence of intra-operative complications or conversions. These metrics, used by the majority of eligible studies, highlight the efficacy and feasibility of Hugo ™ as an alternative to da Vinci ® . Furthermore, no eligible studies mention instances of conversions to open surgery, laparoscopy or another robotic platform. Another domain to consider is functional outcomes, particularly continence and erectile function. Data on these variables are not always readily available in the literature due to the novelty of the research. However, seven of the included studies do incorporate them, with early results being encouraging. Two studies delved into erectile function through use of IIEF-5 scoring and found comparability . Urinary continence recovery at 1 and/or 3 months was found to be acceptable and in most cases comparable; only one study concluded a significant difference in favour of da Vinci ® [ , – , ]. With current research demonstrating an absence of serious peri-operative complications, and post-operative complications unrelated to the robotic system, there is a promising outlook for the safety of Hugo ™ . Current literature delving into varying aspects of the new Hugo ™ RAS platform by Medtronic (for RARP in prostate cancer patients) is in common agreement on the safety and applicability of the system. This is evidenced by a multitude of factors such as the short learning curves for transfer of skills from other robotic platforms or in robot-naïve surgeons, absence of intra-operative complications or conversions, comparable peri-operative outcomes, and competitive console times [ – , , , , ]. Experiences at multiple centres and with different surgeons have been positive in showing minimal change between this novel platform and the pre-existing conventional system, da Vinci ® .
This creates a foundation for more centres worldwide to involve the Hugo ™ platform in their cases, widening understanding of the advantages it may bring to patient outcomes and safety, whilst learning of the shortcomings it may possess due to the limited experience Medtronic have in the robotic market. Concurrent improvements to the platform, adaptations for a greater range of uses, and developing competition in the market that pushes companies to deliver better products are positive implications, ultimately enabling surgeons to have a system that suits their needs and brings refinements to patient care with a rivalling cost-effectiveness that centres can make the most of. Cost analysis comparing the Hugo ™ and da Vinci ® has revealed a saving of 11% in total financial burden with the Hugo ™ . This cost advantage makes Hugo ™ a more affordable option, which could significantly lower the barrier for many centres looking to adopt robotic surgery. Another central finding of the review was the lack of high-quality evidence surrounding the topic, as the system is new and no high-quality randomised trials have yet been attempted. Although the available literature points towards Hugo ™ being a safe and appropriate alternative to da Vinci ® with comparable outcomes, the evidence base consists mostly of observational studies. Following risk-of-bias assessment, a high proportion of studies were graded as moderate-to-serious risk of bias as a result of performance and detection bias from a lack of blinding, as well as selection bias from an absence of randomisation or unclear allocation criteria. Furthermore, many of the available articles contain qualitative accounts of their experience, and among those that are quantitative there is much heterogeneity of measured parameters, preventing reliable pooled analyses. This study’s limitation stems from the scarcity of high-quality randomised controlled trials, which impedes complete confidence in the conclusions that can be drawn. These, however, are inherent weaknesses from Hugo ™ still being in the early stages of implementation, with a growing number of centres using the platform. Further larger multicentre studies supporting these early results are the next stage in the evaluation and implementation of this new technology, following the IDEAL principles of evaluation . To our knowledge, there is currently a single study with pooled statistical analysis of the comparison between Hugo ™ and da Vinci ® . The conclusions from this study complement our findings, demonstrating the wide range of heterogeneity within the literature as well as comparable surgical, oncological, and functional outcomes, thus underscoring the safety, feasibility, and efficacy at this early stage while highlighting the necessity of more complete data. A key distinction of our paper, in contrast to their study, lies in the more comprehensive scope of our analysis, as we incorporate 19 articles instead of their 12. In conclusion, this systematic review assessed the safety and feasibility of the Medtronic Hugo ™ Robotic-Assisted Surgery (RAS) system for Robotic-Assisted Radical Prostatectomy (RARP) in comparison to other robotic surgical systems, particularly the da Vinci ® . The study examined operative timings, transfer of skills, participant demographics, safety, and oncological outcomes. Although limited by a scarcity of high-quality randomised controlled trials, the current evidence suggests that Hugo ™ is a safe alternative with comparable outcomes.
The findings encourage wider adoption, anticipating refinements and cost-effectiveness, whilst highlighting the need for rigorous research to strengthen the evidence base for the evolving Hugo ™ platform in other urological procedures or specialties.
Objectivity applied to embodied subjects in health care and social security medicine: definition of a comprehensive concept of cognitive objectivity and criteria for its application
0d78cff4-5864-4990-bcd2-5bb0b42e3714
5833064
Preventive Medicine[mh]
Objectivity is a contested concept in health care and social security medicine. ‘Objective finding’ is the traditional criterion of objectivity, based on the biomedical model of disease. It is also the officially sanctioned criterion of objectivity in social security . However, in most descriptions of mental illnesses and conditions involving so-called ‘medically unexplained symptoms’, no objective findings are present. In such cases, an issue arises about criteria of objectivity that could substitute for objective findings. Moreover, in medical practice, it seems that objectivity is affected by personal or social interests: human subjectivity lurks everywhere. In this article, we define an epistemological concept of objectivity that takes the pervasiveness of human subjectivity into account, and specify the practical criteria for its application. In an analysis of the objectivity of work disability assessments in medical certificates for social security, we explore whether the criteria are useful and fruitful or not. Our analysis of the concept of objectivity takes a pre-scientific starting point: not from objective findings of what is in the body alone, but from the perception of a patient or claimant of social security benefits as a whole person. This presupposes a holistic concept of the human being as constituted by body, soul and spirit, forming an integrated whole . Accordingly, a common-sense understanding of objectivity is the point of departure for our analysis. In human social life certain data, facts or states of affairs exist that are there for everyone to perceive, examine or discuss together with others. This assumes that both perceptions and arguments have a public character. People can confirm, or disconfirm, the validity of one another’s perceptions, arguments or statements. In this sense, a basic common-sense understanding of objectivity means in principle validity for everyone . Philosophers have defined the concepts of objectivity and subjectivity in fruitful ways . It is relevant for this study to distinguish between three senses of objectivity and subjectivity: ontological, epistemological and ethical. We shall use definitions of ontological and epistemological (or epistemic) objectivity by John Searle as our starting point. Searle has defined the concept of ontological objectivity (hereafter O-objectivity ) as follows: [T]he ontological sense [of objectivity] refers to the status of the mode of existence of types of entities in the world. Mountains and glaciers have an objective mode of existence because their mode of existence does not depend on being experienced by a subject (, p. 44). ‘Objectivity’ as an ontological term refers to a mode of existence that is independent of experience by a subject. For our purpose, we maintain that the ontological sense of objectivity, as traditionally used in the natural sciences, refers to material entities. It also represents a common-sense view of reality according to which what is there for everyone to perceive and agree upon is material reality. We now come to the definition of O-objectivity as applied to medicine. In the natural sciences – of which biomedicine is a part – O-objectivity was earlier often regarded as the only concept of objectivity. To say that an objective finding is O-objective is to say primarily that what is found is something that exists ‘outside’ experience . An ‘objective finding’ of this kind is pathological ‘reality’ as seen and felt by a surgeon or pathologist. 
The concept of the objective finding so defined presupposes that neither the patient’s nor the physician’s consciousness (subjectivity) affects the content and rigour of a medical assessment. We wish to emphasize that in certain biological systems such as the human being, subjective reality is fundamental . Searle writes that the concept of ontological subjectivity ( O-subjectivity ) refers to a mode of existence of a kind that exists ‘only as experienced by some human or animal subject’. Examples are ‘pains, tickles, and itches, as well as thoughts and feelings’ (, p. 44). Other examples of O-subjective reality are: illness, pain, anxiety, depression, well-being, quality and meaning of life. Thus, subjective reality exists, but only as entirely dependent on consciousness. Subjective realities have a quality of felt experience, or awareness. To describe them correctly one needs to take a first-person viewpoint, i.e. the viewpoint of the experiencing subject. In the present context, this means the view of the patient or claimant. It has a first-person ontology, as Searle maintains (, p. 52). Aspects of human communication also have a first-person viewpoint, sometimes expressing the experience of being not only an ‘I’, but also a ‘we’ (, p. 43f). At other times, O-subjectivity in human communication embodies an experience of the other person as a ‘you’, as Martin Buber has clarified in relation to genuine dialogue . This is a second-person viewpoint. Both the first- and the second-person viewpoints express the specific character of human consciousness, which is an ontologically subjective phenomenon. When discussing epistemology, following Nicholas Rescher , we use the term cognitive objectivity ( C-objectivity ) as a synonym of epistemological objectivity. And we use cognitive subjectivity ( C-subjectivity ) as a synonym of epistemological subjectivity. With the concept of C-objectivity, ‘a statement is considered objective if it can be known to be true or false independently of the feelings, attitudes, and prejudices of people’ (, p. 44). With regard to science, Searle maintains that ‘[s]cience is indeed epistemically objective in the sense that scientists try to discover truths that are independent of anyone’s feelings, attitudes, or prejudices’ (, p. 45). He adds: ‘So the fact that consciousness has a subjective mode of existence does not prevent us from having an objective science of consciousness’ (, p. 45). It is possible to have a science of psychology, for example. Descriptions both of ontologically objective entities and of subjective phenomena can be C-objective. Like other sciences, medicine is a C-objective science. The American Medical Association’s Guides to the Evaluation of Permanent Impairment display a C-objective definition of objective findings, typically defining ‘objective finding’ in terms of quantitative measurements (‘Objective Test Results’): […] for example, X-rays, computed tomography (CT), magnetic resonance imaging (MRI), laboratory tests, electrocardiography (ECG), electromyography (EMG) – with specific findings that confirm or validate the diagnosis and/or indicate severity of the particular condition. As tests they are the most objective source of data available […]. (, p. 15) The test apparatus described here is based on C-objectivity. The particular abnormal findings are to be read and interpreted by a specialist with qualified knowledge. ‘Objective finding’ is also defined cognitively as ‘a sign that can be seen, heard, felt or measured’ (, p. 1316).
This definition broadens the scope of what can be found objectively, not only by a test battery, but also, for example, by what the physician observes from the patient’s general appearance, e.g. distress and changed posture, walk, and motor activity, and also the specific findings of the professional examination of an embodied human being . Another doctor would have observed approximately the same abnormalities, i.e. the observations can be C-objective. It should be noted that C-objectivity occurs in degrees, i.e. it is possible to be more or less objective, since the assessment of objectivity may vary with the clinician or expert who undertakes it. We believe that C-objectivity is appropriate to a medical understanding of objective findings. A ‘finding’ is never like an exact photograph of reality. It is the result of an interpretation of signs of abnormalities or pathologies of the living human organism, an interpretation that is always carried out in terms of professional expertise. In medicine, objective findings must be seen primarily as C-objective. To sum up so far: We have indicated three kinds of human domains found in health care: (i) the human body as a physical entity existing independently of our perception of it, (ii) the psyche or mind (depression, angst, joy, despair, meaninglessness, etc.), which has an O-subjective existence, (iii) the embodied human being, whose ontology we come back to below. All of these human domains can be studied in a C-objective way; and in all these domains, cognitive data implies, as we conceive it, the existence of something outside the data itself as the ontological reference of the data. One recognized definition of C-objectivity is intersubjectivity, not least in science today (, p. 23). ‘Intersubjectivity’ implies that a statement is ‘established as true, probable or acceptable by procedures that in principle can be followed by everyone’ (our translation) (, p. 350). This presupposes intersubjective communicability. Defined in this way, the meaning content of ‘intersubjectivity’ is close to that of ‘confirmability’ as a methodological requirement. As will be shown, intersubjectivity is important in the analysis below. We have already introduced the first- and second-person viewpoints. We now introduce the third-person viewpoint. The third-person viewpoint is the observer’s point of view. It is the point of view of the scientist, appropriate for an object of the natural sciences. An important inference from Searle’s concepts is that, because consciousness has a first-person ontology, it ‘cannot be reduced, or eliminated in favour of, phenomena with a third-person ontology’ (, p. 52). In other words: We can have a science of the third-person aspect of consciousness, but the aspect of first-person ontology, of subjective experience, cannot be reduced to third-person ontology. We conclude that C-objectivity has to take into consideration the reality of three irreducible viewpoints: first-person, second-person and third-person. All three viewpoints are necessary to describe the living, communicating human being – as constituted by the dimensions body, soul and spirit – living in interaction with environments . Searle writes that a statement is C-subjective ‘if its truth depends essentially on the attitudes and feelings of observers’ (, p. 44). More examples of such attitudes and emotions are prejudices, passions, biases, loyalties, conformities, allegiances and unsupported opinions (, p. 5).
In daily life, people in general have a variety of such first-person standpoints, which they may recognize as important for themselves. In professional scientific contexts, however, C-subjectivity should be avoided as far as possible. What is important is to be as aware of one’s prejudices and preconceptions as one can. Concerning the essential features of C-objectivity, David Bell writes that the distinction between objectivity and subjectivity in epistemology […] serves to distinguish two grades of cognitive achievement. In this sense, only such things as judgments, beliefs, theories, concepts and perceptions can significantly be said to be objective or subjective. Here objectivity can be construed as a property of the contents of mental acts and states. (, p. 310) We agree with Bell in supporting Immanuel Kant’s insight that the above-mentioned property entails ‘presumptive universality’, which means that for a judgment to be C-objective it must, at least, possess a content that ‘may be presupposed to be valid for all men’ (, p. 310). Hence, C-objectivity is a cognitive property of mental acts and statements that in principle makes them valid for all rational humans, at least in the same historical cultural context. Practically, this is done by stating valid reasons supporting judgements, beliefs, assessments, theories, concepts or perceptions as objective. In a nutshell: C-objectivity claims general validity, based on reasons . To secure, as far as possible, general validity in concrete situations, epistemological principles are necessary in applying C-objectivity. One meta-principle is that rationality should be exercised with an appropriate goal in mind (, p. 9). Applied to the present study, this means that certificates should be written with a stated commission in mind . The application of this methodological rule is presupposed in the following analyses. Other principles for the application of C-objectivity are impartiality, accuracy and correctness. We shall come back to the use of epistemological principles below. Objectivity also has an ethical sense, linked to the concept of impartiality, which is commonly understood as a principle or criterion of justice as fairness. Often viewed as synonymous with fair-mindedness, impartiality holds ‘that decisions should be based on objective criteria rather than on the basis of bias, prejudice, or preferring to benefit one person over another for improper reasons’ . Objectively balancing conflicting interests, duties and goods in social collaboration requires impartiality. We have now defined the concepts of objectivity and subjectivity as a necessary condition for defining the cognitive concept of objectivity that takes into account human subjectivity. This concept has to avoid two types of ontology that are still widespread: (a) Cartesian substance dualism (‘everything is either matter or mind’, i.e. matter and mind are two separate and independent realities), and (b) a reductionist monistic materialism (‘everything is only matter’, i.e. subjectivity does not really exist) . We believe that both O-objectivity and O-subjectivity are necessary conditions for understanding living beings such as the human one, but they should not be regarded as separate and independent of each other. This is a fundamental aspect of a holistic and multidimensional view of the human being .
A cognitive concept of objectivity which takes into account this view of the human being in health care and social security is what we term a comprehensive concept of cognitive objectivity (CCCO). Below we shall define the CCCO and explain the criteria for its application in health care. Aim The aim of the study was three-fold. The first aim was to specify some necessary conditions for the definition of a CCCO, which enable objective descriptions and assessments even of subjective phenomena in health care. The second was to formulate some necessary criteria for the application of CCCO. The third was to investigate the application of these criteria in a collection of work disability assessments in medical certificates for social security purposes written in a mental health care context. Design The study was based on a theoretical design consisting of two interacting parts. The first part used conceptual analysis to specify some necessary conditions for the definition of a CCCO and the resulting criteria for application of the CCCO in health care and social security medicine. The analysis was carried out by having in mind a variety of assumed objective work ability or disability assessments in a collection of texts consisting of medical certificates. The second part used the defined criteria to make reasonable interpretations of the objectivity of work disability assessments in medical certificates issued for social security purposes, regarded as texts. By ‘reasonable interpretation’ we mean an interpretation allowed by the rules of grammar, semantics and logic and the context of the text. Hence, the interpretation was carried out, inter alia , from a hermeneutical point of view, which emphasizes that meaning arises within contexts and that the interpreter of a text is influenced, among other things, by his or her pre-understanding and cultural context . Details regarding how the analysis was carried out are found in our earlier article . Setting The social and cultural setting of this study is a social welfare system of the Nordic type in Norway. There is a close relationship between the Norwegian health care system and the Norwegian Labour and Welfare Administration (NLWA). Hence, clinicians have two roles to handle, one as a practitioner treating the patient and another as an expert writing certificates to third parties on demand. The claimants are long-term patients at two units in the Division of Mental Health and Addiction at the Vestfold Hospital Trust in Southern Norway (‘the clinic’). Certificates written by both psychiatrists and psychology specialists (‘experts’) working at the clinic constitute the data for this study. Material Certificates from the clinic, commissioned by the local office of the NLWA, were collected over 3 years between 1 January 2007 and 31 December 2009. Questions concerning the patient’s medical details and possible educational or vocational activities were answered. Details of the procedure by which dis-identified copies of the certificates were produced and received running numbers have been described elsewhere . In all, the material consisted of 86 medical certificates issued for social security purposes in respect of 66 claimants (43 women and 23 men) between the ages of 19 and 64 years (median age 40 years). They were written by 12 psychiatrists and 12 psychology specialists. The material represents 28% of the claimants and 65% of the eligible experts. For the present article, the 18 disability assessments from this material were studied. 
In the quotations from the assessments given below, the certificate being quoted is identified by its running number. The embodied subject We must first reflect on how the embodied human being should be envisaged.
We believe the concept of lived experience provides a fruitful way of approaching a CCCO for practical use in health care. An appropriate method of describing lived experiences is phenomenology , which is also an area of philosophical study and of understanding of actual human experience (German: Erlebnis ), especially ‘the ways things present themselves to us in and through such experience’ (, p. 2). Our analysis in this article shows that aspects of the lived experience not only of the patient or claimant, but also of the clinician, have to be taken into account. Maurice Merleau-Ponty represents an approach within phenomenology that combines philosophical phenomenology with empirical sciences. We follow this approach in dealing with the human being. We have drawn on Merleau-Ponty’s concept of the embodied subject in fleshing out the conceptual structure below . The human body is both biological organism and lived experience. Biological organisms are not isolated things, as the science of ecology shows. Neither is lived experience something that occurs in a mind/body shut in on itself. A basic bodily experience is that ‘my body is a movement toward the world and […] the world is my body’s support’ (, p. 366). The embodied subject has to be understood as life that stretches out towards and is supported by its surroundings. Hence, human bodies should be basically understood as interacting with one another and with their surroundings. Merleau-Ponty writes that ‘we must rediscover the social world […], not as an object or sum of objects, but as the permanent field or dimension of existence […]’ (, p. 379). In the present article, the concept of embodied subject expresses the concrete living human being, where material embodiment, bodily experience of being in the world, and social and cultural environments are regarded as dynamically linked (, pp. 159–175). In the analysis below we have employed the following concepts from the phenomenological tradition: life-world , phenomenological object and empathy . In stating four necessary conditions for the definition of CCCO in health care, we have combined these concepts with the concepts of O-objectivity, O-subjectivity, C-objectivity and C-subjectivity as defined above under ‘Background’. First condition: Acknowledgement of the patient’s social context and life-world The WHO has acknowledged the importance of social context in its development of the International Classification of Functioning, Disability and Health (hereafter ICF) . The ICF attempts to integrate the medical model with a social model (, p. 20). The manual describes human functioning in terms of body integrity, individual activities or actions in environments and participation in social life . In his study of medical practice, Eric J. Cassell emphasizes the patient’s functioning using the terms of the ICF . The ICF has taken important steps towards recognizing the social context of human functioning. There are, however, basic problems with the ICF. Its medical model of interpretation is still based on reductionist monistic materialism , and hence it provides only the third-person viewpoint. Important concepts such as intention or goal – which are integral elements of an action – are not included among the components of the ICF . Rehabilitation doctors have struggled for recognition of the subjective dimension of functioning and disability .
The phenomenological notion of life-world (German: Lebenswelt) can remedy the shortcomings of the ICF in relation to subjective experience. Edmund Husserl described the life-world as the concrete and immediate world of everyday experience . This world is pre-scientific and is experienced even before ‘the split between physical and psychical’ (, p. 189). ‘Life-world is an all-embracing term that includes the “surrounding world” ( Umwelt ), both that of nature and culture, including humans and their societies (“the world of culture”), things, animals, our overall environment’ (, p. 190). We believe that ‘life-world’ is the appropriate overarching ontological term in health care for the unity of the human, social world as it is experienced by the embodied person. The life-world encompasses first-, second-, and third-person viewpoints as already defined above under ‘Background’. It is important to note that, because a person’s life-world includes first- and second-person viewpoints, there will always be limits to how far the life-world can be described objectively. Merleau-Ponty writes that ‘the social exists silently and as a solicitation’ even before we ‘come to know it or when we judge it’ (, p. 379). ‘Life-world’ is now an established term in psychiatry and psychology . Second condition: The patient perceived as a cognitive object providing a variety of data The second condition is based on an understanding that the patient, as an embodied subject, appears as a living cognitive object to an observer – in this case a clinician – in a variety of ways when the latter is in authentic communication with the patient. To explain this point, we first need the phenomenological notion of the phenomenological object. It is a mental object or ideal entity, and not physical, as is the usual sense of the word ‘object’. Karl Jaspers defines the concept of the phenomenological, i.e. intentional, object (German: Gegenstand ) as follows: We give the name ‘object’ in its widest sense to anything which confronts us; anything which we look at, apprehend, think about or recognize with our inner eye or with our sense-organs. In short anything to which we give our inner attention, whether it be real or unreal, concrete or abstract, dim or distinct. Objects exist for us in the form of perceptions or ideas (, p. 60).
Applied to healthcare, the data from the patient acquire meaning in the interaction between the patient and the clinician. Such meaningful data are here termed ‘cognitive objects’. We introduce the concept of cognitive object (Jaspers’ phenomenological object) in this study to expand the application of C-objectivity to the human being as embodied subject. To explicate further the cognitive object in the interpersonal context, we need the phenomenological concept of empathy . Empathy is the ability to understand and share the feelings of another. Phenomenologically, empathy is intentionality directed at the experiences of the other person. Understanding comes into being by perceiving the other person in context. This understanding is both emotional and cognitive. Imagining the other person in his/her life arenas is also important . Empathy is recognized as a fundamental phenomenon in human interaction and communication . Intentional objects are perceivable and communicable intersubjectively. Phenomenology explains this basically in terms of the concept of empathy, which ‘allows us to experience behaviour as expressive of mind. [Empathy] allows us to access the feelings, desires, and beliefs of others in their expressive behaviour. Our experience and understanding of others is [however] fallible’ (, p. 155). Empathy is a means to intersubjective understanding. We shall come back to the concept of dialogic intersubjectivity below. Merleau-Ponty’s view of intentionality – as pre-predicate unity of the experienced world and life – helps us to become aware that not all aspects of a problematic relationship between a person with ill health and the work market (a work disability) can be accessed as cognitive objects, i.e. as available to our knowledge (, p. lxxxii); or, as Searle underlines, not everything that a human being experiences can be accessed as cognitive objects by others (i.e. from the third-person viewpoint). Examples are ‘[u]ndirected feelings of well-being or anxiety are not intentional’ (, p. 327). In Searle’s terms this means that some of patients’ or claimants’ undirected feelings, including their well-being and anxiety, cannot be accessed as intentional or cognitive objects. If well-being as an undirected feeling cannot be accessed completely as a cognitive object, its opposite, permanent ill health, also cannot be fully described as an object for other persons. This is interesting in our context, because a common understanding of work disability is that it is often complex and sometimes difficult to describe and explain in full. However, ill health can still be described as a narrative (see below). We now describe the ways in which embodied subjects present themselves and provide data for clinicians, divided into clinical, psychometric and behavioural data, as follows. Data from clinical examinations The concept of the embodied subject fully includes scientific data from the human organism and its illnesses and impairments. Descriptions of signs from clinical examinations in the different specialties of medicine and psychology are fundamental cognitive objects. Psychometric data Psychometric data are obtained through psychological tests. Psychology is defined as ‘the study of the nature, function, and phenomena of behaviour and mental experience’ (, p. 619). Seen this way, we can say that psychometric data obtained by psychological tests belong to the third-person viewpoint, the point of view of the observer. 
However, evaluating a psychological test is a challenging cognitive activity. Important questions are the test’s theoretical orientation, practical issues, standardization norms, reliability and validity . Nevertheless, reflectively carried through, psychometrics provides a way of obtaining meaningful data relating to the embodied subject. Behavioural data In health care, behaviour can often be understood as reaching out towards fellow human beings in terms of what Jaspers calls ‘expressions’. He maintains that the ‘psyche and body are one for us in expression’ (, p. 225). We use Jaspers’ broad concept of behaviour to characterize objects or data relating to the embodied subject in terms of activities/actions, expressions and reflection. Behaviour has to be understood through empathy (, pp. 251–97). Other philosophers also acknowledge that mental life expresses itself through the body. P. M. S. Hacker writes that ‘behaviour is not only bare bodily movements, but smiles and scowls, a tender or angry voice, gestures of love or contempt, and what the person says and does’. Such behaviour ‘manifests the inner’ and runs counter to Cartesian substance dualism (, p. 45). Activities and actions in environments Jaspers describes the psychiatric patient as an active human being in terms of a variety of objective performances (, pp. 168–221). Since he wrote that in 1959, WHO has developed the ICF to describe human functioning . Environmental facilitators and barriers are public phenomena. Abilities (and competences) are also phenomena that can be spoken about in a public or cognitively objective way. Meaningful expressions of mind/body relation The concept of meaningful expression, which is publicly visible, comprises ‘meaningful objective phenomena’ (Jaspers) such as: Life in the individual’s ‘own personal world’, the place where the individual ‘by means of his attitudes, behaviour, actions […gives shape] to his environment and social relations’ (, p. 251). Or, in other terms, the life-world of the individual so far as it can be perceived by another person. Postures, movements, gestures, facial expressions, gazes, and tones of voice (, pp. 253–74). A drive to express oneself in different ways: speech, written productions, drawing, art and handicraft, and individual outlooks of the world (, pp. 287–97). Self-reflection Reason as the capacity for reflection is fundamental in the human world. Reflecting on one’s own goals or intentions is a part of being a rational being. This is because an important quality of the person is that he or she is an agent , that is, an acting being . A person’s intention or goal is part of the world of reason. In clinical work, too, there are opportunities for the clinician and patient or claimant to reflect together (, p. 274). A patient’s/claimant’s intention or goal is therefore a cognitive object that the clinician and the patient/claimant can reason and deliberate about. To sum up: The second condition for the definition of the CCCO enables clinicians to perceive the patient as a living, cognitive object providing a variety of data, both quantitative and qualitative. Third condition: The interpretation of data in context To make sense, the myriad of meaningful data about a patient or claimant have to be interpreted by the clinician. They need to be interpreted in light of the social context, the purpose of the assessment, and clinical knowledge. This third condition calls for specific attention to the ways in which the perceived data are interpreted by a clinician.
The clinician will use his or her knowledge and experience to make sense of the interpretation in the current context. Clinical interpretation has two aspects: one is in some way to describe the patient’s lived life, the other to make a professional assessment of themes of that life. The first can be described as a narrative, the second as a theoretical interpretation . The latter uses scientific models. Daniela Bailer-Jones defines a scientific model as ‘an interpretive description of a phenomenon that facilitates access to that phenomenon’ (, p. 1). This definition is also useful for practical models in health care. We study work (dis)ability models below. A basic aspect of interpretation is the circular relationship between the whole and its parts. In our context, this means that the data can only be understood when aspects of the patient’s life-world as a whole – daily routine, different activities, health condition, social relationships, cultural setting and so on – are taken into consideration . Similarly, the patient’s life-world taken as a whole can only be understood in relation to the data on each of these aspects. When working out this interpretative, i.e., hermeneutic circularity, the clinician will ask the patient/claimant questions, comparing the information given against experiences from his/her own life-world and experience of being an embodied subject. Sometimes it is relevant to check for coherence and consistency among the data provided as components of a life narrative. In this clinical activity, the ethical sense of objectivity comes to the fore. ‘[M]edical professionals have a particular obligation to create situations where it is possible for patients to present themselves as subjects with integrity and legitimate opinions’ . When writing certificates, questions about the credibility of a claimant’s presentation of data will sometimes come to the mind of the expert . Sometimes, degrees of symptom magnification or occasional malingering have to be considered . This requires a reasonable interpretation of the collected data. Fourth condition: The use of epistemological principles The first three conditions involve perceiving and assessing a patient/claimant in the particular relationship between the patient/claimant and the clinician. The fourth condition consists in the use of general epistemological principles for objectivity. Epistemological principles should be used to ensure the validity of interpretations, descriptions and judgements of what is perceived, understood and assessed as C-objective. Well-known epistemological principles for application of C-objectivity are the following: Intersubjectivity C-objectivity was defined in terms of intersubjectivity under ‘Background’ above. Applied to clinical assessments, by ‘intersubjective validity’ we mean ‘what is the case/evident or true according to current professional expertise’. This means that an account should be built upon available facts or data, and that it should be supported by arguments . (Germanic terms are saklighet [Norwegian] and Sachlichkeit [German]). What is clinically described or assessed should be intersubjectively communicable and testable by other professionals in the same or similar contexts. In psychotherapy, the practice of intersubjectivity is specified as a kind of interpersonal exchange that, following Buber (see above under ‘Background’), in this article is called dialogic intersubjectivity .
In certificates, for example, the concept of dialogic intersubjectivity is found in use when the expert refers to important life-world events that the expert and patient/claimant have talked about together (a cognitive object as described in the second condition for the CCCO above). An interpretation of the patient’s problem situation is constructed by the patient and therapist together. This interpretation is then written out in a certificate as an account of some relevant and important aspects of the patient’s/claimant’s life and work history. The intersubjective communication is primarily between two subjects, but the account (in the certificate, in this example) is written in such a way that it can be understood and considered to be objectively valid not just by those two persons but by any competent reader. Impartiality An account should not be twisted by the omission of some information that the writer (e.g. of a certificate) ought to understand is important for the receiver and for the purpose of the certificate. The descriptions should be factual and sober and not biased or tendentious . Accuracy and correctness The information should be copious enough that the receiver can imagine the claimant’s situation for him/herself, thus allowing misconceptions about the real situation to be avoided . A greater degree of objectivity is ensured by using these principles. When they are not used, C-subjectivity can result. From defining conditions to criteria for their application The CCCO has been defined above in terms of four necessary conditions. These conditions can now be expressed as the following criteria for the application of a CCCO in health care and social security medicine: First criterion To take into consideration the patient’s/claimant’s social context and, when appropriate, also life-world (lived experience). At the least, important aspects of the patient’s social context (e.g. close relatives) should be considered. First- and second-person perspectives should be recognized. Second criterion To take into consideration a variety of quantitative and qualitative data from the clinician’s empathic perceptions of the patient as a cognitive object. Third criterion To be aware of the need to interpret the data in terms of both the patient’s/claimant’s lived experience and of a professional assessment. Fourth criterion To apply general epistemological principles to ensure objectivity in the concrete situation. The use of all these criteria presupposes genuine communication between the expert and the patient/claimant. To sum up: The patient/claimant should be seen as a whole human being, and listened to in his/her social context. When appropriate, relevant aspects of his/her life-world should be appraised. The clinician should recognize the patient as an embodied subject, not as a merely physical object, i.e. as his/her (clinician’s) own cognitive object, perceived through the use of empathy and imagination. The clinician should use his/her interpretive capacity to understand the variety of data from the patient/claimant, in the context of his/her clinical knowledge and experience and the purpose of the assessment. To ensure objectivity of assessments, it is important to use some generally recognized epistemological principles in the concrete situation. 
Application of the criteria of the CCCO in medical certificates for social security The material on which this analysis is based consists of social security certificates written by psychiatrists and psychology specialists in their role as experts to determine claimants’ eligibility for social benefits. It is a requirement that such assessments should be objective. In Norway, the law prescribes that ‘[a]nyone who issues medical certificates, medical reports, etc., shall be careful, precise and objective’ (, §15). The government admits that claimants diagnosed with illness without objective findings, but with credible chronic disability, are also eligible for disability benefit. This has provided greater room for the use of professional discretion concerning the objectivity of assessments of work disability. The certificates were written in a context where the claimant and the expert had met each other for at least one interview in the hospital setting. The expert had access to the patient’s earlier medical files, and in addition often knew aspects of the patient’s life from ongoing or earlier treatment spells. The certificates were all written in such a way that the reader understands that the expert has empathy with the claimant. All the 18 certificates that constitute the material for this study concluded with ‘at least 50% work disability’, for a few years ahead or permanently. No disability is described in the texts in terms of objective findings. The main reason for this seems to be that no certificate contains a diagnosis from the group of organic mental disorders (ICD-10 diagnostic block F00-F09). It should be noted that the expert assessing disability is not obliged to conclude with any specific quantification of the level of work disability. The specific level, 50% or higher, is decided by the NLWA on the basis of information about loss of income, and often also on reports from work ability training or work ability testing in various settings. According to Norwegian law, a certificate from a health care expert should usually contain three parts: (a) background and relevant history, (b) a description of current data from clinical examination and interview, and (c) assessments and conclusions or recommendations . In this article, we study (b) in terms of descriptions of the patient or claimant’s present functional disability in relation to work. We also study (c). We do not study whether or not the requirements of the law – that appropriate treatment and attempts to return to work have been completed – are fulfilled in the legal sense. We have no information as to whether the claimants’ applications for disability benefit were granted or not. Work (dis)ability models as structuring devices for our interpretation We have seen above that the use of professional practical models is important in fulfilling the third criterion of application of the CCCO – that concerning data interpretation. We therefore use work (dis)ability models to structure our interpretation of disability assessments. In our earlier study we found that the following three models of work (dis)ability are implicitly in use in the text collection as a whole. The biomedical disability model (BDM): This model ‘views disability as a problem of the person, directly caused by disease, trauma or other health condition’ (, p. 20). The basic question is whether a person’s disability is caused by a disease or not. A disability is described in only general and medical terms .
Earlier analyses have shown that the form used in work (dis)ability assessment is structured according to the BDM, with ‘objective finding’ as the fundamental criterion of objectivity. The basic sense of the concept seems to be O-objective . It is not obligatory for experts to use this form. The ability-based health model (AHM): This model is based on action theory and on the holistic and relational definition of health by Lennart Nordenfelt . Three components of the definition constitute what we have termed health factors . These are: (a) ability (or capacity), which in the ICF is a qualifier of the component activity ; (b) environment, described by the ICF in terms of barriers or facilitators; and (c) goal or intention – a factor not considered in the ICF . For an assessment to be qualified as using the AHM, the expert must describe the claimant holistically in a particular context, giving specific details about all three of the health factors specified above. The factors abilities and goals are examples of data belonging to the second criterion of a CCCO. The mixed health model (MHM): This model is intermediate between the BDM and the AHM. The BDM is most often used as a base, but with one, two or three of the above mentioned health factors added to the descriptions . In some other descriptions, health factors are described without taking the BDM into account. All three of the models of work (dis)ability are found in use in the collection of the 18 disability assessment texts analysed in this study. We investigated whether these disability assessments apply the four criteria of the CCCO, and if they do, in what way. Where the fourth criterion is concerned, we focus on professional expertise, dialogic intersubjectivity and accuracy. Here, we highlight our most striking interpretive findings. General causal assessments based on the BDM In one group of 10 certificates, the BDM is used exclusively in three certificates; in the other seven, the BDM is used as the basic model, but the claimant is described by a few words in a social context, meaning that the model used is the MHM. Table shows the reasoning process in one of the assessments based on BDM alone (cert.12). A greatly reduced ability to work is worded as ‘owing to prolonged depression’. Permanent, complete work disability status is recommended by the expert. The two other certificates based on BDM alone demonstrate the same reasoning process in the case of a middle-aged woman with 20 years of dependence on psychoactive substances. Functioning was described only in relation to symptoms or impairments in these three certificates. Among the certificates that included a few words about the social context, cert. 43 (Table ) assesses a claimant as work-disabled, but comments that the cause of the disability seems to be obscure. Cert. 73 (Table ) gives a condensed summary of another claimant’s situation. The expert formulates a hard-hitting argument by grounding it in a description of poor functioning and negative self-image over the course of many years. The expert seems to be saying: Trust me, this claimant is permanently unable to work. There are two more certificates in this group that describe relationship stress as having significance for the work (dis)ability. One of these is cert. 7 (Table ), which describes a claimant living with a husband suffering from chronic excessive alcohol consumption. Two common features of this group of certificates are the following: Only a third-person viewpoint is employed. 
The assessments are short and focused on the causal relationship between illness and work disability. Aspects of the claimant’s life situation are not included in the assessments, or only to a small degree. These features are not surprising, as the BDM is strictly scientific and does not include social contexts other than labour services . A basic deficiency in these descriptions is that the social context is not described well enough to explain the disability. These assessments do not fulfil the first criterion to take at least the patient’s social context into consideration. Concerning the second criterion of the CCCO in this group of certificates, the assessments are based on standard clinical examinations. Neuropsychological examination was the only psychometric test used as part of the work disability assessment ( n = 2). Behavioural data (belonging to the second criterion) are, however, not provided. The certificates do not distinguish between the claimants’ experience and the professional assessment. This is in accordance with the form used, which does not make this distinction. The third criterion is not fulfilled. Concerning the fourth criterion of the CCCO, to apply general epistemological principles, we tested the application of the principle of professional expertise. When the sentences assessing the work disability are analysed, as they were in our study, the permanent disability that is claimed on the basis of causal reasoning alone is unconvincing. The descriptions lack some information that would demonstrate more clearly the reasoning from premises to a conclusion. The assessments based on the BDM fulfil the principle of professional expertise to a lesser degree than if social context had been described more closely. This point is demonstrated in one of the certificates when a broader social description in the medical history is taken into account (cert. 12, Table ). This certificate states that the claimant’s suffering is caused by a range of disabilities: ‘limited knowledge of the Norwegian language, dyslexia, lack of school education, economic problems’ and also ‘lack of desire to recover’. It is likely that these mainly social disabilities were an important background to the final assessment. We can also say that the 10 assessments based on the BDM do not fulfil the principle of accuracy because they lack important information about the claimant’s social context. A social medical assessment based on the MHM One MHM-based certificate (cert. 81, a middle-aged woman, diagnosis F60.7), using the BDM as the underlying model is, however, different from the others. It uses the NLWA form, but transcends it by describing a demanding social situation in the family in some detail, but still only from a third-person viewpoint. The husband is on permanent disability benefit and has his own problems, and two of their three children have special needs at school. The claimant is described in terms of symptoms, but also of activity limitations: ‘She has great interpersonal problems and also has problems of taking care of her own needs and those of her family’. She is assessed as having ‘such great personal and family difficulties that she appears unable to work’. The expert also argues that in reality she has not worked for 10 years. This certificate fulfils the first criterion of taking the social context into consideration. 
The description enables the reader to understand the claimant’s difficult social situation and to follow the reasoning process towards the conclusion of long-term work disability. The epistemological principles of professional expertise and accuracy are to some extent fulfilled. MHM disability assessments where the BDM is left out The remaining certificates ( n = 7) are structured as the experts themselves choose. One common feature of these certificates is that the assessment includes more aspects of the claimant’s life situation. The claimant’s subjectivity also comes to the fore. Five of the seven certificates have been interpreted as applying the MHM, but without using the BDM as a base. These certificates are predominantly written in a third-person viewpoint, but they also approach the first-person viewpoint. The first-person viewpoint is expressed clearly in a certificate when what the claimant thinks, feels or experiences is quoted directly. None of the certificates in our study material does this. However, in these five certificates the experts state the claimant’s opinion and use this statement of the claimant’s first-person viewpoint in the argument in favour of work disability. This is thus a use of what we term the close to the first-person viewpoint . One expert writes about a claimant that he was fired from a job in which he had invested a lot of his strength and sense of responsibility: ‘When he was made redundant he collapsed, and he has realized that he cannot face entering into new arrangements to try out his work ability’ (cert. 35). Another expert writes: ‘To questions about his thoughts on employment, he says that he never will dare to meet others in an office or working place’ (cert. 16). The descriptions fulfil the first criterion of context and lived experience more than the previous certificates discussed. The certificates distinguish between descriptions of the claimants’ opinions, etc., and the professional assessment. It is clear from the assessments that the experts have given some weight to claimants’ opinions. We emphasize that four of them are the first ones among the certificates studied to fulfil explicitly the third criterion : To be aware of the need to interpret the data in terms of both the patient’s experience and the professional assessment. The fifth, however, does not distinguish clearly between the claimant’s opinions and the expert’s own professional view of these opinions. It is therefore an interesting case. The certificate relates to a claimant who was unable to complete a work training programme at an appointed place because the requirement to attend 2 days a week created great anxiety in him. As can be seen in the assessment part of cert. 57 (Table ), the claimant’s opinion has permeated the expert’s assessment. This is clear from the following statements: ‘He believes that he will not be able to get into employment again. If he is required to do so, his anxiety level will increase significantly, with the risk of alcohol abuse and hence increased risk of suicide.’ It looks as if the expert has taken the claimant’s opinions at face value. The data regarding the claimant’s opinion have entered into the expert’s assessment in a direct way. This assessment does not fulfil the third criterion. Narrative and dialogic intersubjectivity based on the AHM The last two certificates briefly describe abilities, environments and goals, all in the particular context of the work-disabled claimant, i.e., the AHM is used. 
Due to lack of space, we shall analyse only the most detailed certificate here (cert. 44, a young woman, diagnosis F48.00). The text in it is introduced as follows: ‘Knowledge of the patient’s education, work experience and occupational training is taken as granted. Other information is here given fully, because it is considered significant in explaining her level of functioning to-day’. Information about the claimant’s work disability is given in terms of a life narrative. Cassell has characterized a narrative in the health-care context, the following aspects of which are relevant to our study. A narrative should reveal ‘the chain of events that led to the present state’. It should explain both causative factors and the patient’s ‘purposes and goals’. The ‘meanings that the patient has attached to what has and is happening’ also belong to it, as do ‘the patient’s values. What the patient thinks is important’ (, p. 93). We analyse cert. 44 with these aspects in mind. First, fundamental influences on the claimant’s situation in her childhood are described: an alcohol-abusing and violent father, and an unstable, chronically sick mother. She was sexually abused for 5 years in childhood and was bullied at school. Second, the claimant’s moral standards and important actions are described. The expert writes that she managed to stop her incipient drug abuse and also tried to help others to stop their drug abuse. ‘She has always felt responsibility in the family and has been a prop and mainstay for everyone’. She now has a ‘secure family life’ with a husband and children. Central themes in the appointments with the expert have been her worries for her sick mother, her children, her own failing ability to work and social isolation. Her deserving efforts are emphasized. Third, she has suffered from, and been treated for, among other things, asthma and chronic muscle pain. For as long as she can remember, she has struggled with mental problems and great burdens. Fourth, at present she is in despair, because she feels she has no control and is at the mercy of her life situation. She feels guilty for not managing to give her children a better childhood. Functionally, she is described as powerless and unable to mobilize strength. She is said to be unable to go outside with her children as she wants. She has handed over large parts of the housework to her husband. She manages to care for the family’s dog and has a close female friend whom she meets regularly. The narrative describes an ‘intertwinement of action and passion’ in human life (, p. 266). The descriptions are written from a third-person viewpoint, especially in regard to the claimant’s childhood. As can be seen from the quotation above, the ‘close to the first-person viewpoint’ has also been used. We interpret this narrative also as written from a second-person viewpoint. The life history is based on a clinical dialogue that has been going on for some years. We regard this certificate as fulfilling the first criterion: a description in terms of the patient’s life-world. The certificate demonstrates the kind of cognitive objects and data that belong to the second criterion: Important activities and actions (taking responsibility, helping others), struggling with failing ability to work and social isolation, reduced functioning and self-reflection. Because some behavioural data are described, we can say that the second criterion is fulfilled.
The two AHM-based certificates distinguish clearly between the narrative data expressed by the claimants and the experts’ assessments of these data. The third criterion is fulfilled. In regard to the fourth criterion, we assess this narrative as fulfilling the epistemological principle of dialogic intersubjectivity. Cassell has an illuminating description of this kind of collaborative activity: ‘[I]t is true to call the doctor the historian while the patient is the storyteller’ (, p. 92). The narrative is a joint product between two collaborating subjects. It also fulfils the principle of accuracy. This is an example of what Cassell points out about narratives. They ‘include attitudes and valence – the emotional force – of the teller […]’ (, p. 92). It is also an example of how the ethical sense of objectivity comes to the fore. This sense is closely related to the virtue of justice conceived as fairness. The assessment in cert. 44 interprets the claimant as a ‘traumatized and vulnerable young woman who seems to have stood upright in the family since she was a child’. The assessment explains why the claimant’s childhood traumas remain untreated. The assessment concludes that the claimant is long-term disabled. The reasoning process from premises to conclusions fulfils the condition of professional expertise.
In stating four necessary conditions for the definition of CCCO in health care, we have combined these concepts with the concepts of O-objectivity, O-subjectivity, C-objectivity and C-subjectivity as defined above under ‘Background’. The WHO has acknowledged the importance of social context in its development of the International Classification of Functioning, Disability and Health, ICF (hereafter ICF) ). The ICF attempts to integrate the medical model with a social model (, p. 20). The manual describes human functioning in terms of body integrity, individual activities or actions in environments and participation in social life . In his study of medical practice, Eric J. Cassell emphasizes the patient’s functioning using the terms of the ICF . The ICF has taken important steps towards recognizing the social context of human functioning. There are, however, basic problems with the ICF. Its medical model of interpretation is still based on reductionist monistic materialism , and hence it provides only the third-person viewpoint. Important concepts such as intention or goal – which are integral elements of an action – are not included among the components of the ICF . Rehabilitation doctors have struggled for recognition of the subjective dimension of functioning and disability . The phenomenological notion of life-world (German: Lebenswelt) can fill out the shortcomings of the ICF in relation to subjective experience. Edmund Husserl described the life-world as the concrete and immediate world of everyday experience . This world is pre-scientific and is experienced even before ‘the split between physical and psychical‘ (, p.189). ‘Life-world is an all-embracing term that includes the “surrounding world” ( Umwelt ), both that of nature and culture, including humans and their societies (“the world of culture”), things, animals, our overall environment’ (, p. 190). We believe that ‘life-world’ is the appropriate overarching ontological term in health care for the unity of the human, social world as it is experienced by the embodied person. The life-world encompasses first-, second-, and third-person viewpoints as already defined above under ‘Background’. It is important to note that, because a person’s life-world includes first- and second-person viewpoints, there will always be limits to how far the life-world can be described objectively. Merleau-Ponty writes that ‘the social exists silently and as a solicitation’ even before we ‘come to know it or when we judge it’ (, p. 379). ‘Life-world’ is now an established term in psychiatry and psychology . The second condition is based on an understanding that the patient, as an embodied subject, appears as a living cognitive object to an observer – in this case a clinician – in a variety of ways when the latter is in authentic communication with the patient. To explain this point, we need first the phenomenological notion of the phenomenological object. It is a mental object or ideal entity, and not physical, as is the usual sense of the word ‘object’. Karl Jaspers defines the concept of phenomenological, i.e. intentional object (German: Gegenstand ) as follows: We give the name ‘object’ in its widest sense to anything which confronts us; anything which we look at, apprehend, think about or recognize with our inner eye or with our sense-organs. In short anything to which we give our inner attention, whether it be real or unreal, concrete or abstract, dim or distinct. Objects exist for us in the form of perceptions or ideas (, p. 60). 
We shall follow this definition, but add to it cognitive aspects of emotions . The quotation above is an example of a fundamental philosophical insight that conscious states are intentional: they are about, or refer to, intentional objects . They are called ‘intentional’ because they are often directed by consciousness towards something (the intended object), which could be, for example, other people, the environment, numbers, facts, states of affairs, signs, data or plans for the future. They can also be about the subject’s own ego, psyche or mind. According to phenomenology, perceptions, ideas and emotions, as described above, typically have cognitive contents, namely their intentional objects. Phenomenology combines properties of the object ‘outside mind’ with the experience of ‘inside mind’ into a unified cognitive act. In this act, as human beings we are related both to objects in the external world, to other human beings and to our own experience, and in this way meaning is formed. ‘[T]he meaning of things, in a sense, exists neither “inside” our minds nor in the world itself, but in the space between us and the world’ (, p. 34). Applied to healthcare, the data from the patient acquire meaning in the interaction between the patient and the clinician. Such meaningful data are here termed ‘cognitive objects’. We introduce the concept of cognitive object (Jaspers’ phenomenological object) in this study to expand the application of C-objectivity to the human being as embodied subject. To explicate further the cognitive object in the interpersonal context, we need the phenomenological concept of empathy . Empathy is the ability to understand and share the feelings of another. Phenomenologically, empathy is intentionality directed at the experiences of the other person. Understanding comes into being by perceiving the other person in context. This understanding is both emotional and cognitive. Imagining the other person in his/her life arenas is also important . Empathy is recognized as a fundamental phenomenon in human interaction and communication . Intentional objects are perceivable and communicable intersubjectively. Phenomenology explains this basically in terms of the concept of empathy, which ‘allows us to experience behaviour as expressive of mind. [Empathy] allows us to access the feelings, desires, and beliefs of others in their expressive behaviour. Our experience and understanding of others is [however] fallible’ (, p. 155). Empathy is a means to intersubjective understanding. We shall come back to the concept of dialogic intersubjectivity below. Merleau-Ponty’s view of intentionality – as pre-predicate unity of the experienced world and life – helps us to become aware that not all aspects of a problematic relationship between a person with ill health and the work market (a work disability) can be accessed as cognitive objects, i.e. as available to our knowledge (, p. lxxxii); or, as Searle underlines, not everything that a human being experiences can be accessed as cognitive objects by others (i.e. from the third-person viewpoint). Examples are ‘[u]ndirected feelings of well-being or anxiety are not intentional’ (, p. 327). In Searle’s terms this means that some of patients’ or claimants’ undirected feelings, including their well-being and anxiety, cannot be accessed as intentional or cognitive objects. 
If well-being as an undirected feeling cannot be accessed completely as a cognitive object, its opposite, permanent ill health, also cannot be fully described as an object for other persons. This is interesting in our context, because a common understanding of work disability is that it is often complex and sometimes difficult to describe and explain in full. However, ill health can still be described as a narrative (see below). We now describe the ways in which embodied subjects present themselves and provide data for clinicians, divided into clinical, psychometric and behavioural data, as follows. The concept of the embodied subject fully includes scientific data from the human organism and its illnesses and impairments. Descriptions of signs from clinical examinations in the different specialties of medicine and psychology are fundamental cognitive objects. Psychometric data are obtained through psychological tests. Psychology is defined as ‘the study of the nature, function, and phenomena of behaviour and mental experience’ (, p. 619). Seen this way, we can say that psychometric data obtained by psychological tests belong to the third-person viewpoint, the point of view of the observer. However, evaluating a psychological test is a challenging cognitive activity. Important questions are the test’s theoretical orientation, practical issues, standardization norms, reliability and validity . Nevertheless, reflectively carried through, psychometrics provides a way of obtaining meaningful data relating to the embodied subject. In health care, behaviour can often be understood as reaching out towards fellow human beings in terms of what Jaspers calls ‘expressions’. He maintains that the ‘psyche and body are one for us in expression’ (, p. 225). We use Jaspers’ broad concept of behaviour to characterize objects or data relating to the embodied subject in terms of activities/actions, expressions and reflection. Behaviour has to be understood through empathy ([, pp. 251–97). Other philosophers also acknowledge that mental life expresses itself through the body. P. M. S. Hacker, writes that ‘behaviour is not only bare bodily movements, but smiles and scowls, a tender or angry voice, gestures of love or contempt, and what the person says and does’. Such behaviour ‘manifests the inner’ and runs counter to Cartesian substance dualism (, p. 45). Activities and actions in environments Jaspers describes the psychiatric patient as an active human being in terms of a variety of objective performances (, pp. 168–221). Since he wrote that in 1959, WHO has developed the ICF to describe human functioning . Environmental facilitators and barriers are public phenomena. Abilities (and competences) are also phenomena that can be spoken about in a public or cognitively objective way. Meaningful expressions of mind/body relation The concept of meaningful expression, which is publicly visible, comprises ‘meaningful objective phenomena’ (Jaspers) such as: Life in the individual’s ‘own personal world’, the place where the individual ‘by means of his attitudes, behaviour, actions […gives shape] to his environment and social relations‘ (, p. 251). Or, in other terms, the life-world of the individual so far as it can be perceived by another person. Postures, movements, gestures, facial expressions, gazes, and tones of voice (, pp. 253–74). A drive to express oneself in different ways: speech, written productions, drawing, art and handicraft, and individual outlooks of the world (, pp. 287–97). 
Self-reflection Reason as the capacity for reflection is fundamental in the human world. Reflecting on one’s own goals or intentions is a part of being a rational being. This is because an important quality of the person is that he or she is an agent , that is, an acting being . A person’s intention or goal is part of the world of reason. In clinical work, too, there are opportunities for the clinician and patient or claimant to reflect together (, p. 274). A patient’s/claimant’s intention or goal is therefore a cognitive object that the clinician and the patient/claimant can reason and deliberate about. To sum up: The second condition for the definition of the CCCO enables clinicians to perceive the patient as a living, cognitive object providing a variety of data, both quantitative and qualitative. 
Daniela Bailer-Jones defines a scientific model as ‘an interpretive description of a phenomenon that facilitates access to that phenomenon’ (, p. 1). This definition is useful for the use of practical models in health care, too. We study work (dis)ability models below. A basic aspect of interpretation is the circular relationship between the whole and its parts. In our context, this means that the data can only be understood when aspects of the patient’s life-world as a whole – daily routine, different activities, health condition, social relationships, cultural setting and so on – are taken into consideration . Similarly, the patient’s life-world taken as a whole can only be understood in relation to the data on each of these aspects. When working out this interpretative, i.e., hermeneutic circularity, the clinician will ask the patient/claimant questions, comparing the information given against experiences from his/her own life-world and experience of being an embodied subject. Sometimes it is relevant to check for coherence and consistency among the data provided as components of a life narrative. In this clinical activity, the ethical sense of objectivity comes to the fore. “[M]edical professionals have a particular obligation to create situations where it is possible for patients to present themselves as subjects with integrity and legitimate opinions” . When writing certificates, questions about the credibility of a claimant’s presentation of data will sometimes come to the mind of the expert . Sometimes, degrees of symptom magnification or occasional malingering have to be considered . This requires a reasonable interpretation of the collected data. The first three conditions involve perceiving and assessing a patient/claimant in the particular relationship between the patient/claimant and the clinician. The fourth condition consists in the use of general epistemological principles for objectivity. Epistemological principles should be used to ensure the validity of interpretations, descriptions and judgements of what is perceived, understood and assessed as C-objective. Well-known epistemological principles for application of C-objectivity are the following : Intersubjectivity C- objectivity was defined in terms of intersubjectivity under ‘Background’ above. Applied to clinical assessments, by ‘intersubjective validity’ we mean ‘what is the case/evident or true according to current professional expertise’. This means that an account should be built upon available facts or data, and that it should be supported by arguments . (Germanic terms are saklighet [Norwegian] and Sachlichkeit [German]). What is clinically described or assessed should be intersubjectively communicable and testable by other professionals in the same or similar contexts. In psychotherapy, the practice of intersubjectivity is specified as a kind of interpersonal exchange that, following Buber (see above under ‘Background’), in this article is called dialogic intersubjectivity . In certificates, for example, the concept of dialogic intersubjectivity is found in use when the expert refers to important life-world events that the expert and patient/claimant have talked about together (a cognitive object as described in the second condition for the CCCO above). An interpretation of the patient’s problem situation is constructed by the patient and therapist together. 
This interpretation is then written out in a certificate as an account of some relevant and important aspects of the patient’s/claimant’s life and work history. The intersubjective communication is primarily between two subjects, but the account (in the certificate, in this example) is written in such a way that it can be understood and considered to be objectively valid not just by those two persons but by any competent reader. Impartiality An account should not be twisted by the omission of some information that the writer (e.g. of a certificate) ought to understand is important for the receiver and for the purpose of the certificate. The descriptions should be factual and sober and not biased or tendentious . Accuracy and correctness The information should be copious enough that the receiver can imagine the claimant’s situation for him/herself, thus allowing misconceptions about the real situation to be avoided . A greater degree of objectivity is ensured by using these principles. When they are not used, C-subjectivity can result. The CCCO has been defined above in terms of four necessary conditions. 
These conditions can now be expressed as the following criteria for the application of a CCCO in health care and social security medicine: First criterion To take into consideration the patient’s/claimant’s social context and, when appropriate, also life-world (lived experience). At the least, important aspects of the patient’s social context (e.g. close relatives) should be considered. First- and second-person perspectives should be recognized. Second criterion To take into consideration a variety of quantitative and qualitative data from the clinician’s empathic perceptions of the patient as a cognitive object. Third criterion To be aware of the need to interpret the data in terms of both the patient’s/claimant’s lived experience and of a professional assessment. Fourth criterion To apply general epistemological principles to ensure objectivity in the concrete situation. The use of all these criteria presupposes genuine communication between the expert and the patient/claimant. To sum up: The patient/claimant should be seen as a whole human being, and listened to in his/her social context. When appropriate, relevant aspects of his/her life-world should be appraised. The clinician should recognize the patient as an embodied subject, not as a merely physical object, i.e. as his/her (clinician’s) own cognitive object, perceived through the use of empathy and imagination. The clinician should use his/her interpretive capacity to understand the variety of data from the patient/claimant, in the context of his/her clinical knowledge and experience and the purpose of the assessment. To ensure objectivity of assessments, it is important to use some generally recognized epistemological principles in the concrete situation. The material on which this analysis is based consists of social security certificates written by psychiatrists and psychology specialists in their role as experts to determine claimants’ eligibility for social benefits. 
It is a requirement that such assessments should be objective. In Norway, the law prescribes that ‘[a]nyone who issues medical certificates, medical reports, etc., shall be careful, precise and objective’ (, §15). The government admits that claimants diagnosed with illness without objective findings, but with credible chronic disability, are also eligible for disability benefit. This has provided greater room for the use of professional discretion concerning the objectivity of assessments of work disability. The certificates were written in a context where the claimant and the expert had met each other for at least one interview in the hospital setting. The expert had access to the patient’s earlier medical files, and in addition often knew aspects of the patient’s life from ongoing or earlier treatment spells. The certificates were all written in such a way that the reader understands that the expert has empathy with the claimant. All 18 certificates that constitute the material for this study concluded with ‘at least 50% work disability’, for a few years ahead or permanently. No disability is described in the texts in terms of objective findings. The main reason for this seems to be that no certificate contains a diagnosis from the group of organic mental disorders (ICD 10 diagnostic block F00-F09). It should be noted that the expert assessing disability is not obliged to conclude with any specific quantification of the level of work disability. The specific level, 50% or higher, is decided by the NLWA on the basis of information about loss of income, and often also on reports from work ability training or work ability testing in various settings. According to Norwegian law, a certificate from a health care expert should usually contain three parts: (a) background and relevant history, (b) a description of current data from clinical examination and interview, and (c) assessments and conclusions or recommendations . In this article, we study (b) in terms of descriptions of the patient or claimant’s present functional disability in relation to work. We also study (c). We do not study whether or not the requirements of the law – that appropriate treatment and attempts to return to work have been completed – are fulfilled in the legal sense. We have no information as to whether the claimants’ applications for disability benefit were granted or not. We have seen above that the use of professional practical models is important in fulfilling the third criterion of application of the CCCO – that concerning data interpretation. We therefore use work (dis)ability models to structure our interpretation of disability assessments. In our earlier study we found that the following three models of work (dis)ability are implicitly in use in the text collection as a whole. The biomedical disability model (BDM): This model ‘views disability as a problem of the person, directly caused by disease, trauma or other health condition’ (, p. 20). The basic question is whether a person’s disability is caused by a disease or not. A disability is described in only general and medical terms . Earlier analyses have shown that the form used in work (dis)ability assessment is structured according to the BDM, with ‘objective finding’ as the fundamental criterion of objectivity. The basic sense of the concept seems to be O-objective . It is not obligatory for experts to use this form. 
The ability-based health model (AHM): This model is based on action theory and on the holistic and relational definition of health by Lennart Nordenfelt . Three components of the definition constitute what we have termed health factors . These are: (a) ability (or capacity), which in the ICF is a qualifier of the component activity ; (b) environment, described by the ICF in terms of barriers or facilitators; and (c) goal or intention – a factor not considered in the ICF . For an assessment to be qualified as using the AHM, the expert must describe the claimant holistically in a particular context, giving specific details about all three of the health factors specified above. The factors abilities and goals are examples of data belonging to the second criterion of a CCCO. The mixed health model (MHM): This model is intermediate between the BDM and the AHM. The BDM is most often used as a base, but with one, two or three of the above mentioned health factors added to the descriptions . In some other descriptions, health factors are described without taking the BDM into account. All three of the models of work (dis)ability are found in use in the collection of the 18 disability assessment texts analysed in this study. We investigated whether these disability assessments apply the four criteria of the CCCO, and if they do, in what way. Where the fourth criterion is concerned, we focus on professional expertise, dialogic intersubjectivity and accuracy. Here, we highlight our most striking interpretive findings. In one group of 10 certificates, the BDM is used exclusively in three certificates; in the other seven, the BDM is used as the basic model, but the claimant is described by a few words in a social context, meaning that the model used is the MHM. Table shows the reasoning process in one of the assessments based on BDM alone (cert.12). A greatly reduced ability to work is worded as ‘owing to prolonged depression’. Permanent, complete work disability status is recommended by the expert. The two other certificates based on BDM alone demonstrate the same reasoning process in the case of a middle-aged woman with 20 years of dependence on psychoactive substances. Functioning was described only in relation to symptoms or impairments in these three certificates. Among the certificates that included a few words about the social context, cert. 43 (Table ) assesses a claimant as work-disabled, but comments that the cause of the disability seems to be obscure. Cert. 73 (Table ) gives a condensed summary of another claimant’s situation. The expert formulates a hard-hitting argument by grounding it in a description of poor functioning and negative self-image over the course of many years. The expert seems to be saying: Trust me, this claimant is permanently unable to work. There are two more certificates in this group that describe relationship stress as having significance for the work (dis)ability. One of these is cert. 7 (Table ), which describes a claimant living with a husband suffering from chronic excessive alcohol consumption. Two common features of this group of certificates are the following: Only a third-person viewpoint is employed. The assessments are short and focused on the causal relationship between illness and work disability. Aspects of the claimant’s life situation are not included in the assessments, or only to a small degree. These features are not surprising, as the BDM is strictly scientific and does not include social contexts other than labour services . 
A basic deficiency in these descriptions is that the social context is not described well enough to explain the disability. These assessments do not fulfil the first criterion to take at least the patient’s social context into consideration. Concerning the second criterion of the CCCO in this group of certificates, the assessments are based on standard clinical examinations. Neuropsychological examination was the only psychometric test used as part of the work disability assessment ( n = 2). Behavioural data (belonging to the second criterion) are, however, not provided. The certificates do not distinguish between the claimants’ experience and the professional assessment. This is in accordance with the form used, which does not make this distinction. The third criterion is not fulfilled. Concerning the fourth criterion of the CCCO, to apply general epistemological principles, we tested the application of the principle of professional expertise. When the sentences assessing the work disability are analysed, as they were in our study, the permanent disability that is claimed on the basis of causal reasoning alone is unconvincing. The descriptions lack some information that would demonstrate more clearly the reasoning from premises to a conclusion. The assessments based on the BDM fulfil the principle of professional expertise to a lesser degree than if social context had been described more closely. This point is demonstrated in one of the certificates when a broader social description in the medical history is taken into account (cert. 12, Table ). This certificate states that the claimant’s suffering is caused by a range of disabilities: ‘limited knowledge of the Norwegian language, dyslexia, lack of school education, economic problems’ and also ‘lack of desire to recover’. It is likely that these mainly social disabilities were an important background to the final assessment. We can also say that the 10 assessments based on the BDM do not fulfil the principle of accuracy because they lack important information about the claimant’s social context. One MHM-based certificate (cert. 81, a middle-aged woman, diagnosis F60.7), using the BDM as the underlying model is, however, different from the others. It uses the NLWA form, but transcends it by describing a demanding social situation in the family in some detail, but still only from a third-person viewpoint. The husband is on permanent disability benefit and has his own problems, and two of their three children have special needs at school. The claimant is described in terms of symptoms, but also of activity limitations: ‘She has great interpersonal problems and also has problems of taking care of her own needs and those of her family’. She is assessed as having ‘such great personal and family difficulties that she appears unable to work’. The expert also argues that in reality she has not worked for 10 years. This certificate fulfils the first criterion of taking the social context into consideration. The description enables the reader to understand the claimant’s difficult social situation and to follow the reasoning process towards the conclusion of long-term work disability. The epistemological principles of professional expertise and accuracy are to some extent fulfilled. The remaining certificates ( n = 7) are structured as the experts themselves choose. One common feature of these certificates is that the assessment includes more aspects of the claimant’s life situation. The claimant’s subjectivity also comes to the fore. 
Five of the seven certificates have been interpreted as applying the MHM, but without using the BDM as a base. These certificates are predominantly written in a third-person viewpoint, but they also approach the first-person viewpoint. The first-person viewpoint is expressed clearly in a certificate when what the claimant thinks, feels or experiences is quoted directly. None of the certificates in our study material does this. However, in these five certificates the experts state the claimant’s opinion and use this statement of the claimant’s first-person viewpoint in the argument in favour of work disability. This is thus a use of what we term the close to the first-person viewpoint . One expert writes about a claimant that he was fired from a job in which he had invested a lot of his strength and sense of responsibility: ‘When he was made redundant he collapsed, and he has realized that he cannot face entering into new arrangements to try out his work ability’ (cert. 35). Another expert writes: ‘To questions about his thoughts on employment, he says that he never will dare to meet others in an office or working place’ (cert. 16). The descriptions fulfil the first criterion of context and lived experience more than the previous certificates discussed. The certificates distinguish between descriptions of the claimants’ opinions, etc., and the professional assessment. It is clear from the assessments that the experts have given some weight to claimants’ opinions. We emphasize that four of them are the first ones among the certificates studied to fulfil explicitly the third criterion : To be aware of the need to interpret the data in terms of both the patient’s experience and the professional assessment. The fifth, however, does not distinguish clearly between the claimant’s opinions and the expert’s own professional view of these opinions. It is therefore an interesting case. The certificate relates to a claimant who was unable to complete a work training programme at an appointed place because the requirement to attend 2 days a week created great anxiety in him. As can be seen in the assessment part of cert. 57 (Table ), the claimant’s opinion has permeated the expert’s assessment. This is clear from the following statements: ‘He believes that he will not be able to get into employment again. If he is required to do so, his anxiety level will increase significantly, with the risk of alcohol abuse and hence increased risk of suicide.’ It looks as if the expert has taken the claimant’s opinions at face value. The data regarding the claimant’s opinion have entered into the expert’s assessment in a direct way. This assessment does not fulfil the third criterion. The last two certificates briefly describe abilities, environments and goals, all in the particular context of the work-disabled claimant, i.e., the AHM is used. Due to lack of space, we shall analyse only the most detailed certificate here (cert. 44, a young woman, diagnosis F48.00). The text in it is introduced as follows: ‘Knowledge of the patient’s education, work experience and occupational training is taken as granted. Other information is here given fully, because it is considered significant in explaining her level of functioning to-day’. Information about the claimant’s work disability is given in terms of a life narrative. Cassell has characterized a narrative in the health-care context, the following aspects of which are relevant to our study. A narrative should reveal ‘the chain of events that led to the present state’. 
It should explain both causative factors and the patient’s ‘purposes and goals’. The ‘meanings that the patient has attached to what has and is happening’ also belong to it, as do ‘the patient’s values. What the patient thinks is important’ (, p. 93). We analyse cert. 44 with these aspects in mind. First, fundamental influences on the claimant’s situation in her childhood are described: an alcohol-abusing and violent father, and an unstable, chronic sick mother. She was sexually abused for 5 years in childhood and was bullied at school. Second, the claimant’s moral standards and important actions are described. The expert writes that she managed to stop her incipient drug abuse and also tried to help others to stop their drug abuse. ‘She has always felt responsibility in the family and has been a prop and mainstay for everyone’. She now has a ‘secure family life’ with a husband and children. Central themes in the appointments with the expert have been her worries for her sick mother, her children, her own failing ability to work and social isolation. Her deserving efforts are emphasized. Third, she has suffered and been treated for, among other things, asthma and chronic muscle pain. For as long as she can remember, she has struggled with mental problems and great burdens. Fourth, at present she is in despair, because she feels she has no control and is at the mercy of her life situation. She feels guilty for not managing to give her children a better childhood. Functionally, she is described as powerless and unable to mobilize strength. She is said to be unable to go outside with her children as she wants. She has handed over large parts of the housework to her husband. She manages to care for the family’s dog and has a close female friend whom she meets regularly. The narrative describes an ‘intertwinement of action and passion’ in human life (, p. 266). The descriptions are written from a third-person viewpoint, especially in regard to the claimant’s childhood. As can be seen from the quotation above, the ‘close to the first-person viewpoint’ has also been used. We interpret this narrative also as written from a second-person viewpoint. The life history is based on a clinical dialogue that has been going on for some years. We regard this certificate as fulfilling the first criterion: a description in terms of the patient’s life-world. The certificate demonstrates the kind of cognitive objects and data that belong to the second criterion: Important activities and actions (taking responsibility, helping others), struggling with failing ability to work and social isolation, reduced functioning and self-reflection. Because some behavioural data are described, we can say that the second criterion is fulfilled. The two AHM-based certificates distinguish clearly between the narrative data expressed by the claimants and the experts’ assessments of these data. The third criterion is fulfilled. In regard to the fourth criterion, we assess this narrative as fulfilling the epistemological principle of dialogic intersubjectivity. Cassell has an illuminating description of this kind of collaborative activity: ‘[I]t is true to call the doctor the historian while the patient is the storyteller’ (, p. 92). The narrative is a joint product between two collaborating subjects. It also fulfils the principle of accuracy. This is an example of what Cassell points out about narratives. They ‘include attitudes and valence – the emotional force – of the teller […]’ (, p. 92). 
It is also an example of how the ethical sense of objectivity comes to the fore. This sense is closely related to the virtue of justice conceived as fairness. The assessment in cert. 44 interprets the claimant as a ‘traumatized and vulnerable young woman who seems to have stood upright in the family since she was a child’. The assessment explains why the claimant’s childhood traumas remain untreated. The assessment concludes that the claimant is long-termed disabled. The reasoning process from premises to conclusions fulfils the condition of professional expertise. We have carried out an exploratory, interpretive study of a small set of texts originating in disability assessments in medical certificates produced for social security purposes. Four criteria of application of the comprehensive concept of cognitive objectivity (CCCO) have been tested. We believe that the way in which certificates are written at the clinic that provided the study certificates is representative of that found in the Norwegian mental health care clinics . The 18 certificates analysed should ensure typical ways of describing work disability in social security certificates. It is, however, likely that greater variation exists among work disability assessments than was observed in this study . A limitation of this study is that certificates in which ‘objective findings’ were described were not included, and so the functions of this criterion of objectivity in relation to disability could not be studied. Our findings suggest that it makes a significant difference whether the long-term disability of claimants with mental illness is assessed using the BDM, with ‘objective findings’ as an implicit criterion of objectivity, or using C-objective criteria. When the BDM is used, the work disability assessments tend to be short and focused on determining the causal relationship between work disability and illness. The social context is sparsely described, and the descriptions of the case lack sufficient accuracy. Both the factual grounds and the BDM as warrant are insufficient to support the conclusion that the claimant is permanently disabled. Using the BDM is inappropriate when there are no objective findings, and a relative lack of objectivity is found in such assessments. Objectivity improves when the BDM is supplied with social medical data. In the two certificates where the AHM is used, the patient’s context has been extended somewhat. These certificates also describe the patient’s close to the first-person viewpoints and loss of abilities, in addition to the patient’s goal and reflections. The data are varied and relevant for a disability assessment. In these certificates, the experts use their own practical model as warrant for concluding that the claimant is permanently disabled. We do not know more about the practical models used than that the claimant is seen as an agent, in relevant context and having failing abilities. The objectivity of the assessments is improved. We do not know, however, the specific content of each expert’s warrant that made them conclude that the claimant is disabled. In discussing Merleau-Ponty’s and Searle’s notions of intentionality above, we implied that not all aspects of a person’s work disability or ill health are accessible or available to our knowledge from the third-person viewpoint. Describing chains of important life events narratively in a text is a way of externalizing actions and experiences for both the patient and the clinician. 
‘Once produced, the text becomes a matter for public interpretation’ (, p. 335). Writing a narrative seems to be a useful way of describing a complex work disability in an intersubjective way. However, it seems to be difficult to state the exact reasons why a patient or claimant is permanently work disabled. So far as the concept of objective finding is concerned, ‘objective finding’ as defined primarily O-objectively is necessarily inappropriate in most medical conditions in health care today. However, if, for example, a claimant has advanced cancer, the O-objective pathological reality of the cancer underlines the severity of his/her condition. The claimant is obviously permanently disabled for work. On the other hand, we do not believe it is appropriate to designate all the C-objective descriptions and assessments that can be found by perceiving a claimant as a cognitive object as ‘objective findings’. We believe that the concept of C-objective finding should be restricted to the results of the clinical test apparatus plus the signs of abnormality or pathology that can be found by clinical examination of the embodied subject. We believe that the time has come to allow objective findings to find their important but limited place among the other criteria of the CCCO. The study has defined a CCCO for use in health care and social security medicine that ensures holistic thinking about human beings. Well-accepted definitions of ontological objectivity and subjectivity, and epistemological objectivity and subjectivity, provided the point of departure for the conceptual analysis undertaken here. It was found that C-objectivity is appropriate to a medical understanding of objective findings. To expand the understanding of ontological subjectivity as related to material reality, the phenomenological notions of embodied subject, life-world, phenomenological object and empathy were included in the conceptual analysis. The CCCO was defined by four conditions. The criteria corresponding to these conditions for the practical use of the CCCO in health care are: (1) To take into consideration the patient’s social context and, when appropriate, also life-world (lived experience). The patient’s perspective should be recognized. (2) To take into consideration a variety of quantitative and qualitative data from the clinician’s perceptions of the patient’s life and the patient’s test results. (3) To be aware of the need to interpret the data in context. (4) To apply general epistemological principles (professional expertise, dialogic intersubjectivity, impartiality, accuracy and correctness) in the concrete situation. The use of all the criteria presupposes a genuine communicative interaction. The concept of CCCO also comprises the ethical sense of objectivity, which takes into consideration respect for human vulnerability, dignity, individual identity, autonomy and integrity. The four criteria were tested in an exploratory manner on the disability assessments contained in a collection of medical certificates written for social security purposes. The criteria were illuminating and useful in an analysis of what makes disability assessments for social security purposes more or less objective. The findings of our analysis suggest that the four criteria constitute a useful tool to aid an understanding of how objectivity in work disability assessments fails or can be improved or safeguarded. 
There is, however, a need to test the structure of the concept and the criteria in various arenas in health care where objectivity of clinical assessments is important.
A Qualitative Analysis of the Experiences of Young Patients and Caregivers Confronting Pediatric and Adolescent Oncology Diagnosis
cca948a3-c453-469e-b923-66bacf13cf93
10378996
Internal Medicine[mh]
Every year, approximately 400,000 new cases of cancer are diagnosed in children and adolescents worldwide . This trend is expected to continue increasing compared to the previous decade . Pediatric oncology refers to all oncological pathologies that affect children and adolescents between 0 and 19 years old . The survival rate for children and adolescents living in high-income countries is around 80% five years after diagnosis . Cancer is a leading cause of non-traumatic death among young patients worldwide, and its incidence is expected to continue increasing in the coming years . In Italy, where this research was conducted, a study by the Italian Association of Tumor Registries found that 7000 neoplasms were diagnosed among children and 4000 among adolescents (15–19 years) between 2016 and 2020, consistent with diagnoses recorded between 2011 and 2015. The estimated annual incidence is approximately 1400 cases for children (0–14 years) and 900 for adolescents (15–19 years). In addition to the medical consequences and concerns for young patients, there are also psychological, ethical and social concerns . Focusing on the individual, children and adolescents in active treatment experience significant changes in their bodies, which greatly influence their daily lives and emphasize their desire to feel “normal” . Among the most common physical symptoms are pain and fatigue , as well as changes in motor and respiratory functioning, difficulty swallowing and nausea and vomiting. From a psychosocial perspective, these young patients often experience strong feelings of anxiety, depression, fear, low self-esteem, social and school difficulties, irritability, nervousness, sadness and sleep problems . Several studies have highlighted the primary and secondary needs of pediatric and adolescent patients affected by different types of neoplasms. Primary needs include the desire to receive love, respect and protection of one’s dignity; the need to have caregivers close, both emotionally and physically; the desire to be personally involved in the care, treatment and decisions to be made; and the need to maintain a positive relationship with healthcare staff . Secondary needs include the need to openly communicate one’s emotions, to be listened to without judgment and to maintain, as much as possible, one’s independence, while still acknowledging the limitations imposed by the disease . Effective pediatric palliative care requires a collaborative approach between healthcare professionals, family members and patients, which addresses not only the medical needs but also the social, spiritual and psychological aspects of the child and family . Moreover, children and adolescents in active treatment struggle to make sense of their illness and may feel that they are missing out on important moments in life . Adolescents diagnosed with cancer and undergoing more intensive treatments tend to have worse psychosocial outcomes and lower quality of life, while those diagnosed with cancer during childhood often demonstrate psychological resilience, as shown in the aforementioned study . It is important to underline, in addition to the possible negative consequences of the disease, the positive aspects studied by several psychologists in the last 20 years, who have identified cancer as a possible source of growth, particularly thanks to the development of resilience in adolescent survivors . 
Many scholars have discussed challenges associated with pediatric cancer and the need for evidence-based psychosocial interventions to support children, adolescents and their families . Kazak and colleagues identified seven key contributions that psychologists can make to collaborative and integrated care in pediatric cancer. These include managing procedural pain, nausea and other symptoms, understanding and reducing neuropsychological effects, treating children in the context of their families and other systems, applying an evolutionary perspective, identifying competency and vulnerability, integrating psychological knowledge into decision-making and other clinical care problems and facilitating the transition to palliative care and bereavement. The positive impact of clear and empathic communication of diagnostic and prognostic information, as well as the use of digital games to distract children from their treatment process, has been assessed . Clinical studies have shown that psychosocial interventions are effective in reducing anxiety and depressive symptoms and improving the quality of life in pediatric oncology patients . However, the disease can have a disruptive effect on family identity and structure, and families of pediatric cancer patients may experience negative outcomes. While some families adapt within twenty months of diagnosis, others continue to experience anxiety, depression, psychological distress and post-traumatic stress symptoms, even after the end of curative treatment. Therefore, evidence-based psychosocial interventions are crucial in providing support for pediatric cancer patients and their families . At the same time, studies in the field highlight how the condition of illness also has a negative impact on the quality of life of parents . A recent study emphasizes how the relationship between mother and child changes during oncological treatments. Constantly caring for a sick child (the caregiving burden) becomes a weight that generates stress, which can be accentuated, among other things, by specific characteristics of the diagnosis and by the daily impediments that the child experiences. Anxiety, uncertainty, fear of the disease and its unpredictability and loneliness are just some of the most frequent emotional states that emerge . The deterioration of quality of life depends, in fact, on the high level of stress experienced by parents, which can weaken the immune system and, in some cases, even lead to post-traumatic stress disorder . The well-being of parents can also depend on individual aspects, such as personality and the ability to positively reassess the situation and to derive satisfaction from their role as a parent, a role that intensifies over the course of the illness. Further complications and worsening of the treatment process also arise from the consequences that the illness has on the child’s physical functioning, which affects their quality of life . 2.1. Research Aims The aim of this research is to address a gap in the literature regarding the psychological experiences of young oncological patients and their parents. To achieve this, two interrelated objectives have been pursued. The first objective is to investigate the experiences of parents whose children or adolescents have received an oncological diagnosis. Specifically, this involves exploring their emotions, experiences as caregivers and the impact of the disease on their lives since the diagnosis. 
Moreover, the focus is also on the coping strategies that parents have used and developed during their child’s hospitalization. The second objective is to gain a deeper understanding of the experience of illness of young patients as there is a lack of systematic studies on the experiences of pediatric and adolescent patients. This general objective aims to contribute to making the relationship between patient–family–healthcare professionals more satisfying and, ultimately, to improve the success of the treatment itself. Starting from that, the study has specifically explored the emotions, dreams and fears of adolescents and children who have faced cancer in pediatric age. The study has tried to identify the needs and most recurring themes in their narratives as well as extract their experiences related to illness and their relationships with healthcare professionals, parents and peer groups. 2.2. Research Design The study adopts a qualitative methodological approach, which frames individuals as active interpretative agents with their own points of view through which they construct their perspective on the world and the surrounding reality . Qualitative methodology encompasses a heterogeneous variety of perspectives and traditions , but there is a general epistemological convergence in considering that its main strengths are the idiographic standpoint, which enables an in-depth analysis of the person within their context and a close examination of their unique perspective . Therefore, semi-structured narrative interviews were used as a data collection method in order to capture the richness of participants’ narratives, leaving room for their unique storytelling while maintaining a clear and pre-established structure . Indeed, semi-structured interviews involve a set of previously constructed questions posed to the participants, with additional inquiries used to explore emerging topics and themes. It is a particularly flexible model, allowing for the collection of the details and richness of participants’ narratives while maintaining a focus on the research aims . Two different semi-structured protocols were adopted. Specifically, questions addressed to parents were, for example: “How did you feel when the physician communicated the bad news?”, “Did you feel understood from the healthcare personnel?”, “Have you noticed any changes in your daily life?”, “Did you feel that your role as a parent changed after the diagnosis?”. Questions addressed to adolescents were, for example: “How do you represent the illness to yourself?” (adolescent), “How do you feel in this new environment? Are there any spaces where you can play? How do you get along with other children?” (children), “Which people do you feel closest to right now? How would you describe the relationship you have with them?” (adolescent), “How do the treatments you receive make you feel? Do you feel listened to? What emotions do you experience most frequently?” (children), “What have been the most impactful changes in your lifestyle? How do you think you have dealt with or are dealing with these changes?” (adolescent), “If the illness were a journey, what would you fill your backpack with?”. The verbatim transcriptions of the interviews were analyzed using the thematic analytical approach, which involves identifying themes through the analysis of narrative material, and through the support of the Atlas.ti software. 
Thematic analysis is a method that has been widely used in qualitative research, since it can be applied to a broad range of topics and is considered one of the most appropriate methodological strategies for studies whose research questions and aims require interpretative as well as explorative work. As Braun and Clarke explain, thematic analysis works well when the following steps are followed: becoming familiar with the data (by reading it several times to identify possible patterns), generating initial codes (creating codes that summarize the main characteristics of the data), searching for themes (classifying codes into potential themes), reviewing themes (determining whether all the themes identified are relevant), defining and naming themes (creating a thematic map and identifying the core of each theme) and, finally, producing the written report. Two analysts independently created codes and categories from the textual data. They then met with a third member to check the consistency and groundedness of their analyses, to resolve any disagreements and, finally, to reach a consensus for theme generation. During the interview phase as well as the analysis phase, interviewers and analysts carefully checked for discrepancies and inconsistencies between patients’ and parents’ narratives. Indeed, patients and parents were part of the same family (even though not all the parents agreed to be interviewed; see below) and it was important to understand whether the illness was being experienced differently within the same family. However, no relevant discrepancy or inconsistency emerged (see below), and therefore there was no need to reanalyze the data in that light. 2.3. Participants The participants in the present study, parents as well as young patients, were recruited by a psychologist working at a hospital in Bari, in Southern Italy (Puglia region). Each participant signed an informed consent form agreeing to take part in a semi-structured narrative interview, conducted by telephone and lasting approximately one hour, and to the audio recording of the interview. With regard to the parents, the study included seven parents of adolescents with an oncological diagnosis. The group consists of seven parents, six of whom are women and one is a man (see ). The age of the parents ranges from 36 to 58 years old (mean age of 47 years; standard deviation of 6.7 years). Each participant is the mother or father of one of the young patients (see above), and they are all married and living with their partners. Thus, all the parents interviewed are parents of some of the children/adolescents interviewed. All participants are Italian. Specifically, four of them live in the province of Bari (Puglia), one in the province of Brindisi (Puglia), and two live in Basilicata. Only one participant is currently employed, in the telecommunications industry, while four women are on special leave or have closed their business (one is a teacher, one is an employee at a car dealership, one is a pharmacist and one is a pastry chef), and the other two women are housewives. The children of the interviewees included four males and two females, ranging in age from 11 to 17 years old (one boy is 11 years old, four are 16 years old and one is 17 years old; the mean age is 15 years, and the standard deviation is 2 years).
The diagnoses are very varied, including: soft tissue sarcoma, testicular rhabdomyosarcoma, Ewing’s sarcoma, B-type acute lymphoblastic leukemia and Hodgkin’s lymphoma of bone origin (see ). The study also included a group of children and adolescents with an oncological diagnosis, consisting of 5 females and 6 males. The age of the participants ranged from 7 to 18 years old (mean 15.1; standard deviation 2.96; mode of 17 years). Within the group, 10 participants were adolescents, and one was a child (using 12 years of age as a reference for entering adolescence). All participants live with their parents and were born in Italy. Most of them are residents of Puglia while some are from Basilicata. They are all followed by the pediatric onco-hematology department of a hospital in Bari. Their diagnoses are quite varied and include Ewing’s sarcoma, Hodgkin’s lymphoma, acute lymphoblastic leukemia, lymphatic leukemia, testicular sarcoma, osteosarcoma and rare sarcoma. At the time of diagnosis, they had a mean age of 13.64 years and a standard deviation of 3.17. As stated below, not all parents of the 11 pediatric patients agreed to be interviewed, which explains the difference between the two samples. The study followed the American Psychological Association Ethical Principles as well as the Declaration of Helsinki. Moreover, it was approved by the University of Padova Ethics Committee (Ethical Code 86BC4C01AC73EE828B62F21440718EC9).
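As a quick cross-check of the descriptive statistics reported above, the figures for the interviewed parents' children can be reproduced from the ages listed in the text. The short Python sketch below is purely illustrative; the paper does not state which rounding convention or which standard-deviation formula was used, so both the sample and the population versions are shown.

```python
# Minimal sketch: reproducing the descriptive statistics reported for the
# interviewed parents' children. Ages are taken from the text: one 11-year-old,
# four 16-year-olds and one 17-year-old.
from statistics import mean, stdev, pstdev

ages = [11, 16, 16, 16, 16, 17]

print(f"mean age: {mean(ages):.1f}")         # ~15.3, reported as 15 years
print(f"sample SD: {stdev(ages):.1f}")       # ~2.2
print(f"population SD: {pstdev(ages):.1f}")  # ~2.0, reported as 2 years
```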
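To make the "searching for themes" step described in Section 2.2 more concrete, the following sketch shows how coded interview segments might be grouped into candidate themes. It is a hypothetical illustration only: the actual coding was performed by two analysts in Atlas.ti and refined by consensus; the code labels and the code-to-theme mapping below are invented for illustration, and only the theme names and excerpts are drawn from the results reported in the next section.

```python
# Hypothetical illustration of grouping open codes into candidate themes
# (Braun & Clarke's "searching for themes" step). Codes and mapping are invented.
from collections import defaultdict

# (code, excerpt) pairs as they might come out of open coding
coded_segments = [
    ("disbelief at diagnosis", "It really seems like you're living in a movie"),
    ("constant worry", "you are always with that thought"),
    ("fighting spirit", "Every day was a victory"),
    ("prayer as anchor", "Through prayer and faith..."),
]

# a provisional code-to-theme mapping, revised in the later review steps
code_to_theme = {
    "disbelief at diagnosis": "Emotional and caregiver burden",
    "constant worry": "Emotional and caregiver burden",
    "fighting spirit": "Personal growth",
    "prayer as anchor": "Spirituality",
}

themes = defaultdict(list)
for code, excerpt in coded_segments:
    themes[code_to_theme[code]].append((code, excerpt))

for theme, segments in themes.items():
    print(theme)
    for code, excerpt in segments:
        print(f"  [{code}] {excerpt}")
```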
The following section is divided into major themes emerging from parents’ and young patients’ interviews. 3.1. Parents’ Results From the analysis of parents’ narratives, four major themes emerge. 3.1.1.
Emotional and Caregiver Burden The emotions experienced by parents were at the heart of the interviews, allowing for a better understanding of the situation they have lived through. Parents recalled the uncertainty brought on by the physical problems their children first experienced, which led them to investigate the situation with pediatricians and specialists. In each of these cases, a diagnosis of oncological disease was reached, which many describe as impossible to forget. “It seemed like an infection, but I had the feeling there was something more […] So when the diagnosis was given, I received it and didn’t receive it”. (P2) “[…] You can’t believe that something like this can happen to you. It really seems like you’re living in a movie”. (P4) “In the first few days, I felt like I was in a bubble, you can’t realize and understand what’s happening”. (P7) The act of “taking care”, which is crucial for a parent, becomes even more important in the case of a life-threatening illness in their child. From the experiences shared, confusion, pain and disbelief emerged, as seen before, which then turned into a fight that also involved the daily care of the child. The psychological burden of the disease overloaded the role of the parents, who were not physically ill but had to witness their child’s pain. “When you see your daughter’s eyes imploring you to go home and not want to be sick anymore, that’s when it weighs on you, psychologically”. (P5) “You always have to pay a little attention, before you did not think about it, […] that is the thing that has changed more than anything else, you are always with that thought. Before you were more open, now you are much more worried, given his situation”. (P3) This experience of vicarious suffering made them feel the full weight of caregiving: they did not know what would happen, they were separated from the rest of their family and they empathized with their child’s suffering, sometimes to the point of effacing themselves in all-consuming care and feeling that they could only be “mothers”. 3.1.2. Personal Growth Alongside these extremely negative experiences, however, all parents have sought normalcy and shown their strength and their desire and need to fight the disease alongside their child, forcing themselves to accept the situation in order to move forward and fight. From this picture, strong experiences of resilience and positivity emerge, guided by trust and hope in the future and in the treatments. Some also report feeling fortunate compared to others regarding the diagnosis and having encountered positive aspects within the illness, although they themselves specify that it was “difficult to say”. “Every day was a victory, we fought this battle with a sword pointed towards the top, because we must reach the top, this is the goal”. (P1) At the same time, parents found the strength to move forward, without realizing where it came from. They did it for themselves and for their children, with whom, as some parents explain, they felt they had established a stronger relationship of mutual support and help in moments of discouragement, while trying to remain the “same” parent. “[…] it was a pleasure, I shared things with her, sometimes she did not tolerate me, but like every teenager. But in the end, it was a pleasure, for me certainly, to spend more time with her”. (P6) “In some sort of way, the shock helped us put things under a whole new perspective. You start appreciating small gestures, caresses, attention.
[…] And you realize how fundamental and important they are, the richness and satisfaction they can easily bring, if you let them touch you”. (P7) 3.1.3. Spirituality Concerning spirituality, not intended as religiosity, some of the interviewed parents explained how their child’s illness changed their values, or those of the young patients. In the first case, a mother explained how the path after the diagnosis made her reassess her priorities, making her understand that some aspects she focused on before were superficial. Another interviewed woman explained how her older daughter’s illness opened the eyes of the young patient and her younger sister, allowing them to appreciate the little things and the time spent together as a family. “At first, I suffered a little from this decision to leave work, but then it was an advantage from many points of view, because I took a break from some superficial things that I gave more importance to before, we changed our priorities a bit”. (P6) “We have all changed a bit: us, so to speak, but they changed a lot. I find myself with two different girls. On the one hand, it seems bad, but let’s say that it brought out the best in us. It put us in front of the true meanings of life”. (P5) Regarding religiosity, all the interviewees explained how it was of great help, an anchor to hold onto, and how prayer was a source of serenity and a request for well-being for the family. In addition, a mother explained how religiosity and prayer were ways to turn to Someone greater, surrendering to His will, guiding doctors in their work with her child. “Through prayer and faith, I trust that God can give us the help we need, the strength”. (P7) “Completely surrender to God, ‘Jesus, you take care of it.’ I do not suggest the cure, because you are the doctor […], let Your will be done whatever it may be”. (P1) Religiosity led some parents to feel the closeness of God, His presence in their lives and in the journey of their children as if there was a powerful hand guiding the process. On the other hand, some were angry with God for what happened, feeling betrayed and questioning why it happened to their children and, in general, how there could be so much evil in the wards they frequented. “[…] I felt a presence, or I wanted to think so, I felt it, praying I felt that things would go well and that there would be a higher power that would help my son”. (P2) Religiosity has helped a mother to believe that no matter how the treatments went, it would be fine because spirituality helps you think about life after death rather than the potential annihilation of life. The same woman provided a different perspective on prayer. She explained that she felt fear at the idea of going to church and praying because it would lead her to deeper introspection, bringing forth emotions that were too intense for her to confront at that moment. “Faith helps you because it gives you hope. The thought that no matter how it turned out, it would be fine, because it helps you live death differently […] in the sense of a life after death. […] I’m almost afraid to enter a church because I don’t want to think too much. I’m afraid of thinking and reflecting too deeply. I’m afraid of where my thoughts might go, so I prefer to live without dwelling too much. I’m afraid of delving too deeply within myself and unleashing emotions that I can’t afford to deal with right now because I have to take it day by day, fight”. 3.2. 
Young Patients’ Results From the analysis of young patients’ narratives, three major themes emerge. 3.2.1. Emotions: From Confusion to Resilience Initially, surprise, fear and anxiety arising from confusion in the new situation are described. There is a sense of injustice towards oneself and towards one’s family members, who are also affected by this difficult journey. Uncertainty regarding their condition, the duration of treatment and future prospects is a recurring theme in the interviews. “I thought I would never get out of it, that I did not deserve such a thing, that my family did not deserve it (although less often), that it is an unknown thing and that I only got to know it by experiencing it, but I would have liked to know something about it before. I think there is too much misinformation about it, before getting sick I never really wondered what it meant to be ill”. (P9) Fear is also reported in relation to the future and the possibility of a relapse. The hypothesis that the disease may reappear seems to be what scares patients the most. Another interesting aspect that emerged is their relationship with technology and the search for information on the internet. “I think that once everything is over, the problem could reappear. Last week I had a panic attack because, even though I had never read anything about the disease on the internet (also following the doctor’s advice), at that moment I searched [online] and saw that there was a possibility of a relapse. The idea of going through all of this again scares me. I have this fear a little bit”. (P3) However, most adolescents have shown great strength in contrast to these negative emotions, recounting and describing positive experiences. Despite finding themselves in a particularly difficult situation, many participants have managed to find a way to see the positive side of things, to appreciate even what they used to take for granted. Some even emphasize how they felt lucky in their misfortune, thanks to the people around them. “I feel a myriad of different emotions, certainly at times after many moments that make me feel bad, like hospitalizations, I feel so much joy when I see my father and my sister again. Moments of joy are very frequent even though it doesn’t seem like it, and I always find a way to be as happy as possible”. (P3) 3.2.2. Relationships with Peers It is interesting to note how the testimonies are particularly diverse regarding this topic: Some have not noticed any change with their lifelong friends; others struggle to meet and connect as they did before and have bonded more with people who are living with cancer and therefore share the burden of the disease and its treatment; while others have intensified their relationships with some friends and lost sight of others. Those who are more positive use words full of gratitude and affection towards their companions, who show closeness despite the difficulty of meeting. These friends are mainly identified as “lifelong friends”. “The relationship with my friends is beautiful, wonderful, special. I thank them so much […]. Even outside the hospital context, I have never felt different with them, there have been no changes”. (P4) An important aspect is understanding: According to what emerges from the narratives of adolescents, friends who found a way to communicate what was happening to the person directly involved were more aware and understanding towards them. “In my relationship with my peers, there are ups and downs. Some moments have made me very happy, others not so much.
At this age, not everyone understands what I am going through. I feel a little more understood by those who are maybe a little more mature, but not by others. Some of my friends know what I am going through but can’t bring themselves to contact me. Instead, they ask others for information... At first, it hurt me a little, and I thought they weren’t interested. Now I understand that we are not all made the same way, and I am not so upset about it. For example, one of my friends never contacts me, but when we see each other, he always pays attention to me. He wanted me to be there for his eighteenth birthday, even though I was in a wheelchair”. (P3) For those who mentioned it, the relationship with the boys and girls they met in the hospital who are going through the therapy journey with them seems to be particularly relevant. “Apart from my mother, I feel very close to the people who are going through the same journey as me. Talking to P9 is different from talking to my best friend because she fully understands me”. (P7) 3.2.3. Personal Growth From participants’ narratives, various aspects emerged relating to changes they faced during the illness and therapy. These changes were diverse, ranging from physical changes to changes in growth, thought, habits, and daily life. Initially, some of these changes were particularly difficult to accept, but acceptance and habituation gradually followed, thanks to the support of family and friends. “I feel like I see life differently, that I have become aware of this environment. I am sure that I have to do things immediately (before I would not have expected to spend a month in the hospital, to have to shave my hair...)... At first, I dealt poorly with these changes, the hair loss was shocking. Now it seems like a good thing, I feel like I have something more than others that allows me to face everything”. (P9) The loss of independence is one of the most significant changes felt by the young people interviewed and is particularly distressing. Being unable, during adolescence, to move around freely, go out with friends, dress oneself, or take walks has had a great impact. “I find it difficult because I cannot move around as much anymore, I am not as independent as before. I used to walk a lot, but now I can’t, I have to be more careful... Then, my social life has changed, I don’t physically go to school, I am no longer alone with myself... Before, I enjoyed taking a walk or being by myself even at home, now my parents won’t let me”. (P3) Daily life was shattered by the diagnosis, habits changed: Some feel a real fracture between their previous and current lives while others no longer recognize normality. “I am living a completely different life, as if I entered another life, disconnected from the previous one. I have experienced physical changes, changes in my lifestyle, in my thinking... [The changes in thinking concern] how I dealt with the situation in February and how I am dealing with it now, even the little things. It seems to me that I am coping well, I make a lot of good resolutions, but they don’t always work out”. (P7) Some changes are also experienced as positive: The young people have noticed that they are living a path of growth, that they have acquired new important awareness of which they are proud. “I have changed my character, my way of thinking. I see it as positive, I feel grown, changed. I like the changes I’m making at the character level. 
I do not like how I am using my time; I could do more useful things that would serve me better”. (P8) “I no longer have free time, I focus on what I like, I try to do the thing that happens more rarely […] I invest my time differently, I use every second. […] I feel that I am facing these changes positively, I have learned that time is not infinite and must be used responsibly. In the past [time] was infinite”. (P11)
The study aimed at investigating the experiences of parents whose children or adolescents have received an oncological diagnosis and at understanding young patients’ experience of illness. The results show that, while the oncological diagnosis triggers devastating feelings that threaten daily life and carry a profound emotional and caregiver burden, at the same time the disruption can open a new space of transformation and growth, in which spirituality plays an important role. For their part, pediatric patients showed how ambiguous and fluctuating the oncological experience can be. Overall, the findings of the study indicate positive experiences for both parents and children, greater attachment to the family and a deeper sense of self-awareness among the participants, which is consistent with prior research. As stated above, the analysts checked for inconsistencies and discrepancies between patients’ and parents’ narratives; any such discrepancies would have been carefully analyzed separately. However, both parents and patients reported great unity within the family group with regard to coping strategies and shared feelings, highlighting a common perspective emerging alongside individual and unique emotions. Indeed, the following paragraphs stress the importance of creating a cohesive family team in managing the illness, as well as of safeguarding one’s own private moments. The interviews reveal the weight of caregiving and its all-encompassing nature. Parents try to be constantly present, sometimes to the point of effacing themselves, as described in various studies. Despite trying to maintain a sense of normalcy, parents often experience negative emotions of discomfort, fear and insecurity due to the unpredictability of the disease and its course, in line with what has been reported in the studies of Björk and colleagues.
Many other parents have reported having a good social support network and have been able to cope with the help of partners, family members and friends, who have assisted them in various tasks ranging from household chores, help with other children and financial support . Financial problems arising from therapy are also significant, as reported by one interviewed mother. Despite these experiences, the interviews have shown the will to fight, to move forward and not to be defeated by the disease and daily difficulties that test both physical and mental strength. This need to fight leads to great experiences of resilience and growth, in line with what has been reported by Woodgate and Van Schoors and colleagues , who emphasize the importance of family unity and parental couples, as reported by the interviewees, and how these aspects help to cope with the difficulties arising from the child’s illness. This personal growth involves a change in important values and a reprioritization of involved parents, demonstrating post-traumatic growth resulting from traumatic events, such as a cancer diagnosis, as reported in several studies . This is accompanied by a strengthening of the relationship with the sick child and, in general, with the family, as emphasized by Picoraro and colleagues . In the interviews, a mother defined the time spent with her daughter, despite the illness, as a pleasure, highlighting the satisfaction derived from care (compassion satisfaction), as reported in various research studies , and how this aspect of personal satisfaction in care leads to more positive experiences within the illness process. Regarding the aspect of anticipatory grief, anger and denial were reported by the parents involved in the interviews in the initial stages. It is hypothesized that they were going through the first two stages of the AG process theorized by Kübler-Ross. Regarding the moment of diagnosis, all parents have a vivid memory, as reported by Parker and Johnston . From the beginning, the doctor-patient relationship is characterized by clarity, trust and empathy, with parents expressing satisfaction with the care and relationship established with the healthcare staff, which has been reported in some literature. After the diagnosis, the referring oncologist always took care to speak personally with the young patients, with great tact but clearly and realistically about the diagnosis, treatment path and the possible outcomes. Finally, regarding the aspect of spirituality, it emerges from the literature how it helps as an anchor to understand and make sense of the illness, to move forward and provide hope and comfort, acting as a guide in uncertainty for parents, as emphasized by the interviewees. This support is made possible primarily through prayer, but also through contact with sacred objects, as explained by the interviewees and studies by Rossato and colleagues . As emphasized by the parents, spirituality has helped them feel connected to a higher power and to seek health and assistance from God, in line with the findings from the previous study . Throughout the interviews, the centrality of support, spirituality, hope and positivity emerges as coping strategies to face the situation. The adolescents reported a range of emotions and fear of relapse, and the re-experiencing of pain associated with treatment characterized their experiences. 
Changes in physical appearance, such as hair loss, had a significant impact on the adolescents’ self-perception and sense of identity, contributing to feelings of inferiority and a struggle to recognize themselves. The aggressive cycles of therapy led to a partial loss of independence, and the participants reported significant discomfort in relying on family for even the simplest tasks. However, some participants experienced growth and development of awareness and coping skills as a result of their illness. The analysis highlights the complexity of the experiences of adolescents with cancer and calls for a more nuanced understanding of the factors that contribute to their quality of life. However, since the pediatric population under study falls within the adolescent phase, it is necessary to highlight some characteristics of individuals in this age group. Firstly, it is clear that adolescents have different needs compared to children and adults diagnosed with the same condition. Adolescence is an extremely delicate phase in a person’s growth and development, characterized by specific developmental tasks. It is a phase of biological, social and psychological changes that lead individuals to feel the need to discover, explore and define their own physical and psychological identity . For this reason, when they feel uncertain about the potential progression of the disease and perceive a lack of necessary information to understand how to cope with it, they tend to experience increased levels of stress, anxiety, depression and isolation . This is not the case if they feel aware of what is happening to them, which is why it is common for them to express a desire to participate in decisions regarding their treatment paths within the first year after the illness . It is important, therefore, to understand the decision-making process in adolescents and find the best way to involve them in decisions regarding the treatment of their condition, in order to avoid increasing or prolonging their suffering by applying principles that do not align with their needs . A factor that reinforces the desire to actively participate in treatment decisions is the sense of self-efficacy, which refers to the ability to feel capable in the face of challenges that they encounter. This feeling is common among many adolescents . From these interviews and parents’ accounts, we gain a more precise understanding of the experiences of caregivers of adolescents with oncological diagnoses. Considering what has emerged from the research, there is the need to design and implement future projects aimed at supporting the entire family. This will allow for more accurate and patient- and family-centered interventions, not only to support families but also to provide new insights for possible future research on the experiences of parents and young patients, coping strategies, psychological support or techniques that have helped the entire family to cope, such as pet therapy or art therapies, which were not proposed in this study. Specifically, it would be interesting to expand beyond the nuclear family and include the extended family, encompassing individuals indirectly affected by the experience of illness. This would provide insight into how relatives and friends contribute to supporting the illness experience. 
Additionally, it would be valuable to replicate the interviews with the same individuals at various stages of the illness and treatment in order to achieve a more comprehensive understanding of the overall illness experience. In this regard, further exploration of the lived experiences associated with the moment of diagnosis would be worthwhile, with particular emphasis on the methods employed to convey the difficult news. This would also facilitate the evaluation of the contribution of psychological support provided by hospitals and aid in its implementation. From this recruitment process, a group of 18 participants was obtained. This is a limited group, and therefore the narratives cannot be generalized. However, it has provided very interesting indications for further research through the involvement of other patients, particularly pediatric patients. Despite these limitations, the present study has highlighted how in the interviewed young patients, certain central themes consistently emerge, which families and therapists should carefully consider. Among these, particularly relevant are: the numerous and diverse changes that adolescents and children affected by oncological diseases must face, ranging from physical changes to changes in habits, self-perception, use of time and level of independence; the relationship with peers, sometimes seen as the main source of support, other times fraught with difficulties due to the lack of understanding and yet again rediscovered in people known in the hospital who are undergoing treatment and care for cancer; the multiple emotions experienced during the course of therapy, ranging from astonishment at the time of diagnosis, uncertainty, fear, anger and frustration during treatment, to joy and happiness with approaching discharge, still punctuated for some by anxiety at the thought of relapse; the relationship with healthcare personnel, evaluated as particularly relevant and positive; the perceived support, especially from closest family members, which allows patients to feel accompanied on this tortuous journey. Finally, these results can foster interesting insights for enhancing clinical care services. Firstly, as previous studies have highlighted , in times of traumatic events, a more structured social support can be of utmost importance. When facing the attempt to find a meaning, self-help groups can be very powerful, also working as an important emotional stabilizing tool through correspondence and sharing . Thus, clinical institutions could provide more opportunity for mutual aid. At the same time, results show that it is important to provide clinical intervention, to both parents and pediatric patients, that help in maintaining a sense of continuity with ordinary life, thus avoiding the risk of creating a spiral of sufferance which can result in an emotional burden and that can make nurturing interpersonal relationships difficult.
Bleaching as a complement to fluoride-enhanced remineralization or resin infiltration in masking white spot lesions
dd9abf7d-084e-4304-bf92-731942f8e115
11464075
Dentistry[mh]
A recent investigation into the burden of untreated dental caries in 204 countries and territories over 30 years has revealed a significant increase in its incidence, prevalence, and number of years lived with disability, especially among groups with high free sugar intake and those in lower socioeconomic positions. Many strategies are suitable for addressing the caries process as well as its signs and symptoms. While invasive/restorative interventions are generally indicated for active cavitated lesions, no treatment might be the clinical practice for inactive non-cavitated and cavitated lesions (except for reasons of form, function or esthetics). Furthermore, non- or micro-invasive strategies might help with active non-cavitated carious lesions, or white spot lesions (WSLs). Active WSLs are typically subsurface, presenting a pseudo-intact surface layer over the body of the lesion with plenty of pores. These may be filled with saliva/water, which has a refractive index of ~1.33, or air, which has a refractive index of ~1.0. Both differ from that of the hydroxyapatite, which is ~1.65. The greater the difference between the refractive indices, the whiter the appearance of the lesion , which may lead to esthetic discomfort when located in anterior teeth. Treatment of WSLs should ideally arrest them and simultaneously favor esthetics, thus avoiding cavitation and lessening opacity and the possibility of discoloration. Although fluoride-enhanced remineralization is an effective non-invasive treatment for arresting WSLs, its esthetic result may not be successful. In general, WSLs remain visible after this procedure due to the rapid precipitation of ions in their outer portions. As a result, the ions cannot penetrate into the inner portions of the lesion body and the subsurface remains porous. On the other hand, resin infiltration in WSLs has been proven to stop mineral loss and reduce their whitish appearance. This micro-invasive strategy consists of infiltrating a low-viscosity TEGDMA-based material with a high penetration coefficient into the intercrystalline spaces of demineralized enamel after etching the pseudo-intact surface layer. The material replaces saliva/water into the lesion body pores, granting the WSL mechanical support and an optical appearance similar to that of the adjacent sound enamel, as its refractive index is ~1.52. Nonetheless, WSLs masking can be challenging, and its whitish aspect tends to be only partially masked due to the histopathologic lesion features and the high sensitivity of the technique. Some amount of remaining opacity after enamel resin infiltration seems to be inevitable, possibly due to incomplete replacement of the air that fills the total volume of enamel pores by the material. Application of the infiltrant itself over the WSL was suggested to cause a low outward air flow rate, which is part of the typical flow competition that occurs during the infiltration of liquids into dry porous hard materials. Therefore, doubts remain as to how to achieve better esthetic results without furthering loss of tooth structure and impairing enamel surface roughness, when already remineralized or infiltrated WSLs do not completely disappear. Bleaching, for instance, is a non-invasive strategy to treat discolored teeth which has been highly recommended as the first attempt to reduce the contrast between white spots and the adjacent sound tooth structure. Bleaching plus resin infiltration is widely accepted to mask fluorosis and other enamel defects. 
, However, Jacob, et al. (2023), on their study of bleaching on color and surface topography of teeth with enamel caries differently treated, were categorical in establishing as their background that bleaching is not recommended on teeth with demineralized carious lesions. There were also concerns from a respected consultant and employees of the company that holds the patent for the only resin infiltrant commercially available on the recommendation of bleaching before infiltration, as in clinical practice, this sequence is not always reasonable. Recognizing that the interaction between the bleaching agents and the resin-infiltrated lesions themselves is an area of interest, they just evaluated the effect of bleaching after resin infiltration regarding surface roughness and color using bovine incisors. However, bleaching as a complement to resin infiltration or to fluoride-enhanced remineralization in masking WSLs (which is different from understanding the WSL itself before and after a given treatment or combination of treatments) was not extensively evaluated. In this context, clinicians should have evidence-based alternatives to address esthetic concerns remaining from the partially successful previous treatments of WSLs. This, as well as the need to justify further randomized clinical trials on masking WSLs, were the reasons for the proposal of this study. Therefore, the objective was to evaluate the ability of bleaching after fluoride-enhanced remineralization and resin infiltration, as well as that of each of them not followed by bleaching, to mask WSLs in bovine enamel and to influence its surface roughness, relative to that of the adjacent sound enamel. The null hypothesis tested was: neither bleaching after fluoride-enhanced remineralization and resin infiltration, nor each of them not followed by bleaching, would affect the masking of WSLs or its roughness relative to adjacent sound enamel. Sample size calculation Considering that an ΔE 00 =6.28±0.531 was previously verified between WSLs and adjacent enamel, and 0.8 is the CIEDE2000 color difference perceptibility threshold, sample size was calculated (http://estatistica.bauru.usp.br/calculoamostral/) using an estimated standard deviation of 0.531 and an effect size of 0.8, plus alpha and beta errors of 5 and 20%. It was found that n=14, but n=15 was selected for each group. Specimens’ preparation and distribution in the experimental groups After the Ethics Committee on Animal Use exempted this research project from being analyzed (CEUA/FOUSP #025/2019), 150 bovine incisors were obtained. Prior to the specimens’ preparation, cracked or stained teeth were excluded from the study and then each of the pertinent ones had the crown sectioned in a 6×3×~2.9 mm length, width and thickness rectangular fragment using a precision cutting machine (Isomet Low Speed Saw; Buehler Ltd., Lake Buff, IL, USA). The dentin and enamel of the fragments were flattened, and the enamel also polished, in a metallographic polisher (EcoMet; Buehler Ltd., Lake Buff, IL, USA) to a thickness of approximately 1.6 mm and 1.3 mm, respectively. The specimens were immersed in distilled water for 10 minutes in an ultrasonic bath (Shenzhen Codyson Electrical Co., Ltd., CHN, Guangdong, China) and those still cracked were excluded from the study. Then, specimens were numbered and submitted to surface microhardness analysis (Knoop Hardness Number [KHN]) using a microhardness tester (HMV-G21DT, Shimadzu Co. Tokyo, Japan) with a Knoop indenter (50 g/10 s). 
Five indentations were made to determine the mean and standard deviation of the microhardness value. Blocks with a standard deviation greater than 10% of their individual mean microhardness and individual mean microhardness greater or less than 10% of the mean microhardness calculated for all blocks (324.6±13.3) were excluded. A total of 90 specimens were selected for distribution by stratified randomization (Excel 16.0; Microsoft Corporation, Redmond, WA, USA) into six experimental groups (n=15): L/N: Lesion without treatment (left half)/Nothing (right half); F/N: Fluoride treatment (left half)/Nothing (right half); F.BL/BL: Fluoride treatment + bleaching (left half)/Bleaching (right half); I/N: Infiltration treatment (left half)/Nothing (right half); I.BL/BL: Infiltration treatment + bleaching (left half)/Bleaching (right half); N/N: Nothing/Nothing (control group). The work flowchart summarizes the conducted procedures. Remaining specimens were used for the validation of the protocol for simulating the WSLs or for pilot tests, or were stored in distilled water so that they could replace any specimen previously selected in case of technical artifacts which could lead to exclusion from the study. Validation of protocol for simulating white spot lesions In nine specimens with surface microhardness consistent with those of the 90 specimens in the experimental groups, a central window measuring 3×3 mm was determined in order to achieve contact with the demineralizing solution for 32, 64 or 96 hours (n=3) (50 mM acetate buffer; 1,28 mmol/L of Ca(NO 3 ) 2 .4H 2 O, 0,74 mM NaH 2 PO 4 .2H 2 O, and 0,03 ppm F, pH 5.0, 37ºC). , All the specimens were transversally sectioned and polished to obtain slices of 80-100 µm each. These slices were then affixed to specific plates and exposed to radiation using a Transversal Microradiography system (TMR; TMR 1.25e, Inspector Research BV, Amsterdam, Netherlands). A transmitted light microscope with a 20× objective (Axioplan; Zeiss, Oberkochen, Germany) and a camera (XC-77CE, Sony, Tokyo, Japan) were used to observe whether the lesion was subsurface. The TMR 1.25e system software was used to calculate the integrated mineral loss (ΔZ, %vol.μm) by subtracting the percentage of mineral volume of sound enamel (87%) from that percentage of the demineralized enamel, multiplied by the lesion depth (μm). Lesion depth (LD, µm), the integrated mineral loss (∆Z, vol. µm), and the average mineral loss over the lesion depth (R, vol%) were obtained. Data from specimens immersed for 32, 64 and 96 hours are presented in , and their representative images in . Simulation of WSL and specimens’ treatment The right half, the lateral and dentin surfaces of each specimen in the L/N, F/N, F.BL/BL, I/N, and I.BL/BL groups were coated with three layers of cosmetic nail varnish (Colorama Longa Duração Extra Brilho; L’Oréal Brasil Comercial de Cosméticos Ltda., Rio de Janeiro, RJ, Brazil). This ensured that only the left half had artificial WSL, according to the validated protocol. The specimens were immersed in the demineralizing solution for 96 h, since the greatest depth was verified for this amount of time. Once the artificial WSLs were established, the following treatments were conducted: L/N: No treatment on the left half surface (with the WSL). F/N and F.BL/BL: 2% NaF neutral gel (SS White Artigos Dentários Ltda., Rio de Janeiro, RJ, Brazil) for 1 min on the left half surface (with the WSL). 
After each application of the gel, the specimens were rinsed with distilled water and stored in 6.3 mL of artificial saliva (22.1 mmol/L hydrogen carbonate, 16.1 mmol/L potassium, 14.5 mmol/L sodium, 2.6 mmol/L hydrogen phosphate, 0.8 mmol/L boric acid, 0.7 mmol/L calcium, 0.2 mmol/L thiocyanate and 0.2 mmol/L magnesium; pH between 7.4 and 7.8), which was replaced daily. I/N and I.BL/BL: 37% phosphoric acid (Condac 37, FGM Dental Group, Joinville, SC, Brazil) for 10 s on the left half surface (with the WSL). After 30 s of air-water spray rinsing and drying, Icon®-Dry was applied for 30 s and air-dried. Finally, Icon® infiltrant was applied twice: the first time for 3 min, and the second for 1 min. After excesses were removed, each application was light-cured for 40 s (Radii-cal, SDI, Bayswater, VIC, Australia). Polishing was performed with an abrasive rubber cup (Enhance Finishing System, Dentsply Caulk, Milford, DE, USA) for 20 s at low speed. F.BL/BL and I.BL/BL: Bleaching with 40% hydrogen peroxide gel on the entire surface of the specimens (Opalescence Boost 40% Hydrogen Peroxide, Ultradent, South Jordan, UT, USA). A 0.5-1.0 mm thick layer of gel was applied, and three applications were made for 20 minutes each. * In the N/N group, both halves were only abraded and polished (control group). Evaluation of color difference between WSL and adjacent enamel Specimens were stored in distilled water (37ºC, 24 h) and then sectioned into two 3×3 mm halves (Isomet Low Speed Saw; Buehler Ltd., Lake Buff, IL, USA). After gently drying with absorbent paper, each half-specimen was placed into a polytetrafluoroethylene holder with a 3×3 mm reading window, and color reading was conducted using a colorimetric reflectance spectrophotometer (CM 3700A, Konica Minolta, Osaka, Japan). Color and spectral distribution were measured according to the L*, a* and b* coordinates established by the Commission Internationale de l'Éclairage (CIE). The following settings were used: 360-740 nm wavelength light, standard illuminant D65, 2º observer angle and a white background. ΔE00 was calculated with the CIEDE2000 color difference formula, $\Delta E_{00} = \{[\Delta L/(K_L S_L)]^2 + [\Delta C/(K_C S_C)]^2 + [\Delta H/(K_H S_H)]^2 + \Delta R\}^{1/2}$; ΔL, Δa, and Δb were determined by subtracting the adjacent enamel data (right half) from the WSL data (left half). shows the visual aspect of a representative specimen from each of the experimental groups. Evaluation of surface roughness difference between WSL and adjacent enamel Sites corresponding to the center of the surface of each half-specimen and 1.5 mm up and down were scanned with an optical profilometer (Proscan 2100 – Sensor Model S11/03, Scantron, Venture Way, Tauton, UK). The following settings were used: cut off = 0.8 mm, surface filter = 99, step size X = 0.002 / number of steps X = 2000, and step size Y = 0.001 / number of steps Y = 0. With the help of the system software (Proscan Application software v. 2.0.17, Scantron, Venture Way, Tauton, UK), the arithmetic average roughness (Ra) was determined and the mean of the three readings assigned as the Ra for each half-specimen. ΔRa was determined by subtracting the adjacent enamel data (right half) from the WSL data (left half). Statistical analysis The Shapiro-Wilk and Levene tests were applied to evaluate the distribution of the data. The data for ΔE00, ΔL and Δa did not respect the assumption of normality, and the ΔE00 and ΔRa data that of homogeneity. Thus, the Kruskal-Wallis and Dunn tests were applied.
The data for Δb complied with both assumptions, and one-way ANOVA and the Tukey test were applied. The significance level was always 0.05, and the statistical program used was Statistica 13.5.17 (TIBCO Software Inc., Palo Alto, CA, USA).
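For readers who wish to reproduce this decision flow, the short Python sketch below screens each outcome variable with the Shapiro-Wilk and Levene tests and then applies either one-way ANOVA or the Kruskal-Wallis test accordingly. The ΔE00 values are hypothetical placeholders rather than data from this study, and the Dunn and Tukey post hoc steps are only indicated in comments.

```python
import numpy as np
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """Choose the omnibus test as described above: one-way ANOVA when every group passes
    Shapiro-Wilk (normality) and Levene (homogeneity of variance), Kruskal-Wallis otherwise."""
    normal = all(stats.shapiro(g)[1] > alpha for g in groups)
    homogeneous = stats.levene(*groups)[1] > alpha
    if normal and homogeneous:
        stat, p = stats.f_oneway(*groups)   # follow up with a Tukey post hoc test
        name = "one-way ANOVA"
    else:
        stat, p = stats.kruskal(*groups)    # follow up with a Dunn post hoc test
        name = "Kruskal-Wallis"
    return name, stat, p

# Hypothetical dE00 values for the six groups (n=15 each), for illustration only.
rng = np.random.default_rng(1)
de00 = [rng.normal(mu, 0.5, 15) for mu in (6.3, 1.8, 1.6, 2.1, 1.5, 1.6)]
print(compare_groups(de00))

# Group medians can then be read against the perceptibility (0.8) and acceptability (1.8) thresholds.
for label, values in zip(("L/N", "F/N", "F.BL/BL", "I/N", "I.BL/BL", "N/N"), de00):
    print(label, round(float(np.median(values)), 2))
```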
The factor under study significantly influenced ΔE 00 results (p=0.0001): the lesion without treatment (L/N) group differed from all other groups, which in turn did not differ from each other. WSLs thus contrasted with adjacent enamel, but both fluoride-enhanced remineralization and resin infiltration, followed or not by bleaching, masked them similarly. Median ΔE 00 values when fluoride-enhanced remineralization and resin infiltration were not followed by bleaching, however, exceeded color difference perceptibility and acceptability thresholds (0.8 and 1.8, respectively) . Regarding ΔL, the factor under study also significantly influenced the results (p=0.0024): the lesion without treatment (L/N) group differed from the others, but the group with fluoride treatment plus bleaching (F.BL/BL) did not (p=0.79). All groups, except for the lesion without treatment (L/N) group, were not different from each other. The lesion without treatment (L/N) group showed the greatest ΔL, while the fluoride treatment plus bleaching (F.BL/BL) group showed intermediate values. The other groups showed the lowest values, which were equivalent to those of the control (N/N) group . As for Δa, the factor under study did not significantly influence the results (p=0.1592): In the red-green coordinate, WSLs did not stand out from the adjacent enamel, regardless of its treatment and the subsequent bleaching . About Δb, the factor under study significantly influenced the results (p=0.0015): the lesion without treatment (L/N) group was different from the other groups, except for the fluoride treatment (F/N) and fluoride treatment plus bleaching (F.BL/BL). All groups, other than the lesion without treatment (L/N) group, were not different among themselves.
In the yellow-blue coordinate, the WSL stood out from the adjacent enamel when it was not treated, but this did not occur when it was infiltrated, or infiltrated and bleached. An intermediate situation was found when the WSL was treated by fluoride-enhanced remineralization, complemented or not by bleaching. Finally, the factor under study significantly influenced ΔRa results (p<0.001): the control (N/N) group was different from the other groups, except for the infiltration treatment (I/N) group. The infiltration treatment (I/N) group was different from the lesion without treatment (L/N) group. All groups, except for the control (N/N) group, were not different among themselves. Only infiltration without bleaching reduced ΔRa between the WSL and the adjacent enamel to a level similar to that of the control (N/N) group. In view of the presented results, the proposed null hypothesis was rejected, since bleaching after fluoride-enhanced remineralization and resin infiltration, and each of them not followed by bleaching, were able to mask the WSLs. Furthermore, resin infiltration not followed by bleaching was effective in minimizing the roughness difference between the WSL and the adjacent enamel. In general, patients with WSLs feel dissatisfied with their teeth color due to their white appearance. In this study, this could be evidenced by the ΔE00 analysis, in which the enamel surface that contained the WSL highly contrasted with its adjacent sound surface. Specifically, there was a notable increase in luminosity (L* coordinate) and a blue tendency in the WSL (b* coordinate, yellow/blue), which is consistent with previous studies. This fact can be justified by the loss of enamel mineral content, which reduces its translucency and increases its opacity. Treatments for WSLs ideally should arrest them and, whenever possible, improve their whitish appearance, so that the lesion becomes imperceptible relative to the adjacent sound enamel. However, most in vitro studies evaluate color differences of the same area, before and after the interventions, leading to a need for studies that compare the WSL color with that of the adjacent sound surface. This methodology has been called "split-tooth" in the literature, as the color analysis is conducted between two halves of the same specimen. Considering that this is how the treatments' effectiveness is verified in clinical reality, this methodology was chosen for the present study. There is sufficient evidence available regarding the role of fluoride in preventing or arresting WSL progression. Nevertheless, WSLs may remain clinically visible, as most of the detection signal comes from the lesion body, which cannot be completely remineralized. In this study, it was noted that remineralization with sodium fluoride neutral gel in high concentration was able to mask the WSL in relation to the adjacent sound enamel. This can be justified by the enamel mineral gain, which is directly related to its translucency. Furthermore, according to Jones and Fried (2006), who evaluated WSL reflectivity after immersion in a fluoride solution, the esthetic improvement of the lesion is not only related to its mineral gain but also to the directional nature of the repair, related to the disposition of the crystals. Nonetheless, these results differ from those found in the in vitro study by Torres, et al. (2011). Although the fluoride-enhanced remineralization protocol used was the same, the color assessment compared the WSL color with that of the adjacent sound surface. It is also important to emphasize that the lesions used in the present study were artificial, and as such were shallower than their natural counterparts. Thus, the results might differ in deeper lesions. Moreover, in the present study, the treatment with resin infiltrant was also able to mask the WSL in relation to the adjacent sound enamel. This is because the colorless material basically consists of TEGDMA, has a refractive index close to that of hydroxyapatite (1.52), and replaces the water or air present in the lesion pores. As such, the whitish appearance of the lesion almost disappears, and it becomes roughly imperceptible in relation to the surrounding structure. This is called the "chameleon effect," since the resin infiltrant does not act through color matching. Groups in which WSLs were treated with fluoride-enhanced remineralization and resin infiltration not followed by bleaching presented medians of total color change (ΔE00 of 1.84 and 2.07, respectively) above the perceptibility (0.8) and acceptability (1.8) thresholds. This means that half the observers may find this color difference, even though visible, acceptable, while the other half may find it unacceptable. In this context, if patients are still dissatisfied with their teeth color even after remineralization or resin infiltration, subsequent tooth bleaching may be pertinent. However, the effect of bleaching after resin infiltration needs to be better elucidated, since the infiltrant could be a blocking barrier. This is why, in the present study, bleaching was performed not only on the infiltrated area but on the entire specimen surface. The ΔE00 values found after bleaching showed that the WSLs remained indistinguishable from the adjacent sound enamel. This can be explained by similarities with other studies that show the effectiveness of tooth bleaching during orthodontic treatment. Since hydrogen peroxide has a low molecular weight and high diffusibility, successful bleaching can be achieved even in the presence of some blocking agent. It is important to highlight that when bleaching was performed, the medians of total color change (ΔE00 of 1.58 and 1.50, respectively) were situated between the color difference perceptibility (0.8) and acceptability (1.8) thresholds. Thus, considering a clinical situation, if the patient still notices some difference in color after remineralization or infiltration of the lesion, bleaching could be a valid alternative to reduce discomfort. Furthermore, it is worth mentioning that even a single tooth does not present a homogeneous color, but rather a color gradation that varies according to the enamel and dentin thickness. This color variation within the same tooth could be verified from the control group specimens, as the derived ΔE00 median (1.63) exceeded the color difference perceptibility threshold. In addition to the esthetic improvement provided by the treatments presented, it is important to evaluate their impact on enamel surface properties, such as surface roughness. In this study, fluoride-enhanced remineralization, followed or not by bleaching, was unable to make the surface roughness of WSLs similar to that of the adjacent enamel, at the same level found between adjacent areas of the control specimens.
Other studies have shown that the resin infiltrant is able to reduce WSL roughness, but not to reach the values of sound enamel. Conversely, in this study, the surface roughness of the WSL and of the treated WSL was evaluated relative to the adjacent sound surface. Therefore, we observed that the infiltrated WSLs presented surface roughness very similar to that of the adjacent sound surface, which can be explained by the polishing of the infiltrant surface with rubber cups. Bleaching, however, apparently suppressed the effect of resin infiltration in minimizing the surface roughness difference between the WSL and the adjacent sound enamel, which is consistent with previous research indicating an increase in surface roughness after bleaching in resin-based materials. Perhaps this increase in roughness would not occur in vivo, owing to salivary flow and fluoride availability. This study had limitations similar to those of other in vitro studies, so clinical extrapolations should be considered with care. Bovine teeth were used, and the carious lesions were artificial; such lesions tend to be shallower than natural ones and lack the classical dark zone, which is richer in organic content. Furthermore, only one fluoride-enhanced remineralization protocol and one bleaching protocol were considered, and staining and re-bleaching were not evaluated. However, most studies on resin infiltration using artificial carious lesions do not validate whether the lesions are subsurface, which this study carefully did. The present results are thus valuable in encouraging further investigation of bleaching as a complement to fluoride-enhanced remineralization or resin infiltration in masking WSLs. Both fluoride-enhanced remineralization and resin infiltration, followed by bleaching or not, were able to mask WSLs. However, subsequent bleaching may be an interesting option to reduce the color differences below the acceptability threshold, even though it can suppress the favorable effects of resin infiltration regarding enamel surface roughness.
Expanding Bioactive Fragment Space with the Generated Database GDB-13s
b2930d6b-6404-4702-ba8f-9ee5cd8b84fd
10598793
Pharmacology[mh]
Medicinal chemistry becomes an increasingly retrospective activity as public databases such as PubChem and ChEMBL list increasing numbers of known drug-like molecules and their biological activity, from which new analogues can be derived. Nevertheless, introducing chemical novelty in new drugs is important because it can help to address new target types and overcome the limitations of classical molecular series in terms of physicochemical properties, selectivity, toxicity, and metabolism, as well as to secure intellectual property and the possibility of commercial development. − Currently, innovation focuses on exploiting very large libraries of screening compounds obtained by combining known building blocks using known chemistry. , These libraries contain billions of molecules, as in ZINC or the Enamine REAL database, , up to hundreds of billions of molecules in DNA encoded libraries, − or even much larger numbers of peptides and cyclic peptides in phage or ribosome display libraries. , Such molecules often break Lipinski’s rule of five but can nevertheless be developed as drugs. , Despite the impressive numbers of molecules in the above-mentioned databases, these molecules are obtained by combining a limited set of building blocks, typically up to thousands (only 20 for genetically encoded peptides), which severely limits fragment diversity. With respect to fragments, an additional, potentially more important, but mostly unexploited reservoir of novelty exists in the generated databases (GDBs), which systematically enumerate molecules of up to 11, 13, or 17 non-hydrogen atoms (heavy atom count (HAC) = 11, 13, or 17) from mathematical graphs using simple rules of chemical stability and synthetic feasibility. − For instance, the GDBs feature molecules with many unprecedented molecular frameworks (graphs including rings and linker bonds). , Here, we propose an approach to identify novel fragments from the GDBs that could be useful for drug design by taking the accumulated knowledge of bioactive compounds into account through an analysis of fragments. First, we assess the known chemical space by deconstructing molecules in the public databases ZINC (screening compounds), PubChem (published molecules), and COCONUT (natural products and NP-like molecules) into ring fragments (RFs, obtained by removing all atoms not directly connected to a ring) and acyclic fragments (AFs, obtained by removing all ring atoms) . This fragmentation is inspired by computational retrosynthetic analyses such as RECAP, rdScaffoldNetwork, DAIM, BRICS, CCQ, eMolFrag, molBLOCKS, or Fragmenter. In the present context, our deconstruction into RFs and AFs is designed to simplify molecules and focus on structural types. Interestingly, most molecules in ZINC, PubChem, and COCONUT break down into RFs and AFs of 13 atoms or less. In the second part of our approach, we identify RFs and AFs which are strongly enriched in bioactivity compared to inactive molecules in ChEMBL (target annotated compounds) and search for analogues of these fragments in RFs and AFs derived from the generated database GDB-13s. This database is a 10% subset of the database GDB-13, which lists 970 million small molecules of up to 13 atoms exhaustively enumerated from mathematical graphs following the simple rules of chemical stability and synthetic feasibility. 
While GDB-13 excludes strained rings (e.g., cubane and prismane) and hydrolytically labile and reactive functional groups (e.g., hemiacetals, aminals, enols, acyl chlorides, isocyanides, peroxides, azides, and thiols) and only considers C, N, O, S, and Cl as elements, GDB-13s additionally excludes non-aromatic olefins, acetals, enol ethers, aziridines, and aldehydes, which only rarely occur in drug molecules. Nevertheless, GDB-13s contains many unprecedented molecular frameworks (graphs including rings and linker bonds). , In the present analysis, we find that many of the bioactive-like RFs and AFs identified in GDB-13s are structurally relatively simple and have favorable synthetic accessibility scores (SAscores) and therefore represent opportunities for synthetic chemistry to contribute to drug innovation in the context of fragment-based drug discovery. , Fragment Analysis of Known Molecules and GDB-13s To assess the known chemical space, we extracted RFs and AFs from 885 905 524 molecules in the ZINC database, 100 852 694 molecules of up to 50 non-hydrogen atoms in PubChem, and 401 624 natural products (NPs) and NP-like molecules in COCONUT. We also extracted RFs and AFs from the 99 394 177 molecules in GDB-13s, to be used as a source of novelty later in the study. In all these databases, the number of molecules per RF and AF followed a typical power law distribution, with few RFs and AFs occurring in many molecules and a relatively large number of RFs and AFs occurring only once, referred to as singletons ( a and b and ). The most frequent RFs and AFs in each database were rather small, featuring mono- and disubstituted benzene rings and azacycles for RFs in known molecules, cyclopropanes for RFs in GDB-13s, and single-atom groups for AFs in all databases ( Figures S1 and S2 ). In fact, although the size distribution of the compounds, RFs, and AFs in known molecules extended far beyond 13 atoms ( c– f), the RFs and AFs up to 13 atoms were sufficient to cover most molecules except for the natural products in COCONUT, which feature many molecules with RFs larger than 13 atoms ( , entry numbers 2–4). While fragments shared by the four databases were often structurally simple, those occurring in only one of the four databases analyzed (exclusive fragments, eRF and eAF) were generally more complex, as exemplified by the most frequent cases ( Figures S3 and S4 ). Within the space covered by RFs and AFs of up to 13 atoms, GDB-13s largely outnumbered the known molecules in terms of RFs, resulting in a high percentage of exclusive RFs (99.2% eRFs ≤ 13 atoms, , entry number 9). Most AFs ≤ 13 atoms in GDB-13s were also exclusive (92.7% eAFs ≤ 13 atoms, , entry number 15), although the absolute number of AFs in GDB-13s was comparable to the number of AFs in ZINC and smaller than the number of AFs in PubChem. In fact, PubChem, ZINC, and COCONUT also contained many exclusive eRFs ≤ 13 atoms and eAFs ≤ 13 atoms, reflecting that the enumeration of GDB-13s excluded strained rings and certain functional groups and only considered C, N, O, S, and Cl as elements. Nevertheless, the above analysis showed that GDB-13s contained a very large number of both eRFs and eAFs and could therefore serve as a source of novel RFs and AFs to expand the space of known molecules. 
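To make the deconstruction concrete, the sketch below extracts ring fragments (all ring atoms plus their directly bonded neighbors) and acyclic fragments (all non-ring atoms) from a SMILES string with RDKit. This is a minimal approximation of the fragmentation described above, not the authors' actual pipeline, and the example molecule is an arbitrary placeholder.

```python
from rdkit import Chem

def ring_fragments(smiles):
    """RFs: keep every ring atom plus the atoms directly bonded to a ring."""
    mol = Chem.MolFromSmiles(smiles)
    keep = [a.GetIdx() for a in mol.GetAtoms()
            if a.IsInRing() or any(n.IsInRing() for n in a.GetNeighbors())]
    if not keep:
        return []
    # Disconnected pieces come back dot-separated in the fragment SMILES.
    return Chem.MolFragmentToSmiles(mol, atomsToUse=keep).split(".")

def acyclic_fragments(smiles):
    """AFs: keep only the atoms that are not part of any ring."""
    mol = Chem.MolFromSmiles(smiles)
    keep = [a.GetIdx() for a in mol.GetAtoms() if not a.IsInRing()]
    if not keep:
        return []
    return Chem.MolFragmentToSmiles(mol, atomsToUse=keep).split(".")

smi = "CCOC(=O)c1ccc(NC(C)=O)cc1"  # arbitrary example molecule
print(ring_fragments(smi))
print(acyclic_fragments(smi))
```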
Comparative Analysis of RFs and AFs in ChEMBL Active and Inactive Molecules Aiming to select novel fragments in GDB-13s by exploiting knowledge on bioactive compounds, we analyzed molecules from the ChEMBL database to test if different RFs and AFs were associated with active or inactive compounds. We selected the 2 136 218 ChEMBL molecules with an HAC ≤ 50, separated them into 560 230 actives (IC 50 or EC 50 ≤ 10 μM, ChEMBL_actives) and 1 575 988 inactives (all others, ChEMBL_inactives), and extracted the corresponding RFs and AFs. For each RF and AF, we computed its total occurrence as the number of ChEMBL molecules containing this RF or AF, its relative occurrence in active molecules (% active) and inactive molecules (% inactive), and its activity ratio R bioactive = (% active)/(% inactive). A volcano scatter plot of the total occurrence of each RF or AF as a function of R bioactive showed that RFs and AFs spanned a broad range of R bioactive values and total occurrences ( a and b). The situation was similar when only fragments of up to 13 atoms were analyzed ( c and d). From this analysis, we partitioned ChEMBL fragments according to their R bioactive values into active ( R bioactive ≥ 4), inactive ( R bioactive ≤ 0.25), or nonpreferential fragments (intermediate values, R bioactive ≈ 1). While the most frequent fragments were small and nonpreferential, many fragments, including all singletons, occurred exclusively in either the ChEMBL_actives or ChEMBL_inactives subset and were accordingly assigned to either the active ( R bioactive ≥ 4) or inactive ( R bioactive ≤ 0.25) subset, respectively . The top 10 most frequent active ( R bioactive ≥ 4) and inactive ( R bioactive ≤ 0.25) RFs and AFs in ChEMBL were all in the size range of GDB-13s. Four of these top 10 active RFs featured halogenated benzene rings, while four of the top 10 inactive RFs were saturated heterocycles ( Figure S5 ). For AFs, fluorine prevailed in four of the top 10 active AFs, while sulfur occurred in four of the top 10 inactive AFs ( Figure S6 ). While many RFs and AFs occurred preferentially in either the ChEMBL_active or ChEMBL_inactive molecules, these fragments did not differ strongly from each other or from RFs and AFs in known molecules (PubChem, ZINC, and COCONUT) in terms of overall structural features. Indeed, the different data sets of known molecules had quite similar property profiles for RFs of up to 13 atoms in terms of the number of rings, the largest ring size, and the number of acyclic atoms and heteroatoms ( a– d). Similarly, AFs of up to 13 atoms in these data sets had comparable property profiles concerning the number of quaternary centers, triple bonds, heteroatoms, and terminal atoms ( Figures S7a–S7d ). On the other hand, the property profiles of GDB-13s RFs and AFs were clearly different from those of known molecules. For instance, RFs from GDB-13s had a broader distribution in terms of the number of rings and the largest ring size and fewer heteroatoms than the different RF data sets of known molecules. Furthermore, the GDB-13s AFs stood out with a larger number of triple bonds and terminal atoms compared to the AF data sets of known molecules. These differences probably explained the less favorable synthetic accessibility score (SAscore) of the GDB-13s RFs and AFs ( e and S7e ). Indeed, the SAscore is based on the presence of substructures frequently found in known molecules. 
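As a side note on how such scores are obtained in practice, the sketch below computes the SAscore with the sascorer module distributed in RDKit's Contrib directory; the import path is an assumption about the local installation, and the example SMILES are placeholders.

```python
import os
import sys
from rdkit import Chem
from rdkit.Chem import RDConfig

# sascorer ships in the RDKit Contrib tree rather than in the main package.
sys.path.append(os.path.join(RDConfig.RDContribDir, "SA_Score"))
import sascorer

def sa_scores(smiles_list):
    """Return (SAscore, SMILES) pairs, lowest (easiest to synthesize) first; scores run from 1 to 10."""
    scored = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        if mol is not None:
            scored.append((sascorer.calculateScore(mol), smi))
    return sorted(scored)

print(sa_scores(["c1ccncc1", "C1CC2CC1C2", "O=C1NC2CCCN2C1"]))  # placeholder fragments
```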
Note that the GDB-13s RFs and AFs had relatively high natural product likeness scores (NPscores), comparable to those of the COCONUT molecules ( f and S7f ). The high NPscores of the GDB-13s RFs and AFs probably reflect the high percentage of non-aromatic, stereochemically complex structures in GDB-13s since the NPscore assigns higher values for the presence of such structural features. Bioactivity-Guided Selection of RFs and AFs in GDB-13s The analysis presented above suggested two possible approaches to select RFs and AFs from GDB-13s for drug design. First, the narrower structural parameter ranges covered by RFs and AFs from known molecules, active or inactive, which correlated with their more favorable SAscores compared to the GDB-13s RFs and AFs, indicated to select GDB-13s fragments with limited structural complexity, which would certainly help with a possible synthesis. Following up on this idea, we selected a subset of GDB-13s RFs and AFs by constraining the structural parameters closer to known molecules but considering only those exclusive to GDB-13s to ensure novelty. To our delight, this selection resulted in a sizable number of GDB-13s fragments. Indeed, we obtained 960 587 GDB-13s eRFs with up to two rings, a ring size up to seven, up to three heteroatoms, and three acyclic atoms, named RFset1. For the selection of AFs from GDB-13s, we obtained 462 439 GDB-13s eAFs without any quaternary center and up to one triple bond, up to four heteroatoms, and up to four terminal atoms, named AFset1. In a second, narrower selection, we assumed that ChEMBL-derived RFs and AFs in the R bioactive ≥ 4 value range (defined as active fragments) reflected privileged structural types, while those in the R bioactive ≤ 0.25 value range (defined as inactive fragments) marked undesirable structural types in terms of possible bioactivities. To expand the scope of the ChEMBL active fragments, we retrieved all GDB-13s RFs and AFs within a Jaccard distance d J ≤ 0.6 of any of the ChEMBL active fragments, using the MAP4 fingerprint as a similarity measure. In this manner, we obtained 97 664 RFs and 43 704 AFs, from which we removed the 25 162 RFs and 15 484 AFs found within d J ≤ 0.6 of any inactive fragments, leaving 72 502 RFs, named RFset2, and 28 220 AFs, named AFset2, as bioactive-like fragments from GDB-13s. In these sets, many fragments were also exclusive to GDB-13s, ensuring novelty (51 303 eRFs, 70.8%; 17 620 eAFs, 62.4%). The property profiles of RFset1 and AFset1, which both resulted from constraining structural parameters, remained substantially different from those of known molecules because the frequency peaked at the highest parameter value selected. This distribution reflects the combinatorial enumeration used to generate GDB-13s, which provides many more possible molecules at the largest values of structural parameters. Therefore, the SAscore remained less favorable and the NPscore relatively high in both sets. On the other hand, the property profiles of RFset2 and AFset2, selected by substructure similarity to ChEMBL bioactive fragments, were like those of known molecules, reflecting the structural similarity selection used to compose these sets ( a– d and S7a–S7d ). RFset2 and AFset2 also displayed lower SAscore and NPscore values than the full sets of GDB-13s RFs and AFs, indicating that they were generally less complex and closer to the RFs and AFs from known molecules ( e, f, S7e, and S7f ). 
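A compact sketch of this two-step selection is given below: fragments are first classified by their activity ratio, and candidate fragments are then kept only if they lie within the distance cutoff of an active fragment but not of an inactive one. Morgan fingerprints with Tanimoto-derived Jaccard distances are used here as a stand-in for the MAP4 MinHash fingerprint, so the retained set would differ from RFset2/AFset2; all inputs are hypothetical.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def activity_ratio(n_act_with_frag, n_inact_with_frag, n_act_total, n_inact_total):
    """R_bioactive = (% of actives containing the fragment) / (% of inactives containing it)."""
    pct_act = n_act_with_frag / n_act_total
    pct_inact = n_inact_with_frag / n_inact_total
    return float("inf") if pct_inact == 0 else pct_act / pct_inact

def classify(r):
    if r >= 4:
        return "active"
    if r <= 0.25:
        return "inactive"
    return "nonpreferential"

def fingerprint(smiles):
    # Morgan/Tanimoto stands in for the MAP4 fingerprint used in the paper.
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

def select_bioactive_like(candidates, active_frags, inactive_frags, cutoff=0.6):
    """Keep candidates within Jaccard distance <= cutoff of any active fragment,
    then drop those that also fall within the cutoff of any inactive fragment."""
    act_fps = [fingerprint(s) for s in active_frags]
    inact_fps = [fingerprint(s) for s in inactive_frags]
    kept = []
    for smi in candidates:
        fp = fingerprint(smi)
        near_active = any(1 - DataStructs.TanimotoSimilarity(fp, a) <= cutoff for a in act_fps)
        near_inactive = any(1 - DataStructs.TanimotoSimilarity(fp, i) <= cutoff for i in inact_fps)
        if near_active and not near_inactive:
            kept.append(smi)
    return kept
```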
To gain a detailed insight into the bioactivity-selected subset of GDB-13s RFs and AFs, we computed interactive TMAPs (tree maps) using the MinHashed fingerprint MAP4 as a similarity measure . These interactive TMAPs allow one to browse through the two databases and search for interesting RFs and AFs using various color-coded properties as guides. To illustrate the available options, we searched for novel analogues of the three most frequent active ( R bioactive ≥ 4) RFs in ChEMBL, one of which occurs in the kinase inhibitor drug gefitinib, revealing potentially interesting analogues . More interesting GDB-13s eRFs are exemplified as analogues of triquinazine, an eRF from GDB-13s previously used as a scaffold for a Janus kinase inhibitor analogue of the known drug tofacitinib. In principle, the same selection can also be made with the GDB-13s analogues of AFs, as exemplified for the most frequent active ( R bioactive ≥ 4) AFs from ChEMBL ( Figure S8 ). In this case, however, the selection of interesting AFs is less obvious since the chemistry of AFs highly depends on their connection to RFs. To assess the known chemical space, we extracted RFs and AFs from 885 905 524 molecules in the ZINC database, 100 852 694 molecules of up to 50 non-hydrogen atoms in PubChem, and 401 624 natural products (NPs) and NP-like molecules in COCONUT. We also extracted RFs and AFs from the 99 394 177 molecules in GDB-13s, to be used as a source of novelty later in the study. In all these databases, the number of molecules per RF and AF followed a typical power law distribution, with few RFs and AFs occurring in many molecules and a relatively large number of RFs and AFs occurring only once, referred to as singletons ( a and b and ). The most frequent RFs and AFs in each database were rather small, featuring mono- and disubstituted benzene rings and azacycles for RFs in known molecules, cyclopropanes for RFs in GDB-13s, and single-atom groups for AFs in all databases ( Figures S1 and S2 ). In fact, although the size distribution of the compounds, RFs, and AFs in known molecules extended far beyond 13 atoms ( c– f), the RFs and AFs up to 13 atoms were sufficient to cover most molecules except for the natural products in COCONUT, which feature many molecules with RFs larger than 13 atoms ( , entry numbers 2–4). While fragments shared by the four databases were often structurally simple, those occurring in only one of the four databases analyzed (exclusive fragments, eRF and eAF) were generally more complex, as exemplified by the most frequent cases ( Figures S3 and S4 ). Within the space covered by RFs and AFs of up to 13 atoms, GDB-13s largely outnumbered the known molecules in terms of RFs, resulting in a high percentage of exclusive RFs (99.2% eRFs ≤ 13 atoms, , entry number 9). Most AFs ≤ 13 atoms in GDB-13s were also exclusive (92.7% eAFs ≤ 13 atoms, , entry number 15), although the absolute number of AFs in GDB-13s was comparable to the number of AFs in ZINC and smaller than the number of AFs in PubChem. In fact, PubChem, ZINC, and COCONUT also contained many exclusive eRFs ≤ 13 atoms and eAFs ≤ 13 atoms, reflecting that the enumeration of GDB-13s excluded strained rings and certain functional groups and only considered C, N, O, S, and Cl as elements. Nevertheless, the above analysis showed that GDB-13s contained a very large number of both eRFs and eAFs and could therefore serve as a source of novel RFs and AFs to expand the space of known molecules. 
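The occurrence statistics described above (molecules per fragment, singletons, and fragments exclusive to a single database) can be reproduced with simple set operations; the input format assumed below, a mapping from database name to per-molecule fragment lists, is a hypothetical convenience and not taken from the study.

```python
from collections import Counter

def fragment_statistics(databases):
    """databases: dict mapping a database name (e.g. 'GDB-13s', 'ZINC') to an
    iterable of per-molecule fragment lists. Returns occurrence counters,
    singletons, and database-exclusive fragments."""
    counts = {}
    for name, molecules in databases.items():
        counter = Counter()
        for frags in molecules:
            counter.update(set(frags))  # count each fragment once per molecule
        counts[name] = counter

    singletons = {name: {f for f, n in counter.items() if n == 1}
                  for name, counter in counts.items()}

    exclusives = {}
    for name, counter in counts.items():
        others = set().union(*(set(counts[o]) for o in counts if o != name))
        exclusives[name] = set(counter) - others
    return counts, singletons, exclusives
```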
In summary, deconstructing known molecules from the ZINC and PubChem databases and natural products from the COCONUT database to form fragments (RFs and AFs) showed that these molecules mostly consist of RFs and AFs of 13 atoms or less. A comparative analysis of the database GDB-13s, which lists 99 million possible molecules of up to 13 atoms, showed that over 99% of the 28 million RFs and 93% of the 2.6 million AFs in GDB-13s are absent from public databases and are therefore exclusive and, in principle, novel. Furthermore, by analyzing the ChEMBL database, we found that certain RFs and AFs occur more frequently in known active vs inactive molecules. Analyzing the properties of active RFs and AFs in ChEMBL to define property and similarity ranges then allowed us to extract one million RFs and half a million AFs from GDB-13s with ChEMBL-active-like features. These ChEMBL-active-like RFs and AFs from GDB-13s are structurally relatively simple and have favorable SAscores and therefore represent attractive targets for synthesizing new fragments with favorable properties for drug design. Extracting RFs and AFs from Molecules The RFs and AFs were obtained from molecules by processing their SMILES using RDkit as follows . RFs: break all bonds between any two acyclic atoms and remove all acyclic atoms not directly attached to the rings. Acyclic atoms directly connected to more than one ring system are disconnected and reattached to each ring system separately. AFs: break all bonds between the cyclic and acyclic atoms and remove all cyclic atoms. TMAPs Tree maps (TMAPs) were generated by specifying standard parameters using the MAP4 fingerprint (MinHashed atom-pair fingerprint up to a diameter of four bonds). MAP4 fingerprints were computed with dimensions of 256.
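The fragmentation rules quoted above can be approximated with a short RDKit sketch. The original implementation details are not given, and the special case of an acyclic atom bridging two ring systems is not handled here, so this is an illustrative approximation rather than the authors' code; an analogous AF helper is included under the same caveat.

```python
from rdkit import Chem

def ring_fragments(smiles):
    """Approximate the RF rule: break every bond between two acyclic atoms,
    drop acyclic atoms not directly attached to a ring, and return the
    remaining connected pieces as SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return []
    ring_atoms = {a.GetIdx() for a in mol.GetAtoms() if a.IsInRing()}
    em = Chem.RWMol(mol)
    for bond in mol.GetBonds():
        i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
        if i not in ring_atoms and j not in ring_atoms:
            em.RemoveBond(i, j)
    # Keep ring atoms plus acyclic atoms directly bonded to a ring atom.
    keep = set(ring_atoms)
    for idx in ring_atoms:
        keep.update(n.GetIdx() for n in mol.GetAtomWithIdx(idx).GetNeighbors())
    for idx in sorted(set(range(mol.GetNumAtoms())) - keep, reverse=True):
        em.RemoveAtom(idx)
    return _pieces_to_smiles(em)

def acyclic_fragments(smiles):
    """Approximate the AF rule: remove all ring atoms and return the remaining
    acyclic pieces as SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return []
    em = Chem.RWMol(mol)
    for idx in sorted((a.GetIdx() for a in mol.GetAtoms() if a.IsInRing()), reverse=True):
        em.RemoveAtom(idx)
    return _pieces_to_smiles(em)

def _pieces_to_smiles(editable_mol):
    """Write each connected piece of an edited molecule as SMILES."""
    pieces = Chem.GetMolFrags(editable_mol.GetMol(), asMols=True, sanitizeFrags=False)
    smiles_out = []
    for piece in pieces:
        if piece.GetNumAtoms() == 0:
            continue
        piece.UpdatePropertyCache(strict=False)  # refresh valences after editing
        Chem.FastFindRings(piece)                # ring info needed for SMILES output
        smiles_out.append(Chem.MolToSmiles(piece))
    return smiles_out
```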
null
9d87499e-ea11-4b5c-851d-9eb1f05cf388
11728692
Microbiology[mh]
Soil is a basic reservoir of microorganisms (e.g., bacteria, fungi, viruses, parasites), the diversity of which is the source of mechanisms regulating the impact of pathogens on other organisms . The presence of indicator microorganisms in soil not only reflects the degree of contamination of the soil environment, but also provides information about the potential risk of contamination of agricultural produce and potential threats to human and animal health. In addition to naturally occurring microflora, the soil environment may also contain microorganisms introduced by improper sewage management or the use of contaminated manure, slurry or sewage sludge in agriculture. Organic wastes contain plant nutrient contents and can be used to fertilize and improve soil properties in both raw and processed forms. Organic wastes are also introduced into the market as fertilizers and soil improvers, provided they meet quality requirements and contamination does not exceed permissible levels. In some cases, agricultural tests are necessary to confirm the suitability of the fertilizer for application to plants or for soil remediation. In addition to several quality parameters regarding minimum nutrient content and maximum heavy metal content, organic wastes and fertilizers cannot exceed the permissible values of biological contamination. Often, despite meeting all other quality requirements, organic wastes and fertilizers are disqualified from use because of the content of bacteria and parasite eggs . In Poland, in accordance with the Regulation of the Minister of the Environment (2015) , sewage sludge may be used in agriculture and for land reclamation for agricultural purposes if Salmonella bacteria have not been isolated in a representative 100 g sludge sample and the total number of live eggs of intestinal parasites (e.g., Toxocara sp., Trichuris sp. Ascaris sp.) in 1 kg of dry matter is 0. There are no regulations requiring testing the concentration of Enterobacteriaceae bacteria or Escherichia coli as a representative species. In Europe, the Council Directive 86/278/EEC of June 1986 on the application of agricultural sewage sludge in agriculture is still in force; while this directive establishes limit values for heavy metal concentrations, it does not provide indicators of biological origin. The presence of E. coli is a mandatory indicator of biological contamination of sewage sludge in only a few countries, such as Finland, Portugal and Lithuania, in amounts not exceeding 1000 CFU/g (colony-forming units per gram) or in no more than 100 CFU/g in Austria . According to research by Estrada et al. , 80 days after the introduction of sewage sludge into the soil, the concentrations of most Enterobacteriaceae , E. coli and fecal coliform bacteria were below the detection limits in various conditions. Research conducted in Poland by Stańczyk-Mazanek and Stępniak confirmed that the use of sewage sludge in doses not exceeding 40 t/ha should not cause soil contamination, but the use of higher doses may pose such a risk, especially from drug-resistant bacterial species. In turn, Michelon et al. pointed out the need to limit and control enteric pathogens in organic substances introduced into the soil. The use of natural fertilizers should also consider the regional context, so that the introduced sewage does not present too much of a burden on the environment and result in, e.g., contamination of water bodies. 
In terms of testing the number of Enterobacteriaceae (with a limit below 1000 CFU/g), Poland was subject, until 2024, to the provisions of the Regulation of the Minister of Agriculture and Rural Development (2008) in the field of organic and organic–mineral fertilizers based on animal by-products. Pursuant to the 2007 Act on fertilizers and fertilization , digestate belongs to the group of manufactured fertilizers or fertilizers containing animal-derived products or by-products. Pursuant to the new ministerial regulation of August 2024 , fertilizers, plant cultivation aid products and post-fermentation products cannot contain live eggs of intestinal parasites and Salmonella , while the Enterobacteriaceae indicator has been removed. In the current work, we focused exclusively on the contamination of soil, sewage sludge and digestate with bacteria from the Enterobacteriaceae family, and in particular, its representatives E. coli and Salmonella . The aim of the study was to determine the biodiversity of bacteria from the Enterobacteriaceae family in the tested samples and to determine safe limits of microbiological contamination of sludge and digestate based on an analysis of the risk of transfer of these pathogens to the soil in laboratory conditions. The results of this stage of the project will be used to verify current standards concerning regulations regarding the content of pathogenic bacteria in substances of organic origin intended for use as fertilizers in a way that does not pose a threat to human and animal health. 2.1. Determination of Bacteria Concentration Using Culture-Based Methods 2.1.1. Sample Collection Samples of arable soil (82), sewage sludge (9) and digestate (9) were collected for microbiological examination in 2021. Soil samples from agricultural fields in northeastern Poland were gathered from the top layer (up to 20 cm depth) by a soil stick sampler. In accordance with the principles of soil sampling, at least 10 punctures were made to obtain an average sample. The sewage sludge and digestate samples were obtained from biological wastewater treatment plants and agriculture biogas plants, respectively. The samples were intended for testing immediately after their delivery to the laboratory. The soil samples were sieved through a sieve with a hole diameter of 2 mm. 2.1.2. Microbiological Culture The assessment of microbial soil, sludge and digestate contamination was based on the following tests: total number of mesophilic bacteria, total number of Gram-negative bacteria from the Enterobacteriaceae family and presence of E. coli and Salmonella spp. Due to the lack of applicable procedures concerning microbiological testing of sewage sludge and digestates, the Polish standards pertaining to soil, food and feed research were used. Detection of Salmonella was performed according to the standard PN-Z-19000-1/2001 , Escherichia coli according to PN-EN ISO 16649-2:2004 , the total number of bacteria according to PN-EN ISO 4833-2:2013-12/AC and Enterobacteriaceae according to PN-EN ISO 21528-2:2017-08 . Two 10 g subsamples were taken from each sample for testing. One of the subsamples was suspended in 90 mL of Ringer’s solution, homogenized with a BagMixer 400 SW (Interscience, France) and intended for culture. The number of aerobic mesophilic bacteria was determined on nutrient agar plates (BTL, Łódź, Poland) incubated at 30 °C for 24 h. 
The presence of Gram-negative Enterobacteriaceae was determined on Violet Red Bile Glucose (VRBG) agar plates (BioMaxima, Lublin, Poland), after incubation at 37 °C for 24 h. For E. coli , Tryptone Bile X-glucuronide (TBX) agar (BioMaxima, Lublin, Poland) was used, and the inoculated media were incubated at 44 °C for 24 h. The number of bacteria was expressed as the number of colony-forming units (CFU) in 1 g of sample. The second subsample was suspended in 90 mL Selenite-F (SF) broth (BTL, Łódź, Poland) and incubated at 43 °C for 24 h. A loopful of each SF suspension was streaked onto Salmonella Shigella (SS) agar (BTL, Łódź, Poland) and incubated at 37 °C for 24 h. Bacteria isolated on VRBG, TBX and SS media were subjected to genus/species identification. 2.2. Determination of Bacteria Species via Biochemical and Molecular Methods 2.2.1. Biochemical Tests Preliminary identification of the isolated strains was carried out using the following sets of commercial kits: ENTEROtest 24N—for Salmonella , E. coli and other oxidase-negative bacteria from Enterobacteriaceae ; NEFERMtest 24N—for oxidase-positive non-fermenting bacteria; OXItest—a supplementary test for detecting bacterial cytochrome oxidase; and INDOLtest—for detection of E. coli and screening differentiation of indole-positive and indole-negative bacterial genera (Erba-Lachema, Brno, Czech Republic). All tests were performed in accordance with the manufacturers’ recommendations using the ErbaScan absorbance microplate reader with a measurement range from 0.000 to 4.000 OD (Erba-Lachema, Brno, Czech Republic). The interpretation of the ENTEROtest 24N and NEFERMtest 24N results was performed using ErbaExpert microbiological software version 1.2.013 (Erba-Lachema, Brno, Czech Republic). In addition, an analysis of isolated strains was also performed using the Gen III BIOLOG system (BIOLOG Inc., Hayward, CA, USA). The results were read by MicroLog M 5.2 software (BIOLOG Inc., Hayward, CA, USA). 2.2.2. Molecular Tests Isolation of DNA from bacterial cultures was performed using the QIAamp ® DNA Mini Kit (Qiagen, Hilden, Germany), according to the protocol for Gram-negative bacteria extraction. One culture loop from a 24-h bacterial culture was taken for isolation. Bacterial DNA was detected by amplification of the 16S rRNA gene fragment using the universal oligonucleotide primers p27f and p1525r according to the method by Chun and Goodfellow . Each reaction had a volume of 50 µL and consisted of 1.5 U Taq DNA Polymerase, 1 × PCR buffer containing 15 mM MgCl2 (Qiagen, Hilden, Germany) and 0.2 mM dNTPs (Thermo Scientific, Waltham, MA, USA), 0.4 µM of each primer (Institute of Biochemistry and Biophysics, Warsaw, Poland) and 5 µL each of DNA template and nuclease-free water (Qiagen, Hilden, Germany). The reaction was conducted on a C1000 Thermal Cycler (BioRad, Hercules, CA, USA). The 1500 bp amplification products were visualized in 1.5% agarose gel (Basica LE, Prona, Spain) after electrophoresis under standard conditions and staining with ethidium bromide solution (2 μg/mL). The PCR sequencing reaction was performed using a BigDye™ Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems, Waltham, MA, USA), and the reaction products were purified using a BigDye XTerminator™ Purification Kit (Applied Biosystems, Waltham, MA, USA). Sequencing was performed on the ABI PRISM 310 Genetic Analyzer (Applied Biosystems, Waltham, MA, USA).
The nucleotide sequences were compared with sequences in GenBank using the Basic Local Alignment Search Tool (BLAST). 2.3. The Survival of E. coli Present in Organic Fertilizers on a Laboratory Scale 2.3.1. Samples In the initial phase, the total number of Gram-negative Enterobacteriaceae and E. coli was determined in the soil (universal soil—used for, e.g., gardening—and clay), sewage sludge and digestate samples used in the experiment . The sewage sludge and digestate samples were subjected to preliminary heat treatment at 121 °C for 15 min to remove natural microflora. Sterile samples were intended for inoculation with E. coli suspension. No Salmonella spp. was detected in any samples. 2.3.2. Inoculum Preparation The reference strain of E. coli ATCC 25922 was used to prepare the inoculum. From the 24-h culture, a suspension was prepared with an optical density of 0.5 McFarland (optical density at 550 nm: 0.125), measured with a Densi-La-Meter II densitometer (Erba-Lachema, Brno, Czech Republic). The initial suspension density (2.05 × 10⁸ CFU/g) was determined based on the average concentration of mesophilic bacteria in the tested non-sterile sewage sludge and digestate samples . When the E. coli suspension was added to the sterile sewage sludge and digestate samples, the final concentration was 1.8 × 10⁶ CFU/g. 2.3.3. Main Experiment Four containers were prepared, filled with non-sterile soil (universal or clay) in a volume of 8.3 dm³, representing the top 20 cm of the cultivated surface layer. Two containers were filled with soil, with the addition of 14.2 g and 88.5 g of sterile sewage sludge inoculated with 1.8 mL and 11.1 mL of the stock E. coli suspension of the same 2.05 × 10⁸ CFU/g concentration, respectively. The same proportions were used to add digestate samples. The amount of the added sewage sludge or digestate sample was determined based on permissible doses of fertilizers (min. 3 t/ha; max. 20 t/ha) included in the Regulation of the Minister of the Environment . The samples were mixed thoroughly and stored at a temperature of 20–25 °C during the day and 15–20 °C at night. Assessment of microbiological contamination was carried out after the 1st, 2nd and 3rd weeks of storage. 2.3.4. Control Group The control group consisted of universal and clay soil samples with additives of non-sterile sewage sludge and digestate in amounts of 14.2 g and 88.5 g, with the concentration determined in the initial test .
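Two of the quantities used in this protocol, plate counts expressed as CFU/g and the conversion of a field application rate to a per-container mass, can be illustrated with small helpers. The dilution scheme and the container footprint below are assumptions (the footprint is inferred from the stated 8.3 dm³ volume and 20 cm depth), so the dose helper gives only an order-of-magnitude check against the reported 14.2 g and 88.5 g additions rather than reproducing them exactly:

```python
def cfu_per_gram(colonies, plated_dilution, plated_volume_ml=1.0):
    """Convert a colony count into CFU per gram of the original sample.
    plated_dilution is the dilution of the plated suspension relative to the
    sample, e.g. 1e-1 for the initial 10 g + 90 mL Ringer's homogenate
    (treated as a 10^-1 w/v dilution, a convention assumed here, since the
    dilution series is not described in the text)."""
    return colonies / (plated_dilution * plated_volume_ml)

def fertilizer_mass_per_container(dose_t_per_ha, container_volume_dm3=8.3, soil_depth_dm=2.0):
    """Scale a field application rate (t/ha) to a laboratory container whose
    footprint is inferred from its volume and the 20 cm soil layer it represents."""
    footprint_m2 = (container_volume_dm3 / soil_depth_dm) / 100.0  # dm^2 -> m^2
    dose_g_per_m2 = dose_t_per_ha * 1_000_000 / 10_000             # t/ha -> g/m^2
    return dose_g_per_m2 * footprint_m2

# Hypothetical plate count: 46 colonies from 1 mL of a 10^-4 dilution -> 4.6 × 10^5 CFU/g.
print(cfu_per_gram(46, 1e-4))
# Order-of-magnitude check of the doses: ~12 g for 3 t/ha and ~83 g for 20 t/ha with the
# inferred footprint (the reported additions were 14.2 g and 88.5 g).
print(fertilizer_mass_per_container(3), fertilizer_mass_per_container(20))
```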
3.1.
Bacterial Concentration in Soil, Sewage Sludge and Digestate Samples The average concentration of mesophilic bacteria in soil samples was 4.6 × 10⁵ CFU/g, and the average concentration of Enterobacteriaceae was 1.1 × 10⁴ CFU/g. Escherichia coli was detected in two soil samples, with an average concentration of 25.3 CFU/g. Microbiological analysis of sewage sludge showed the presence of mesophilic bacteria in eight (88.9%) samples, with an average concentration of 1.4 × 10⁸ CFU/g, and Enterobacteriaceae in six samples (66.7%), with an average concentration of 9.4 × 10⁵ CFU/g. Escherichia coli was detected in four (44.4%) samples, with an average concentration of 1.7 × 10⁴ CFU/g. Microbiological analysis of digestate showed the presence of mesophilic bacteria and Enterobacteriaceae in all tested samples, obtaining average concentrations of 2.6 × 10⁸ CFU/g and 5.6 × 10⁶ CFU/g, respectively, while E. coli was detected in six (66.7%) samples . Among all tested samples, Enterobacteriaceae isolated on VRBG medium constituted over 70% of the total number of mesophilic bacteria isolated on nutrient agar . In one digestate sample, the presence of Salmonella was confirmed . 3.2. Species Diversity of Enterobacteriaceae Isolated from Soil, Sewage Sludge and Digestate From the soil samples, Gram-negative bacteria belonging to the Serratia (n = 44), Enterobacter (n = 37), Pantoea (n = 32), Citrobacter (n = 27) and Pseudomonas (n = 25) genera were identified the most frequently. Individual cases were confirmed for the genera Ewingella , Gibbsiella , Hafnia , Kluyvera and Yersinia . In 14 of the 82 samples tested, the presence of Escherichia coli , considered one of the main bacterial indicators of soil microbiological purity, was confirmed , but in no sample did the concentration exceed the permissible value of 1000 CFU/g. Some species determined by biochemical methods could not be confirmed by sequencing (i.e., Burkholderia cepacia complex, Lelliottia amnigena , Chryseobacterium indologenes , Methylobacterium mesophilicum ). The bacterial composition in each tested soil sample is provided in the . In both sewage sludge and digestate samples, the most frequently identified bacterium was E. coli . Some species were detected only in sewage sludge samples ( Alcaligenes faecalis , Comamonas jiangduensis , Enterobacter cloacae , Hafnia alvei , Morganella morganii subsp. morganii ), and others only in digestates ( Citrobacter freundii , C. gillenii , Ignatzschineria indica , Proteus mirabilis ). The genera Klebsiella and Yersinia were isolated from both sample types but identified as different species. Salmonella enterica subsp. enterica (serotype Johannesburg) was identified in only one digestate sample . Of all species identified via biochemical methods, Raoultella terrigena , Brevundimonas diminuta and Oligella urethralis were not confirmed by sequencing . 3.3. The Survival of E. coli in Soil Samples Fertilized with Sewage Sludge and Digestate in Laboratory Conditions 3.3.1. Main Experiment After the first week of fertilization with sewage sludge, an increase in the total number of Enterobacteriaceae was found compared to the initial result found in unfertilized soil (3.1 × 10² CFU/g): up to 9.7 × 10³ CFU/g and 2.9 × 10⁵ CFU/g for universal and clay soils, respectively.
In the case of applying the minimum dose of sewage sludge, after the third week, the values did not exceed 100 CFU/g, while at the maximum dose, the final results were similar to the results of the unfertilized soil (1.3 × 10² and 6.0 × 10² CFU/g). The highest E. coli concentration values were obtained after the first week: up to 5.7 × 10³ CFU/g and 1.8 × 10⁵ CFU/g for universal and clay soils, respectively. After three weeks, the results decreased below 1 CFU/g, except for the application of the maximum dose of fertilizer in clay soil (2.8 × 10² CFU/g). When digestate was used in all four variants, E. coli concentrations dropped to <1 or 16 CFU/g after three weeks. In the case of Enterobacteriaceae , after using the minimum dose of digestate, the bacterial concentration did not exceed 70 CFU/g. At the maximum dose, the final values were similar to those for unfertilized soil and amounted to an average of 2 × 10² CFU/g for both types of soil . 3.3.2. Control Experiment In both types of soil with the addition of non-sterile sewage sludge or digestate, no growth of E. coli bacteria was observed. In universal soil with sewage sludge and clay soil with digestate at the maximum dose, the final concentration of Enterobacteriaceae obtained after three weeks was lower (<1.8 × 10² CFU/g) than the initial value for unfertilized soil (3.1 × 10² CFU/g). In the remaining variants, the final concentration of Enterobacteriaceae did not exceed 100 CFU/g .
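From the counts reported above, the decline of E. coli between the first and third week of storage can be summarized as a log10 reduction; treating values reported as "<1 CFU/g" as a detection limit of 1 CFU/g is an assumption of this sketch.

```python
import math

def log10_reduction(week1_cfu_per_g, week3_cfu_per_g, detection_limit=1.0):
    """Log10 reduction between two counts; counts below the detection limit
    are replaced by the limit itself."""
    initial = max(week1_cfu_per_g, detection_limit)
    final = max(week3_cfu_per_g, detection_limit)
    return math.log10(initial / final)

# E. coli after sewage sludge application, week 1 -> week 3 (values from the results above):
print(round(log10_reduction(5.7e3, 1), 1))      # universal soil: ~3.8 log reduction
print(round(log10_reduction(1.8e5, 2.8e2), 1))  # clay soil, maximum dose: ~2.8 log reduction
```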
The use of sewage sludge and digestate as fertilizer on arable land promotes the functioning of the soil ecosystem, increasing crop productivity. In this study, high levels of contamination were found in sewage sludges and digestate: 1.4 × 10⁸ CFU/g and 2.6 × 10⁸ CFU/g, respectively. Slightly lower degrees of organic fertilizer contamination with mesophilic bacteria have been recorded in Spain (2.4 × 10⁷ CFU/g for sludge) and Germany (0.5 × 10⁶ CFU/g for digestate) . Direct sewage or digestate introduction into soil could increase the risk of environmental exposure to microbiological contamination, thus posing a threat to human and animal health. As indicated in and , bacteria posing no direct health risk and those with pathogenic properties were isolated from the tested organic substances. Currently, it is not possible for every organic sample introduced as fertilizer to be tested for bacterial species, including the quantitative assessment of individual species. Most legislation regulating the biological safety of fertilizers in terms of bacteria is based on tests for the detection of Salmonella . Of the samples tested, only one digestate sample confirmed the presence of Salmonella (S. enterica subsp. enterica serovar Johannesburg), which excludes the possibility of its introduction into the soil for agricultural purposes. In other studies, the presence of Salmonella spp. in digestates varies depending on the origin of the material and ranges from 8% (1/12) to 100% (5/5) . The presence of Salmonella spp. in sewage sludges is recorded at the level of 26.7% (4/15) or 38.9% (21/54) . The second frequently used indicator of microbiological contamination of soils and fertilizers is E. coli , which is a representative species of the Enterobacteriaceae family. The current study showed a higher average number of E. coli in the digestate samples, amounting to 1.7 × 10⁶ CFU/g, than in the sewage sludge samples, which were determined to be 1.7 × 10⁴ CFU/g. Studies on raw digestates collected from biogas plants in France showed mean E. coli counts ranging from 9.4 × 10¹ to 1.3 × 10⁴ CFU/g . The presence of E. coli strains in sewage sludge may suggest that treatment methods are not effective and that bacteria may be introduced into the soil environment, including pathogenic strains. The average counts of these bacterial strains may vary depending on the origin of the sewage sludge. Korzeniewska et al. showed that the mean number of E. coli bacteria in untreated hospital sewage ranged from 6 × 10² to 1 × 10⁵ CFU/mL, whereas in municipal sewage, it was higher and ranged from 1.1 × 10³ to 1.3 × 10⁵ CFU/mL. In a screening study in Sweden, the mean number of E. coli also varied depending on the sampling period in the effluent and ranged from 5.0 × 10¹ to 9.15 × 10² CFU/mL . The results from our laboratory experiments revealed the presence of Enterobacteriaceae and E.
coli at levels below 1000 CFU/g recorded within 2 weeks of fertilization. After the first week, an increase in the number of E. coli was noted in the universal soil sample. Similar results were obtained in the study conducted by Qiao et al. , who also showed an increase in the concentration of E. coli bacteria after 8 days. However, in the experiment conducted on clayey soil, the opposite results were observed, characterized by a decrease in the concentration of these microorganisms after a week. Moreover, higher concentrations of the tested microorganisms persisted longer in clay soil compared to sandy soil. The obtained results are consistent with those obtained by Alegbeleye and Sant’Ana , who also confirmed the higher survival of E. coli strains in clay soil compared to sandy soil. The persistence of microbiological contamination in fertilized soil also depends on the dose of fertilizer applied. The use of smaller doses of fertilizers significantly accelerates the reduction in potentially pathogenic bacteria in the fertilized soil, and thus increases the safety of people in contact with it. However, further research in this area under natural conditions using experimental plots is necessary. Taking into account the observed diversity of the microbiome, the assessment of the microbiological purity of sludges and digestates, referring mainly to E. coli and Salmonella spp., does not fully demonstrate the potential risk resulting from human exposure to pathogens. High levels of contamination based on the total number of microorganisms may indicate a need to improve sanitation methods used in biogas plants and sewage treatment plants. Despite these difficulties, sewage sludge and digestate are being increasingly used in agriculture as rich sources of plant nutrients and because the beneficial chemical elements they contain, including nitrogen, decompose slowly, providing nutrients over an extended period of time. Additionally, their use as fertilizers is an alternative to conventional waste disposal. However, it is also important to appropriately adapt existing regulations regarding limiting potential human contact with pathogens . Experimental field studies conducted in Poland on the effect of regular use of sewage sludge to fertilize agricultural soils showed significant quantitative and qualitative changes in the composition of the soil microbiota, disturbing its balance and influencing the processes occurring within it . Based on the results of the experiments conducted in laboratory conditions, it can be concluded that adding the minimum dose (corresponding to a value of 3 t/ha) of sewage sludge or digestate with Enterobacteriaceae contamination below 2.5 × 10⁶ CFU/g to soil, after an initial increase in the concentration of bacteria, does not result in bacterial concentrations exceeding the permissible value of 1000 CFU/g after three weeks. Similarly, if the maximum dose (corresponding to a value of 20 t/ha) is used, the final results are at the same level found in unfertilized soil (2 × 10² CFU/g). Considering the species composition of soil, sewage sludge and digestate, including both pathogenic and non-pathogenic microorganisms, it is justified to change existing regulations by abolishing the obligation to quantitatively test samples for the presence of bacteria from the Enterobacteriaceae family. However, further research under natural conditions is necessary to confirm the biologically safe use of sewage sludge and digestate as fertilizers.
A multi-docking strategy for robotic LAR and deep pelvic surgery with the Hugo RAS system: experience from a tertiary referral center
f1d50851-7ca0-4f4c-a604-daff3e2f163a
11442597
Robotic Surgical Procedures[mh]
Despite advancements in chemotherapy and radiotherapy, low anterior resection (LAR) remains the cornerstone treatment for rectal cancer [ – ]. This procedure, pioneered by Mayo and Dixon and popularized by Heald with the introduction of total mesorectal excision (TME) , has undergone numerous technical refinements over the years. One major advancement was the introduction of laparoscopic LAR, which demonstrated superior short-term outcomes compared to open surgery, including reduced blood loss, shorter hospital stay, fewer wound complications, and faster recovery of bowel function [ – ]. Long-term benefits, such as disease-free and overall survival, have also been observed, establishing laparoscopic LAR as the gold standard. Robotic surgery represented the next evolution of laparoscopic LAR, rapidly gaining popularity due to its enhanced 3D visualization, ergonomics, and precision in the pelvic region [ – ]. The Medtronic Hugo™ Robotic-Assisted Surgery (RAS) system (Medtronic, Minneapolis, MN, USA) is a novel robotic platform which has recently been certified for general surgery. The novel characteristics of the platform include an open console and a modular design, which could offer potential advantages for multi-quadrant abdominal procedures. Furthermore, the system’s four independent robotic arms empower surgeons to tailor docking configurations to their preferences. On the other hand, such flexibility might pose a challenge in terms of finding the right set-up for complex procedures, such as LAR, which require different steps in multiple abdominal quadrants. In June 2023, the Medtronic Hugo RAS system was introduced into the colorectal surgical activity at our institution. This article aims to share our personalized and clinically validated multi-docking strategy for robotic LAR and deep pelvic procedures, leveraging the Hugo RAS system’s independent robotic arms to maximize its use in rectal procedures. Study design This study is a retrospective analysis of prospectively collected data on robotic LAR procedures completed using the Hugo RAS system at our institution (Sant’Orsola Hospital, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Bologna, Italy) from September 2023 to June 2024. The procedures were carried out by a single surgeon experienced in laparoscopic colorectal procedures (>800 procedures) who was robotically naïve. Docking was carried out by the first surgeon and the assistant surgeon in parallel from both sides of the patient. The study was designed and reported according to the STROBE guidelines (supplementary materials). Data collection The primary data collected focused on docking times for each phase of the multi-docking strategy. We also included clinical data, encompassing patient characteristics (sex, age, body mass index (BMI), underlying disease, ASA score), operative variables (procedure type, operative time, number of conversions, intraoperative complications, number of high-priority alarms (RED)) and postoperative variables (length of stay, 30-day morbidity, 30-day readmission, 30-day reoperations). Rationale and development of the multi-docking strategy The Hugo RAS platform, while equipped with a detailed user guide and several Medtronic-tested docking configurations for various abdominal procedures, presented challenges in our real-world surgical experience.
Many of these preset configurations did not seamlessly translate to actual operations, often requiring surgeons to adopt unfamiliar approaches or make last-minute adjustments to achieve desired outcomes while avoiding external or internal collisions. This could potentially diminish the comfort and intuitive experience that robotic platforms are designed for. Additionally, in procedures like LAR, the necessity to mobilize the splenic flexure for a tension-free anastomosis poses a unique challenge for robotic surgery, which is typically optimized for smaller surgical fields. To address these limitations and to optimize instrument reach, minimize arm collisions, and ensure a seamless surgical workflow, we developed a personalized multi-docking approach for LAR and pelvic surgery using what we called a W-I-shaped port placement. This approach is based on the experience developed in laparoscopic surgery and aims to reproduce a similar triangulation of the instruments and obtain a similar view of the anatomy in the different surgical fields. Therefore, the robotic setup involves three distinct docking configurations: First docking: splenic flexure mobilization Second docking: vascular control Third docking: LAR with TME Each docking configuration is tailored to the specific surgical goals of that phase, enhancing efficiency and precision. The W-I-shaped ports placement After induction of pneumoperitoneum with a Veress needle at Palmer’s point, the abdomen is insufflated to a pressure of 10–12 mmHg. Once the insufflation is completed, an 11 mm trocar for the camera is placed using the visual access technique just above the umbilicus (trocar 1). On the same vertical axis, an 8 mm robotic trocar is placed in the epigastrium (trocar 2) and a 12 mm Airseal (CONMED Corporation, Utica, New York) trocar is placed in the suprapubic area (trocar 3), where a Pfannenstiel incision will be performed later in the procedure. These three trocars form the central “I” shape on the abdomen. Four additional trocars are placed in a W-shaped pattern to complete the port placement (Fig. ). On the right hemiabdomen, an 11 mm robotic trocar (trocar B) is placed 5 cm vertically below the supraumbilical port and at least 8 cm diagonally in the right iliac region. Laterally and superior to this port, a 12 mm assistant port (trocar A) is placed at least 5 cm away in the right lumbar region. On the left hemiabdomen, an 8 mm robotic port (trocar C) is placed in the left iliac region 4 cm horizontally and at least 8 cm diagonally from trocar 1. Laterally and superior to this port, at least 8 cm away, another 8 mm robotic port (trocar D) is placed in the left lumbar region. Prior to docking the robotic system, a thorough laparoscopic exploration of the abdomen is performed, with lysis of adhesions if needed. The omentum and small bowel are then carefully mobilized towards the right side of the peritoneal cavity as necessary. First docking: the splenic flexure mobilization The patient is positioned in a 10° anti-Trendelenburg position with a 10° right tilt (Fig. ). Throughout all docking stages, the operating table height is set above the 70 mark on the robotic cart so that the lowest trocar incision remains above this level. This adjustment, as indicated by Medtronic, guarantees the arms the appropriate tilt and the largest available range of motion. Arm 1 is docked in trocar 2 (epigastric) with a tilt angle of −15° and a docking angle of 45°. Arm 2 is docked in trocar D (right lumbar) with a tilt angle of +15° and a docking angle of 130°.
Arm 3 is docked in trocar C (left iliac region) with a tilt angle of +15/+30° (adjusted based on patient build) and a docking angle of 250°. Arm 4 uses the same tilt angle as arm 3 and is docked in trocar 1 (supraumbilical) with a docking angle of 315°. Once the arms are securely docked and the confirmation is given to the system, the instruments are assigned as follows in a double right-hand configuration: Arm 1: Bipolar forceps Arm 2: Monopolar curved shears Arm 3: Cadiere forceps Arm 4: 30° camera After completing the docking in this phase, the surgeon utilizes this configuration to mobilize the splenic flexure and to divide the gastrocolic ligament. This ensures adequate mobilization of the distal transverse colon and the left colon, guaranteeing easy access of the large bowel into the pelvis for the subsequent colorectal or coloanal anastomosis. During this phase, if required, the assistant can facilitate the dissection by applying further traction to the colon through trocar A, B, or 3 (Fig. ). This aids the surgeon in achieving a smooth and efficient mobilization of the splenic flexure, which is usually taken down from lateral to medial. Proper triangulation is obtained, and the surgeon controls two instruments (monopolar shears and Cadiere forceps) with their right hand, allowing proper traction and counter-traction on the bowel. Second docking: vascular control After completing the splenic flexure mobilization, all robotic arms are undocked. The robotic carts remain in the same position, optimizing the redocking time. The patient is positioned in a steep Trendelenburg position maintaining the 10° right tilt, and the table height is adjusted to ensure the lowest trocar incision remains above the 70 mark on the robotic arms (Fig. ). The docking configuration is then modified as follows: Arm 1: Tilt angle changed to +15°, redocked into trocar C (left iliac region) with the same 45° angle.
Therefore, in case of a left hemicolectomy or a sigmoid resection, no further redocking will be necessary. Third docking: LAR with TME Once the vascular phase is completed, the third and last docking is carried out (Fig. ) as follows: Arm 1: Tilt angle does not change (+15°), redocked into trocar D (left lumbar) with a docking angle of 80°. Arm 2: Tilt angle and docking angle do not change (−15° and 140°, respectively), and the arm is moved from trocar 3 to trocar C. Arm 3: Tilt angle changed to −30°, docked into trocar B (right lumbar) with a 240° angle. Arm 4: Tilt angle changed to +30° (if not already set), docked into trocar 1 (supraumbilical) with the same angle as the previous docking (315°). In this last docking, instruments are assigned as follows in a double left-hand configuration: Arm 1: Cadiere forceps Arm 2: Bipolar forceps Arm 3: Monopolar curved shears Arm 4: 30° camera This port configuration optimizes instrument reach in the pelvis down to the levator ani plane, minimizing collisions and facilitating efficient completion of the LAR with TME. In our practice, the assistant utilizes trocar 3 (the 12 mm Airseal suprapubic port) to maintain traction on the rectum during TME using a 10 mm Babcock forceps. The smoke formed during surgery is effectively removed through the Airseal evacuation. Moreover, a fully powered articulated laparoscopic stapler (Medtronic Signia™) can be inserted through assistant trocar A or 3 to carry out the transection of the rectum according to surgeon preference. Upon completion of the TME, all robotic arms are undocked. An incision is made at the planned loop ileostomy extraction site, where a retractor (Alexis O Wound retractor, Applied Medical, Rancho Santa Margarita, CA, USA) is placed to facilitate extraction of the colonic stump; this also serves as the specimen extraction site. The rectum is proximally transected, and the circular stapling anvil is inserted into the colonic stump. The rectum is then returned to the abdomen, the colorectal anastomosis is constructed, and an air-leak test is carried out. An ileostomy is constructed if required in the individual case. We also utilize this robotic port configuration for restorative proctectomy and ileal pouch-anal anastomosis (IPAA) procedures, with some modifications. In these cases, the end ileostomy in the right iliac fossa and the rectal stump (secured to the suprapubic fascia, as is our practice) are taken down as the first steps and two Alexis retractors are placed, and the port configuration is modified by excluding trocar 2 and the first docking for splenic flexure mobilization, which is not required. Statistical analysis Descriptive statistics, including mean, median, interquartile range, and standard deviation, were calculated to summarize docking times. To assess the presence of a negative trend in docking times, the non-parametric Mann–Kendall test and Spearman’s rank correlation coefficient were utilized. Additionally, a cumulative sum (CUSUM) analysis was conducted to monitor for shifts in docking times. The CUSUM analysis was performed twice: first using the mean docking time of all cases as a reference, and then using the mean time of the first 15 cases as a reference. This approach allows for the identification of potential changes in docking times over time, both overall and specifically in the initial phase.
To present clinical variables, categorical variables were expressed as counts and percentages, while continuous variables were summarized using medians and interquartile ranges.
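To make the docking-time analysis concrete, the following is a minimal Python sketch of the trend tests and CUSUM calculation described above. It is not the study's actual code: the docking-time values are invented placeholders, and Kendall's tau computed against the procedure order is used here as a stand-in for the Mann–Kendall trend statistic.

```python
# Illustrative sketch of the docking-time trend and CUSUM analyses
# (placeholder data, not the study's measurements).
import numpy as np
from scipy import stats

docking_times = np.array([7, 8, 6, 7, 6, 6, 7, 5, 6, 5, 6, 5, 5, 6, 5,
                          5, 4, 5, 5, 4, 5, 4, 5, 4, 4, 5, 4, 4, 5, 4, 4],
                         dtype=float)                 # minutes, chronological order
order = np.arange(1, len(docking_times) + 1)

# Monotonic-trend tests: Kendall's tau against the procedure order
# (stand-in for the Mann-Kendall test) and Spearman's rank correlation.
tau, p_tau = stats.kendalltau(order, docking_times)
rho, p_rho = stats.spearmanr(order, docking_times)
print(f"Kendall tau = {tau:.3f} (p = {p_tau:.3f}); "
      f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")

def cusum(values, reference):
    """Cumulative sum of deviations from a reference value."""
    return np.cumsum(values - reference)

# First CUSUM: reference = overall mean; second: mean of the first 15 cases.
cusum_overall = cusum(docking_times, docking_times.mean())
cusum_early = cusum(docking_times, docking_times[:15].mean())
print("CUSUM (overall reference):", np.round(cusum_overall, 1))
print("CUSUM (first-15 reference):", np.round(cusum_early, 1))
```

A sustained downward slope of the CUSUM curve after an initial plateau is the pattern interpreted above as a learning-curve effect.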
Thirty-one procedures were recorded using this docking setting. The median docking time for the first operative step was 6 ± 1 min, with a mean of 5.6 ± 1.3 min. A linear regression analysis of the data (Table ) revealed a negative trend, suggesting that docking time decreased with each subsequent procedure. To further confirm this negative trend, two non-parametric statistical tests were conducted. The Mann–Kendall test (Tau = −0.367, p = 0.007) revealed a statistically significant negative trend, while Spearman’s rank correlation test (Rho = −0.501, p = 0.004) further supported this by demonstrating a statistically significant negative correlation between the order of procedures and docking times. This learning curve effect was further examined through CUSUM analyses. The first analysis, using the mean docking time of all 31 cases as a reference (5.6 min, Table ), showed initial docking times above the average, followed by a clear downward trend around the 15th procedure. Given this initial indication of a learning curve, a second CUSUM analysis was conducted using the mean docking time of the first 15 cases (6.3 min, Table ) as a reference to specifically focus on the initial learning phase. This second analysis confirmed the presence of a distinct shift towards below-average docking times after the initial phase. Both analyses highlight a notable learning curve effect, with operators demonstrating increased proficiency and efficiency over time. Clinical outcomes The study consisted of 31 patients with a median age of 52 years (Table ). The majority were male (n = 16) and had a diagnosis of ulcerative colitis (n = 21). The median BMI was 21 kg/m², and most patients presented with an ASA score of 2. The most common procedures performed were proctectomies followed by IPAA (n = 19) and LARs with partial mesorectal excision (n = 6) and total mesorectal excision (n = 4). The median operative time was 280 min. One conversion to open surgery was necessary due to adhesions. No intraoperative complications or high-priority (red) alarms were encountered. The median length of hospital stay was 7 days. Postoperative complications within 30 days occurred in four patients.
These included one case of ileus requiring nasogastric tube placement, one superficial surgical site infection managed with wound care, and one case of abdominal collection treated with image-guided drainage and antibiotics. Additionally, one patient developed an anastomotic leak due to colonic ischemia, leading to reoperation involving resection of the ischemic colon and creation of an end colostomy. The independent robotic arms of the Hugo RAS system enabled us to develop a personalized multi-docking strategy that significantly streamlines LAR and IPAA procedures. Contrary to the perception of many robotic surgeons, our team believes that multi-docking during a procedure, particularly with platforms featuring independent robotic arms, is not a disadvantage but rather a significant opportunity that should be exploited and considered as a standard approach. It grants the platform exceptional flexibility in multi-quadrant abdominal procedures, requiring only minimal adjustments between dockings to optimize machine performance. Our data support this, with a median docking time of 2.5 min after the initial docking, which always took longer (median ± IQR of 6 ± 1 min). Furthermore, studies in robotic gynecological procedures have demonstrated the benefits of multi-docking, including reduced operative time and blood loss, improved postoperative recovery, and an increased number of harvested lymph nodes. Several articles have analyzed the learning curve for docking time, most of them employing CUSUM analysis. It is interesting to note that previous publications have reported proficiency in docking time achieved after 10 cases for rectal cancer (DaVinci), 20 cases for prostatectomy (DaVinci), and 17 cases for gynecology (Hugo RAS). Notably, even with the added complexity of our multi-docking approach, after reaching proficiency, our docking time for the Hugo RAS was comparable to that reported in these studies. The present setups have been utilized in over 100 colorectal procedures performed by 3 surgeons, all naïve in robotic surgery, without reported major conflicts or need for redocking during the different steps of the procedure. Although a multi-center study should be carried out to confirm the reproducibility of the Bologna setup, the feasibility of the multi-docking approach has been shown in this limited yet significant experience. This study has several limitations.
Firstly, its retrospective nature may introduce biases. Secondly, the study was conducted at a tertiary referral center for rectal cancer and inflammatory bowel disease, which may limit the generalizability of our findings. Finally, the docking setup was tailored to the preferences of our institution’s surgeons, highlighting the need for personalized approaches in robotic surgery. We hope these results will encourage more surgical teams to embrace the multi-docking philosophy in robotic colorectal surgery using the Hugo RAS system. This approach is especially useful for the majority of experienced laparoscopic colorectal surgeons who are beginning their robotic experience with this platform. By adopting this method, surgeons can recreate a familiar surgical view, reducing the discomfort associated with readjusting to procedures already mastered in laparoscopy. The next step in exploring the different possibilities would be to share the set-ups with the surgical community [ , , – ]. To facilitate this knowledge exchange, the creation of an online, official repository where these dockings can be stored, scored, and commented on by other users of the platform would be invaluable. This collaborative approach could not only push the boundaries of research in the robotic surgical field but also drive the development of optimal docking configurations for each procedure, ultimately improving surgical outcomes and patient care. This article offers valuable insights into the potential of multi-docking strategies in robotic surgery, particularly with platforms featuring independent robotic arms, such as the Hugo RAS system. By sharing our docking settings, we aim to foster collaboration within the surgical community, unlocking the full potential of robotic technology and continually improving our collective knowledge of its applications and capabilities.
Impact of pulmonary infection on thoracoscopic surgery outcomes in children with CPAM: a retrospective study
ee158777-d473-4a84-ae65-bbab2cf670ea
11892240
Surgical Procedures, Operative[mh]
Congenital pulmonary airway malformation (CPAM), formerly known as congenital cystic adenomatoid malformation (CCAM), has historically been regarded as a rare condition, with an estimated prevalence ranging from 1/35,000 to 1/7,200 live births. With advancements in medical technology, the reported prevalence has shown an upward trend. Prenatal ultrasonography is the primary modality for identifying these lesions, while postnatal CT scans provide definitive diagnosis. Although children with CPAM often remain asymptomatic, pulmonary infection is the most common complication. The evolution of surgical techniques has positioned endoscopic surgery as a mainstream therapeutic option for CPAM. Currently, surgical resection of the lesion is the most effective treatment for CPAM. For symptomatic children, the choice between elective surgery and limited surgery is guided by the severity of their clinical presentation. In asymptomatic cases, a divergence of opinion exists among experts. Some suggest that CPAM may undergo spontaneous regression, arguing that early surgical intervention could introduce unnecessary risks, and advocate a watchful waiting strategy. In contrast, a substantial body of research supports surgical resection as a safe and effective approach, emphasizing its role in preventing complications and reducing the potential for malignant transformation. Previous studies have primarily focused on comparing open surgery and thoracoscopic approaches, with limited attention given to the impact of pulmonary infections on surgical outcomes. In this study, we conducted a review of CPAM cases treated with total thoracoscopic surgery at a single center. Our analysis summarizes the clinical characteristics of these cases and evaluates the influence of pulmonary infections on surgical procedures. It offers valuable insights to inform clinical decision-making and improve patient management strategies. This retrospective study was approved by the Children’s Hospital of Chongqing Medical University (Approval No. 2023–575). We reviewed the cases of CPAM treated in our center from January 2013 to December 2023. The inclusion criteria were patients who underwent thoracoscopic surgery with a pathological diagnosis of CPAM. Exclusion criteria were incomplete clinical data, age over 18 years, or a second operation required due to residual lesions. A total of 154 patients met the inclusion criteria. The patient selection process is shown in Fig. . All patients underwent a preoperative thoracic computed tomographic scan to confirm the diagnosis. According to the site and the size of the lesion, patients underwent total thoracoscopic lobectomy or segmentectomy. Before surgery, cefazolin was given prophylactically to all patients. General anesthesia was administered using a combination of intravenous and inhalation agents. The patient was positioned in a lateral decubitus position with the unaffected side down. Three thoracoscopic ports were established on the affected side: one at the 6th intercostal space along the anterior axillary line, one at the 7th intercostal space along the posterior axillary line, and one at the 8th intercostal space along the mid-posterior axillary line. An artificial pneumothorax was created to facilitate the procedure. The thoracic cavity was thoroughly explored, and the boundaries of the lesion were marked using an electrocoagulation hook.
The corresponding pulmonary ligament was transected, and the associated pulmonary vein was carefully dissected. The pathological lung tissue was resected along the marked boundaries using the electrocoagulation hook, with hemostasis achieved using an ultrasonic scalpel. The relevant bronchus was dissected and clamped to verify normal inflation of the lung lobe. A silicone chest tube was routinely inserted through the 8th intercostal space at the mid-posterior axillary line and connected to a closed thoracic drainage system. After ensuring complete hemostasis, the intercostal incision was closed using a continuous suture with absorbable suture thread. The muscle and subcutaneous layers were sutured in layers using absorbable sutures. The resected specimen was sent for pathological examination. Following surgery, paraffin sections were prepared from the removed lesion tissue, and the specimens were histologically examined by the pathologist to confirm the diagnosis. To minimize the risk of postoperative complications, postoperative antibiotic therapy was escalated to a third-generation cephalosporin. A thoracic drainage tube was routinely placed during the operation. The drainage tube was removed when the drainage volume was consistently less than 20 mL for three consecutive days. Children were divided into the NI (non-infection) group, the HI (hidden-infection) group and the PI (pulmonary infection) group. The classification of the three groups was determined based on postoperative pathological findings. Patients were assigned to the NI group if their postoperative pathology showed no evidence of inflammatory cell infiltration (including neutrophils, macrophages, or lymphocytes), regardless of the presence or absence of clinical symptoms. Patients were classified into the PI group if their postoperative pathology indicated infection and they exhibited clinical symptoms such as recurrent fever, cough, wheezing, or elevated inflammatory markers. Additionally, chest radiographs or CT scans showing lesions consistent with infection supported this classification. The remaining patients were categorized into the HI group, as they exhibited no clinical symptoms but had evidence of infection in the postoperative pathology. Consequently, there was a clear distinction between the three groups without overlap. Demographic data of the patients, including age, gender, body weight, lesion side and pathology of the lesion, operative data, and postoperative outcomes were extracted and analyzed statistically. SPSS 27.0.1 was utilized for statistical analysis in this study. Continuous variables were tested for normality. None of the data conformed to a normal distribution, so they were expressed as Md (P25, P75). Differences among the three groups were analyzed using the Kruskal–Wallis H test. Categorical data were expressed as percentages. Differences among groups were analyzed using the chi-square test. A p-value < 0.05 was considered statistically significant (Fig. ). A total of 154 children underwent thoracoscopic surgery in our hospital. They were divided into three groups: the NI group (27 cases), the HI group (56 cases), and the PI group (71 cases). Among these, 38 cases required conversion to thoracotomy. The rates were 14.8% (4/27) in the NI group, 23.2% (13/56) in the HI group, and 29.2% (21/71) in the PI group. The chi-square test showed no statistically significant differences among the three groups (p = 0.302).
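A minimal Python sketch of the group comparisons described in the statistical analysis is given below; it is not the authors' SPSS workflow. The continuous values are invented placeholders, while the contingency table uses the conversion-to-thoracotomy counts reported above (4/27, 13/56, 21/71), for which the chi-square test reproduces the reported p ≈ 0.302.

```python
# Illustrative sketch of the Kruskal-Wallis and chi-square comparisons
# used for the three-group analyses (not the authors' SPSS analysis).
from scipy import stats

# Hypothetical continuous variable (e.g., operation time in minutes) per group;
# these values are placeholders, not the study data.
ni = [120, 125, 130, 135, 140]
hi = [130, 138, 140, 145, 150]
pi = [155, 160, 168, 172, 180]
h_stat, p_kw = stats.kruskal(ni, hi, pi)  # Kruskal-Wallis H test

# Conversion to thoracotomy (converted, not converted) for NI, HI, PI,
# taken from the counts reported in the text: 4/27, 13/56, 21/71.
table = [[4, 23],
         [13, 43],
         [21, 50]]
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.3f}")
print(f"Chi-square = {chi2:.2f} (df = {dof}), p = {p_chi:.3f}")  # p ~= 0.302
```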
Excluding those that required conversion to thoracotomy, a total of 116 cases were completed under total thoracoscopy. These included 23 patients in the NI group, 43 in the HI group, and 50 in the PI group. The median age was 21.2 months (IQR, 8.5–66.9 months), and 52.6% were male. The median weight at operation was 11.0 kg (IQR, 8.0–18.0 kg). Differences in sex (p = 0.646) were not statistically significant, whereas differences in age (p < 0.001) and weight (p = 0.004) were statistically significant among the three groups. The majority of lesions were confined to a single lobe; only 3 cases involved multiple lobes. The left and right lower lobes were the most commonly affected, each with 43 cases (37.1%). Based on the Stocker classification, type 2 CPAM had the highest proportion with 62 (53.4%) cases, and type 1 CPAM had 51 (44.0%) cases. Type 3 and 4 CPAM were rare, with only 2 (1.7%) and 1 (0.9%) cases, respectively. Thoracoscopic lobectomy was performed in 96 (82.8%) patients and thoracoscopic segmentectomy in 20 (17.2%). These data are summarized in Table . Regarding the operative findings, the PI group had a longer operation time, greater blood loss, and more transfusions. Operation time (p = 0.001) and blood loss (p = 0.016) showed significant differences among the three groups. To strengthen the robustness of the results, we calculated effect sizes and confidence intervals using Dunn’s post-hoc test for pairwise comparisons; these analyses further support our conclusion. Operation time (NI vs. HI: Cohen’s d 0.12, 95% CI −20.0 to 20.0; NI vs. PI: Cohen’s d 0.56, 95% CI 20.0 to 64.0; HI vs. PI: Cohen’s d 0.52, 95% CI 20.0 to 64.0) and blood loss (NI vs. HI: Cohen’s d 0.32, 95% CI 0.0 to 10.0; NI vs. PI: Cohen’s d 0.28, 95% CI 0.0 to 10.0; HI vs. PI: Cohen’s d 0.45, 95% CI 5.0 to 15.0) in the PI group were higher than those in the NI and HI groups, with these differences being statistically significant. No significant differences were found in blood transfusions (p = 0.351), chest tube duration (p = 0.246), duration of ventilator use (p = 0.424), or hospital stay (p = 0.080). Postoperative complications included atelectasis, pneumothorax, and pneumonia. Atelectasis occurred in 8 cases: none in the NI group, 2 (4.7%) in the HI group, and 6 (12.0%) in the PI group. Pneumothorax occurred in 32 cases: 4 (17.4%) in the NI group, 8 (18.6%) in the HI group, and 20 (40.0%) in the PI group. Pneumonia occurred in 7 cases: 1 (4.3%) in the NI group, 2 (4.7%) in the HI group, and 4 (8.0%) in the PI group. There was a significant difference in the incidence of pneumothorax (p = 0.034) among the three groups. However, no significant differences were found for atelectasis (p = 0.064) or pneumonia (p = 0.740). For categorical variables, we used the odds ratio (OR) as the effect size, and the confidence intervals were calculated using logistic regression. The PI group had a significantly higher incidence of pneumothorax (NI vs. HI: OR 0.92, 95% CI 0.25 to 3.38; NI vs. PI: OR 0.32, 95% CI 0.09 to 1.09; HI vs. PI: OR 0.34, 95% CI 0.13 to 0.89) compared to the NI and HI groups. The characteristics of the three groups are shown in Table . Lesion resection is a key treatment for children with CPAM, offering both curative effects and reduced complication rates. Although the exact risk remains unquantified, CPAM carries a potential for malignant transformation, as evidenced by case reports. Surgical intervention may help prevent this potential malignancy. Thoracotomy has a wide range of indications.
When patients exhibit respiratory distress or severe thoracic adhesions, or cannot tolerate one-lung ventilation techniques, open surgery ensures optimal outcomes and patient safety. However, thoracoscopy has emerged as the preferred surgical approach due to its minimal invasiveness, superior intraoperative visualization, faster recovery, reduced postoperative pain, and shorter hospital stays. In addition, thoracoscopy decreases musculoskeletal impact and effectively lowers complication rates, including scapular winging. Segmentectomy is used clinically as a lung-preserving operation to improve postoperative lung function, but its anatomical complexity and variability make it challenging. When the lesion affects multiple lung segments, determining the resection margin becomes difficult. Studies highlight that surgeons must have detailed knowledge of lung segment anatomy to perform segmentectomy successfully. Preoperative CT imaging is used to localize and qualitatively diagnose the disease to guide surgical planning. During surgery, careful exploration of the lesion and anatomical details could reduce the risk of leaving residual tissue behind. Lobectomy is the standard procedure for treating CPAM in children. If segmental dissection is difficult or complete lesion removal is uncertain, switching to lobectomy can lower the risk of complications and avoid prolonged surgery. In this study, 116 patients underwent total thoracoscopic surgery, including 96 lobectomies and 20 segmentectomies. All segmentectomy patients received preoperative CT 3D reconstruction to identify the specific lung segments and subsegments for removal. The bronchus and the artery and vein to be divided were determined intraoperatively based on the reconstructed images and the actual anatomy. With the progress of surgical technology, the use of segmentectomy in our hospital has steadily increased. CPAM is diagnosed by prenatal ultrasound and postnatal CT before symptoms develop. Nevertheless, the majority of patients opt for surgery after symptoms appear. It has been reported that the rate of pulmonary infection increases with age, especially after 2 years of age. Our data support this finding, showing that all patients in the PI group exhibited infection symptoms, with older age and higher weight being common characteristics. Regarding lesion location, we found no significant differences among the three groups. Most lesions were localized to a single lobe, with the lower lobe being the most frequently affected. Notably, type 2 CPAM accounted for the largest proportion within the PI group, showing a significant difference. According to the Stocker classification, type 2 CPAM is located in the bronchi and bronchioles, consists of multiple cysts, and primarily presents with pulmonary infection as its main clinical feature. Hermelijn et al. found a correlation between lesion size and infection rate, based on pulmonary anatomical characteristics: inflammation was more common in CPAM lesions with low gas content and small volume. These results were consistent with ours. Our data show that a history of pulmonary infection impacts the outcome of the thoracoscopic procedure. The NI group had a shorter operation time, fewer blood transfusions, and reduced blood loss. Specifically, the median surgical time in the NI group was 130 min, markedly lower than the 170 min observed in the PI group and the 140 min recorded in the HI group. The operation times in all three of our groups were lower than those reported in previous studies of thoracoscopic surgery (around 180 min).
The PI group, with significant pulmonary inflammation, presented with thoracic adhesions, which complicated the surgical procedure. This observation is consistent with the existing literature: preoperative pulmonary infections are associated with higher rates of conversion to open surgery, prolonged operative times, and extended postoperative hospital stays. Comparative data analysis revealed that the NI group had fewer intraoperative blood transfusions, shorter durations of ventilator dependency, and reduced hospital stays compared to the HI and PI groups. Although these differences did not achieve statistical significance, they suggest that the NI group may have been associated with lower surgical complexity. Blood loss can be used to evaluate the surgical outcome, and we observed significant differences in blood loss. The PI group exhibited the highest median intraoperative blood loss (20 mL) and demonstrated the greatest demand for transfusions (8.0%). One study reported no statistical difference in blood transfusion based on age cohort or sample size, whereas our investigation focused on infection grouping without age stratification. Therefore, we could not assume that the blood loss was age-related. We focused on lung-related complications, including atelectasis, pneumothorax, and pneumonia. Although only the pneumothorax incidence reached statistical significance, complication rates showed a declining trend from the PI to the NI group. Notably, the PI group required longer chest tube placement due to more extensive surgical injury and increased exudate production, which also heightened the risk of air leakage. This was consistent with other studies reporting higher postoperative complication rates in infected lesions. When deciding on surgical intervention for children with CPAM, it is essential to consider the patient’s infection status and surgical risks. For children in the HI or NI group, early surgery may be the preferable option. These patients typically present with lower surgical complexity, faster postoperative recovery, and a reduced risk of complications. Early intervention can prevent the exacerbation of infections and reduce the complexity of future surgeries. The HI group still carries a certain risk of postoperative complications, although the infection is hidden. For the PI group, delaying surgery may be more appropriate: preoperative anti-infective treatment to control inflammatory responses could minimize intraoperative bleeding and tissue adhesion, thereby reducing surgical difficulty and the risk of postoperative complications. To mitigate the risk of postoperative complications, we opted for a routine indwelling thoracic drainage tube. Chest drainage was closely monitored to detect and manage complications such as pneumothorax in a timely manner. Additionally, respiratory function exercises and physical therapy should be implemented to help reduce the incidence of atelectasis and pulmonary infections. This study has several limitations. First, our data come from a single center and reflect local treatment preferences, which may introduce selection bias. Second, we focused on the association between infection and surgery and did not account for differences in disease severity among groups, which might have affected the surgical data. Third, this is a retrospective study, and long-term prognostic outcomes are needed to better explore the effects of infection. We performed a retrospective study on pulmonary infection in children with CPAM.
Thoracoscopic excision of CPAM lesions is safe and effective. CPAM is prone to pulmonary infection and carries a high rate of occult infection, which increases surgical difficulty and risk; operating before symptoms develop is therefore advisable. Surgery is recommended for asymptomatic CPAM in children under 1 year of age, although the procedure may be deferred until the child reaches 2 years of age, depending on surgical conditions and family preferences.
The Value of Orbital Exenteration for Eyelid Sebaceous Carcinoma in Stages II to IV: A Cohort Study of 78 Patients
1d8add20-eaa0-41c2-b68a-3c44248a9ac0
11745203
Ophthalmologic Surgical Procedures[mh]
Patients This retrospective cohort study conducted at a single center adhered to the tenets of the Declaration of Helsinki and received approval from the ethics committee of Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine (SH9H-2019-T185-2). Approval and an exemption from the requirement for informed consent from the Institutional Review Board were secured for this retrospective study. This study included consecutive pathologically diagnosed SeC patients of stages II to IV treated at Shanghai Ninth People's Hospital from January 2010 to January 2024. Tumors were categorized according to the TNM staging criteria specified in the 8th edition of the AJCC classification for eyelid carcinoma. Intraoperatively, tumors were completely excised, with tumor-free margins confirmed by pathology. Inclusion criteria were complete clinical data and availability of follow-up after the initial consultation. Data Collection The demographic and clinical data of the patients were recorded, including gender, age, tumor location, and date of pathological diagnosis. The initial surgical methods were documented, including orbital exenteration and eye-sparing treatments. Additionally, details of adjuvant therapies were collected. Postoperative complications, such as exposure keratitis, and subsequent surgical interventions were also documented. Outcomes including recurrence and metastasis were determined through an integrated assessment of clinical, radiological, and pathological information. Progression-free survival (PFS) was defined as the duration from pathological diagnosis to confirmed disease progression, including metastasis, recurrence, or death. Nodal/distant MFS and recurrence-free survival (RFS) were defined from the date of pathological diagnosis to the occurrence of their respective outcomes. DSS was defined as the time from pathological diagnosis to death due to the disease. The follow-up duration was measured in months from the initial pathological diagnosis at our center to the last follow-up or death. Propensity Score Matching Propensity score matching (PSM) was used to minimize bias from baseline characteristics and potential confounders. Among 60 stage II patients, 47 underwent eye-preserving surgery and 13 underwent orbital exenteration. A 1:3 nearest neighbor matching was then performed to further reduce bias. Thirty-nine patients who underwent eye-sparing surgery (group 1) and 13 who had orbital exenteration (group 2) were included for further analysis. Individual propensity scores were calculated based on covariates such as gender, age, recurrence at diagnosis, laterality, stage, and follow-up period. A standardized mean difference threshold of 0.1 or less was used to indicate adequate balance between the two groups. Statistical Analysis Univariate Cox proportional hazards regression models were employed to identify risk factors for clinical outcomes. Variables showing P < 0.10 in the univariate analyses were included in the multivariate analysis. Hazard ratios (HRs) with corresponding 95% confidence intervals (CIs) were used to describe the impact of these risk factors. Statistical significance was set at P < 0.05. Kaplan–Meier methods were employed to assess the correlation between surgical plans and tumor outcomes. The survival analysis of the tumor category (TNM) was stratified by lesion appearance and radiological and pathological evidence at presentation. Statistical significance was assessed using the log–rank test and χ 2 test.
Statistical analyses were carried out using SPSS Statistics 26.0.0.2 (IBM, Chicago, IL, USA).
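For readers who wish to reproduce this type of matching, the sketch below illustrates propensity-score estimation, 1:3 nearest-neighbor matching, and a standardized-mean-difference balance check in Python. The analysis in this study was performed in SPSS, so this is only a minimal illustration under assumed column names (`exenteration` plus the listed covariates), not the authors' code.

```python
# Minimal sketch of 1:3 nearest-neighbor propensity score matching (illustrative only).
# Assumed columns: 'exenteration' (1 = orbital exenteration, 0 = eye-sparing) plus the
# baseline covariates used in the propensity model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

COVARIATES = ["gender", "age", "recurrence_at_dx", "laterality", "stage", "follow_up_months"]

def propensity_match(df: pd.DataFrame, ratio: int = 3) -> pd.DataFrame:
    # 1. Estimate propensity scores with a logistic regression model.
    X = pd.get_dummies(df[COVARIATES], drop_first=True)
    ps = LogisticRegression(max_iter=1000).fit(X, df["exenteration"]).predict_proba(X)[:, 1]
    df = df.assign(ps=ps)
    treated = df[df["exenteration"] == 1]
    control = df[df["exenteration"] == 0]
    # 2. For each exenteration patient, take the `ratio` nearest eye-sparing patients
    #    by propensity score (matching without replacement is not enforced here).
    nn = NearestNeighbors(n_neighbors=ratio).fit(control[["ps"]])
    _, idx = nn.kneighbors(treated[["ps"]])
    matched_controls = control.iloc[np.unique(idx.ravel())]
    return pd.concat([treated, matched_controls])

def smd(a: pd.Series, b: pd.Series) -> float:
    # Standardized mean difference; values <= 0.1 suggest adequate balance.
    pooled_sd = np.sqrt((a.var() + b.var()) / 2)
    return abs(a.mean() - b.mean()) / pooled_sd
```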
Patient Characteristics and Prognosis At presentation, a total of 78 eyes from 78 patients were included in the study, with a median follow-up period of 40.5 months (range, 1–160). Of these, 37 patients (47.4%) were male and 41 patients (52.6%) were female, with a median age at diagnosis of 64.1 years (range, 36–88). Twenty-seven patients (34.6%) had stage IIA disease, 33 patients (42.3%) had stage IIB, 12 patients (15.4%) had stage IIIA, one patient (1.3%) had stage IIIB, and five patients (6.4%) had stage IV ( , ). The treatment approaches and prognoses for stage II patients and stage III/IV patients will be described separately in detail below ( and ). For the 78 SeC patients recruited, the mean duration from diagnosis to overall metastasis was 78.0 months. Specifically, the mean time from diagnosis to nodal metastasis was 77.4 months, and to distant metastasis it was 122.9 months. For recurrence, the mean time from diagnosis to recurrence was 125.0 months. Regarding DSS, the mean survival time for patients who died from tumor-related causes was 130.6 months. For PFS, the mean time from initial diagnosis to tumor progression was 68.5 months. Overview of Stage II Patients: Clinical Features and Prognosis To further explore the group of patients who did not have any form of metastasis at the time of initial diagnosis and whose primary treatment was surgery, we analyzed the clinical features and prognosis of stage II patients. Our cohort included 60 stage II patients, of whom 47 (78.3%) underwent eye-preserving treatment, whereas 13 patients (21.7%) underwent orbital exenteration. Among the 12 patients (20.0%) who underwent adjuvant therapies, five patients (41.7%) received radiotherapy, one patient (8.3%) received chemotherapy, and three patients (25.0%) received a combination of radiotherapy and chemotherapy. One patient (8.3%) was treated with a combination of anti–programmed cell death protein 1 (PD1) and anti–vascular endothelial growth factor (VEGF) therapies. Additionally, one patient (8.3%) received a combination of anti-PD1 therapy and radiotherapy. Five patients (41.7%) developed exposure keratitis after radiotherapy and subsequently underwent additional orbital exenteration. Regarding the prognosis of the 60 stage II patients, during the follow-up period 18 patients (30.0%) developed nodal metastasis, eight patients (13.3%) developed distant metastasis, and five patients (8.3%) died of SeC. The mean duration from initial diagnosis to metastasis was 100.6 months. Specifically, the mean duration to nodal metastasis was 99.0 months; for distant metastasis, it was 133.4 months. In terms of RFS, the mean time from initial diagnosis to local recurrence was 124.0 months. Regarding DSS, the average survival time for individuals who died due to tumor-related causes was 144.3 months. In terms of PFS, the average time from initial diagnosis to tumor progression was 82.0 months. To mitigate biases and confounding variables, we conducted PSM among stage II patients, enrolling 52 individuals with a median follow-up period of 35.5 months (range, 3–160) ( ). Of these 52 patients, 13 patients (25.0%) required orbital exenteration, with eight of those (61.5%) having undergone previous surgical treatments prior to exenteration.
No significant differences were observed between the two treatment groups regarding age ( P = 0.352), sex ( P = 1.000), laterality ( P = 0.518), cT category ( P = 0.099), stage ( P = 0.135), presence of recurrence ( P = 0.631), or follow-up duration ( P = 0.985) ( ). Moreover, none of the patients presented with nodal or distant metastasis at baseline. However, even with matching through PSM, some potential confounders may still exist, such as patient health status (e.g., comorbidities, body mass index, immune status), psychological factors (e.g., mood, anxiety, disease perception), and molecular characteristics of the tumor. Of the 39 patients who underwent eye-preserving surgery (group 1), 17 patients (43.6%) were classified as stage IIA and the remaining 22 patients (56.4%) as stage IIB. During an average follow-up period of 46 ± 41 months post-procedure, 14 patients (35.9%) experienced disease progression. This included local recurrence in six patients (15.4%), nodal metastasis in eight patients (20.5%), distant metastasis in four patients (10.3%), and death in seven patients (17.9%). Kaplan–Meier survival analysis revealed a mean time of 98.0 months from initial diagnosis to tumor progression. Among the 13 patients who underwent orbital exenteration (Group 2), two patients (15.4%) were classified as stage IIA and 11 patients (84.6%) as stage IIB. All stage IIA patients underwent exenteration due to multiple nodal and distant metastatic sites. During a mean follow-up period of 46 ± 30 months post-exenteration, disease progression was observed in nine patients (69.2%), including local recurrence ( n = 2, 15.4%), nodal metastasis ( n = 7, 53.8%), distant metastasis ( n = 3, 23.1%), and death ( n = 6, 46.2%). Kaplan–Meier survival estimates indicated that the mean duration from initial diagnosis to tumor progression was 52.1 months. Eye-Preserving Treatment Demonstrates a Prognosis That Is Equivalent to Orbital Exenteration in Stage II Patients Metastasis Based on our observations of 52 stage II patients subjected to PSM, 15 patients (28.8%) experienced metastasis over a median follow-up period of 29 months (range, 9–143) from their initial diagnosis. Among these, seven patients (13.5%) underwent orbital exenteration, and eight patients (15.4%) received eye-preserving surgery. Metastasis sites included the parotid lymph node ( n = 13, 25.0%), cervical lymph node ( n = 3, 5.8%), postauricular lymph node ( n = 1, 1.9%), brain and skull base ( n = 5, 9.6%), and lung ( n = 3, 5.8%). Additionally, seven patients (13.5%) exhibited metastases at multiple sites. The mean time from diagnosis to initial metastasis was 122.8 months for group 1, which was longer than that of group 2, at 63.7 months, although the difference did not reach statistical significance ( P = 0.052) ( A). The mean time from diagnosis to initial nodal metastasis was 120.1 months for group 1, significantly longer than that of group 2, which was 63.2 months ( P = 0.043) ( B). Distant metastasis occurred in seven patients (13.5%). The median intervals from diagnosis to distant metastasis were 138.4 months for group 1 and 84.8 months for group 2. These durations were not significantly different from each other ( P = 0.44) ( C). Therefore, group 1 exhibited an equivalent prognosis regarding metastasis-related outcomes compared to group 2. Recurrence-Free Survival Among the 52 stage II patients after PSM, eight patients (15.4%) experienced local recurrence. In group 1, six patients (11.5%) developed recurrences. 
The mean interval from diagnosis to recurrence was 129.1 months, with group 1 at 127.4 months and group 2 at 92.0 months. The durations showed no significant difference between these two groups ( P = 0.87) ( D). Disease-Specific Survival and Progression-Free Survival Of these 52 patients, 13 patients (25.0%) died, five of whom (9.6%) died of SeC. Among these five, two patients (3.8%) underwent eye-sparing surgery and three patients (5.8%) underwent orbital exenteration. The mean survival time for those who died from tumor-related causes was 141.2 months. This mean survival time was 150.7 months for group 1 and 84.2 months for group 2. The mean survival time from initial diagnosis to tumor progression was 98.0 months for group 1 and 52.1 months for group 2. These differences were not statistically significant ( P = 0.15) ( E, F). Stage II Patients With Involvement of the Equatorial Region and Medial Canthus Prone to Choosing Orbital Exenteration To further investigate the factors influencing the choice between eye-preserving surgery and orbital exenteration for stage II patients, we conducted a χ 2 test to assess treatment methods, age, sex, laterality, cT category, and extent of lesion involvement, including the medial canthus, equatorial region, and lacrimal gland. Pearson's χ 2 test showed that patients with a worse cT category (Pearson χ 2 = 5.9; P = 0.015; odds ratio [OR] = 6.3; 95% CI, 1.2–31.3), involvement of the medial canthus (Pearson χ 2 = 5.9; P = 0.015; OR = 4.9; 95% CI, 1.3–19.0), or equatorial region (Pearson χ 2 = 10.4; P = 0.001; OR = 9.2; 95% CI, 2.1–41.1) were likely to undergo orbital exenteration as the local treatment approach. Worse cT Categories and Involvement of the Equatorial Region Were Associated With Poorer Prognosis in Eye-Preserving Patients After exploring the prognosis of stage II patients who received varying surgery plans, we performed univariate and multivariate Cox regression analyses to investigate the risk factors that may affect the prognosis of patients who received the primary treatment, namely eye-preserving therapy. We analyzed a series of potential risk factors. Results suggested that worse cT categories and equatorial region involvement were associated with a poorer prognosis in eye-preserving patients. Tumors in higher cT categories did not show a significantly increased risk of distant metastasis (HR = 3.08; 95% CI, 0.81–11.74; P = 0.099), recurrence (HR = 0.90; 95% CI, 0.28–2.97; P = 0.868), or disease-specific death (HR = 7.22; 95% CI, 0.83–62.72; P = 0.073). However, patients with cT4 tumors had a significantly higher risk of metastasis (HR = 4.02; 95% CI, 1.44–11.18; P = 0.008) and nodal metastasis (HR = 2.75; 95% CI, 1.05–7.17; P = 0.039) and worse PFS (HR = 2.27; 95% CI, 1.05–4.93; P = 0.038) ( ) compared to those with lower cT tumors. Univariate Cox proportional hazards regression analysis also indicated that involvement of the equatorial region of the orbit was significantly associated with metastasis (HR = 5.31; 95% CI, 1.72–16.39; P = 0.004), distant metastasis (HR = 4.77; 95% CI, 1.22–18.74; P = 0.025), DSS (HR = 8.18; 95% CI, 1.37–48.98; P = 0.021), and PFS (HR = 7.14; 95% CI, 2.63–19.42; P < 0.001) ( ). Involvement of the medial canthus and lacrimal gland was not significantly associated with prognosis in eye-preserving patients ( ). Variables including age, sex, and side did not show significant associations with MFS, RFS, DSS, or PFS ( ).
In the multivariate models, worse cT category was an independent risk factor for metastasis (HR = 0.32; 95% CI, 0.11–0.94; P = 0.037). Involvement of the equatorial region of the orbit was an independent risk factor for overall metastasis (HR = 0.27; 95% CI, 0.09–0.86; P = 0.026) and PFS (HR = 0.17; 95% CI, 0.06–0.48; P = 0.001) ( ).
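The survival comparisons and Cox screening reported in this section can be illustrated with a short sketch using the `lifelines` package. The column names (`months`, `event`, `group`) and the candidate-variable list are assumptions for the example; the study itself used SPSS with Kaplan–Meier, log-rank, and univariate-then-multivariate Cox models as described in the Methods, and this sketch only mirrors that workflow rather than reproducing the authors' code.

```python
# Illustrative survival workflow: Kaplan-Meier curves, log-rank test, and a
# univariate -> multivariate Cox screen (p < 0.10 to enter the multivariate model).
# Candidate variables are assumed to be numerically coded.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

def compare_km(df: pd.DataFrame):
    # Assumed columns: 'months' (follow-up), 'event' (1 = event, 0 = censored), 'group' (1 or 2).
    g1, g2 = df[df["group"] == 1], df[df["group"] == 2]
    km = KaplanMeierFitter()
    for label, g in (("eye-sparing", g1), ("exenteration", g2)):
        km.fit(g["months"], event_observed=g["event"], label=label)
        print(label, "median survival:", km.median_survival_time_)
    res = logrank_test(g1["months"], g2["months"],
                       event_observed_A=g1["event"], event_observed_B=g2["event"])
    print("log-rank p =", res.p_value)

def cox_screen(df: pd.DataFrame, candidates: list) -> pd.DataFrame:
    keep = []
    for var in candidates:
        cph = CoxPHFitter()
        cph.fit(df[["months", "event", var]], duration_col="months", event_col="event")
        if cph.summary.loc[var, "p"] < 0.10:     # univariate screen
            keep.append(var)
    multi = CoxPHFitter()
    multi.fit(df[["months", "event"] + keep], duration_col="months", event_col="event")
    # Hazard ratios with 95% confidence intervals for the retained variables.
    return multi.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]]
```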
In this study, we found that, for stage II SeC patients without infiltration beyond the equatorial region or into the medial canthus, orbital exenteration is unnecessary, and the choice of whether to perform orbital exenteration does not affect MFS, RFS, PFS, or DSS. For patients in stages III and IV, comprehensive adjunct therapy, including radiotherapy, chemotherapy, immunotherapy, and targeted therapy, should be performed accordingly. However, standardized approaches still must be explored. Regarding mainstream management, eye-preserving surgery and orbital exenteration are typically reserved for SeC patients at stages IIA and IIB. The rate of orbital exenteration for SeC ranges from 13% to 20%. , , Research has suggested that the recurrence rate for orbital exenteration is 20% and eye-sparing surgery varies between 15.7% and 39.6%. , , A retrospective study indicated that orbital exenteration might improve DSS. However, current research on optimal treatment strategies and indications for stages IIA to IV SeC across various centers and regions remains sparse.
Moreover, subgroup analyses utilizing the 8th edition AJCC staging criteria to explore the prognostic differences associated with different surgical methods have not yet been conducted. Consequently, existing guidelines fail to provide stage-specific, evidence-based recommendations. , , Our study showed that, for patients with stage II disease, eye-preserving surgery resulted in prognosis, including MFS, distant MFS, RFS, PFS, and DSS, that was not inferior to that of orbital exenteration. This finding suggests that eye-preserving treatment should be considered for stage II patients to maintain visual function and quality of life. Regarding nodal metastasis, we found that the prognosis for eye-sparing surgery was better than orbital exenteration. This could be attributed to the fact that the eye-sparing group had a smaller proportion of stage IIB cases ( n = 22, 56.40%) than the orbital exenteration group ( n = 11, 84.60%), indicating an earlier T stage. This result is consistent with prior studies that have shown that a worse T category is correlated with poorer outcomes in SeC of the eyelid. , Notably, our study identified clinical factors influencing the surgical decision between eye-preserving surgery and orbital exenteration for stage II patients. Our findings indicated that patients with a worse cT category and involvement of the medial canthus or equatorial region tended to require orbital exenteration as the local treatment. Previous studies have hinted that worse cT category was associated with difficulties in achieving negative surgical margins through eye-preserving surgery, thus necessitating orbital exenteration. , , Regarding tumor location, clinical observations have shown that tumors affecting the medial canthus were prone to further infiltration into the sinus cavity and cranial cavity, leading to poor outcomes with local resection. For patients with infiltration of the medial canthus and lacrimal papilla, orbital exenteration is performed to prevent further tumor spread to the various sinuses, thereby achieving the goal of radical cure. Moreover, tumors involving the equator of the orbit are challenging to resect locally, making complete removal difficult and increasing the risk of recurrence. , This may explain why patients with involvement of the medial canthus or equatorial region ultimately opted for an orbital exenteration procedure. We then investigated the risk factors that might lead to poor outcomes in patients receiving eye-preserving treatment. Previous studies have suggested that independent risk factors for nodal metastasis include a diffuse growth pattern, orbital invasion at initial presentation, perineural invasion, and a high Ki-67 percentage. Also, a diffuse pattern, pagetoid intraepithelial neoplasia, stage T3a, large tumor size, and nonlobular pattern all significantly impact the likelihood of local recurrence, metastasis, or tumor-related mortality. , But, these studies have primarily focused on the pathological factors associated with poor prognosis in SeC and have not conducted further stratification of patients according to the surgical plan. – In this study, we conducted a Cox proportional hazards regression analysis on patients treated with eye-sparing therapy. Our analysis revealed that age, sex, side, and involvement of the medial canthus and lacrimal gland were not significantly correlated with prognosis. 
Our study indicated that cT category was significantly linked to a worse MFS, nodal MFS, and PFS in patients undergoing eye-preserving treatment, aligning with prior research. , , Additionally, our research provided additional insight into the prognostic implications of tumor involvement sites. Specifically, the involvement of the equatorial region of the orbit was independently associated with an increased risk of metastasis and worse PFS. To our knowledge, previous literature has not extensively addressed the relationship between tumor location and prognosis in eye-preserving patients. In summary, patients with unfavorable cT category or equatorial region involvement were supposed to exercise greater caution when selecting surgical treatment methods and should receive comprehensive adjuvant therapy to improve their prognosis. Given the limited reports on the prognosis of stages III and IV SeC, we explored comprehensive treatment options and prognosis for this subset of patients. Adjuvant therapies included radiotherapy, chemotherapy, immunotherapy, VEGF-targeted therapy, and combination therapies. For stage III patients, the most common metastatic sites were the parotid lymph node, cervical lymph node, submental lymph node, and lung. Meanwhile, diseases staged IV typically metastasized to parotid lymph node, lung, liver, and cervical lymph node, consistent with former studies. , , In terms of DSS, patients with stage IV disease exhibited a shorter mean time from diagnosis to tumor-specific death compared to those with stage III disease. For stage III patients who experienced distant metastases, the average duration from the onset of distant metastasis to death was 12.5 months. In contrast, the mean time from diagnosis to tumor-specific death for stage IV patients was 49.0 months. The shorter time from distant metastasis to disease-specific death in stage III patients, compared to stage IV patients, may be attributed to the limited number of stage III patients who received postoperative adjuvant therapy (one out of four). To our knowledge, this is the first study to conduct the stratified analysis and comparison based on different stages. It comprehensively analyzed the prognosis of stages II to IV SeC, providing new insights into treatment methods for these patients. Notably, we employed the PSM method to balance baseline characteristics between the eye-preserving and orbital exenteration groups, making the comparison more akin to a randomized controlled trial. By matching patients with similar propensity scores, PSM minimizes confounding factors such as age, sex, disease stage, and other baseline characteristics, ensuring that intergroup differences are not biased. This approach enhances the validity of causal inferences and strengthens the reliability of prognostic comparisons between the two groups. There are some limitations in this study. First, the sample size was relatively small, which may have resulted in larger confidence intervals when performing subgroup analyses. Second, PSM may overlook certain confounding factors and data, potentially failing to fully eliminate confounding bias. Additionally, the single-center nature of the study primarily involved patients from the East China region, introducing selection bias. Finally, the retrospective design allowed us to minimize bias to the extent possible but could not eliminate asymmetry in data distribution. Future prospective studies are necessary to address these limitations. 
In conclusion, our findings indicate that, for stage II patients, the prognosis of eye-sparing treatment was not inferior to that of orbital exenteration. However, patients with worse cT category or involvement of the equatorial region or medial canthus tended to choose orbital exenteration as a surgical option. Among patients undergoing eye-sparing treatment, worse cT stage and equatorial involvement were independent risk factors associated with poorer MFS and PFS. This research explored the treatment methods and prognosis of patients with SeC based on a staging system and further aids clinicians in making treatment decisions and improving patient outcomes. Multicenter studies are necessary to validate the findings and investigate the role of systemic therapies for stages IIA to IV SeC.
Minimal-access video-assisted retroperitoneal and/or transperitoneal debridement (VARTD) in the management of infected walled-off pancreatic necrosis with deep extension: initial experience from a prospective single-arm study
08e59f81-b692-43d1-8d9e-f4083073acc1
9909852
Debridement[mh]
Acute pancreatitis is one of the leading causes of gastrointestinal-related admission to hospital . Approximately 10–20% of patients develop necrotizing pancreatitis (NP), which is associated with high mortality rates of 20–40% [ – ]. In NP, necrosis of the pancreatic parenchyma and/or peripancreatic tissues is categorized into two conditions according to the disease course, demarcated at 4 weeks after NP onset: acute necrotic collections (ANCs) and walled-off necrosis (WON) [ , – ]. The latter entails a prolonged and complicated clinical course [ – ]. In particular, when infection occurs in the necrotic bed (i.e., infected WON, hereafter called iWON), it is strongly recommended that invasive interventions be used for drainage, debridement, or necrosectomy of the necrotic collection [ , , , – ]. The invasive approaches for managing WON have evolved over the past decade [ – ]. Historically, open surgical debridement/necrosectomy was the mainstay of therapy . However, such an open approach is associated with an increased rate of the composite endpoint of death or severe complications [ , – ]. At present, a "step-up" approach is advocated and favored over the open surgical approach to combat iWON [ – , , , – ]. This sequentially applies percutaneous catheter drainage (PCD), alone or in combination with other minimal-access interventions, including endoscopic transluminal debridement/necrosectomy (ETD/ETN), video-assisted retroperitoneal debridement (VARD), and sinus tract endoscopy (STE) [ – , , , , – ]. Nevertheless, iWON with deep extension (iWONde, i.e., necrosis diffusely distributed throughout the abdomen) still represents a challenging scenario for these minimal-access approaches . In this setting, debridement of iWONde often requires repeated interventions (percutaneous and/or endoscopic) or even additional open necrosectomy, because it is difficult to achieve sufficient evacuation of the large, deeply extending necrotic collections [ , , ]. For example, endoscopic step-up approaches can debride necrosis located along the peri-gastric or duodenal regions; however, they fail to access the necrosum when the necrosis extends into areas that are distant from the stomach . Likewise, necrosis extending to the right of the mesenteric vessels is considered refractory to the VARD approach . Thus, even in the current era, some NP patients who develop iWONde still have to undergo open necrosectomy, which is deemed the best alternative in this situation despite its high associated mortality [ – , , , – ]. In this study, we attempted to address the dilemma that no existing stand-alone minimal-access approach is suited to managing infected necrosum with deep extension after failure of the "step-up" approach. Here, we introduce a minimal-access video-assisted retroperitoneal and/or transperitoneal debridement (hereafter called VARTD) that uses multiple mini-incisions to provide a practicable avenue for clearing as much of the large avascular necrosis as possible, combined with continuous postoperative lavage flowing from the upper cavity toward the lower zone, which may facilitate irrigation and drainage of residual necrotic debris. The aim of this prospective single-arm study was to assess the effectiveness and safety of the VARTD for iWONde.
Design, setting and participants Patients admitted in our high-volume pancreas center with a diagnosis of necrotizing pancreatitis were screened for enrollment (Fig. ). The inclusion criteria were (1) a diagnosis of WON was made by contrast-enhanced computed tomography (CECT); (2) infection of necrotic collections was laboratory-confirmed (i.e., a positive culture of the necrotic collections) or clinically suspected (e.g., gas configuration in necrotic collections on the CECT imaging, persistence of sepsis-associated clinical signs, or progressive deterioration of clinical conditions), without evidence of other causes of infection ; (3) WON with deep extension, and the necrosis collection at least extends to the right of the mesenteric vessels. Exclusion criteria consisted of inability to obtain informed consent (i.e., refused participation or further treatment), history of previous surgical or endoscopic drainage/necrosectomy, pancreatitis following trauma or surgery, chronic pancreatitis, or pregnancy . The enrolled patients were followed up every 3 months via telephone or conventional outpatient clinic appointments up to 6 months after discharge. All the authors had access to the study data and reviewed and approved the manuscript. This single-center prospective single-arm trial was conducted at Daping Hospital, Army Medical University, China. This trial is registered with the Chinese Clinical Trial Registry, number ChiCTR1800016950. Ethical approval was obtained from the Ethics Board of Daping Hospital, Army Medical University, China (reference number: 2018-17). Signed informed consent was obtained from all participants or their legal representatives before enrollment. Technique of the VARTD approach To ensure that all the participants undergo a more standardized and uniform VARTD approach to all the procedures and avoid biases caused by differences in surgical technical skill, allowing for evidence-based recommendations for its future use, the VARTD was performed by a team consisting of two experienced pancreatic surgeons (Y Tang and H Liu). After induction of general anesthesia, the patient was placed in the left/right lateral decubitus position. The decision about where to perform the skin incision depends on prior confirmation of the location of the necrosis on the CECT image, allowing the closest access route to the necrotic collection. First, a longitudinal/oblique mini-incision (approximately 3–5 cm) was made in the midaxillary line between the costal margin and the iliac crest (Fig. A, B). The prior PCD tubes serve as tracks, and exploratory puncture using 22-gauge needle was performed as an adjunct to determine an avascular access to the necrosis cavity. Once the wall of the necrosis cavity was opened by an electric cautery, a 10-mm, 30º camera, laparoscope was inserted into the necrotic cavity (Fig. A, C). Under visualization, fluid necrotic component was irrigated and aspirated with an 8-mm blunt suction cannula, and semi-solid necrotic mass was removed using a sponge holding forceps (Fig. A, D, E). Further, the necrotic material was irrigated with a solution of 3% hydrogen peroxide followed by 0.9% saline solution and suction. Two 24 Fr single-lumen silicone tubes were then placed into the necrosis cavity after achieving adequate hemostasis, which were interlaced with each other and positioned alongside the pancreatic tail and fossa iliaca, wherein one serves as an inflow port and the other as an outflow port for postoperative lavage (Fig. F, G). 
The two tubes were brought straight out through the skin incisions. The necrosis cavity was closed with 3–0 Prolene running sutures, and the incision was sutured in layers. Subsequently, the patient was placed in the supine position, and a transverse upper midline incision or left rectus incision (approximately 6–8 cm) was made in the epigastric region. The gastrocolic ligament was divided near the greater curvature in an avascular plane to enter the lesser sac and reach the centrally located necrosis. The centrally located necrosis was debrided using the same technique. Two 24 Fr single-lumen silicone tubes were placed crosswise through the head and tail of the pancreas on the necrotic bed for postoperative lavage (Fig. F, G). The two tubes were brought out through the skin next to the epigastric incision. The centrally located and flank necrotic cavities were linked up with each other by blunt dissection with the surgeon's fingers during the operation. A planned feeding jejunostomy was carried out in patients who had severe gastrointestinal complaints, such as vomiting and bloating after meals. Continuous postoperative lavage with 0.9% saline solution (1–3 L per day) via the inflow and outflow tubes should be started early, within 24–48 h post-operation. Clinical outcomes To assess the efficacy of the combined VARTD and continuous postoperative lavage in the treatment of iWONde, we prespecified the primary efficacy endpoint as clinical improvement up to day 28 after the VARTD, which was defined as a reduction of 75% or greater in the size of the necrotic collection (in any axis) on CT and clinical resolution of sepsis or organ dysfunction within the first 4 postoperative weeks. Baseline CECT-derived parameters were measured before the VARTD. Repeat CECT scans were performed routinely every 7–14 days, when less than 50 mL of fluid drained per day, or in case of clinical suspicion of enteral/pancreatic fistula or intra-abdominal bleeding. The secondary efficacy endpoint was reintervention for additional debridement/necrosectomy. The primary safety endpoint was a composite of major complications comprising enterocutaneous or pancreatic fistula, visceral perforation, and intra-abdominal hemorrhage requiring intervention; new-onset organ failure; in-hospital death; and death within 6 months after discharge. The secondary safety endpoints included the individual primary endpoint components, biliary strictures, incisional hernia, wound infections, pancreatic endocrine and exocrine insufficiency, and intensive care unit (ICU) and hospital length of stay after the VARTD. Definitions of the safety endpoints were consistent with previous reports . Statistical analysis The sample size of this study was estimated using software PASS version 15 (NCSS, LLC, Kaysville, Utah, USA). Based on published data, incidence rates of major complications/death are 40–69.5% for minimal-access surgical management in patients with necrotizing pancreatitis [ , – ]. On the assumption that the safety of the VARTD approach is comparable to that of the minimal-access approaches reported in previous studies, with a one-sided 95% confidence level and 85% statistical power, an estimated sample size of 19 was determined. A total of 21 eligible participants were therefore planned, assuming a 10% dropout rate. Since this was an exploratory study, we used descriptive statistics to summarize the findings. All analyses followed the intention-to-treat principle.
Continuous variables were described using mean ± standard deviation (SD), median and interquartile range (IQR), and range. Categorical variables were reported as numbers and proportions. The statistical analyses were carried out using SPSS software version 20 (IBM Inc., Chicago, Illinois, USA).
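The sample-size calculation above was performed in PASS, and the exact null and alternative proportions it used are not reported in this section. As a rough, hedged illustration of the underlying arithmetic, the sketch below applies the standard normal-approximation formula for a one-sided, one-sample test of a proportion at 85% power; the p0/p1 values are placeholders drawn from the quoted 40–69.5% range, not the study's actual inputs.

```python
# Normal-approximation sample size for a one-sided, one-sample proportion test.
# p0 and p1 are illustrative placeholders; they are not the values entered into PASS.
import math
from scipy.stats import norm

def one_sample_proportion_n(p0: float, p1: float, alpha: float = 0.05, power: float = 0.85) -> int:
    """Sample size for H0: p = p0 versus the one-sided alternative H1: p = p1."""
    z_a = norm.ppf(1 - alpha)      # one-sided critical value
    z_b = norm.ppf(power)
    n = (z_a * math.sqrt(p0 * (1 - p0)) + z_b * math.sqrt(p1 * (1 - p1))) ** 2 / (p1 - p0) ** 2
    return math.ceil(n)

n = one_sample_proportion_n(p0=0.695, p1=0.40)
print("estimated n:", n, "| inflated by ~10% for anticipated dropout:", math.ceil(n * 1.1))
```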
On the basis of the assumption that the safety of the VARTD approach is comparable to that of the minimal-access approaches reported in previous studies, at one-sided 95% confidence interval and 85% statistical power of the study, an estimated sample size of 19 was determined. A total of 21 eligible participants are, therefore, planned, assuming a 10% dropout rate in the study. Since this was an exploratory study, we used descriptive statistics to summarize the findings. All analyses were followed an intention-to-treat principle. Continuous variables were described using mean ± standard deviation (SD), median and interquartile range (IQR), and range. Categorical variables were reported as numbers and proportions. The statistical analyses were carried out using SPSS software version 20 (IBM Inc., Chicago, Illinois, USA). Demographics and baseline characteristics of patients Between July 18, 2018, and November 12, 2020, a total of 95 NP patients admitted to our high-volume pancreatic center were screened, 66 of whom developed WON; of these, except for 1 patient who was eligible but declined participation, 21 iWONde patients (mean age 42.9 years [SD, 11.7]; 10 [48%] women) were finally enrolled and underwent the VARTD (Fig. ). The demographics and baseline characteristics of the study participants are presented in Table (individual participant data are provided in Additional file : Table S1). The majority of patients (17, 81%) had a modified CT severity index (MCTSI) score higher than 8. More than half of the participants (13, 62%) were reported to have areas of non-enhancement of the pancreatic parenchyma > 50% on CECT scan. Most participants (17, 81%) reported at least one ICU stay before receiving the VARTD. The sizes and margins of the necrotic collection are shown in Table and Additional file : Table S2. All patients had necrotic collections extending to lower abdominal regions, and the majority of (17, 81%) of the participants had extensive necrosis reaching the pelvis. Mean (SD) size of necrotic collection-anteroposterior (AP) axis was 7.9 (2.6) cm; mean (SD) size of necrotic collection-transverse axis was 13.8 (5.3) cm. Percutaneous catheters were left in situ in 19 (90%) patients prior to the VARTD approach. Detailed intraoperative information appears in Additional file : Table S3. Efficacy outcomes The primary efficacy endpoint was achieved by most of the iWONde participants (14/21, 67%) who enrolled in the present study and underwent the VARTD approach (Table and Additional file : Table S4). None of the study participants required repeated interventions for additional drainage or debridement. Taken together, these satisfactory efficacy outcomes in the present study suggest that the combined VARTD technique and continuous postoperative lavage are reasonably sufficient for evacuating extensive WON and achieving good clinical results. Safety outcomes Results for the primary and secondary safety endpoints are summarized in Table . The primary composite safety endpoint occurred in six participants (29%). The composite rate of reoperation was 24%(5/21), but none for debridement. The most common major complication was enterocutaneous fistula which were six cases (29%), but only three of whom were reoperated on for an ileostomy while one case was successfully treated with endoscopic clip closure, and the rest did not require intervention (Additional file : Table S5). 
The other major complications were as follows: three patients (14%) suffered intra-abdominal hemorrhage, two of whom underwent reoperation for hemostasis; one case (5%) experienced a gallbladder-abscess cavity fistula requiring a reoperation for fistula excision and repair. In this study, we observed no visceral perforation event. There was one in-hospital death (5%) in a patient who experienced repeated intra-abdominal bleeding. One case (5%) developed new-onset multiple organ failure involving the heart and kidney. One case (5%) developed pulmonary embolism (unrelated to the treatment) and expired three months after discharge. The secondary safety endpoint of new-onset diabetes occurred in three patients (14%); three patients (14%) developed pancreatic insufficiency; biliary stricture occurred in two cases (10%). All participants developed wound infection, mainly occurring in the flank incisions at the “port-site” of the postoperative lavage tubes. Only five patients (24%) were transferred to the ICU after surgery; of these, one case (5%) was attributed to new-onset cardiac and renal failure, and the others failed early postoperative extubation and required transient critical care. Median length of postoperative ICU stay was 3 days (IQR 1–4).
Currently, a step-up approach applying PCD and other minimal-access interventions has been proven to have more favorable outcomes for the management of WON [ , , , , – ]. However, according to the present expert consensus and updated guidelines, open surgery is still an appropriate option for infected WON with deep extension [ , , , , , – ]. In this study, we show that a minimal-access approach followed by continuous postoperative lavage, as an optimization method, could achieve the goal, as much as possible, of debridement of the extensive infected necrotic collection without additional interventions and with reasonably low incidence rates of major complications/death. Although several therapeutic schemes have emerged and significant progress towards reducing mortality and the risk of medical complications has been made in the treatment of NP during the last decade, it is clear that sufficient drainage and/or debridement remains the most important component in the management of WON, a delayed but life-threatening NP complication [ – , , ]. Importantly, there is a major limitation regarding the use of one stand-alone minimally invasive technique within the step-up approach, regardless of ETD/ETN, STE or VARD, in the treatment of infected WON with deep extension that has spread throughout the abdomen. The main reason is that the access route for each intervention type fails to reach necrotic collections located not only in the pancreatic parenchymal area but also in multiple peripheral zones, including the retromesenteric plane and/or either paracolic gutter, leading to insufficient evacuation of semi-solid necrotic debris. The VARTD technique allows for good efficacy while evacuating necrotic collections located both centrally and diffusely throughout the abdomen.
The continuous postoperative lavage also provides advantage in constant evacuations of necrotic debris, inflammatory exudate, vasoactive and toxic products, active enzymes, and bacteria and the toxins thereof . In this study, large-bore drains were crosswire placed in the necrotic cavity as well as alongside the lateral retroperitoneal access routes so as to easily flushed out the remaining necrotic debris with an elevation effect from high to low position. Most participants receiving the VARTD in our study showed effective evacuation of necrotic collections within 4 weeks postoperatively. Notably, none of the participants required additional interventions to remove the rest of the necrotic debris. It should be noted that the NP patients participated in the present study had a larger necrosis burden and a more serious condition when compared with those of the participants in other previous studies. Obviously, the rate of reinterventions after the VARTD was much lower than that of the use of an endoscopic or a minimal-access surgical approach alone [ , , , , ]. The reintervention rates following the VARD or ETD/ETN reported in a previous study were 37.5% and 44.1%, respectively . Furthermore, some patients undergoing the VARD or ETD/ETN even required more than three necrosectomy or endoscopic drainage procedures . However, there is no denying the fact that combination of endoscopic approaches via multiple transmural sites (multigate technique) and PCD or VARD have a potential to be another good choice for the patients who had developed iWONde [ – , , ]. Whether the VARTD is superior to the combined endoscopic transgastric drainage/necrosectomy and VARD approaches cannot be clearly concluded, future research may come to give a clear answer. As well, comparison of the cost efficiency between the VARTD and other approaches is worthy of consideration in future prospective studies. Results reported here also indicate an acceptable safety profile of the VARTD approach. The rates of postoperative complication-morbidity and mortality of the VARTD were substantially lower when compared with those of open surgical debridement as previously reported [ , , , ]. In the present study, only one participant (5%) developed new-onset multiple organ failure. Such incident rate is much lower than that of open surgery (approximately 40%) . More importantly, the mortality rate (10%) attributable to the VARTD technique was comparable to that in Bang et al. (endoscopic 8.8% or VARD 6.3%) and van Brunschot et al. (endoscopic 18% or VARD 13%) as well . Furthermore, postoperative ICU length of stay for the study participant receiving VARTD (median, 3 days) was considerably less than that for patients who had undergone open surgery (19 days) . In addition, of the 21 participants enrolled in this study, only 5 patients (24%) required reoperation for addressing the postoperative complications such as enteral/pancreatic fistula and intra-abdominal bleeding. Thus, the VARTD offers a reasonably safe strategy for evacuation of the iWONde. Percutaneous catheter drainage or endoscopically transluminal drainage for necrotic collection is preferably postponed, usually > 4 weeks after the disease onset, waiting until necrosis has been encapsulated . 
Furthermore, though clinical condition is sometimes changed rapidly owing to pathophysiological events initiated by infection at early phase of necrotizing pancreatitis, a recent POINTER trial conducted by Dutch Pancreatitis Study Group found that immediate catheter drainage did not provide more benefits for the NP patients . In the present study, we performed the VARTD while infected WON was developed spreading over the abdomen. However, we were wondering whether early application of the VARTD prior to the current standard “step-up” approach in selected NP patients in particular circumstances could prevent further clinical worsening. It is, therefore, worthwhile to consider the timing for performance of the VARTD in future research. This was a proof-of-concept study and has some limitations. The small sample size and lack of the risk factor-matched control groups do not allow us to make conclusions about the real safety of the VARTD. Such may also lead to an overestimation of effectiveness of this technique in the face of improving patient outcomes, thus its efficacy should be interpreted in mind. Furthermore, a potential discrepancy between the participants initially treated at primary hospitals and at tertiary referral centers regarding patterns of clinical management prior to receiving the VARTD was not resolved, advertising the potential for an indispensable selection bias. In addition, there remain limited knowledge regarding infected WON with deep extension, and there is no consensus on standard definition for its clinical improvement, thus our a priori definition may result in design bias. This may lead to a reduction in the reintervention rate after the VARTD as compared with other minimal-access techniques as reported previously. In this study, the VARTD was performed as safely as the currently preferred minimal-access surgical approaches for the treatment of infected WON; the VARTD combined with continuous postoperative lavage showed a good efficacy for evacuating necrotic debris while avoiding additional interventions; thus, this approach may be an option for selected patients especially the ones developed the iWONde after failure of the “step-up” approach. To test these findings further, a large-scale, randomized trial involving multiple institutions and comparing the effects of the VARTD with other minimal-access techniques on outcomes of patients who had developed iWONde is warranted. Additional file 1 . The following are available as supplementary data in the Additional files: Table S1 Individual participant data for demographics and baseline characteristics. Table S2 Measurements of CT-derived variables of necrosis at an individual level. Table S3 Intraoperative data. Table S4 Efficacy endpoints at an individual level. Table S5 Safety endpoints at an individual level.
Label free quantitative proteomic analysis reveals the physiological and biochemical responses of
7994b517-f749-4e78-af35-740e33e95707
11842708
Biochemistry[mh]
The intensive use of chemical pesticides is one of the major causes of biodiversity loss. It has also contributed to the development of many pesticide-resistant weed species worldwide . In fact, 211 weed species have been recently identified as herbicide resistant . Globally, there are 404 herbicide-resistant weed species (species × site of action). Weeds resistant to acetolactate synthase (ALS) inhibitors make up approximately one-third of all cases (133 out of 404) and are particularly problematic for rice and other cereals . Unlike chemical herbicides, which have a well-defined single site of action, bioherbicides based on allelochemical molecules stand out because of their multisite actions . For these reasons, the current agricultural system needs to change its practices by not only reducing the use of chemical herbicides but also using more sustainable solutions such as bioherbicides. The latter are defined as natural products used to control weeds and can be based on natural metabolites produced by living organisms, including plants and microbes . Currently, agrochemical companies are becoming increasingly interested in ecofriendly products and are investing in the research and development of biopesticides . For more than three decades, the agricultural chemical sector has not introduced any new herbicides with novel sites of action, which has made farmers dependent on existing herbicides . Hence, it is crucial to develop a new generation of botanical herbicides with new modes of action. In this sense, essential oils (EOs) could be among the best candidates. EOs contain secondary metabolites produced by aromatic plants in response to biotic and abiotic stresses and provide a number of ecological advantages to plants. They contain many active compounds that are distinguished by multisite actions in plant cells, which could slow the resistance of weeds to weed killers. Another advantage of EOs is that they cause no constraint on the environment due to their high volatility and biodegradability. Moreover, EOs have shown promising herbicidal activity. Considering these factors, they constitute a good alternative to chemical herbicides – . The phytotoxic effect of EOs on plants has been widely reported for the last 20 years. Numerous studies have shown this effect through the inhibition of seed germination and seedling growth – . For example, it has been shown that Rosmarinus officinalis EO, at lower concentrations, slows down the seedling growth of Trifolium incarnatum , Silybum marianum , and Phalaris minor , but at 5 mM, it completely inhibits seed germination . This is similar to the finding , who reported that Thymbra capitata EO inhibited the germination and seedling growth of Erigeron canadensis L ., Sonchus oleraceus (L.) L., and Chenopodium album L . at 0.125 µL/ml. In addition, several studies have described the site(s) of action of EOs – . These studies have shown that EOs can target the plasma membrane, cell wall, mitochondrial respiration and photosynthesis system. In fact, they can disturb the physiology and metabolic functions of weeds and lead to cell death. Nevertheless, no study has investigated the effect of EOs on the protein expression of plants. Cinnamomum cassia has been traditionally used to treat gastritis and dyspepsia, blood circulation disturbances and inflammatory diseases , . 
Moreover, Cinnamomum cassia EO (CEO), like many EOs, has many medicinal and pharmacological properties, particularly antioxidant, neuroprotective, anticancer and antidiabetic properties. It has been described in the literature that CEO has fungicidal, bactericidal, insecticidal and herbicidal activities, as described recently, and could be a promising alternative to chemical pesticides. To develop a better understanding of the phytotoxic effects of CEO, a label-free proteomic approach was adopted in this study to obtain a global view of the proteome response to CEO. The obtained results provide insights into the complex mode of action of CEO on A. thaliana. Preparation of an herbicide solution based on cinnamon essential oil CEO was purchased from Vossen & Co. (Av. Van Volxem 264/C1, 1190 Bruxelles, Belgium). The technical data sheet obtained through GC‒MS analysis revealed that the major compound was trans-cinnamaldehyde. The essential oil was formulated in water as an oil-in-water emulsion with 1% Tween 20 from Sigma‒Aldrich, which was used as a surfactant. Moreover, the concentrations of the EO were selected based on preliminary tests and literature references. Postemergence test under greenhouse conditions A postemergence experiment was conducted to study the herbicidal effect of CEO on four-week-old A. thaliana under controlled conditions. The greenhouse was maintained at a natural photoperiod supplemented with artificial light if needed, with temperatures set at 20 ± 3 °C according to the sunlight. The relative humidity was maintained at 60 ± 3%. Seeds of A. thaliana were sown in 10 × 10 cm pots (one plant per pot). The plants were watered daily to maintain adequate soil moisture and promote uniform germination and growth. Once the plants reached the 2–3 leaf stage (after 4 weeks), two solutions were sprayed (4 mL) onto the leaves using a small trigger sprayer (100 mL): (1) a negative control containing 1% Tween 20 and (2) a formulated CEO. Four replicates were considered for each condition, with each replicate containing 5 plants. One hour after the plants were sprayed with CEO on the leaves, the plant material was collected. The second and third leaves were harvested, snap-frozen in liquid nitrogen, and stored at -80 °C. Three plants per treatment were kept to evaluate the phytotoxic effect over a 48-hour period. Additionally, after the CEO treatment, watering was kept consistent through the bottom of the pot (which has drainage holes) using a watering tray, so the plants were not subjected to water stress during the treatment. To assess the green coverage percentage during this evaluation, ImageJ software was used with the following equation: $$\text{Green coverage}\ (\%) = \frac{\text{green surface area of the plant}}{\text{total surface area of the plant}} \times 100 \quad (1)$$ In the ImageJ analysis, the total surface area of A. thaliana was calculated using a broader fixed saturation range of 30 to 110, which accounts for all visible leaf tissues. This range ensures that damaged or discolored leaves are also included in the measurement. For the green surface area of the plant, this parameter represents the total area of the green parts of the leaves. In the ImageJ analysis, it was calculated using a fixed saturation range of 50 to 110, which specifically isolates the actively photosynthetic tissue. This range ensures that only the green, functional leaf areas are included in the measurement.
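To make the calculation in Eq. (1) concrete, the following is a minimal sketch in R of how green coverage could be computed from a leaf photograph using only the saturation thresholds quoted above. In this study the measurements were made in ImageJ; the use of the png package, the helper name green_coverage, and thresholding on saturation alone are illustrative assumptions rather than the authors' exact macro:

```r
# Minimal sketch (not the ImageJ workflow used in the study): estimate green
# coverage (%) from a photograph, using the saturation thresholds given in the
# text (30-110 for the whole plant, 50-110 for green tissue, on a 0-255 scale).
library(png)  # assumption: photographs saved as RGB PNG; readPNG() returns an array in [0, 1]

green_coverage <- function(path) {
  img <- readPNG(path)
  r <- as.vector(img[, , 1]) * 255
  g <- as.vector(img[, , 2]) * 255
  b <- as.vector(img[, , 3]) * 255
  # rgb2hsv() returns saturation in [0, 1]; rescale to ImageJ's 0-255 convention
  sat <- rgb2hsv(r, g, b, maxColorValue = 255)["s", ] * 255
  total_plant <- sum(sat >= 30 & sat <= 110)  # all visible leaf tissue
  green_only  <- sum(sat >= 50 & sat <= 110)  # photosynthetically active tissue
  100 * green_only / total_plant              # Eq. (1)
}

# Example call (hypothetical file name):
# green_coverage("arabidopsis_leaf_24h.png")
```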
A reduction in green surface area is an indicator of the CEO effect on the plant’s ability to photosynthesize, reflecting damage to the chlorophyll-containing tissues. Protein extraction A. thaliana was chosen as a model plant because the full genome information was already available. For each sample, fresh matter from A. thaliana leaves was homogenized in a Potter homogenizer (Wheaton, IL, United States) in 800 mL of homogenization buffer (8 M urea, 100 mM TEAB (triethylammonium bicarbonate), pH 8.5 (HCl), 2 mM EDTA, 10 mM dithiothreitol (DTT), protease inhibitor mix composed of 1 mM phenylmethylsulfonyl fluoride (PMSF, Merck-Sigma-Aldrich, Darmstadt, Germany), 2 µg/mL each of leupeptin (Carl Roth, Karlsruhe, Germany), aprotinin (Carl Roth, Karlsruhe, Germany), antipain (Carl Roth, Karlsruhe, Germany), pepstatin (Carl Roth, Karlsruhe, Germany), and chymostatin (Merck-Sigma-Aldrich, Darmstadt, Germany), and 0.6% w/v polyvinylpolypyrrolidone (Polyclar ® AT, SERVA Electrophoresis GmbH, Heidelberg, Germany). The homogenate was centrifuged for 5 min at 9000 rpm at 4 °C, and the supernatant was then centrifuged again at 54,000 rpm (TLAA55, Optima-Beckman, Indianapolis, USA) for 30 min at 4 °C. To separate the soluble and membrane fractions. Each sample after ultracentrifugation therefore gives two fractions, a supernatant called soluble fraction and a pellet called membrane fraction. The protein concentration was determined according to the Bradford method (1976) using a Bio-Rad protein assay kit with bovine gamma globulin as a standard. For each sample, 20 µg was transferred to 0.5 mL polypropylene Protein LoBind Eppendorf tubes and precipitated via the chloroform-methanol method (Wessel and Flugge 1984). To solubilize the proteins, 20 µL of 100 mM TEAB, pH 8.5 (triethylammonium bicarbonate) containing 0.5% RapiGest surfactant (Waters, Milford, USA), was added. In-solution trypsin digestion The proteins were then reduced with 5 mM DTT (dithiothreitol) and alkylated with 15 mM iodoacetamide. The samples were diluted five times with 20 µL of 100 mM TEAB, pH 8.5. Proteolysis was performed with 1 µg of Sequencing Grade trypsin (Promega, Madison, USA) and was continued overnight at 37 °C. Each sample was dried under vacuum with an RVC 2–25 Martin Christ Concentrator (Martin Christ Instrument Inc., Osterode, Germany) and stored at -80 °C. 2.5 Peptide separation using nanoUPLC. Before peptide separation, the samples were dissolved in 20 µL of 0.1% (v/v) formic acid and 2% (v/v) acetonitrile (ACN). The peptide mixture was separated by reversed-phase chromatography on a NanoACQUITY UPLC MClass system (Waters) with MassLynx V4.1 (Waters) software. For the digestion of proteins, 200 ng was injected into an ACQUITY UPLC M-Class C18 column (5 μm, 180 μm × 20 mm, 100 A) and desalted under isocratic conditions at a flow rate of 15 µL/min in 99% formic acid and 1% (v/v) ACN buffer for 3 min. The peptide mixture was subjected to reversed-phase chromatography on a C18 column (100 Å, 1.8 mm, 75 μm × 150 mm) PepMap column (Waters) for 130 min at 35 °C at a flow rate of 300 nL/min using a two-part linear gradient from 1% (v/v) ACN, 0.1% formic acid to 35% (v/v) ACN, 0.1% formic acid and from 35% (v/v) ACN, 0.1% formic acid to 85% (v/v) ACN, 0.1% formic acid. The column was re-equilibrated under initial conditions after washing for 10 min with 85% (v/v) ACN and 0.1% formic acid at a flow rate of 300 nL/min. 
For online LC‒MS analysis, nanoUPLC was coupled to a mass spectrometer through a nanoelectrospray ionization (nanoESI) source emitter. LC-IMS (Ion mobility Separation)-QTOF-MS analysis (HDMSE) Ion mobility separation-high definition enhanced (IMS-HDMSE) analysis was performed on a SYNAPT G2-Si high-definition mass spectrometer (Waters) equipped with a NanoLockSpray dual electrospray ion source (Waters). Precut-fused silica PicoTip R Emitters with outer diameters of 360 mm, inner diameters of 20 mm, 10 µm tips, and lengths of 2.5” (Waters) were used for the nanoelectrosprays. Precut-fused silica TicoTip R Emitters with outer diameters of 360 mm, inner diameters of 20 mm, and lengths of 2.5” (Waters) were used for the lock mass solution. The eluent was sprayed at a spray voltage of 2.4 kV with a sampling cone voltage of 25 V and a source offset of 30 V. The source temperature was set to 80 °C. The HDMS E method in resolution mode was used to collect data from 15 min to 106 min after injection. This method acquires MS E in positive and resolution mode over the m/z range from 50 to 2000 with a scan time of 1 s and a collision energy ramp starting from ion mobility bin 20 (20 eV) to 110 (45 eV). The collision energy in the transfer cell for low-energy MS mode was set to 4 eV. For postacquisition lock mass correction of the data in the MS method, the doubly charged monoisotopic ion of [Glu )-fibrinopeptide B was used at 100 fmol/µL using the reference sprayer of the nanoESI source with a frequency of 30 s at 0.5 ml/min into the mass spectrometer. ESI-QTOF data processing HDMS E data were processed with Progenesis QI (Nonlinear DYNAMICS, Waters) software using the A. thaliana protein sequence database (UniProt 20220410, 16127 entries). Propionamide was used as the fixed cysteine modification, oxidation was used as the variable methionine modification, trypsin was used as the digestion enzyme, and one missed cleavage was allowed. The protein confidence score is obtained by adding up the scores of all the peptides involved in the identification of the protein, even if there are some that do not participate in its quantification. The individual score of each peptide is calculated by adding up the scores obtained for a number of parameters associated with the quality of the ions detected. The tolerance on the mass and the intensities of the different isotopes must be similar. The reproducibility of the retention times and ion mobility (if used), as well as the quality of the fragmentation, should be evaluated based on the number of experimental fragments whose masses match the theoretical masses expected for a peptide (Progenesis QI, Nonlinear DYNAMICS, Waters). In addition, the mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD056647 and 10.6019/PXD056647. Statistical analysis Four biological replicates were used for each sample. The nonconflicting peptide method was used for relative quantification which means that proteins are quantified using only peptides that are not also part of another protein hit and protein abundance in a run is calculated from the sum of all the unique normalised peptide ion abundances corresponding to that protein (Progenesis QI, Nonlinear DYNAMICS, Waters). Statistical analyses were performed using the R (version R-4.3.0) software . Protein abundances were log2-transformed and then normalized to the median. 
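For readers who wish to reproduce this preprocessing step, a minimal R sketch is given below. It assumes the Progenesis protein abundances have been exported as a proteins × runs matrix; the object name abund and the exact centring convention are assumptions, since the text states only that abundances were log2-transformed and normalized to the median:

```r
# Minimal sketch of the preprocessing described above: log2-transform raw
# protein abundances, then normalize each run (column) to the median.
normalize_log2_median <- function(abund) {
  log_ab <- log2(abund)                                  # proteins x runs matrix
  run_medians <- apply(log_ab, 2, median, na.rm = TRUE)
  # centre every run on the overall median so that runs become comparable
  sweep(log_ab, 2, run_medians - median(run_medians), FUN = "-")
}
```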
Exploratory Principal Component Analysis (PCA) was performed with missing data imputed by the regularized iterative PCA algorithm with the missMDA R package . Differential abundance analyses were performed with the R package limma to compare the EO versus the control samples in each separate fraction based on moderated t-statistics. The p values were adjusted with the false discovery rate (FDR). The resulting adjusted p values and log2-fold changes are represented in the volcano plots. All tests were two-tailed. Hierarchical clustering (Euclidean distance and Ward method) and the associated heatmaps were also generated based on the z scores of proteins with adjusted p values < 0.05. To determine the differentially abundant proteins, we filtered them based on 3 criteria: (1) absolute value of log2-fold change (logFC) > 1, (2) adjusted p value (adj.P. Val) < 0.05 and (3) minimum number of unique peptides = 3 (Fig. ; Table ). In addition, the identified differentially abundant proteins were annotated and grouped by function bins based on the MapMan ontology for A. thaliana downloaded from GoMapMan . They are represented in a heatmap with the average abundance per group of samples for each functional bin. This average abundance is calculated by taking the mean value of log normalized abundances where the grand mean has been added.
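The downstream statistics described in this subsection can be sketched in R roughly as follows. This is not the authors' script: the number of imputation components, the object names, and the stand-in 4-versus-4 design are assumptions, while the package calls (missMDA::imputePCA, prcomp, and limma's lmFit/eBayes/topTable with BH adjustment) are the standard ones for the steps named above:

```r
library(missMDA)  # imputation of missing abundances by regularized iterative PCA
library(limma)    # moderated t-statistics for differential abundance

# Stand-in data: a log2, median-normalized proteins x runs matrix for one fraction
# (4 control runs, 4 CEO-treated runs); replace with the real exported matrix.
log_ab <- matrix(rnorm(100 * 8), nrow = 100,
                 dimnames = list(paste0("prot", 1:100), paste0("run", 1:8)))
group  <- factor(rep(c("Control", "CEO"), each = 4), levels = c("Control", "CEO"))

## Exploratory PCA with missing values imputed by regularized iterative PCA
imp <- imputePCA(t(log_ab), ncp = 2)   # samples in rows; ncp = 2 is an assumption
pca <- prcomp(imp$completeObs)
summary(pca)                           # variance explained per dimension

## Differential abundance (CEO vs. control) with limma and FDR adjustment
design <- model.matrix(~ group)        # second coefficient = CEO - Control
fit    <- eBayes(lmFit(log_ab, design))
res    <- topTable(fit, coef = "groupCEO", number = Inf, adjust.method = "BH")

## Filtering criteria from the text: |logFC| > 1 and adjusted p < 0.05
## (the third criterion, >= 3 unique peptides, comes from the Progenesis export)
daps <- subset(res, abs(logFC) > 1 & adj.P.Val < 0.05)

## Heatmap input: z-scores of proteins with adjusted p < 0.05, clustered with
## Euclidean distance and Ward's method
sig <- rownames(res)[res$adj.P.Val < 0.05]
if (length(sig) >= 2) {
  z  <- t(scale(t(log_ab[sig, , drop = FALSE])))
  hc <- hclust(dist(z), method = "ward.D2")
}
```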
Overview of the proteome profile of leaves treated with CEO
A phenotypic experiment was conducted to assess the phytotoxic effects of CEO over a 48-hour period. As shown in Figs. and , herbicidal activity was observed on A. thaliana. Leaf wilting occurred just 1 h after treatment, with a 42.53% reduction in green coverage (Fig. C), followed by a noticeable decrease in green pigmentation after 6 h (Fig. D). By 48 h, both the leaves and stems were completely withered and discolored, reaching a 98.1% reduction in green coverage (Fig. F). Then, we used a label-free quantitative proteomics approach to investigate the herbicidal effect of CEO. In total, 3682 proteins were identified and quantified in A. thaliana leaves. All the sequences of the identified peptide fragments in the soluble and membrane fractions can be found in supplementary Table and Table , respectively. Compared with untreated leaves, 325 differentially accumulated proteins were found; among them, 145 were overaccumulated, while 180 were downregulated (Fig. ). The PCA score plot represented in Fig. clearly distinguishes the control leaves from the CEO-treated leaves of A. thaliana. Indeed, the first PCA dimension, representing the main axis of variation, separates the controls from the CEO-treated samples in both the soluble fraction (71.3%) and the membrane fraction (83.2%). The volcano plots illustrate and link the log fold changes and adjusted p values for each protein (Fig. A and B). They show that a high number of differentially accumulated proteins were identified after only 1 h of treatment with CEO. To further analyze the data, we performed heatmap clustering, which highlighted significant proteomic changes across various functional categories, as represented by the MapMan functional bins (Fig. A and B). The differentially accumulated proteins were classified into 31 categories for the membrane fraction and 25 categories for the soluble fraction.
It revealed significant proteome remodeling after CEO exposure, impacting an essential metabolic pathway such as the pentose phosphate cycle, glycolysis, photosynthesis, nitrogen metabolism, fermentation, and cell wall synthesis. These changes confirm the substantial damage caused by CEO. In addition, among the most affected differentially accumulated proteins, we identified the photosystem II subunit H2 light-harvesting complex protein, photosystem I subunit H2 protein, GPI-anchored adhesin-like protein, and nitrate reductase 2 with the highest fold changes, which reached 16.18, 15.53 8.23 and 7.96, respectively (Table ). Indeed, proteins involved in metabolic processes were also differentially accumulated (e.g., phospholipase D delta glyceraldehyde 3-phosphate dehydrogenase asparagine synthetase 2 alpha-glucan phosphorylase 2). All these modifications certainly impacted the accumulation of proteins involved in the response to oxidative stress as well as proteins involved in the transduction of cellular signals (e.g., dicarboxylate transporter 1, PLC-like phosphodiesterase family protein plastid-lipid associated protein PAP selenoprotein O). In this study, we found that 6% CEO caused wilting of leaves after only one hour of spraying, which confirmed its phytotoxic effect and highlights its contact herbicidal actions. This effect has been demonstrated for the first time by Tworkoski, 2002 36 , who showed the strong phytotoxic activity of CEO against Chenopodium album , Ambrosia artemisiifolia , and Sorghum halepense . He found that 2% of CEO caused rapid leaf injury and strong electrolyte leakage. In addition to its herbicidal activity, CEO has several other biological properties. In this context, the insecticidal activity of CEO against Sitophilus zeamais on maize was reported and the fungicidal activity of the same EO against Botrytis cinerea on pears after 4 days has also been described . All studies confirmed that cinnamaldehyde, the lead compound, is responsible for the toxic effect of CEO. Furthermore, the phytotoxic effect of phenylpropanoids, including cinnamaldehyde, on the leaves of A. thaliana could be explained by their interaction with membrane receptors, unlike monoterpenes, which disturb lipid organization . It is crucial to remember that the toxic properties of essential oils depend strongly on their chemical composition, which is affected by genetic variation, sampled plant tissues, growing conditions and extraction methods . In the case of herbicidal activity, another factor was the tested weed species. In fact, phytotoxic activity can be more effective on one plant species than on another. For instance, the foliar application of a Caraway EO emulsion had a greater impact on the biochemical parameters of barnyard grass than on those of maize . On this subject, the selective action of an EO toward one undesired plant species and not another is due to the mode(s) of action of its compounds, which tend to block one metabolic pathway in some plants and not others . In this paper, we studied the herbicidal effect of CEO on A. thaliana through protein expression. To our knowledge, this is the first time that label-free quantitative proteomic technology has been used to analyze the biochemical responses of plants after treatment with EOs. This advanced analytical method has been used to create cellular proteome maps and characterize interactions between plants and pathogens or defense reactions to biotic or abiotic stress . 
It has also been used to facilitate comparative and proteomic analyses of complex samples. Nevertheless, it will be necessary in the future to improve separation technology and bioinformatic analysis. Our proteomic analysis revealed that CEO induced dramatic changes in the leaves of A. thaliana after only one hour of exposure. Indeed, 325 proteins were differentially accumulated between the treated leaves and untreated leaves. A similar study was conducted to investigate the insecticidal effect of Mentha arvensis essential oil on the weevil of Sitophilus granarie . A total of 55 differentially accumulated proteins were detected. They showed that after 24 h of contact, the toxicity of this essential oil to insects had a notable impact on various biological processes, especially those related to the nervous and muscular systems. Due to their abundance of active compounds, essential oils offer a multitude of mechanisms of action with a low probability of developing resistant weed populations . This will be further discussed below. Among the 27 herbicide groups, 7 directly disturb the photosynthesis system of weeds by inhibiting key enzymes, especially 4-hydroxyphenylpyruvate dioxygenase (HPPD inhibitors). They can also bind to protein complexes present in the chloroplast thylakoid membrane and consequently completely stop the electron transport chain, as is the case for triazine. It has been described in the literature that photosynthesis is one of the most important biological processes in plant physiology, allowing the production of oxygen and energy in the form of sugar . Several studies have confirmed that photosynthesis is inhibited in the presence of allelochemicals, particularly EOs , , . A phenotypic experiment showed that by 48 hours, the stems and leaves became discolored and dried, confirming the desiccant effect caused by the disruption of photosynthetic mechanisms. This is further supported by our proteomic analysis, which revealed that just one hour was enough to completely destabilize photosynthetic activity in A. thaliana , as evidenced by a significant reduction in photosystem proteins in both the soluble and membrane fractions. On the other hand, proteomic analyses revealed additional differentially accumulated proteins that were not associated with the visual observations, such as nitrate reductase, which plays a key role in nitrogen assimilation in plants. This is illustrated in Fig. , which shows that 31 physiological processes in A.thaliana were disrupted. Among these processes, nitrogen metabolism, pentose phosphate pathway, along with fermentation, were significantly affected. These results could be directly in line with Ben kaab et al., 2020 48 , who state that plant extracts containing multiple molecules usually exhibit multisite action, which contrasts with synthetic herbicides that typically target a single site. It has also been shown that EO decreases water content and consequently acts as a desiccant herbicide , . This could be supported by our proteomic analysis, which showed overexpression of some proteins involved in the retention of water content in plants through the regulation of stomatal closure (phospholipase D delta protein and membrane protein AT1G32080.1). In addition, photosystem subunit 1 protein, photosystem 2 light-harvesting protein and photosynthetic electron transfer B protein are integral components of the four protein complexes located on the thylakoid membrane of the chloroplast which are strongly downregulated (Table ). 
They play an important role in the preservation of the electrochemical gradient required for the phosphorylation of ADP to ATP . The rubisco-activated protein was also downregulated. This protein is absolutely necessary for photosynthesis, particularly because it allows for the fixation of atmospheric CO 2 and its subsequent incorporation in the Calvin cycle for energy production in the form of sugar . The thylakoid lumen protein, which is also one of the top 40 DEPs, was downregulated. It maintains photosystem 2 under high light and contributes to the phosphorylation of ADP to ATP by pumping H + to the stroma . All these results are in agreement with those of Li et al., 2021 51, who confirmed that the phytotoxic effect of essential oils is related to the inhibition of A. thaliana photosynthesis. They revealed for the first time the possibility that the essential oil of Artemisia argyi may act as an HPPD inhibitor to block the photosynthesis chain in weed species. In fact, they analyzed the HPPD content by an immunosorbent assay (ELISA) kit and showed that at 4 µL/mL, the HPPD content significantly decreased by 31.24% in comparison to that in the control group. In addition, herbicides that specifically inhibit HPPD, such as mesotrione, effectively manage a broad spectrum of weed species . It is important to mention that the main compounds of Artemisia argyi essential oil are monoterpenes that can thus penetrate the cell and damage cellular organelles without affecting membrane permeability due to their small size . The plasma membrane serves as a solid barrier separating the cell from its surroundings and plays a vital role in the perception of external signals, facilitating exchanges between the cytoplasm and the cellular environment , . Thus, any alteration in the structure of the plant plasma membrane caused by bioactive compounds will disrupt its function and integrity and consequently disturb biochemical and physiological processes , . For these reasons, scientists believe that the plant plasma membrane is one of the potential cellular targets of essential oils (EOs). The authors also suggested first studying the interactions between phytochemical compounds and the plasma membrane to understand the mode of action of these compounds – . These compounds can interact with lipid membranes and can react as pro-oxidants by inducing lipid peroxidation . Molecular dynamic simulations revealed that cinnamaldehyde (CIN) molecules can penetrate only up to the polar head region of the model plasma membrane, where they can interact with membrane proteins, such as membrane receptors and ion channels . This finding is in line with the results of our study. In fact, CEO downregulated 264 protein membranes in A. thaliana leaves. These membrane proteins are present not only in the plasma membrane but also in various cellular organelles, including the thylakoids of chloroplasts, mitochondria, the endoplasmic reticulum, the Golgi apparatus, lysosomes and peroxisomes. This made it challenging to identify the specific proteins of the plasma membrane. As shown in Fig. , CEO affect the secondary metabolism and the signalization process. Importantly, all types of constraints on plants induce oxidative stress . In addition, allelochemical compounds can induce oxidative stress by generating reactive oxygen species (ROS). The latter are highly reactive, which can make them toxic in certain cases . 
They play an important signaling role in regulating essential processes such as growth, development, responses to biotic and abiotic environmental stimuli, defense against pathogens and stomatal behavior . ROS can react directly with biological molecules, such as DNA, proteins or lipids, generating mutations and damaging membranes, leading to cell and tissue damage and triggering programmed cell death (PCD) . The main type of ROS is the superoxide anion (O2−), which can be transformed into other harmful ROS, such as the hydroxyl radical. Excessive ROS production also causes oxidative damage to cellular proteins, lipids, and nucleic acids and activates death pathways in several cell types . To summarize the mechanism of action of CEO, phenotypic evidence demonstrates its rapid effect, consistent with its contact effect on the leaf cuticle. This effect is further confirmed by the downregulation of proteins involved in the biosynthesis of the cuticle, as shown by the proteomic analysis. Furthermore, the observed leaf discoloration and drying validate its desiccant properties. Proteomic analysis supports this observation, as overexpression of certain proteins involved in water retention was noted. This could be related to a decrease in membrane integrity, as shown by Ben Kaab et al. (2020) , leading to water leakage. It is also well known that essential oils contain small molecules that are able to interact easily with plant cell membranes, which could induce a pro-oxidant effect, commonly referred to as the "burndown effect." This was confirmed by researchers at the WSSA Annual Meeting, who affirmed that most plant-based bioherbicides produce burning effects. Consequently, we observed an overexpression of proteins involved in managing oxidative stress, resulting on the one hand from oxidative damage to membrane systems and, on the other hand, from the desiccant effect of CEO, which results from the loss of membrane integrity. This will likely change the expression of several proteins, particularly those involved in photosynthesis and fermentation, which are highly dependent on water. Concerning the label-free protein quantification method, it requires high reproducibility in sample preparation and handling , . Variations may occur between samples analyzed by LC/MS, even between technical replicates. Unlike labeling-based quantification methods, in which all samples are analyzed together, each sample must be analyzed separately. This therefore requires processing a large number of replicates to obtain statistically stable data. Normalizing a large number of replicates can ultimately reduce the number of proteins of interest . If some proteins are too abundant, less abundant proteins will be poorly identified or missed altogether. Currently, there is a significant demand for more research to develop natural products for agronomic application. Unfortunately, the authorization processes in EU states are time-consuming, complex and expensive and require safety documentation, such as ecotoxicological studies. Understanding the mode(s) of action is crucial for conducting these studies efficiently. Interestingly, our research confirmed that cinnamon essential oil (CEO) could be a promising botanical herbicide for controlling weed invasion, as confirmed by its high and rapid phytotoxicity. Notably, our proteomic approach showed, for the first time, that a high number of proteins can be differentially accumulated after only one hour of CEO treatment.
The results also showed that photosynthesis was strongly inhibited, as reflected by the reduced expression of photosystem proteins in thylakoid membranes. It is also important to mention that quantification by the label-free approach offers a greater dynamic range and broader protein coverage, but lower quantification accuracy and reproducibility . Finally, this study showed that CEO has a strong herbicidal effect, making it a suitable source of natural herbicides with a low probability of developing resistant weed populations. In future studies, other weed and crop species should be tested to better understand the herbicidal effects of CEO. Additionally, field trials should be conducted to evaluate this activity under uncontrolled conditions. Since CEO has contact herbicidal action, studying its effect on the cuticle and cell wall is crucial for determining its mode of action. Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2
Ethical considerations during Mpox Outbreak: a scoping review
16d0eedc-fe9b-494f-98b0-22c8869079e4
11265031
Psychiatry[mh]
In 1970, the Democratic Republic of the Congo reported the first documented case of mpox. Mpox was diagnosed in a nine-month-old child , marking the first recorded case of human infection in history. Following its initial identification, the virus spread to other regions of Africa, primarily within the tropical rainforest zones. The disease was reported in Cameroon, the Central African Republic, Nigeria, Gabon, Ivory Coast, and South Sudan . The virus has been known to be endemic in these regions for five decades . On July 23, 2022, the World Health Organization (WHO) declared a renewed outbreak of mpox, designating it a Public Health Emergency of International Concern (PHEIC) . By May 10, 2023, the International Health Regulations (IHR) Emergency Committee determined that the ongoing mpox outbreak no longer posed a PHEIC . Consequently, revised interim recommendations were issued to facilitate the transition towards a long-term strategy for controlling mpox . In May 2023, the Centers for Disease Control and Prevention (CDC) reported 87,314 confirmed mpox cases across 111 countries. Notably, over 90% of these cases emerged in regions not traditionally endemic for mpox, specifically Europe, Australia, and North America , indicating significant spread of the virus beyond its usual geographical boundaries . The causative agent of the outbreak, monkeypox virus (MPXV), is a double-stranded DNA virus belonging to the Orthopoxvirus genus, closely related to the smallpox virus. MPXV is capable of infecting both humans and certain animals . Named after monkeys, MPXV was first identified in 1958 in skin lesions of imported monkeys in a Danish laboratory . Human-to-human transmission of MPXV can occur through direct contact with infected skin lesions or mucous membranes, respiratory droplets, and the sharing of contaminated items such as food, bedding, and utensils . The ongoing mpox outbreak that started in 2022 has a wider geographic spread than previous outbreaks, with growing evidence indicating sexual contact as the predominant mode of transmission. Global spread can be attributed to international travel to traditionally endemic regions and participation in large mass gatherings linked to sexual activities . The rapid expansion of human-to-human transmission is indeed amplified within sexual networks, particularly among men who have sex with men (MSM) . Pregnant women have also been diagnosed with mpox during the recent outbreak . Human-to-human transmission can occur within households, and children in close contact with infected family members are at risk of contracting the virus. Healthcare professionals (HCPs) who care for sick patients, including those with mpox, are also at risk of contracting the virus if proper infection control protocols are not followed . Fever is typically the initial symptom of mpox, followed by the appearance of a rash after a few days, with concurrent or preceding lymphadenopathy . Emerging or re-emerging infectious diseases demand significant attention because of the complex ethical issues they raise. Outbreaks often challenge the balance between public health interests and the protection of fundamental human rights. Measures like monitoring, isolation, and quarantine may be necessary to control the spread of the disease but must be implemented with respect for individuals’ rights and dignity . In the 2022 mpox outbreak, according to associated reports, 87.3% of cases occurred among gay, bisexual, and other MSM, which may fuel stigma and reduce acceptance of this highly marginalized community .
This situation exhibits striking parallels with the human immunodeficiency virus (HIV) epidemic that profoundly affected the lesbian, gay, bisexual, transgender, and queer (LGBTQ) community in the late 1980s and early 1990s . This perspective highlights the dual detrimental effects of attributing the spread of mpox to a specific group: it not only perpetuates stigma against the LGBTQ community but also undermines the recognition of the broader risk to the entire population. As WHO Director-General Tedros Adhanom Ghebreyesus said, “The stigma and discrimination can be as dangerous as any virus and can fuel the outbreak” . Individuals with stigmatized identities encounter heightened vulnerability and discrimination, leading to reluctance to disclose symptoms or seek care . This serves as a barrier to effective prevention, treatment, and containment efforts during outbreaks of this nature . Moreover, HCPs face several delicate ethical dilemmas related to informed consent, patient autonomy, patient confidentiality rights, partner notification, and equity in healthcare . Preventive measures, clinical trials, and research are all subject to ethical considerations as well. Mandatory vaccination can undermine an individual’s autonomy, liberty, and personal benefit. All researchers and medical professionals must uphold their ongoing commitments to the values of beneficence, fairness, and respect for all people while conducting clinical trials and searching for new antiviral drugs to combat any infectious disease and its spread . Achieving a balance between the libertarian objectives of confidentiality and liberty of movement and the utilitarian goal of improving public health in situations involving contagious, fatal, or dangerous diseases presents a complex and challenging ethical question . There is currently no comprehensive literature review that summarizes the ethical concerns and stigma associated with mpox infection. Recognizing this gap, our study was dedicated to filling it by conducting a thorough review of published studies and existing reports. The primary goal was to provide a comprehensive overview of the ethical issues and stigma associated with the mpox outbreak, as well as an examination of associated misinformation. The outcomes of this review will provide insights to inform recommendations for future research, policy development, and ethical guidelines. This approach is designed to address identified gaps and promote ethical decision-making in the context of mpox outbreaks.
Methodology
This scoping review followed the framework proposed by Arksey and O’Malley and was further improved by the recommendations of Levac et al. . We also adhered to the PRISMA Extension for Scoping Reviews (PRISMA-ScR) developed in 2018 by Tricco et al. and updated in 2020 by Peters et al. (Supplementary 1 file). This study aimed to: (1) analyze the identified literature to categorize and describe the ethical issues that arose during the mpox outbreak, including patient care issues, public health measures, and societal perceptions; (2) investigate and categorize the various types of stigma associated with mpox infection as described in the literature; (3) investigate how societal attitudes, misinformation, and public perceptions contribute to the stigmatization of mpox patients; (4) examine the role of misinformation in shaping ethical considerations and perpetuating stigma during the mpox outbreak in particular; and (5) recognize the effects of false information on public health responses and individual experiences.
Database search
The search for relevant literature published in English was conducted by two authors (AG, RMG) using the following electronic databases: PubMed Central, PubMed Medline, Scopus, Web of Science, Ovid, and Google Scholar. The literature search commenced on February 15, 2023, focusing on papers published from May 6, 2022, onward; this date corresponds to the announcement of the first identified case of mpox. Relevant terms, synonyms and abbreviations were tailored for each database (Supplementary 2 file). The search strategy for PubMed was: ("Monkeypox virus"[MeSH Terms] OR "Monkeypox"[MeSH Terms] OR "Monkey Pox"[Text Word] OR "MPX"[Text Word] OR "monkeypox virus*"[Text Word] OR "monkeypoxvirus*"[Text Word] OR "monkey pox virus*"[Text Word]) AND ("Ethics"[MeSH Terms] OR "Morals"[MeSH Terms] OR "Social Stigma"[MeSH Terms] OR "Privacy"[MeSH Terms] OR "Confidentiality"[MeSH Terms] OR "stigma*"[Title/Abstract] OR "moral*"[Title/Abstract] OR "Secrecy"[Title/Abstract] OR "privileg*"[Title/Abstract] OR "confident*"[Title/Abstract] OR "priva*"[Title/Abstract] OR "ethic*"[Title/Abstract] OR "Egoism"[Title/Abstract] OR "metaethic*"[Title/Abstract]). In addition, reference list checking and citation tracking were conducted to identify further related articles. This involved scrutinizing the references of relevant studies, tracking citations, and exploring related articles for eligible publications. Moreover, a supplementary search was performed on gray literature sources (medRxiv and Research Square). We also performed a manual search by systematically reviewing articles pertinent to our research topics in key journals, such as The Lancet, the BMJ, BMC Tropical Medicine and Health, Bioethics, BMC Medical Ethics, and PLOS Neglected Tropical Diseases.
Study selection
All citations found were imported into an EndNote library, and duplicate citations were removed. The citations were then exported to an Excel file for a two-stage screening process: (a) initial title and abstract screening by two authors independently (A.G, H.A) and (b) full-text screening by another two independent authors (H.E, I.K). The inclusion criteria encompassed all research related to both mpox and ethical issues, published in English and appearing after the first reported case of mpox on May 6, 2022. The agreement between reviewers was 0.83. A third expert reviewer (RMG) resolved any conflicts. The Population, Concept, and Context (PCC) criteria proposed by the Joanna Briggs Institute were followed for our search strategy. Population: any population was included (no restriction on age, sex, race, or sexual orientation). Concept: this study encompassed all research about both mpox and ethical themes, written in English and published after the initial reported case of mpox in the United Kingdom on May 6, 2022. Context: all types of research papers were included (original articles, commentaries, brief reports, letters to the editor, opinion articles, short communications, and viewpoints). Eligible studies for data extraction: this criterion ensured that the selected studies provided comprehensive information on the research design, methods, and findings.
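As an aside, the PubMed strategy listed in the Database search subsection above could in principle be re-run programmatically through the NCBI E-utilities. The authors describe manual database searches, so the following Biopython sketch is purely illustrative; the contact email, the abbreviated query, and the result cap are placeholder assumptions rather than details from the review.

```python
# Illustrative only: re-running an abbreviated form of the reported PubMed strategy
# via the NCBI E-utilities. Requires Biopython; email and retmax are placeholders.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # hypothetical contact address (required by NCBI)

QUERY = (
    '("Monkeypox virus"[MeSH Terms] OR "Monkeypox"[MeSH Terms] OR "Monkey Pox"[Text Word] '
    'OR "MPX"[Text Word] OR "monkeypox virus*"[Text Word]) AND '
    '("Ethics"[MeSH Terms] OR "Social Stigma"[MeSH Terms] OR "stigma*"[Title/Abstract] '
    'OR "ethic*"[Title/Abstract])'
)  # abbreviated version of the full strategy quoted in the text

handle = Entrez.esearch(
    db="pubmed",
    term=QUERY,
    datetype="pdat",        # filter on publication date
    mindate="2022/05/06",   # first reported case of the 2022 outbreak
    maxdate="2023/02/15",   # date the literature search commenced
    retmax=500,
)
record = Entrez.read(handle)
handle.close()

print("Records found:", record["Count"])
print("First PMIDs:", record["IdList"][:10])
```

In practice, each database in the review was searched with its own tailored terms, so a script like this would only approximate the PubMed arm of the strategy.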
Charting the data
Four reviewers (A.G, H.E, H.A.M, A.G.E) independently retrieved essential data from the eligible articles using the prespecified data extraction form. The extracted data included participant characteristics (e.g., gender, sexual orientation), study characteristics (e.g., authors' last names, year of publication, country, objectives, and study design), and the ethical concerns or stigma related to mpox. The primary outcome of our study was the identification and synthesis of ethical themes pertaining to mpox, derived from the included records after reviewing the relevant studies concerned with the research question of interest. These themes encompassed a range of moral and ethical issues, including managing an infectious individual, misinformation, stigmatized terminology, stigmatized policies, the burden of discrimination within the community, and other pertinent themes observed across the reviewed records. Any disagreement was resolved by consensus or by the senior researcher (RMG). The expert panel was consulted as needed, particularly in situations where the context or specific terminologies could not be understood by the data extractors. Comprising individuals with specialized knowledge and expertise in the subject area (medical ethics, infectious diseases, and tropical health), the expert panel provided valuable insights and clarification to enhance the comprehensibility of the scoping review.
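The study selection process described above reports an inter-reviewer agreement of 0.83 without naming the statistic used. Cohen's kappa is a common choice for two reviewers making independent include/exclude decisions at the title-and-abstract stage; the sketch below uses entirely invented screening decisions simply to show how such a coefficient could be computed, and does not reproduce the review's data.

```python
# Hypothetical example of computing inter-reviewer agreement for title/abstract screening.
# The 0.83 reported in the review may or may not be Cohen's kappa; this is illustrative only.
from sklearn.metrics import cohen_kappa_score

# 1 = include for full-text screening, 0 = exclude (toy decisions for 12 citations)
reviewer_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0]
reviewer_b = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa: {kappa:.2f}")  # disagreements would then go to the third reviewer
```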
Search results
The search strategies used in the scoping review yielded a total of 454 articles. Among these, 354 articles were identified from the various databases, and an additional 100 articles were retrieved from Google Scholar. A total of 92 duplicate studies were excluded using the EndNote duplicate-finding function. The remaining 362 citations underwent screening based on their titles and abstracts. During this stage, a further 76 duplicates and 239 citations were excluded on the basis of their titles and abstracts, leaving 47 articles for full-text screening. The full-text screening was conducted on these 47 articles, resulting in the exclusion of 15 studies. The reasons for exclusion included irrelevant targeted dates (3 studies), irrelevant citations (11 studies), and one study written in Spanish. Finally, 32 studies were included in the scoping review for further analysis and synthesis (Figure ).
Study characteristics
A total of 32 studies were included in the scoping review (Fig. ); they were classified as follows: five letters to the editor [ – ], four commentaries [ – ], four editorials [ – ], three articles [ – ], three opinion articles [ – ], two brief reports , two short communications , two viewpoints , one article info , one clinical article , one correspondence , one mini-review article , one news item , one open letter , and one perspective article . Table illustrates the characteristics of the included studies. The following section discusses the ethical themes highlighted in these studies (Figure 2).
Burden of discrimination in the community
Eleven articles addressed the burden of discrimination related to mpox infection. Mungmunpuntipantip discussed the importance of tackling stigma related to mpox to effectively control disease transmission. Shukla et al. highlighted the need to address stigma and discrimination towards the LGBTQ community, particularly in developing countries like India. W. März et al. addressed the sociopolitical consequences of the mpox outbreak for gay, bisexual, and other MSM, as well as the broader lesbian, gay, bisexual, transgender, queer, and intersex (LGBTQI+) community, leading to discrimination and isolation of these marginalized groups. Dsouza et al. collected tweets on mpox stigma among the LGBTQ+ community and analyzed their sentiment and content. According to this analysis, the LGBTQ+ community faces stigma associated with mpox, which may discourage individuals from seeking treatment and may result in untreated infections. Aquino et al. mapped out the unintended centralization of marginalized groups by public health communications, advisories, and policies.
They also noted that targeted campaigns raise conceptual ambiguities and risk attaching a stigma to marginalized groups and to mpox itself. Yang et al. proposed a measure that draws on the three stages of the stigma development process and aims to prevent the emergence, progression, and dissemination of stigmatization related to mpox. Kenyon applied Spearman's correlation to assess the relation between the national incidence of mpox in European countries, the intensity of screening for sexually transmitted diseases (STDs), and a composite indicator of lesbian, gay, bisexual, transgender and intersex (LGBTI) rights (the Rainbow Index). The report highlighted stigmatizing attitudes towards homosexuality as the cause of the reduced utilization of screening tests for STDs and, therefore, of the low incidence of mpox reported from various Eastern European nations. Ng et al. analyzed the sentiment of Twitter posts about the mpox outbreak through unsupervised machine learning, which retrieved stigmatization of minority communities. März et al. presented their perspectives on the ethical challenges posed by mpox outbreaks within the LGBTQI+ community, highlighting health inequalities, heightened stress, and fear of further marginalization experienced by this community. Iglesias et al. investigated the social perspective of considering mpox a sexually transmitted virus. The authors emphasized the need for critical thinking for efficient communication; they also discussed social inequities and highlighted the value of social science. Happi et al. proposed a novel classification of mpox that is non-discriminatory and non-stigmatizing, aligned with best practices in naming infectious diseases in a way that minimizes unnecessary adverse impacts on countries, geographic regions, economies, and people, and that considers the evolution and spread of the virus.
Public awareness and stigma
Nine publications highlighted the awareness, lessons learned, and stigma associated with the mpox outbreak. Lee & Morling discussed the importance of public awareness campaigns, targeted vaccination strategies for high-risk populations, and robust surveillance systems in preventing stigma. This resonates with De Sousa et al., who insisted on the necessity of inclusive surveillance and health education strategies and of decoupling public health interventions from specific affected groups to prevent prejudice and stigma. They emphasized the need to raise public awareness, engage civil society, and improve cooperation between policymakers, the medical community, and social media platforms to prevent stigma and disseminate precise and authoritative information regarding mpox. Islam et al. addressed the crucial role of advocating for public awareness to reduce the global health burden. Mirroring previous outbreaks, Dzobo et al. highlighted the lessons learned from coronavirus disease 2019 (COVID-19) in implementing education, advocacy, and awareness strategies for reducing stigma and promoting coordinated efforts on a global scale in response to disease outbreaks. Finally, Gonsalves et al. compared the emerging mpox outbreak with HIV, as both share similarities in the global and domestic responses to these outbreaks, highlighting the lack of public awareness, the delay in responding to outbreaks in Africa, and stigmatizing attitudes. Chang et al. discussed that the lack of public awareness strongly promotes stigma, which can be eliminated through the widespread distribution of educational resources.
Ogunbajo conducted a community initiative to vaccinate Black sexual minority men (SMM) in Washington, D.C. against mpox, together with a survey assessing participants' demographics and health beliefs. The report highlighted that participants anticipated a high level of stigmatization of mpox patients, underscoring the urgent need for public education and awareness regarding mpox. Raheel et al. discussed the importance of awareness campaigns, such as the CDC's highly successful "Let's Stop HIV Together", that motivate individuals to adopt preventive measures and seek healthcare. Using a case-based discussion, Bergman et al. described stigma prevention strategies based on community awareness and nursing approaches that enhance awareness among healthcare providers and support patient education.
Policy and stigma
Six studies focused on policies and stigma. Chang et al. discussed that policies may encourage discrimination and that the implementation of a national action plan is necessary to support the response to stigma during infectious disease outbreaks. W. März et al. highlighted the urgent necessity of increasing policymakers' awareness of the sociopolitical consequences of the mpox outbreak for gay, bisexual, and other MSM, as well as the LGBTQI+ community, and accordingly introduced a policy recommendation to address the mpox outbreak within a comprehensive policy framework to advance LGBTQI+ health equality. De Sousa et al. emphasized the need to improve cooperation between policymakers, the medical community, and social media platforms to prevent stigma and disseminate precise and authoritative information regarding mpox. Ng et al. analyzed the sentiment of Twitter posts about the mpox outbreak through unsupervised machine learning, which retrieved a general lack of faith in public institutions. März et al. presented their perspectives on the ethical challenges posed by mpox outbreaks within the LGBTQI+ community, highlighting concerns regarding the neglect of the mpox outbreak by policymakers. Scheffer et al. wrote about their perspective on human rights-based approaches in epidemic responses. They advocated for policies and interventions guided by principles such as equity, inclusion of vulnerable populations, and active participation of affected communities in finding solutions.
Misinformation in shaping stigma
The association between misinformation and stigma was highlighted in six studies. Farahat et al. focused on misinformation on social media that impedes the ability of healthcare experts to communicate effectively. Ju et al. analyzed how the media (The Washington Post) handled both the COVID-19 and mpox outbreaks and its role in framing stigma within communities: after stigmatizing China as the origin of COVID-19, the coverage shifted to stigmatizing Africa for mpox and, moreover, indirectly labeled gay men as a special group more susceptible to mpox infection. Alsanafi et al. assessed current disease knowledge among Kuwaiti HCPs and evaluated their attitudes concerning virus emergence conspiracies. The article highlighted the lack of knowledge among HCPs regarding mpox infection, diagnosis, and management. Moreover, the false belief that infection is exclusive to gay men leads to discriminatory attitudes and stigmatization towards affected persons. Chang et al. discussed how critical it is for the media to avoid drawing incorrect conclusions from research on mpox in non-endemic areas.
Singla & Shen asserted that in the majority of countries social media are unregulated, and the accumulation of false information regarding various epidemics is widespread. When such deceptive and misleading information reaches the public and uninformed individuals, it can create havoc or a new kind of social stigma. Singla et al. published a comprehensive review of the existing literature on biased studies reporting mpox cases in the LGBTQ community and stated that, despite the small amount of data regarding patients' sexual orientation, the media exacerbates the existing stigma towards the community.
Psychological impact of stigma
Chang et al. discussed that affected individuals and families are vulnerable to internalized stigma, with anxiety, depression, and suicidal ideation, highlighting the importance of mental health support and raising awareness. Sah et al. urged the need to investigate how the stigma associated with mpox affects the infection's various differential diagnoses and health effects, particularly mental health, underscoring the impact of mpox on mental well-being. Infected individuals are more likely to experience mental health issues such as depression and anxiety disorders. März et al. presented their perspectives on the ethical challenges posed by mpox outbreaks within the LGBTQI+ community, highlighting the stress and fear of further marginalization experienced by this community. Bergman et al. discussed the different types of stigma experienced by mpox patients, including feelings of shame, self-blame, fear of judgment, and lack of social support, which can lead to depressive symptoms, psychological stress, isolation, and economic consequences.
Stigmatized language and terminology
Four studies focused on stigmatized language and terminology related to mpox. Islam et al. addressed the crucial role of avoiding stigmatized language in mpox communication to reduce the global health burden. Furthermore, there is stigmatization of individuals and communities associated with the name "monkeypox", with comments often labeling it a "gay disease" or "monkey disease". These stigmatizing associations were found to hinder the detection and treatment rates of the disease. In response to these concerns, and after consulting with experts, the WHO decided on November 28, 2022 to change the name from "monkeypox" to "mpox" .
They should acknowledge that mpox and all other contagious illnesses should be contained and treated with a commitment to unconditional empathy. Iglesias et al., investigated the healthcare consequences of considering mpox as a sexually transmitted virus. The authors emphasized the need for critical thinking for efficient communication. Vaccine-related stigma Mazzagatti et al. addressed the burden of stigma among the already criticized community of bisexuals. It underlines the detrimental effects on patient confidence and intention to take preventive measures, drawing comparisons to the historical stigmatization of persons living with HIV. Because of the concentration only on immunizing high-risk individuals, particularly MSM, “vaccine-related stigma” and restricted access to the vaccine for people who do not frequently visit sexual health clinics are emerging. The article advises getting rid of this stigma by making vaccination available to all sexually active bisexuals and identifying each person’s risk factors through interviews or questionnaires. Additionally, it emphasizes how crucial it is to safeguard private information obtained during vaccinations and offer shots outside of sexual health clinics. The article’s conclusion highlights the importance of timely and precise communication while avoiding ambiguous information that can feed stigma against the LGBTQ + community. Public anxiety Lee & Morling discussed the impact of public anxiety from unfamiliar emerging diseases, which contributes to germ-induced panic accompanied by the stigmatization of the condition and detrimental psychological consequences for both affected individuals and communities. Lack of safety Ng et al., analyzed the sentiment of the Twitter post towards the outbreak of mpox through unsupervised machine learning. This approach retrieved general concerns regarding safety, reflecting the public’s fear that the increasing number of mpox cases and the WHO declaration it a PHEIC resembles the early stages of the COVID-19 pandemic. Although mpox is not as transmissible as COVID-19 and a vaccine is available, the risk of cross-border transmission persists, particularly with increasing international travel and interconnectedness. Therefore, the author emphasized the importance of providing accurate and timely information on mpox. Only six studies were deemed suitable for data extraction, including three articles [ – ], two brief reports , and one short communication . Table These studies encompassed a total of 418,569 Twitter posts, 896 HCPs, 127,000 European MSM survey, 188 SMM in the United States of America (USA), and 71 online news reports [ – , – ]. The inclusion of these different types of publications allowed for a comprehensive exploration of the ethical issues related to the outbreak of mpox infection and provided diverse perspectives and insights into the topic at hand. Study design Of the eligible studies, two studies incorporated content analysis of Twitter posts , one study employed content analysis of The Washington Post’s Online News , one was a cross-sectional study for HCPs in Kuwait , one was an ecological analysis of European men who have sex with men internet survey in 40 countries , and one cross-sectional study of SMM in the USA . The main ethical issues The ethical issues related to human mpox have been observed at various levels, including country, institute, community, and individual. 
At the country level, countries with more stigmatizing attitudes towards homosexuality tend to have lower reported rates of screening for STDs and a lower incidence of mpox . At the institutional level, a news outlet (The Washington Post) was found to construct differential stigmas that indirectly label gay men as more likely to be infected with mpox, leading to increased stigma and discrimination towards them, to label African countries as the "typical source of mpox", and to frame the COVID-19 outbreak in China as a cause for alarm while regarding the mpox cases spreading in the USA as not a significant concern . At the community level, it has been observed that the LGBTQ+ community on Twitter has been affected in a way that leads its members to refrain from public health measures related to mpox . Content analysis of public Twitter posts has also revealed stigma towards LGBTQ and racial minority communities, a lack of faith in institutions and in governmental efforts to contain mpox, and misinformation framing the infection as a political conspiracy . At the individual level, certain observations have been made regarding ethical issues related to mpox. For instance, in Kuwait, there is a higher prevalence of conspiracy beliefs regarding emerging virus infections among certain groups: females, individuals with lower knowledge about mpox, and those who agreed or had no opinion regarding the exclusivity of mpox incidence among gay men were found to be more likely to embrace conspiracy beliefs . In the USA, particularly among bisexuals, a significant proportion of respondents (ranging from 13 to 31%) reported the belief that various people in their lives would judge them if they were to contract mpox. Additionally, 35% of respondents believed they would be blamed for their infection, and 51% believed that others would assume they were sexually promiscuous if they acquired mpox .
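The country-level association above comes from an ecological analysis (Kenyon) based on Spearman's rank correlation between national mpox incidence, STD-screening intensity, and the Rainbow Index of LGBTI rights. The country values in the sketch below are invented placeholders, included only to illustrate how such a rank correlation is computed; they are not data from the cited study.

```python
# Illustration of the kind of ecological rank-correlation analysis described above.
# All country values are hypothetical placeholders.
from scipy.stats import spearmanr

# Hypothetical per-country values for six countries
mpox_incidence_per_million = [42.0, 35.5, 18.2, 6.1, 2.3, 0.9]
std_screens_per_1000_msm = [310, 280, 190, 95, 40, 25]
rainbow_index_pct = [68, 61, 55, 30, 22, 16]

rho_screen, p_screen = spearmanr(mpox_incidence_per_million, std_screens_per_1000_msm)
rho_rights, p_rights = spearmanr(mpox_incidence_per_million, rainbow_index_pct)

print(f"Incidence vs. STD screening: rho={rho_screen:.2f}, p={p_screen:.3f}")
print(f"Incidence vs. Rainbow Index: rho={rho_rights:.2f}, p={p_rights:.3f}")
```

Such ecological correlations cannot establish causation; as the review notes, lower reported incidence may partly reflect reduced screening driven by stigma rather than a true absence of cases.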
Discrimination and stigma associated with any disease, including mpox, are never acceptable. They can have a serious impact on health outcomes and undermine outbreak response efforts by making people hesitant to come forward or seek care. This increases the risk of transmission, both within and beyond the most affected communities . This scoping review aimed to identify and outline the primary ethical challenges associated with the outbreak of mpox. The review included 32 studies. Of these, only six were suitable for data extraction, including three articles [ – ], two brief reports , and one short communication .
These studies covered various topics such as Twitter posts, HCPs, MSM surveys, and online news reports. The study designs varied among the eligible studies, including content analysis of Twitter posts, analysis of online news, cross-sectional studies, ecological analysis, and community-based interventions. These different approaches provided a comprehensive understanding of the ethical issues associated with the outbreak of mpox infection. The main findings of the study The insights into the mpox outbreak and the resulting stigma paint a complicated picture. Misinformation on social media emerges as a significant barrier to effective communication among healthcare experts, emphasizing the need for a more coordinated response. Policy recommendations, lessons learned from previous epidemics, and assessments of media articles all emphasize the importance of clear messaging, public education, and the use of empathetic language. Ethical issues emerge at multiple levels, necessitating a concerted effort to monitor social media, address discriminatory language, and recognize the impact on marginalized communities. The decision to rename the virus “mpox” rather than “monkeypox” reflects a strategic move to reduce stigma. Notably, themes of targeted testing, vaccination initiatives, and stigma reduction take center stage, particularly among the LGBTQI + community, emphasizing the need for a comprehensive and compassionate approach to navigating the challenges posed by the mpox outbreak. Misinformation and social media during an infectious outbreak Infectious disease epidemics are often accompanied by scientific uncertainty, social and institutional instability, and a general atmosphere of fear and distrust. The media plays a significant role in amplifying these reactions. Misinformation is the dissemination of inaccurate or occasionally false statements that contradict the scientific community’s established understanding. Disinformation, on the other hand, can be defined as the intentional dissemination of false information with the goal of achieving secondary benefits, whether financial, political, or a combination of both . In the era of social media, both misinformation and disinformation raise significant concerns, particularly in the context of spreading knowledge related to infectious diseases . The current study highlights the issue of misinformation surrounding the mpox outbreak, emphasizing the need for public awareness, civil society engagement, and cooperation between policymakers, medical communities, and social media platforms to counteract stigma and racism and to curb human-to-human transmission. The negative impact of rumors and misinformation was previously addressed during COVID-19 [ – ]. Unverified COVID-19 rumors can undermine preparedness, lead to incorrect treatments, and diminish healthcare workers’ agency. Social stigma can hinder active participation in public health measures . Media literacy programs can empower individuals to distinguish between reliable and misleading sources, while fact-checking initiatives ensure the timely correction of inaccuracies. Support and training for healthcare workers are critical in navigating rumors and social stigma, with an emphasis on trust-building strategies. International cooperation, drawing on lessons learned from COVID-19, can strengthen the global response to misinformation.
Finally, encouraging ethical communication, transparent reporting, and responsible information sharing contribute to a more informed and resilient society in the face of infectious disease epidemics. Improving public awareness and increasing health literacy Health literacy refers to an individual’s ability to access and comprehend health-related information, allowing them to make informed decisions about their health. This includes the ability to effectively seek, understand, and apply health information, allowing individuals to navigate healthcare systems, engage in preventive measures, and make health-related decisions . Advocating for public awareness, emphasizing preventive measures, and avoiding stigmatizing language in mpox communication all play critical roles in mitigating the outbreak’s global health burden . Raising public awareness allows individuals to make informed decisions about protective measures and lowers the risk of transmission. Emphasizing preventive measures, such as mpox testing and vaccination, can help to break the chain of infection. Interestingly, despite the proven efficacy and effectiveness of the mpox vaccine , notably high rates of vaccination hesitancy have been observed among the general population and HCPs . This phenomenon may be attributed to a lack of trust in vaccination and to issues related to health illiteracy . A study conducted by Alsanafi et al. in 2022 highlighted that a significant percentage (20.4%) of HCPs held incorrect beliefs, such as assuming that mpox is exclusively associated with MSM. The study also found that the degree of education and occupation played a role in shaping these beliefs, with medical technicians and allied health professionals demonstrating lower knowledge compared to physicians and pharmacists. It is important to emphasize that mpox should not be incorrectly labeled as a “gay disease.” Sexual orientation does not determine an individual’s risk of infection. Understanding the actual modes of transmission is crucial in dispelling such misconceptions. By promoting accurate information and education, we can correct misunderstandings and challenge stereotypes associated with mpox . Public health campaigns should therefore focus on disseminating knowledge about mpox transmission, emphasizing the importance of hygiene practices, early detection, and seeking appropriate medical care. These efforts can help reduce stigma, increase awareness, and ensure that individuals and communities are equipped with the correct information to make informed decisions regarding their health and the prevention of mpox transmission. Additionally, avoiding stigmatizing language is critical in creating a supportive environment that encourages people to seek information and healthcare without fear of being judged. This approach not only improves community cooperation but also aids in dispelling myths and lowers the overall impact of stigma on affected individuals. Overall, these advocacy efforts are critical components of a comprehensive global strategy to address and control the mpox outbreak. To reduce the harm caused by stigma and discrimination, we must actively reflect on and act on our language, behavior, and intentions as individuals, as well as on the policies and practices of organizations such as healthcare facilities and media outlets .
Stigma/discrimination is the main ethical concern in the literature Infectious disease outbreaks often trigger stigma . Stigma involves the withholding of social acceptance from an individual or group due to a trait perceived as discrediting by their community or society. Stigma proportionality refers to the degree to which stigma is justified or proportional in relation to the actual risks or characteristics associated with a particular group or condition. This broad concept encompasses the cognitive or emotional support of negative stereotypes, known as prejudice; negative behavioral expressions, termed discrimination; and the unjustifiable avoidance or neglect of affected individuals from a medical perspective . The research papers included in this review primarily emphasize the persistent issue of stigma, discrimination, and social disapproval faced by individuals affected by mpox. The stigma and prejudice associated with mpox have significant consequences for individuals living with the disease as well as for those connected to infected individuals. Moreover, stigma linked to infectious disease outbreaks diminishes affected individuals’ chances of achieving physical, social, and psychological well-being, thereby exacerbating social and health disparities . One of the detrimental effects of stigma is that it drives individuals to hide their illness, leading to the hidden and undetected spread of the virus. Additionally, stigma can impede efforts to control disease outbreaks by fueling fear, diminishing the uptake of preventive measures (including vaccination), discouraging health-seeking behavior such as seeking testing and treatment, and reducing adherence to care . This stigma extends to partners, children, and caregivers, who may face unfair judgment and mistreatment simply for their association with infected individuals. The resulting stigma and discrimination further exacerbate the emotional and psychological distress experienced by those affected by mpox . Specifically, stigma associated with COVID-19 and Ebola has been identified as a significant predictor of severe psychological distress, depression, anxiety, and symptoms of posttraumatic stress disorder . Moreover, public health interventions implemented during outbreaks, such as quarantine, contact tracing, and vaccination, can influence the stigma associated with a disease [ – ]. While evidence of exacerbated stigma may not entirely negate the efficacy of these public health measures, it underscores the importance of considering and minimizing inadvertent social consequences wherever feasible. This pattern of behavior is not unique to the current situation but has been observed in the past with the emergence of novel pathogens. Throughout history, human communities have demonstrated a tendency to isolate, stigmatize, or avoid groups of individuals perceived as having qualities or traits that are considered disagreeable or potentially harmful to others [ – ]. Gonsalves et al. aptly coined the phrase “Déjà vu All Over Again?” to describe the similarities between the stigma surrounding the announcement of mpox and the stigma experienced in previous infectious disease outbreaks. This comparison draws parallels to the panic and discrimination that emerged during the early years of the AIDS epidemic. During that time, individuals living with HIV/AIDS faced stigmatization, particularly those who were confirmed to have the infection, as well as the “four Hs” identified by the CDC: homosexuals, heroin addicts, hemophiliacs, and Haitians .
By acknowledging these recurring patterns, we can work towards breaking the cycle of stigmatization and fostering a more inclusive and supportive society for individuals affected by infectious diseases. Addressing the ethical challenges posed by stigma during infectious disease outbreaks requires a multifaceted approach. By promoting education, sensitivity in public health interventions, empathy, and advocacy for equitable policies, we can work towards fostering a society that upholds the rights and dignity of all individuals, thereby mitigating the adverse effects of stigma. Actions to mitigate the stigma and discrimination associated with mpox To overcome and combat negative attitudes and harmful language directed at mpox patients, the WHO has taken multiple steps. In December 2022, the WHO published public advice on stigma and discrimination, targeting all organizations (governmental and non-governmental), health practitioners, and authorities, as well as media dealing with the outbreak . More recently, a policy brief released on 23 July 2023 offers guidance on critical ethical issues that have arisen in the context of the mpox outbreak response. Three key domains were emphasized: stigma/discrimination, the availability and distribution of medical services, and the importance of, and responsibility for, scientifically based evidence . The WHO has also released public health advice on understanding, preventing, and addressing stigma and discrimination related to mpox, which provides information on the potential impact of stigma and recommends language and actions to counter stigmatizing attitudes and discriminatory behaviors and policies . Points of strength and limitations This scoping review is unique in its contribution as it is the first attempt to systematically analyze the existing published evidence regarding ethical dilemmas and discrimination related to mpox. By mapping out the identified moral themes, the review provides valuable insights into the current understanding of ethical challenges in mpox and identifies areas that require further exploration. Second, there is a paucity of articles addressing ethical issues related to mpox, highlighting the importance of this review in identifying research gaps in the existing literature. However, this review has some limitations that should be addressed. First, the review primarily focused on the theme of stigma and discrimination associated with mpox; other ethical principles were not extensively explored. This suggests a need for further research to assess and address a broader range of ethical issues related to mpox outbreaks. It would be of paramount value to probe both the community’s and HCPs’ perceptions of ethical values and norms surrounding mpox infection. Second, most of the included studies originated from Western countries, neglecting the main origin of the infection in African regions. This geographical bias emphasizes the importance of conducting research in the affected areas to comprehensively understand the ethical challenges specific to those contexts. Furthermore, the makeup of the expert panel does not appear to contain persons chosen for their association with the group most affected by this outbreak, which may potentially limit the mitigation of epistemological violence and the comprehensiveness of perspectives.
Another limitation was that studies on marginalized groups, including rural communities and low-resource settings, which are disproportionately impacted by infectious diseases like mpox, were noticeably lacking. Finally, the search string employed in the scoping review included relevant terms related to the mpox virus and ethics. While comprehensive, this approach has certain limitations. These include potential trade-offs between sensitivity and specificity, variability in terminology, the risk of publication bias towards articles published in certain journals and/or indexed in certain databases, a lack of consideration for temporal variations, language bias towards English, conceptual complexity with terms like “egoism” and “metaethics,” and potential disparities in database recognition of search terms.
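To make the sensitivity-specificity trade-off concrete, the sketch below shows how a broad, ethics-oriented query could be run against PubMed. It is purely illustrative: the query terms, contact e-mail, and use of Biopython's Entrez interface are assumptions for demonstration and do not reproduce the search string actually used in this review.

```python
from Bio import Entrez  # Biopython

# Illustrative only: NOT the review's actual search string.
Entrez.email = "your.name@example.org"  # NCBI requires a contact address

query = (
    '("mpox" OR "monkeypox") AND '
    '(ethics OR ethical OR stigma OR discrimination OR "metaethics" OR "egoism")'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
record = Entrez.read(handle)
handle.close()

# Adding broad ethics terms raises sensitivity (more records retrieved)
# at the cost of specificity (more irrelevant hits to screen manually).
print(f"PubMed records matched: {record['Count']}")
```

Narrowing the ethics terms, or restricting by publication date and language, trades recall for precision in exactly the way described above.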
Despite the declaration that the multi-country outbreak of mpox is no longer a PHEIC, the possibility of the reemergence of mpox remains due to several interconnected factors. Among these factors, the stigma and ethical issues associated with the disease play a significant role. The stigma surrounding mpox can have detrimental effects on various aspects of the disease; it can lead individuals to avoid seeking care and assistance. Ethical issues arising from mpox, such as discrimination, privacy concerns, access to healthcare, and the conduct of clinical and vaccine studies, further contribute to the challenges in effectively addressing the disease. Consequently, addressing stigma and ethical issues related to mpox is crucial in preventing its resurgence and ensuring effective control measures. By promoting awareness, education, and understanding of the disease and combating stigmatizing attitudes, we can create an environment that encourages individuals to seek timely care and support. Additionally, addressing ethical concerns through appropriate policies, guidelines, and interventions can help protect the rights and well-being of individuals affected by mpox. Supplementary Material 1: Figure S1: bar chart of the types of included studies. Supplementary Material 2.
Interbacterial warfare in the human gut: insights from Bacteroidales’ perspective
Introduction Humans, animals, plants, and microbes coexist within vast, interconnected ecosystems populated by closely interacting organisms. Within these systems, competition and cooperation play crucial roles in maintaining ecological balance. Competition, in particular, plays a pivotal role in driving species diversity and evolution, shaping survival outcomes and social behaviors. , Microorganisms, with their simple structures, short lifecycles, high density, and genetic diversity, serve as ideal models for studying these complex interactions. , The gut microbiota, a dense and diverse microbial community within the gastrointestinal tract, profoundly impacts human health, earning its moniker as a “microbial organ”. Among its members, competition is a crucial factor in microbial adaptation and survival. , , Yet, how these interactions influence community structure and dynamics remains largely unknown. As a dominant bacterial group in the human gut, Bacteroidales maintains a stable, long-term relationship with the host. , , This group possesses diverse polysaccharide utilization systems, enabling the efficient utilization of dietary long-chain polysaccharides inaccessible to the host. Certain Bacteroidales strains also produce short-chain fatty acids, critical for intestinal mucosal integrity and immune regulation. , , However, under dysbiotic conditions or compromised intestinal barriers, Bacteroidales can become opportunistic pathogens. Additionally, certain Bacteroidales strains, such as enterotoxigenic Bacteroides fragilis (ETBF), produce Bacteroides fragilis toxin, which has been closely associated with inflammatory bowel disease and colorectal cancer. , , Given their ease of cultivation and genetic manipulability, Bacteroidales serve as a valuable model for studying microbial interactions in the gut. , Residing predominantly in the densely populated colon, these bacteria rely on two primary strategies to compete , , , , : interference competition, where they directly attack competitors through the production of harmful compounds, with the ‘winner’ acquiring the resource, and exploitative competition, where they indirectly compete by consuming resources needed by others ( ). Despite recent advancements, the mechanisms underlying Bacteroidales antagonism remain poorly understood. This review explores the competitive interactions of gut Bacteroidales, with an emphasis on toxin-mediated interbacterial antagonism. By providing a comprehensive overview of these mechanisms, we aim to guide future research on microbial interactions and offer insights into the assembly and regulation of the gut microbiota. Interference competition Interference competition is a key strategy among bacteria, particularly in the densely populated environment of the gut. It involves direct antagonism, where bacteria inhibit or eliminate competitors to secure resources. Bacteroidales utilize sophisticated interference mechanisms to maintain community stability and diversity. These mechanisms fall into two main categories ( ): contact-dependent antagonism mediated by the Type VI secretion system (T6SS) and contact-independent antagonism mediated by diffusible bacterial toxins. T6SS systems mediate antagonism between spatially adjacent cells, providing a competitive advantage for resource acquisition at the same location and time. These systems do not confer advantages for resources that are distant or readily diffusible.
In contrast, diffusible toxins exert antagonistic effects over broader spatiotemporal scales. As such, they can confer a competitive advantage in contests over resources that both strains target, even when the competitors are not in direct contact. 2.1. Contact-dependent antagonism mediated by T6SS The T6SS is widely acknowledged as the most prevalent and most extensively investigated molecular weapon for interbacterial antagonism, employed by many Gram-negative bacteria for contact-dependent attack. T6SS-positive strains deliver toxic effectors into target cells, specifically targeting essential bacterial components. These effectors can damage cellular envelopes, , disrupt enzymatic functions, or modify essential molecules, , thereby effectively eliminating competitors. Effector-encoding strains avoid self-killing by expressing cognate immunity genes, encoded adjacent to the effector genes, that neutralize effector toxicity. By outcompeting susceptible strains, T6SS-positive bacteria gain a competitive advantage, shaping microbial community composition and establishing dominance in specific ecological niches. 2.1.1. Overview of T6SS in Bacteroidales The identification of T6SS in Bacteroidota was delayed until 2014 due to the absence of primary or profile sequence similarity between the 13 core T6SS proteins in Pseudomonodota and those in Bacteroidota. Unlike the general Pseudomonodota T6SS (T6SS i ) and Francisella T6SS (T6SS ii ), the Bacteroidota T6SS has distinct features and is classified as a separate subtype (T6SS iii ). The Bacteroidales T6SS is further divided into three subtypes based on their genetic architectures (GAs): GA1, GA2, and GA3 ( ). While GA1 and GA2 T6SS loci are encoded on integrative conjugative elements (ICEs) and are commonly transferred among Bacteroidales species, the GA3 T6SS is uniquely found in B. fragilis . Analysis of the predicted coding sequences (CDS) of the Bacteroidales T6SS loci reveals a conserved region and multiple variable regions. The conserved region encodes structural components required for the T6SS apparatus, including the membrane, baseplate, spike, and tube complexes. The variable regions encode diverse effector-immunity protein pairs and proteins with unknown functions ( ). 2.1.2. Distinct structure of Bacteroidales T6SS suggests unique effector delivery mechanisms Bioinformatics analysis highlights that the Bacteroidales T6SS differs significantly from the Pseudomonodota T6SS in genetic architecture. Specifically, it lacks several conserved core proteins (TssJ, TssM, and TssL) found in the Pseudomonodota T6SS membrane complex ( ). Recent findings have identified TssNOPQR as the unique membrane complex in the Bacteroidales T6SS, suggesting a novel docking mechanism for the baseplate complex onto the membrane complex ( ). The inner tube complex of the T6SS is composed of TssD proteins (Hcp), which facilitate the delivery of diverse low-molecular-weight effectors. , While most Pseudomonodota typically encode a single Hcp per T6SS locus, Bacteroidales T6SS loci encode up to six distinct Hcp variants. Given the genetic linkage between Hcp and predicted effectors in the Bacteroidales T6SS loci, diverse Hcp variants may facilitate the delivery of various effectors ( ). Further analysis of the variable regions within the GA3 T6SS reveals distinct functional roles. Variable region 1 (V1) only encodes effector-immunity protein pairs, whereas variable region 2 (V2) also includes proteins of unknown function, potentially acting as adaptors for forming diverse spike complexes.
We further analyzed and quantified the distribution patterns and abundances of the V2 region across all sequenced GA3 T6SS loci, revealing the potential presence of multiple representative T6SS delivery mechanisms ( ). Additionally, a recent study revealed the structure of the B. fragilis cargo delivery complex (VgrG-PAAR-Hcp, without effectors), which represents a subset of the GA3 T6SS. To fully elucidate the unique assembly and delivery mechanisms of the Bacteroidales T6SS, further biochemical experiments and high-resolution structural studies are required. 2.1.3. Mobile GA1 and GA2 T6SS loci in interbacterial competition Genomic and metagenomic analysis has revealed the widespread presence of GA1 and GA2 T6SS (mobile T6SS) in Bacteroidales isolated from the human gut microbiota. Frequent horizontal gene transfer of these mobile T6SS loci suggests that they confer fitness advantages on the encoding strains. Interestingly, the integration of GA1 T6SS into the genome of GA3 T6SS-encoding B. fragilis strains deactivates the antagonistic activity of the GA3 T6SS. This finding implies that acquiring GA1 T6SS may alter the antimicrobial spectrum of GA3 T6SS-encoding strains, reversing their roles as attackers and defenders and influencing gut microbiota composition. Recent studies have identified multiple toxic effectors in the variable regions of certain GA2 T6SS loci, including predicted DNase, amidase, endotoxin, and bacteriocin domains. While the periplasmic toxicity of some effectors has been confirmed, no significant GA1- or GA2 T6SS-mediated antagonism (1–3 log killing was considered significant antagonism) has been observed in vitro . The physiological functions of these loci remain to be clarified ( ). 2.1.4. Ecological impact of GA3 T6SS-mediated interbacterial antagonism The GA3 T6SS demonstrates strong antagonistic activity in vitro . Effector proteins from the GA3 T6SS, such as Bte1 and Bte2 in B. fragilis NCTC9343, and Bfe1 and Bfe2 in B. fragilis 638 R, exhibit specificity for targeting Bacteroidales but show limited activity against Pseudomonodota ( ). Multiple studies utilizing gnotobiotic mice have demonstrated the crucial role of the GA3 T6SS in mediating competition between different B. fragilis strains in the mouse gut. , A compelling study in antibiotic cocktail-treated mice demonstrated that non-enterotoxigenic B. fragilis (NTBF) NCTC9343 effectively restricts enterotoxigenic B. fragilis (ETBF) ATCC43858 colonization through the GA3 T6SS, potentially mitigating ETBF-associated disease in a murine host. Unfortunately, the lack of homology with previously characterized proteins has posed significant challenges in characterizing the functional mechanisms of effectors from the GA3 T6SS. While the specific mechanisms of GA3 effector-mediated interbacterial antagonism remain unclear, studies have indicated that the GA3 T6SS is associated with compositional changes in the human gut microbiota. An analysis of human metagenomic datasets revealed a significant association between GA3 T6SS presence and reduced abundances of Bacteroides and specific Firmicutes genera in the test samples. Frequent replacement of GA3 T6SS effectors was observed during early life, suggesting that at least one GA3 T6SS genotype enhances B. fragilis colonization in the infant gut. In the stabilized adult gut microbiota, a reduced diversity of GA3 T6SS was observed, with a single GA3 T6SS genotype becoming dominant. These findings indicate that intense early-life competition among B.
fragilis strains potentially shapes long-term gut microbiota composition. Given the diverse effectors used by the GA3 T6SS to overcome Bacteroidales species, multiple mechanisms have evolved to counteract T6SS effectors during intense interbacterial competition. Prevalent members of Bacteroidales in the human gut encode an acquired interbacterial defense (AID) gene cluster with multiple orphan immunity proteins for defending against T6SS-mediated interbacterial competition. Acquisition of the AID system confers on Bacteroidales the ability to survive T6SS-mediated killing and helps maintain community diversity. While the GA3 T6SS exhibits robust antagonism in vitro , its activity in vivo may vary due to niche partitioning within the gut microbiota. Strong antagonism is expected among strains occupying overlapping spatial and nutritional niches but may be less apparent in species with limited direct contact. The fitness costs associated with maintaining a functional T6SS have led to frequent inactivation or loss of these systems in closed gut communities. Notably, despite the observed patterns of GA3 T6SS loss and the fitness costs of its production in the mouse gut, the majority of sequenced B. fragilis strains isolated from the human gut retain an intact T6SS. This suggests that lineages losing the GA3 T6SS are not evolutionarily successful over longer time scales. This is likely due to the strong selective pressures exerted by vertical transmission and early-life interbacterial competition, under which strains with an intact GA3 T6SS tend to outcompete others. , 2.2. Contact-independent antagonism mediated by diffusible toxins In addition to the contact-dependent antagonism mediated by T6SS, gut Bacteroidales can also produce and secrete diffusible peptide or protein toxins capable of antagonizing a limited spectrum of targets over long distances, constituting a contact-independent antagonistic system. Currently, six types of contact-independent bactericidal toxins have been identified in gut Bacteroidales: the Bacteroidales secreted antimicrobial protein (BSAP), , the Bacteroides fragilis ubiquitin (BfUbb), the bacteroidetocins (Bd), , the Bacteroidales conjugally transferred plasmid-encoded toxin (BcpT), the fragipain-activated bacteriocin 1 (Fab1), and the cholesterol-dependent cytolysin-like toxins (CDCL). These molecules demonstrate unique bactericidal mechanisms and diverse distributions, reflecting the intricate interactions within the gut microbiota ( ; ). 2.2.1. Bacteroidales secreted antimicrobial protein (BSAP) BSAP represents the first identified class of secreted antimicrobial toxins in gut Bacteroidales, characterized by membrane attack complex/perforin (MACPF) domains with eukaryotic-like features. , These toxins likely exert their bactericidal effects through pore formation, similar to MACPF proteins in eukaryotes. Currently, four types of BSAP toxins (BSAP1-BSAP4) have been shown to possess clear bactericidal activity. , They are produced by specific Bacteroidales species, with some species producing multiple BSAP toxins (for example, both BSAP1 and BSAP4 are produced by B. fragilis ), and they are capable of antagonizing strains of the same or closely related species. BSAP1-BSAP4 target either β-barrel outer membrane proteins (BSAP1 and BSAP4) , or the O-antigen glycan of lipopolysaccharide (LPS) (BSAP2 and BSAP3) , on susceptible strains, respectively ( ).
Notably, the genomic location of the BSAP target gene in BSAP-sensitive strains corresponds to the location of the BSAP gene in BSAP-producing strains. Moreover, BSAP-producing strains overcome toxicity by synthesizing an orthologous, nontargeted surface molecule encoded near the BSAP gene, indicating that the two are acquired jointly. The target of BSAP1 (an OMP) and the target of BSAP2 (LPS) were shown to be essential for the adaptive colonization of the corresponding strains in mice, offering a physiological explanation for why BSAP-sensitive strains retain these genes and why BSAP-producing strains encode orthologous surface molecules. Moreover, unlike the Bacteroidales species capable of producing BSAP1, BSAP2, or BSAP3, whose strains typically either contain the BSAP gene and produce the corresponding toxin or lack the gene and are sensitive to the toxin, several B. fragilis strains that do not produce BSAP4 nevertheless display resistance to it because they harbor the resistant ortholog receptor. Additionally, the sensitivity of certain strains to BSAP4 depends on the expression status of its target gene. Moreover, bacterial cocolonization investigations in mice and analyses of human gut metagenomes suggest that BSAP1 or BSAP2 can confer a fitness advantage on their producing strains compared with sensitive strains. However, the specific bactericidal mechanism that follows binding of BSAPs to their respective targets remains unclear. Despite conserved MACPF motifs, BSAP toxins share low amino acid identity, indicating diversity in target specificity and mechanisms. Notably, over 320 MACPF domain-containing proteins have been identified in Bacteroidota. With a few exceptions, these proteins are classified into clusters according to the species that produce them, yielding 68 distinct clusters. To date, bactericidal activity has been identified for MACPF-containing proteins from seven clusters (clusters 1, 2, 10, 14, 15, 16, and 19), including BSAP1-BSAP4. , A comprehensive exploration of their functions and targets remains a critical avenue for future research. 2.2.2. Bacteroides fragilis ubiquitin (BfUbb) BfUbb is the second diffusible antimicrobial molecule identified in intestinal B. fragilis and also exhibits eukaryotic-like features. After cleavage of its signal peptide, the mature BfUbb protein consists of 76 amino acids, sharing approximately 84% similarity with human ubiquitin (HmUbb). A key distinction between BfUbb and HmUbb is the substitution of the glycine at the C-terminus of HmUbb, crucial for covalent substrate binding, with a cysteine in BfUbb. This cysteine enables the formation of a unique intramolecular disulfide bond, absent in HmUbb, which is essential for BfUbb’s interaction with its substrate, peptidyl-prolyl isomerase (PPIase), and for its antimicrobial activity. BfUbb was first discovered in 2011, when it was shown to covalently bind the human E1 ubiquitin-activating enzyme under non-reducing conditions, effectively inhibiting ubiquitination in vitro . This, along with the observed antigenic cross-reactivity between BfUbb and HmUbb, suggested a potential role for BfUbb in B. fragilis -host interactions, , although further validation is needed. In 2017, Comstock and her colleagues identified the bactericidal activity of BfUbb through transposon mutagenesis screening. Subsequent studies elucidated BfUbb’s mechanism of action against B. fragilis and how other Bacteroides species resist it. , BfUbb gains access to the periplasmic space of B.
fragilis via a specialized TonB-dependent, SusCD-like transporter complex (designated ButCD). Once inside, BfUbb targets an essential PPIase protein, disrupting its enzymatic and chaperone functions to exert potent bactericidal effects. Despite the universal presence of ButCD in B. fragilis strains (ButCD Bf ), some strains evade BfUbb’s effects through a single-point mutation in PPIase, substituting tyrosine at position 119 with aspartate, which prevents BfUbb binding. Additionally, other Bacteroides species avoid BfUbb-mediated interspecies antagonism by encoding ButCD variants with limited sequence similarity to ButCD Bf , thereby hindering BfUbb transport into their cells , ( ). Co-culture assays, murine colonization studies, and human gut metagenome analyses demonstrate that BfUbb provides a significant competitive advantage to its producing strains over sensitive strains. Notably, BfUbb exhibits exceptional efficacy in eliminating ETBF strains harboring BfUbb-sensitive PPIase in mice. These findings highlight the potential of BfUbb as a therapeutic agent for preventing and treating ETBF-associated diseases. 2.2.3. Bacteroidetocins (Bd) Bacteroidetocins (Bd) are a family of anti-Bacteroidales peptide toxins produced by various members of the Bacteroidota phylum. Among these, Bd-A and Bd-B were primarily found in Bacteroidales and have been the most extensively studied, exhibiting properties similar to class IIa bacteriocins of Gram-positive bacteria. , , These peptides are initially synthesized with a 15-amino-acid leader sequence, which is cleaved following a double-glycine motif to yield mature peptides of 42 amino acids. Each mature peptide includes four cysteine residues involved in intramolecular disulfide bond formation. Additionally, chemically synthesized mature Bd-A toxin exhibits effective bactericidal activity, indicating that it can also fold correctly in vitro . Bd toxins specifically target members of the Bacteroidota phylum, including Bacteroides , Parabacteroides , and Prevotella species. , Long-term evolutionary studies revealed that resistance to Bd-A in Bacteroidales strains is linked to mutations in the bamA gene, which encodes an essential β-barrel outer membrane protein (OMP) responsible for the assembly and insertion of β-barrel proteins into the outer membrane ( ). A conserved aspartate residue at the N-terminus of extracellular loop 3 (el3) in BamA has been identified as critical for Bd-A sensitivity. While Bd-A-resistant BamA mutants exhibit no apparent growth defects in vitro , studies in mice demonstrated significant fitness attenuation, suggesting that these mutants are not competitive in the mammalian gut. This highlights the potential of Bd toxins as therapeutic anti-Bacteroidales agents with a reduced likelihood of resistance evolution. To date, 19 bacteroidetocin-like peptides have been identified in Bacteroidota through tblastn searches. Among these, four Bd toxins – Bd-A, Bd-B, Bd-C, and Bd-D – have been validated for bactericidal activity. Bd-A, Bd-B, and Bd-D share the common feature of four cysteines and exhibit relatively broad-spectrum activity against Bacteroidales. In contrast, Bd-C, which contains only two cysteines, is less broadly toxic but targets specific strains resistant to the other three variants. Differences in amino acid composition likely underlie the distinct activities of these toxins, suggesting that the remaining 15 Bd variants may possess unique bactericidal properties yet to be characterized.
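As a rough illustration of how such homology screens can be run, the sketch below wraps a translated BLAST (tblastn) search of known Bd peptide sequences against Bacteroidota genome assemblies. The file names, database name, and E-value cutoff are assumptions for demonstration, not the parameters used in the original screen; it simply assumes BLAST+ is installed and a nucleotide database has been built with makeblastdb.

```python
import subprocess

# Hypothetical inputs: a protein FASTA of known Bd peptides and a nucleotide
# BLAST database of Bacteroidota genomes built beforehand with makeblastdb.
cmd = [
    "tblastn",
    "-query", "bd_mature_peptides.faa",
    "-db", "bacteroidota_genomes",
    "-evalue", "1e-5",  # permissive cutoff, since the peptides are short and divergent
    "-outfmt", "6 qseqid sseqid pident length evalue bitscore",
    "-out", "bd_like_hits.tsv",
]
subprocess.run(cmd, check=True)
# bd_like_hits.tsv then lists candidate bacteroidetocin-like loci for manual curation.
```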
Most Bd genes are located within a conserved gene locus that includes at least three additional genes: a protein with five transmembrane regions, a thiol oxidoreductase, and an ABC-like bacteriocin transporter. Deletion of any of these genes results in reduced Bd-B activity in the producing strains, and co-expression of all four genes in Escherichia coli is sufficient to confer on it the ability to inhibit the growth of Bacteroides thetaiotaomicron VPI-5482, confirming their collective role in active Bd-B toxin production. Interestingly, strains producing Bd-A, Bd-B, or Bd-D were found to possess BamA variants sensitive to their own toxins and remain susceptible to self-intoxication in vitro . Furthermore, Bd-B-producing strains and Bd-B-sensitive strains have been observed to coexist stably in the human gut while retaining sensitivity to Bd-B in vitro . These findings raise compelling questions about the mechanisms enabling Bd-producing strains to tolerate self-intoxication, their fitness for gut colonization, and the broader physiological roles of Bd toxins in the human intestine. 2.2.4. Bacteroidales conjugally transferred plasmid-encoded toxin (BcpT) BcpT is another diffusible toxin identified in Bacteroidales, distinct from other known toxins in several respects. Encoded on a mobile plasmid, BcpT is named for its mode of transmission as the Bacteroidales conjugally transferred plasmid-encoded toxin. Genomic and metagenomic analyses indicate that this plasmid is primarily restricted to the closely related species Phocaeicola vulgatus and Phocaeicola dorei . However, the antibacterial activity of BcpT is not limited to P. vulgatus and P. dorei , as purified BcpT is active against a broader range of species, including Bacteroides , Phocaeicola , and Parabacteroides . Unlike most proteolytically activated bacterial toxins, , , , , BcpT requires cleavage at two distinct sites for activation. Cysteine proteases of the C11 family, doripain A (DpnA) or doripain B (DpnB), cleave BcpT at residues R65 and R199, resulting in a three-fragment active state. Among these, DpnB is considered to play the dominant role in activating BcpT ( ). Although the exact mechanism following BcpT cleavage remains incompletely understood, the C-terminal fragment (residues 200–499) is thought to mediate receptor binding and antibacterial activity, while the N-terminal fragment (residues 20–199) may initially inhibit the function of the C-terminal domain. Receptor blot studies identified the lipid A-core glycan of lipopolysaccharide (LPS) as the BcpT receptor, which is bound by the proteolytically activated toxin ( ). A small lipoprotein, BcpI, encoded by a 174-bp gene downstream of bcpT , provides an eightfold increase in resistance to BcpT, serving as its immunity protein. However, the precise antibacterial mechanism of BcpT and the protective role of BcpI remain areas for further research. 2.2.5. Fragipain-activated bacteriocin (Fab1) Fab1 is a bacteriocin discovered in B. fragilis with activity exclusively targeting B. fragilis strains. It was identified through transposon mutagenesis screening together with fragipain (Fpn), a C11-family cysteine protease responsible for its activation. Fab1 is produced as an approximately 50 kDa protoxin, which is cleaved by Fpn between residues R200 and A201 to generate a ~ 28 kDa C-terminal active fragment with bactericidal properties ( ).
Additionally, without Fpn, Fab1 cannot be detected in the culture supernatant, indicating that Fpn is essential for both the secretion and activation of Fab1. The gene encoding Fab1 is accompanied by rfab1 , an immunity gene located immediately downstream. RFab1 provides resistance to Fab1 in producing strains ( ). While the fpn gene is nearly ubiquitous across B. fragilis genomes, fab1 and rfab1 are found in only ~ 20% of strains, reflecting the multifunctional roles of this protease family beyond toxin activation. Some B. fragilis strains harbor rfab1 but lack fab1 , rendering them also insensitive to Fab1. Despite these insights, the specific target and bactericidal mechanism of Fab1 remain unknown. 2.2.6. Cholesterol-dependent cytolysin-like toxins (CDCL) Cholesterol-dependent cytolysin-like toxins (CDCL) represent a newly discovered class of diffusible toxins, named for their resemblance to cholesterol-dependent cytolysins (CDC). , Initially identified in Elizabethkingia anophelis from the midgut of malarial mosquitoes, CDCL have since been found widely distributed in Bacteroidota, including gut-inhabiting species. In B. fragilis , CDCL (BfCDCL) are encoded by two adjacent CDC-like genes, producing a small component (BfCDCL S ) and a larger component (BfCDCL L ). Together, these components exhibit bactericidal activity against related Bacteroides species, though their receptor remains unidentified. BfCDCL activation also requires cleavage by C11-type proteases, including Fpn or DpnB, at residues R70 (BfCDCL L ) and R62 (BfCDCL S ), respectively ( ). The activated BfCDCL L likely serves as a membrane-anchored platform via its domain 4, recruiting activated BfCDCL S to form β-barrel pores that mediate bactericidal activity. A predicted outer surface localized lipoprotein, encoded upstream of the BfCDCL genes, functions as an immunity protein for BfCDCL (BcdI) ( ). Beyond the CDCL genomic pattern identified in B. fragilis , six additional CDCL toxin patterns have been identified in gut Bacteroidales genomes. Among these, one pattern includes three adjacent CDCL genes confined to P. vulgatus and P. dorei . These patterns show varying sequence similarities, warranting further research to elucidate their bactericidal specificities, mechanisms, and physiological relevance.
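Throughout this section, antagonism has been scored as the log10 reduction in recovered target CFU, with the 1–3 log killing threshold mentioned above treated as significant. The minimal sketch below works through that arithmetic with hypothetical plate counts; the specific CFU values and the ≥1 log cutoff are illustrative assumptions, not data from the studies reviewed here.

```python
import math

def log_kill(target_cfu_alone: float, target_cfu_with_attacker: float) -> float:
    """Log10 reduction in recovered target CFU attributable to the antagonist."""
    return math.log10(target_cfu_alone / target_cfu_with_attacker)

# Hypothetical plate counts (CFU/mL of the target strain).
alone = 5e8           # target grown without the antagonist
with_attacker = 2e6   # target co-cultured with the toxin- or T6SS-producing strain

reduction = log_kill(alone, with_attacker)
print(f"{reduction:.1f}-log reduction")  # ~2.4 logs for these example counts
# Treating >=1 log killing as significant follows the convention cited above.
print("significant antagonism" if reduction >= 1 else "below the significance threshold")
```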
Exploitative competition

Beyond interference competition, exploitative competition also plays an irreplaceable role in maintaining Bacteroidales' stable colonization of the mammalian gut ( ). This form of competition involves limiting the growth of rivals by efficiently utilizing scarce resources. Unlike interference competition, where direct interactions between cells occur, exploitative competition operates indirectly and becomes especially significant under nutrient-limited conditions, such as in biofilm-like dense microbial communities. Resources critical to microbial survival include nutrients (e.g., carbon, nitrogen, phosphorus, sulfur, hydrogen, calcium, iron, and trace metals) and spatial niches. As microbes grow and accumulate biomass, they expand their spatial distribution and compete with other populations to colonize regions with higher nutrient availability.

3.1. Competition for nutrient resources

Competition for nutrients can occur through two main mechanisms: increased nutrient uptake or the secretion of factors that enhance nutrient acquisition. For example, Saccharomyces cerevisiae and E. coli switch from fermentation to aerobic respiration under aerobic conditions, increasing their growth rates and absorbing nutrients more rapidly than their competitors. Secreted nutrient-acquisition factors enhance competitive ability, for instance digestive enzymes that hydrolyze complex nutrient molecules or siderophore-like molecules that chelate and sequester essential metals. As commensal and mutualistic organisms, Bacteroidales have distinct advantages in acquiring resources such as carbon, metals, and corrinoids.

3.1.1. Carbon

Bacteroidales exhibit a strong capacity for polysaccharide metabolism, supported by specialized genetic regions known as polysaccharide utilization loci (PULs), which enhance their ability to recognize and degrade complex carbohydrates. Beyond benefiting themselves, Bacteroidales also establish cross-feeding networks that facilitate nutrient exchange within microbial communities. However, when primary degraders restrict the release of free oligosaccharides or monosaccharides into the extracellular environment, they adopt a “selfish” mode of glycan catabolism. This strategy promotes the dominance of these “selfish degraders” over other species ( ).
For example, B. thetaiotaomicron metabolizes yeast mannan by using surface endo-mannanases to produce large oligosaccharides, which are immediately captured and transported into the cell for further breakdown by periplasmic mannanases ( ). This process ensures efficient utilization of the resource. In the relevant co-culture experiments, carbon was the only limiting nutrient, while other nutrients, such as nitrogen and trace metals, were provided in excess. With yeast mannan as the sole carbon source, B. thetaiotaomicron outcompeted B. cellulosilyticus and B. xylanisolvens. However, B. thetaiotaomicron may not hold a growth advantage with other carbon sources, such as mannose, or when other nutrients are limiting. Under varying carbon source concentrations or different growth rates, the dominant strain may either further enhance or lose its growth advantage. Similarly, B. ovatus efficiently degrades various xylan polysaccharides, such as wheat arabinoxylan and glucuronoarabinoxylan. The degradation products of wheat arabinoxylan, but not those of glucuronoarabinoxylan, can support the growth of Bifidobacterium adolescentis (which cannot utilize the intact polysaccharide). This gives B. ovatus a competitive edge when glucuronoarabinoxylan (e.g., corn bran xylan) is the sole carbon source. However, in competition between B. ovatus and Roseburia intestinalis (both of which exhibited comparable growth on xylan as a carbon source), R. intestinalis eventually emerged as the dominant species, seemingly outcompeting B. ovatus after the co-culture was propagated for two additional passages. Nonetheless, these observations were made under controlled laboratory conditions with single-nutrient limitations; to reflect natural environments more accurately and obtain more realistic competition outcomes, additional layers of complexity need to be introduced. Bacteroidales enhance their adaptability by inducing specific outer membrane polysaccharide-binding proteins and glycoside hydrolases in response to dietary conditions. In the absence of dietary polysaccharides, they switch to utilizing host mucus polysaccharides, thereby maintaining their stability in the intestinal environment. An often-overlooked resource is genetic material, which can also serve as a nutrient. For example, B. thetaiotaomicron metabolizes ribose via its ribokinase-encoded ribose-utilization system, enhancing its colonization fitness in a diet-specific manner. These effective glycan acquisition strategies provide Bacteroidales with a diet-specific competitive advantage in vivo.

3.1.2. Iron

Iron is an essential element for most organisms, serving as a cofactor for metalloproteins involved in vital cellular processes such as DNA replication. A common strategy for acquiring iron is the secretion of siderophores, molecules that scavenge iron from the environment. Numerous studies have highlighted cross-species competition mediated by siderophores, with variations in siderophore-binding affinities and uptake capabilities influencing competitive dynamics. For example, under iron-limited conditions, strains producing low-affinity siderophores may grow normally, but their growth is inhibited when high-affinity siderophores from other species are introduced. Siderophores, as public goods, can also be utilized by non-producing strains with appropriate siderophore receptors, effectively transferring production costs to the siderophore producers. To date, there is no evidence that Bacteroides produce siderophores.
However, Zhu et al. identified a xenosiderophore utilization system (xusABC) in B. thetaiotaomicron by analyzing its transcriptional response in the cecum of mice during Salmonella infection. This system enables B. thetaiotaomicron to use siderophores produced by members of the Enterobacteriaceae family, enhancing its colonization resilience under the nutritional immunity induced by Salmonella infection or noninfectious colitis ( ). However, the xenosiderophore utilization system is not widespread in Bacteroidales, potentially giving B. thetaiotaomicron a unique competitive advantage during enteric Salmonella infection. Heme, the host's largest iron reservoir, is typically bound to macromolecules such as hemoglobin, and free heme is scarce. Bacteroides and Porphyromonas gingivalis, as heme-deficient bacteria, cannot synthesize protoporphyrin IX de novo and must acquire heme from the environment. Previous studies have identified several hemophores in Porphyromonas gingivalis, such as HusA and HmuY, that are critical for its competitive advantage in vivo. While HmuY homologs have been identified in Bacteroides, deletion of these genes does not affect their in vitro growth. The mechanisms underlying heme transport and competition in Bacteroidales remain largely unexplored.

3.1.3. Other metals

The acquisition of transition metals such as zinc and manganese, alongside iron, is critical for microbial survival. Zinc is predicted to interact with 4–6% of bacterial proteins, playing key roles in gene regulation and cellular metabolism and serving as a cofactor for various virulence factors. Manganese, another essential cofactor, supports numerous bacterial enzymes involved in lipid, protein, and carbohydrate metabolism. Under metal-limited conditions, microorganisms adapt by upregulating high-affinity transporters to import these nutrients. For instance, E. coli primarily uses the low-affinity ZupT transporter for zinc uptake under moderate availability but relies on the high-affinity ZnuACB system under extreme zinc scarcity. The deletion of ZnuA in Campylobacter jejuni significantly impairs intestinal colonization. While homologs of ZupT and ZnuACB exist in Bacteroidales, their specific mechanisms and impact on competitive colonization under metal-limiting conditions remain unclear. Additionally, the T6SS in pathogens such as Yersinia pseudotuberculosis and Burkholderia thailandensis has been shown to mediate exploitative competition by secreting protein-based carriers that facilitate manganese and zinc acquisition. However, whether the T6SS in Bacteroidales performs a similar role remains unexplored.

3.1.4. Corrinoids

Corrinoids, especially vitamin B12, are essential cofactors for methionine synthesis, propionate production, and other metabolic pathways. These compounds significantly influence the structure and function of the human gut microbiota. Because corrinoid synthesis pathways are incomplete or absent in Bacteroidales, these bacteria depend entirely on extracellular transporters for corrinoid acquisition. Redundant corrinoid transporters are a common feature of the gut microbiota. In B. thetaiotaomicron, three distinct vitamin B12 utilization systems have been identified ( ). Among these, BtuB1 and BtuB3 have lower specificity or affinity for cobalamin compared to BtuB2, and each transporter exhibits distinct preferences for various corrinoids.
For example, strains encoding only BtuB1 outperform those encoding only BtuB3 when provided with adeninylcobamide or benzimidazolylcobamide, but not with cobalamin, 2-methyladeninylcobamide, 5-methoxybenzimidazolylcobamide, or 5-methylbenzimidazolylcobamide. The diversity of vitamin B12 transporters in B. thetaiotaomicron is vital for its fitness in vivo, particularly in response to diet and community composition. A dietary deficiency in vitamin B12 drastically increases the reliance on BtuB2 for microbial fitness. For instance, the relative abundance of BtuB2-deficient strains in mice on a vitamin B12-depleted diet is nearly two orders of magnitude lower than in mice on a vitamin B12-enriched diet. Pre-colonization with a Bacteroidales community exacerbates this competitive disadvantage, whereas pre-colonization with Firmicutes and Actinobacteria, which can synthesize sufficient corrinoids, completely mitigates the defect. Corrinoid transport mirrors iron transport, as many gut microbes, including B. thetaiotaomicron, lack the ability to synthesize corrinoids but possess extensive machinery to capture these compounds from other species. Consequently, microbes capable of acquiring diverse corrinoids gain a significant competitive advantage over those with transporters specific to modified molecules.

3.2. Taking and holding the ecological niche

Space occupation is pivotal in microbial competition, encompassing both the colonization of new ecological niches and the prevention of competitor encroachment over the long term. B. fragilis expresses specific mucosal colonization factors, such as the sulfatase BF3086 and the glycosyl hydrolase BF3134, which are upregulated in both mucus and tissue environments. These factors enable B. fragilis to penetrate the mucus layer and inhabit deeper crypt regions. When germ-free mice were colonized with either wild-type B. fragilis or the ∆BF3134 mutant individually, bacterial populations in feces and the colonic lumen remained comparable. However, in co-colonization experiments, the proportion of ∆BF3134 steadily declined, indicating the importance of mucosal colonization factors in occupying new niches. When a bacterial population occupies a favorable ecological niche, it must limit the invasion of competitors to ensure prolonged survival. Prior studies have demonstrated that germ-free mice mono-associated with a single Bacteroidales species are resistant to colonization by the same species but not by different species ( ). This colonization resistance is attained via species-specific nutrients or unique niches. In vivo genetic screening identified a conserved and unique class of PULs called commensal colonization factors (ccf), which are essential for robust mucosal colonization by B. fragilis and prevent reinvasion by the same species ( ). Additionally, the mucosal colonization-defective mutant ∆BF3134 and the capsular polysaccharide-deficient strain ∆PSB/C could not maintain colonization resistance by excluding competitors of the same species.
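Competitive outcomes such as the decline of ∆BF3134 in co-colonization and the roughly hundredfold disadvantage of BtuB2-deficient strains described above are conventionally summarized as a competitive index computed from co-colonization counts. The sketch below shows that standard calculation; the CFU values and strain labels are hypothetical and only illustrate the arithmetic, since this review does not report raw counts.

```python
import math

def competitive_index(mutant_out: float, wt_out: float,
                      mutant_in: float, wt_in: float) -> float:
    """Competitive index (CI): ratio of mutant to wild type in the output,
    normalized by their ratio in the inoculum. CI < 1 means the mutant is
    outcompeted; CI ~ 0.01 corresponds to a two-orders-of-magnitude defect."""
    return (mutant_out / wt_out) / (mutant_in / wt_in)

# Hypothetical CFU/g counts from a 1:1 co-colonization experiment.
ci = competitive_index(mutant_out=1e6, wt_out=1e8,
                       mutant_in=5e7, wt_in=5e7)
print(f"CI = {ci:.2g} ({abs(math.log10(ci)):.1f}-log competitive defect)")
# -> CI = 0.01 (2.0-log competitive defect)
```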
Conclusion and future perspectives The competitive interactions within the gut microbiota are as complex as the microbial communities themselves. Strong natural selection, driven by exploitative competition among different genotypes, is often accompanied by interference competition. Both forms of competition are widespread in bacterial communities and play a significant role in shaping the outcomes of natural selection. This review has summarized recent advances in understanding the competitive interactions among dominant Bacteroidales strains in the gut, focusing primarily on interference and exploitative competition. We have aimed to distill and highlight the intricacies of these interactions to provide a foundation for future research on bacterial interplay within the gut. However, despite these advancements, numerous questions remain unresolved.
While the activities of certain toxins and molecules involved in Bacteroidales competition have been validated in vitro or through simplified in vivo models, much remained unknown about their mechanisms of action and specific roles within the host. This underscores the need for further exploration. Moreover, it is likely that many molecules and mechanisms mediating competition among Bacteroidales are yet to be discovered. Beyond traditional biochemical and genetic screening methods, emerging techniques such as bioinformatics-based mining and artificial intelligence-driven machine learning offer significant promise for the rapid, high-throughput identification of competitive molecular players from Bacteroidales genomes, proteomes, and metabolomes. To fully understand both known and newly identified molecules, several critical questions need addressing: How are antimicrobial molecules secreted, delivered to target bacteria, and recognized by their receptors? What are their intrinsic properties, mechanisms of action, and regulation within the host environment? For nutrient-acquisition molecules, how are they secreted, how do they recognize and bind substrates, and how are these substrates recycled during passive competition? The lack of clarity in these areas hampers a complete understanding of the mechanisms and ecological roles of competitive antagonism mediated by these molecules. In the intricate and densely populated microbial communities, various forms of competitive dynamics emerge, ultimately leading to ecologically stable outcomes. Currently, research on the diverse competitive mechanisms among gut Bacteroidales is predominantly characterized by isolated and unilateral studies. However, gut bacteria, including Bacteroidales, coexist within a complex intestinal ecosystem. By integrating insights from previous studies, it has become evident that the concurrent presence of differing T6SS or diffusible toxin genes within Bacteroidales strains is prevalent ( ). Therefore, further exploration is essential to elucidate how these competitive interactions collectively function under different temporal, spatial, and environmental conditions. Additionally, current studies on Bacteroidales competition often focus on interactions between two species or employ overly simplistic models. Most predictions derived from in vitro experiments have yet to be validated in more physiologically relevant environments, and varying culture conditions may also yield divergent outcomes. Consequently, the conclusions drawn from in vitro studies may not reliably reflect the competition dynamics and ultimate outcomes of Bacteroidales within complex microbial communities. Expanding research to include more complex, multispecies, or community-based models is essential for accurately reflecting the dynamics of these interactions within the intricate intestinal environment. In conclusion, Bacteroidales form a highly dynamic and systematic competitive system that is central to maintaining stability and diversity within the gut microbiota. A deeper understanding of these competitive interactions will shed light on the complex processes underpinning gut microbiome assembly. Such insights could inform the development of therapeutic strategies aimed at sustaining or manipulating these intricate microbial communities.
Pediatric oncologists' perspectives on the use of complementary medicine in pediatric cancer patients in Switzerland: A national survey‐based cross‐sectional study
2b0c9cc5-6a05-4ef8-b436-bcaa3130c43f
9875643
Internal Medicine[mh]
INTRODUCTION The term “complementary medicine” (CM) summarizes therapies which are not part of conventional medicine. Historically, the terms “complementary and alternative medicine” (CAM) were closely linked. “Alternative medicine,” referring to the use of a therapy instead of conventional medicine, is not deemed standard care and will not be discussed in this study as it seems to be inappropriate for pediatric oncologists (POs). Indeed, it should be stated that the use of additional treatment modalities should be complementary to conventional standard of care treatments, and not as an alternative to it. Thus, we will only refer to therapies used as a complement to standard of care treatments. , , , CM seem to have potential benefits, as suggested by several studies and systematic reviews. , , , , , , , , , The inclusion of CM in conventional medicine is a way to offer a more holistic approach to the patient and the family, which leads to the concept of “integrative medicine,” which has been used more and more as it better corresponds to POs' current medical practice. Integrative medicine does not only include physical aspects, but also psychic and spiritual aspects of the human being, regarding them in a holistic way. , CM can be subdivided in four main groups: biochemical therapies (e.g., aromatherapy, dietary complements including antioxidants and vitamins), bioenergetic therapies (e.g., anthroposophic medicine, homeopathy), biomechanical therapies (e.g., chiropractic) and mind‐body based therapies (e.g., hypnosis, music therapy). Previous studies (USA, UK, Germany, and Turkey ) and systematic reviews have emphasized the widespread use of CM by pediatric cancer patients. Worldwide, the prevalence of any CM use in children with cancer (since cancer diagnosis) varies massively, ranging from 6% to 91% in a systematic review. A study showed that although the use of CM appears more frequent in lower income countries with an average prevalence of approximately 77%, higher income countries also showed an important frequency of CM use (average prevalence of approximatively 47%). CM are mostly used by the pediatric cancer patients as a way to increase wellness, but also to ease the symptoms related to chemotherapies, to reinforce the immune system and to improve healing. , , Some CM modalities are widely used in conventional practice, especially in the management of procedural pain (by repeated venous, port, lumbar and bone marrow punctures) or stress and anxiety generated by the side effects of chemotherapies (e.g., hypnosis, music therapy, acupuncture, aromatherapy and others). , , , , In Switzerland, there has been an increasing interest in complementary therapies. A recent study showed that 97% of all pediatricians in Switzerland were asked by their patients about the use of CM, and two thirds of them were interested in further information and training about complementary and integrative medicine (CIM). Today, there is an official recognition for homeopathy, anthroposophic medicine, traditional Chinese medicine, acupuncture, neural therapy and phytotherapy, with structured and continuous postgraduate formation programs. Officially mandated by the Swiss Society of Pediatrics (SSP), the Swiss Interest Group for Integrative Pediatrics (SIGIP) was founded in 2017 with the aim to create a national platform of pediatricians interested in complementary therapies, providing an important expertise on the subject and organizing trainings. 
The experience of colleagues at the University of Bern showed that 53% of their pediatric cancer patients were using CM. The oncologist was not aware of this use in approximately ¼ of cases, and half of the families were expecting more information about CM. More recently, the pediatric oncology team of the University of Lausanne identified a higher use of CM after diagnosis (69.3%) than before diagnosis (54.3%) among their patients, with a marked increase of use of hypnosis during oncologic treatment, likely due to local practice of the medical team to cope with procedural pain. There appears to be a need to improve communication, as only two thirds of patients/parents inform their oncologist about CM use. Internationally, the perspective of POs with regards of CM use of their patients has been studied in a few studies. In the United States, more than 50% of the interviewed POs thought that dietary supplements, herbal medicine, special diets, vitamins, and chiropractic therapy might be harmful to patients. A German study reported that half of the interviewed POs were unable to acquire CM knowledge during medical training and over 70% of them suggested that CM should be an integral part of postgraduate training. A more recent German study highlighted an important need for more information about CM by POs. CM use among pediatric oncology patients in Switzerland has been already investigated and the study revealed an important need for further communication with their POs. There is no study investigating Swiss POs views on their patients' use of CM. The aim of this study is to explore POs' perception of (1) the use of CM among their patients, (2) the communication about CM with their patients, (3) their collaboration with CM specialists/therapists, and (4) their need for further learning on the subject. Furthermore, this study may increase the awareness for this topic and may consecutively stimulate the communication about CM between pediatric cancer patients/families and their physicians, improving pediatric oncology patients' management. METHODS 2.1 Subjects and eligibility criteria A link to an online survey was sent by e‐mail to each local investigator—participating in the design of this cross‐sectional study—from all nine Swiss Pediatric Oncology Group (SPOG) centers (Aarau, Basel, Bellinzona, Bern, Geneva, Lucerne, St Gallen, Lausanne, and Zurich). Each local investigator forwarded it to their eligible local pediatric oncology colleagues and collected the number of potential responses in their center. The survey was available for a total period of 2 months (from 17 June through 17 August 2021). Reminders to the local investigator were sent 1 month after initial survey distribution. The data were anonymous and we did not collect any participant's identifying data. All answered forms were anonymous and were only accessible to the main author. The e‐mails were exclusively exchanged through professional e‐mail addresses which are secured by each hospital's network security system (in general by the HIN security network of Switzerland, providing the best online security in Switzerland). The study data was collected through a survey created using Google Forms, which is Google's online form and survey program with a high level of data security. After publication, copies of the data will be stored on a password‐protected institutional computer and the survey will be deleted. 
Eligible to answer the survey were board-certified (in Switzerland or abroad) pediatric oncologists currently working in a SPOG center, either in a clinical care program for pediatric oncology patients aged 0 to 18 years or in a research program or other work not directly related to patient care. There were no exclusion criteria. A total of fifty-two POs were eligible and received the survey, and twenty-nine of them responded to the survey. 2.2 Questionnaire Roth's questionnaire was adapted to local practice, and changes (mainly with the aim to increase the clarity of the survey) were made based on our local investigators' suggestions. The survey consisted of 27 questions (Supplementary File). First, POs were asked about personal information such as gender, graduation in or outside of Switzerland, number of years of pediatric oncology practice, area of practice, allocated time to clinical and non-clinical practice, and acquired qualifications related to CM. Then they were asked about the use of CM among their patients and their interactions with them concerning CM, for example what percentage of patients is using any kind of CM, how often the oncologist asks patients whether they are using CM, the reasons for not asking, how often the patient asks spontaneously about CM, why POs are not comfortable discussing CM, and how they react when the patient addresses the topic. Also, they were asked about their need for information and about the availability of resources as well as experts concerning CM (pharmacist who assesses potential interactions, CM therapist), and how often they have information exchange with CM specialists concerning patients currently using CM. Questions were included about their perception of CM therapy, such as potential benefits and harms for every kind of CM, and their need for more information and training on every type of CM. 2.3 Statistical analysis Descriptive statistics were generated for all variables. Subgroup analyses were performed with Fisher exact tests to evaluate the relationships among physicians' demographic characteristics and the following variables: communication with the patient/family, referral to a CM specialist/provider, need to do literature search to get information about CM. Statistical analysis and graphs were performed using the software GraphPad Prism 9.2.0 (GraphPad Software Inc., San Diego, California, US). No assessment of risk of bias was performed.
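As a concrete illustration of the subgroup analysis described in Section 2.3: the published analysis was performed in GraphPad Prism, but an equivalent Fisher exact test on a 2 x 2 contingency table can be sketched in Python with SciPy. The counts below are hypothetical and only show the shape of the comparison (a dichotomized physician characteristic versus a dichotomized survey response); they are not the study data.

from scipy.stats import fisher_exact

# Rows: physician subgroup (e.g., graduated in Switzerland vs. abroad);
# columns: asks more than half of patients about CM vs. does not.
table = [[10, 5],
         [7, 7]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")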
RESULTS 3.1 Study population The questionnaire was sent to 52 POs working in Switzerland who confirmed being eligible for the study. We received 29 filled questionnaires (overall response rate 56%). Response rates were 16/18 (89%), 12/33 (36%), and 1/1 (100%) in the French-, German-, and Italian-speaking parts of Switzerland, respectively. All questionnaires were fully completed, and none was excluded. The study population was assessed by a series of demographic questions. Respondents' demographic data are reported in Table . 3.2 Communication of pediatric oncologists with the patients/families regarding CM We analyzed communication between POs and their patients about CM (Table ).
Most POs (59%) do ask to more than half of their patients about CM in general, and particularly about biochemical therapies such as dietary supplements and special diets (55%), and mind‐body based therapies such as hypnosis, meditation, and music therapy (38%). POs ask less frequently about the use of other subgroups of CM. Twenty‐one percent of POs ask less than 25% of their patients about the use of CM. The main reasons why POs do not ask their patients about CM are forgetting to ask (55%), lack of knowledge on the subject (31%) and lack of time (24%), as illustrated in Supplementary Table . More than half of POs (55%) reported feeling uncomfortable discussing CM therapies because of a lack of knowledge and education on the topic, and almost half of POs (48%) reported that they had concern about potential harmful side effects of CM. Fewer (17%) responded that they were unaware of local providers. Less than half of POs (38%) reported feeling completely comfortable talking about CM (Figure ). Most respondents (83% to 97%) reported that less than 50% of their patients and families had initiated a conversation about the different subgroups of CM. The main reaction of POs uncomfortable when asked about CM by their patients and families were to admit that they were not knowledgeable on the subject (62%) and referring them to a CM specialist or asking a pharmacist (59%), as shown in Supplementary Table . Overall, all but one POs are open to discuss about CM with patients with a good prognosis and all of POs are open to discuss the subject with patients with poor prognosis. 3.3 Physicians estimates about the use of CM We analyzed the estimates of POs about use of CM among their patients/families (Table ). For all subgroups of CM, most of the POs estimated that up to 75% of their patients were using CM on a regular basis. Sixty‐nine percent of POs estimated that at least 10% of their patients used biochemical therapies (e.g., aromatherapy, antioxidants, dietary complements, melatonin) on a regular basis, 62% of POs believed that more than 10% of their patients regularly used bioenergetic therapies (e.g., acupuncture, anthroposophic medicine, homeopathy) and mind‐body based therapies (e.g., hypnosis, meditation, music therapy). 3.4 Referral to a CM specialist/therapist We investigated referral rates of patients to a CM specialist/therapist. Referral rates appears to be relatively low, as most POs refer occasionally their patients to a CM specialist/therapist (16/29; 55%—Figure ). Patients referred to a CM specialist by POs are mainly referred to hypnotherapist and massage therapist (69%), followed by homeopathic practitioner (55%), osteopath and anthroposophic therapist (52%), and acupuncturist (48%). Most of the time, the patient is referred to a CM specialist on demand of the family (63%–79%). Patients known to be interested in CM are not referred in approximately one third of cases for massage therapy and hypnosis (31%), half of cases for homeopathy (45%), anthroposophic medicine and osteopathy (48%), acupuncture (52%) and in lesser cases for chiropractic and dietary specialist (59%) and yoga (72%). Referral rates of patients to each CM specialists are detailed in Supplementary Figure . Most respondents (90%) do have a pharmacist who is able to assess potential medical interactions with CM treatment. 
More than two third (66%–93% depending on the type of CM) of POs do not have a CM specialist available in their network for the following therapies: aromatherapy, antioxidant, black seed oil, curcuma, enzymes, pre and probiotics, Ayurveda, magnets, Reiki, cranio‐sacral therapy, guided imagery, horse‐riding therapy, and martial arts. Nevertheless, more than half of POs (52%–83% depending on the type of CM) do have an in‐house or external specialist for the following therapies: hypnosis, music therapy, massage therapy, melatonin, cannabinoids, homeopathy special diet, and acupuncture. Details are shown in Supplementary Table . More than half of POs (52%) consider their communication with CM specialist as good. 3.5 Estimations of risks and benefits of CM by the physicians POs' estimations of risks and benefits of CM are shown in Table . POs consider that CM may be efficient in improving the quality of life or specific symptoms for pediatric oncology patients in a curative treatment setting, as for cannabinoids/music therapy/relaxation (93%), massage therapy (90%), hypnosis/meditation (79%), and melatonin (76%). There is no clear overall certainty of ineffective CM, but there is a mixed opinion for all the other CM, with a more evident uncertainty about some CM such as antioxidants, black seed oil, curcuma, enzymes, Ayurveda, Reiki, cranio‐sacral therapy, and guided imagery (>50% of “don't know” responses). Although most of the CM do not seem harmful for more than half of respondents such as all mind‐body based therapies (41%–93%; average 77%), massage therapy (76%), melatonin (69%), homeopathy (66%), aromatherapy/cannabinoids (59%), herbal medicine is considered to be potentially harmful to the patients for 59% of respondents. There is also an important uncertainty about potential harmful effects of some CM such as black seed oil, Ayurveda, Reiki, and guided imagery (>50% of “don't know” responses). All POs indicated that it is important for them to know about CM therapies their patients are using, in order to prevent potential harmful drug interactions (100%), improve trust between physicians and patients, improve patient adherence to medical therapy (86%) or improve patient satisfaction with their medical therapy (79%). 3.6 Interest of POs for further information and training Interest for more information and training on CM was assessed. It appears that many POs (72%) do some literature searches to get information about side effects and interaction related with CM at least from time to time and nearly half of POs do it often/very often (45% of POs). Most POs (66% to 76%) are interested in learning more about the following CM: cannabinoids, hypnosis and relaxation, music therapy, herbal medicine, acupuncture, meditation, and yoga. Many POs (62% to 79%) are not interested in further information about the following CM: magnets, enzymes, black seed oil, Reiki, chiropractic and cranio‐sacral therapy. There is, in general, more interest in mind‐body therapies (41%–76%; average 62%) than in other subgroups of CM (21%–76%; average 49%) (Table ). As shown in Figure , a large majority of POs (76%–97%; average 84%) respond that information or training opportunities on the use of CM for treating symptoms of cancer or side effects of anti‐tumoral therapy in pediatric oncology patients would be important for their clinical work, except for radiation‐induced dermatitis (52%). 
More than half of POs (52%–59%) think it would be very important for specific symptoms such as nausea and vomiting, pain, loss of appetite, changes in taste, weakness, sleep disorders, and psychological disorders (Figure ; Supplementary Table ). 3.7 Subgroup analysis Subgroup analysis did not show any difference between physicians' demographic characteristics and the following variables: communication with the patient/family, referral to a CM specialist/provider, need to do literature research to get information about CM (data not shown).
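A note on precision that is not part of the published analysis: with 29 respondents, the percentages reported above carry wide sampling uncertainty. The hedged Python sketch below shows how 95% Wilson confidence intervals could be attached to two of the figures, the 56% response rate (29/52) and the 59% of POs who ask more than half of their patients about CM (taken here as 17/29, a count inferred from the reported percentage).

from statsmodels.stats.proportion import proportion_confint

# Counts are taken or inferred from the percentages reported in the Results.
for label, count, n in [("overall response rate", 29, 52),
                        ("POs asking >50% of patients about CM", 17, 29)]:
    low, high = proportion_confint(count, n, alpha=0.05, method="wilson")
    print(f"{label}: {count}/{n} = {count/n:.0%} (95% CI {low:.0%} to {high:.0%})")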
DISCUSSION In Switzerland, CM is often used and considered as an important subject by POs. , , , , Our survey shows that POs in Switzerland are generally aware that many of their cancer patients use CM regularly and that they are concerned about potential harmful side effects of CM. All of them indicated that it is important to know which CM therapies their patients are using. The communication of POs with their patients and families about CM seems to be incomplete, as the topic is not addressed systematically by all POs. Indeed, 59% of POs ask more than half of their patients about CM. This rate is comparable to the frequency described in Roth's study (50%) for POs in the US. The reasons why Swiss POs do not ask all the time are mostly related to a prioritization of their patients' problems, as they often forget or do not have enough time in their schedule to discuss the topic, but also seem to be related to a lack of knowledge on the subject. The topic is relatively infrequently raised actively by the patients and families, as reported by most POs (less than 50% of patients according to 83% to 97% of respondents), probably because they think that their PO may not be knowledgeable on the subject, or because they might fear their PO's reaction. Indeed, POs' most frequent answer to patients asking about CM is to admit not being knowledgeable on the subject (62%). Only a few of them seem to react negatively by convincing the patient not to use CM (14%). In a recent Swiss study focusing on CM use in pediatric oncology patients, only 38% of all respondents stated that they had discussed CM with their POs, and that the discussion was initiated by one of their parents in 87% of cases, which is in disagreement with the POs' perspective described in our study. These observations highlight a desire for more communication between patients and POs about CM.
The same study reported a substantial concern about a negative reaction from POs, preventing some patients to discuss about CM with them. In our study, POs are generally open to discuss CM with both good and poor prognosis pediatric oncology patients. Based on our data, we are not able to evaluate the effect of CM on prognosis. However, we assume that discussing about CM could improve the patient's well‐being, allow the family to support the child in an active and medically safe manner and enhance compliance to the conventional therapy. In order to evaluate the effect of CM on the prognosis and outcome of pediatric oncology patients undergoing CM in addition to conventional treatment versus conventional treatment alone, further studies should be performed. Very few POs in Switzerland are trained for CIM, as only one respondent of our study has an additional CIM‐related qualification (hypnosis). POs are aware of their lack of knowledge and training on CIM. One third of them indicated that their lack of knowledge prevents them from asking their patients about the use of CM, and more than half of them are uncomfortable talking about CM with their patients and families. Several POs consider that some CM ‐ including music therapy, relaxation, massage therapy, hypnosis, meditation, cannabinoids and melatonin ‐ could improve the quality of life and specific symptoms for their patients. There is an important uncertainty among POs on potential risks or benefits of specific CM. This leads to an important need for further learning and training about CIM, especially for mind‐body therapies, cannabinoids, herbal medicine and acupuncture (66% to 76% of POs), for their potential to ease treatment‐ or disease‐induced complaints. The most important application areas of CIM for POs appears to be nausea/vomiting, lack of appetite, pain, fatigue, sleep disorders and psychological disorders. This is in agreement with findings of a previous study performed in Germany. In Switzerland, there is no systematic CIM training program during POs' formation. This highlights the importance of the Swiss Interest Group for Integrative Pediatrics (SIGIP), whose members offer training programs on CIM. Our survey shows that the collaboration with CIM specialist is not yet very well established. This is surprising, because POs' lack of knowledge on CIM should lead to a high referral rate to CIM specialists/therapist but paradoxically, referral rate is low with almost two third of POs occasionally or never referring their patients. The main reasons for this observation appear to be the availability of a pharmacist in the network assessing for medical interaction as well as the lack of CIM specialists in their network. This study is potentially limited by a demographic respondent bias. Despite a relatively good overall response rate (56%), there is a lower response rate in the German‐speaking part of Switzerland (36%). Furthermore, it is likely that POs who responded to the survey were more interested by CIM than non‐respondents, with a higher interest to learn more about CIM. This hypothesis is supported by a recent paper investigating the attitudes of healthcare coworkers towards CM in Turkey. The cross‐sectional study using a survey showed an impressive response rate (83%) with 794 healthcare coworkers completing the survey. Of interest was the more negative attitude towards CM of physicians when compared to other healthcare professions. 
The risk for bias was not assessed although it does exist, as we received only approximately half of the potential answers. As the survey was answered anonymously, we are unable to compare the information of respondents ( n = 29) and non‐respondents ( n = 23). In addition, we emphasize that this study is also limited to a small group of participants and the results of this pilot observation should be treated with caution. In summary, there is a need to increase communication and interaction between patients/families and POs with regards to CM. It appears to be reasonable to implement a systematic CIM training program for POs. This may improve care provided to pediatric cancer patients in Switzerland by offering them a more holistic and individual approach of care, by limiting potential harms caused by an inappropriate use of CM, and by improving the trust‐based relationship between the medical team/physician and the family/patient. Léopold Pirson: Investigation (equal); writing – original draft (equal). Sonja Lüer: Investigation (supporting); writing – review and editing (supporting). Manuel Diezi: Investigation (supporting); writing – review and editing (supporting). Sabine Kroiss: Investigation (supporting); writing – review and editing (supporting). Pierluigi Brazzola: Investigation (supporting); writing – review and editing (supporting). Freimut Schilling: Investigation (supporting); writing – review and editing (supporting). Nicolas von der Weid: Investigation (supporting); writing – review and editing (supporting). Katrin Scheinemann: Investigation (supporting); writing – review and editing (supporting). Jeanette Greiner: Investigation (supporting); writing – review and editing (supporting). Tycho Jan Zuzak: Writing – original draft (equal); writing – review and editing (equal). Andre von Bueren: Investigation (lead); supervision (lead); writing – original draft (equal). The authors have stated explicitly that there are no conflicts of interest in connection with this article. This study was considered as falling outside of the scope of the Swiss legislation regulating research on human subjects, so that the need for local ethics committee approval was waived (confirmed by the local ethics committee; Req‐2021‐01340). Completion of the electronic survey was viewed as consent to participate and to use the anonymous responses in our analysis and publications. Supplementary File. Survey consisting of 27 questions used for the study. Supplementary Table 1. Reasons (more than one answer possible) POs do not ask their patients about CM ( n = 29). Supplementary Table 2. Responses (more than one answer possible) to patients asking about CM that the PO is not comfortable discussing ( n = 29). Supplementary Table 3. Percentage of POs having access to an available CM specialist in their network ( n = 29). Supplementary Table 4. Importance for practice of information and training on the use of CM for specific symptoms or side‐effects ( n = 29). Supplementary Figure 1. Number of POs referring to specific CM providers.
Post-mortem genetic testing in sudden cardiac death and genetic screening of relatives at risk: lessons learned from a Czech pilot multidisciplinary study
0b295bd0-b51c-420e-ab2f-7a7ef4ed0853
10567875
Forensic Medicine[mh]
Sudden unexplained death (SUD) is defined as an unexplained, unexpected sudden death occurring in an individual older than 1 year. The main cause of SUD is sudden cardiac death (SCD), which is defined as death occurring within an hour of the onset of symptoms if witnessed, or within 24 h from the moment when the decedent was last observed alive without symptoms if unwitnessed . The global SCD incidence has been estimated at approximately 4–5 million cases per year . Approximately 1 to 3 per 100,000 individuals younger than 35 years die suddenly or unexpectedly every year . Coronary artery disease (CAD) is responsible for 80% of SCD cases, mainly in the older population. Nevertheless, inherited cardiac conditions, including familial hyperlipoproteinemia causing premature CAD, remain a common cause of SCD up to 50 years of age, also in the Czech Republic . Some SCD cases may have a genetic background, mostly with an autosomal dominant pattern (50% transmission probability regardless of gender), and first-degree relatives then carry a significant risk of developing the same disease with the associated risk of cardiac arrest . Therefore, post-mortem genetic testing, together with cardiac screening of first-degree relatives, is recommended by European guidelines [ – ]. The scope of examination of survivors at risk was defined in a document of the World Organisation for Heart Rhythm Disorders (APHRS/HRS) and elsewhere [ , , ]. Recommended autopsy procedures have been developed within the Association for European Cardiovascular Pathology (AECVP), which aim to standardize the autopsy procedure and diagnostics, including the spectrum of additional laboratory tests in SCD . According to the autopsy results and based on macroscopic and microscopic findings, the categories of SCD are defined internationally in terms of cardiomyopathy (CM), sudden arrhythmic death syndrome (SADS), and sudden unexplained death in individuals younger or older than 1 year (sudden unexplained death syndrome (SUDS) or sudden unexplained death in infancy (SUDI)). Sudden unexplained death in epilepsy (SUDEP) is mentioned separately, as epilepsy may be an incorrect diagnosis for unconsciousness due to sustained ventricular arrhythmias, or some epilepsies may be a form of both cerebral and cardiac channelopathies . The AECVP best practices further define cases in which post-mortem genetic testing, sometimes referred to as molecular autopsy, should be performed to pinpoint the cause of SCD and enable the associated primary prevention of cardiac arrest in relatives . Post-mortem genetic testing of the deceased should be followed or, under ideal conditions, accompanied by clinical genetic counselling and cardiological screening of first-degree relatives . Determining the causes of SCD therefore represents a multidisciplinary process involving autopsy physicians, clinical geneticists, molecular geneticists, cardiologists for children and adults, a psychologist, neurologist, lipidologist, general practitioner, and other specialists according to the individual needs of each case . Given the complexity of these issues, this type of diagnostics is concentrated in tertiary care centers. In the following text, we present the results of a multicenter and multidisciplinary study of cases of sudden cardiac death in the Czech Republic in the years 2016–2021, which was financed by a grant from the Ministry of Health of the Czech Republic with registration number NV18-02–00,237. The aim of the project was to identify a representative set of SCD cases.
Subsequently, based on the interest of relatives and after obtaining the informed consent of persons close to the deceased, the aim was to determine the molecular causes of sudden cardiac death and to evaluate the outcomes and impacts of this examination on the care of first-degree relatives for primary prevention of life-threatening heart rhythm disorders. This multidisciplinary and multicenter study was approved by the ethics committees of the Institute of Clinical and Experimental Medicine, the University Hospital Motol, and all participating forensic institutions. Consent for post-mortem testing of all SCD cases included in the study was provided by close family members. Study cohort From 2016 to 2021, we studied a cohort of 100 unrelated SCD victims and their families. Forensic departments directly reported 61/100 cases suspected of dying from cardiovascular diseases, 19/100 cases were included based on family cardiologist recommendations, and 20/100 cases were added to the study based on family request. Forensic autopsy was performed in all included cases. Cases with a non-cardiovascular cause of death, lethal medications/toxins, age less than 1 year, where families declined participation in the study, and/or with CAD different from familial thoracic aortic aneurysm and dissection were excluded from the cohort. The study cohort flow is represented in Fig. . SCD victims aged between 1 and 59 years were included in the study. Clinical data on the circumstances of death, health status, and family history were recorded (Table ). Family testing was performed on 301 relatives of SCD victims. In the 37/100 families where a variant of interest was detected, genetic testing in relatives at risk was performed. In families with a negative genetic test (63/100), only cardiological screening was performed (Fig. ). Autopsy evaluation Autopsies of all SCD cases were performed at 12 different Czech Forensic Medicine Institutes from 8/13 regions of the Czech Republic. The post-mortem diagnosis was established by forensic autopsy, which included macroscopic and microscopic examination of the heart and blood vessels. All autopsies of the deceased were performed according to valid recommendations for the autopsy procedure in the Czech Republic (Act No. 372/2011 Coll., on Health Services). In the course of the project cooperation, an expert group consisting of forensic pathologists, cardiologists, and cardiogeneticists was established in order to create Czech national autopsy guidelines based on European recommendations . These are now in the approval process. After forensic cardiac/aortic autopsy, cases were categorized into four major groups based on the 2020 APHRS/HRS expert consensus statement : (i) cardiomyopathies, cases with a confirmed structural heart diagnosis; (ii) sudden arrhythmic death syndrome, unclear cause of death in an individual over 1 year of age with a negative pathological autopsy, i.e., without macroscopic, microscopic/necropsy, and toxicological findings; (iii) sudden unexplained death syndrome, unclear cause of death in an individual older than 1 year, when there are non-specific structural changes of the heart that do not meet the criteria for cardiomyopathy or arrhythmic syndrome, or necropsy was not performed; and (iv) sudden thoracic aortic death, cases with a confirmed diagnosis of aortic dissection leading to death. Characteristics of the individual groups are described in Supplementary Table . Genetic testing DNA testing for all samples was performed at the Genetic Department of University Hospital Motol.
DNA post-mortem samples were obtained from tissue rich in nucleated cells collected during autopsy (i.e., spleen, lymph nodes, or liver). Prior to DNA isolation, tissues were stored either in RNAlater solution (fresh tissue), frozen at − 20 °C or − 80 °C, or as formalin-fixed paraffin-embedded tissue. DNA samples for genetic testing in living family members were obtained from peripheral blood samples stored with K3EDTA. Genomic DNA was extracted from tissue and/or blood samples using an automated nucleic acid extractor, MagCore HF16 Plus (RBC Bioscience, Taiwan). DNA was quantified using the NanoDrop 2000 spectrophotometer (Thermo Scientific, USA) and Qubit 2.0 (Invitrogen, USA) according to the manufacturer's instructions. Next-generation sequencing (NGS) library preparation in all SCD cases was performed using a broad custom-made panel comprising 100 cardiac/aortic condition-related genes (Sophia Genetics, Switzerland). The full list of genes included in the custom-made panel is available in Supplementary Table . DNA libraries were sequenced by NGS with paired-end reads (2 × 150 bp cycles) on MiniSeq/MiSeq/NextSeq/NovaSeq platforms (Illumina, USA). The NGS sequencing conditions used gave high coverage of all regions of interest, allowing for copy number variation analysis in all genes. In 24 negative cases, whole exome sequencing was performed. In 10/24 of these cases, an expanded analysis of two or three affected family members was performed due to a family history of SCD. All variants of interest were validated by Sanger DNA sequencing, and cascade family screening was performed. In the case of poor-quality DNA samples, NGS analysis was performed on clearly affected relatives in 24/100 cases or on healthy parents in 10/100 cases (Fig. ). When a variant of interest was found by indirect DNA testing, it was confirmed by Sanger sequencing in the deceased. Data analysis NGS sequencing data were processed and analyzed by the Genome Analysis Toolkit pipeline from the Broad Institute (USA). Variant calling was based on the human genome reference GRCh37/hg19. Variant prioritization was performed by Sophia DDM software supported by Integrative Genomics Viewer (Broad Institute), Alamut® Visual (Interactive Biosoftware), and VarSome Clinical software. Variant prioritization was carried out based on the presence and frequency of the variant in the general population (gnomAD, dbSNP databases), presence in clinical databases (ClinVar, Human Gene Mutation Database), interspecies conservation of the residue, coherence and familial cosegregation with the phenotype, and in silico predictions using bioinformatics tools integrated in VarSome Clinical software (DANN, DEOGEN2, EIGEN, FATHMM-MKL, M-CAP, MVP, MutationAssessor, MutationTaster, PrimateAI, REVEL, PolyPhen, and SIFT). Variants with a read depth < 10 ×, synonymous and intronic variants in non-splice regions, and variants with a minor allele frequency higher than expected for the disease were excluded . The pathogenicity of the detected variants was classified into 5 categories according to the evidence criteria proposed by the American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG/AMP) guidelines . P/LP variants are classified as class 4 and 5, VUS corresponds to class 3, and a common risk factor belongs to class 2.
We have included an extra group of VUS*—VUS of interest—which are rare genetic variants of unknown significance located in known inherited cardiac/aortic condition genes with a high probability of being the disease cause based on current knowledge and molecular/clinical geneticists' experience, but lacking more substantial evidence such as functional studies and/or larger segregation studies.
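As a rough illustration of the prioritization and classification rules just described (read depth, variant consequence, population frequency, and ACMG/AMP class), the following Python sketch encodes them as simple filters. It is not the study's actual pipeline; all field names and thresholds are illustrative assumptions.

# Illustrative sketch only; field names and thresholds are assumed, not taken from the study pipeline.
def prioritize_variants(variants, max_af=0.001, min_depth=10):
    """Keep rare, well-covered, potentially relevant variants."""
    kept = []
    for v in variants:
        if v["read_depth"] < min_depth:                 # exclude low-coverage calls
            continue
        if v["consequence"] == "synonymous":            # exclude synonymous variants
            continue
        if v["consequence"] == "intronic" and not v["near_splice_site"]:
            continue                                    # intronic variants outside splice regions
        if v["population_af"] > max_af:                 # more frequent than expected for the disease
            continue
        kept.append(v)
    return kept

def acmg_label(acmg_class, vus_of_interest=False):
    """Map a numeric ACMG/AMP class to the labels used in this text."""
    if acmg_class in (4, 5):
        return "P/LP"                                   # pathogenic / likely pathogenic
    if acmg_class == 3:
        return "VUS*" if vus_of_interest else "VUS"
    if acmg_class == 2:
        return "risk factor / likely benign"
    return "benign"

# Example with a single mocked-up variant record:
demo = [{"read_depth": 54, "consequence": "missense",
         "near_splice_site": False, "population_af": 0.0, "gene": "TTN"}]
print(prioritize_variants(demo))
print(acmg_label(3, vus_of_interest=True))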
Demographic characteristics of the study cohort In total, we have performed molecular autopsy in 100 unrelated SCD cases.
The demographic characteristics of the study group are described in Table . SCD cases were represented mainly by males (71/100, 71.0%), with a mean age at death of 33.3 (12.8) years and a range of 1 to 59 years. The cardiac event most often occurred while sleeping (35/100, 35.0%) and at home (71/100, 71.0%). A family history of inherited cardiovascular conditions and/or sudden death was reported in 46/100 (46.0%) of cases (Table ). Molecular autopsy results We performed molecular autopsy by next-generation sequencing. The diagnostic yield in this study was 22/100 (22.0%). From these 22 diagnosed cases, 23 P/LP variants were identified, since we detected 2 P/LP variants in one case of cardiomyopathy (Tables and ). Two variants arose de novo, and 21 were inherited from the parents. We identified VUS* in 10/100 (10.0%) of cases, in genes known to be disease-causing but lacking strong evidence to be in the P/LP category (Table ). The autopsy group with the highest P/LP variant detection rate was the SAD group, 3/8 (37.5%), followed by CM, 13/49 (26.5%) (Table ). Within the CM group, the highest diagnostic genetic yield was from cases related to DCM, 6/14 (42.9%). The lowest P/LP variant detection rate observed was from the post-mortem arrhythmogenic cardiomyopathy (ACM) group, 2/22 (9.1%) (Table ). The most frequently altered gene detected was TTN , and variants in this gene were from cases diagnosed as DCM, LVNC, and SUDS, accounting for 6/23 (26.1%) of the P/LP variants detected (Fig. ). We have also identified P/LP variants in HCM phenocopy genes such as GLA related to Fabry disease and FHL1 related to X-linked muscular dystrophy (Table ). No pathogenic copy number variations (CNV) were detected. In 10 clearly familial SCD cases with negative results on targeted panels, an expanded analysis with other affected family members was performed (whole exome sequencing, Sophia Genetics, Switzerland) without yielding new P/LP DNA variants. We have also identified a known DNA risk factor variant (class 2, likely benign) in the potassium channel gene: NM_000219.5(KCNE1): c.253G > A p.(Asp85Asn). This variant is not rare in the normal population (total MAF: 0.009324, non-Finnish European MAF: 0.01223, gnomAD) and is a risk allele for drug-induced long QT syndrome type 5, with a mild course and incomplete penetrance . A detailed overview of the genetic findings is given in Table . The highest P/LP variant detection rate, 8 of 36 (22.2%), was identified in the age group of 31 to 40 years at death (Fig. ). Direct and indirect DNA testing Due to poor-quality DNA, in 34/100 (34.0%) SCD cases, NGS analysis was indirectly performed on a clearly affected relative or on both healthy parents (Fig. ). This was due to the unavailability of material other than formalin-fixed and paraffin-embedded tissue and, in several cases, tissue autolysis prior to DNA extraction. At the time of this study, EDTA blood sampling was not routinely performed at autopsy by forensic specialists. Indirect testing in affected family members had a diagnostic yield of 11/24 (45.8%), while genetic testing performed in healthy parents reached a diagnostic yield of 1/10 (10.0%) (Figs. and ). In cases with a family history of SCD and/or inherited cardiac condition (ICC), we identified 18/46 (39.0%) P/LP variants, while in SCD cases without family history, we identified 5/54 (9.0%) causative variants (Fig. , Table ).
Family screening A total of 301 relatives from 100 families were examined, of whom 87/301 (28.9%) had a positive cardiological phenotype and/or a positive genotype (Fig. ). Genetic screening for identified DNA variants (P/LP, VUS*, and RF variants) was performed in 37 families, in 131 relatives. A positive genetic finding was found in 60/131 (45.8%) of relatives. Through cardiological family screening, we uncovered 70 affected individuals with an ICC (phenotype-positive family members); 33/70 (47.1%) were already in cardiological treatment before the SCD of their first-degree relative, and 37/70 (52.9%) were newly diagnosed through our family cascade screening (Fig. ).
We present the results of an unprecedented post-mortem genetic study performed in the Czech Republic. Our study substantially contributed to establishing a multidisciplinary collaboration on a national level. Through post-mortem molecular genetic analysis, we identified pathogenic/likely pathogenic (P/LP) variants following ACMG/AMP recommendations in 22/100 (22.0%) of cases. Cardiological and genetic screening disclosed 83/301 (27.6%) relatives at risk of SCD. Genetic testing in affected relatives as starting material leads to a high diagnostic yield (45.8%) and offers a valuable alternative when suitable material is not available, as in our study. The vast majority of Czech families (122/135, 90.4%) are interested in investigating the causes of death of their relatives and in preventive cardiological care, as documented (Fig. ). Most of the SCD cases were males (71.0%), corresponding to known gender differences in the severity of cardiomyopathies . The higher incidence of SCD in males has been previously reported . Most sudden deaths occurred at home and during daily routine activity or sleep, consistent with international studies . Only 7% of deaths occurred during vigorous sport or physical activity. Sudden death in an athlete was not reported to us during the study period (Table ). The mean age of the studied cases was less than 40 years (33.3 years), while the males with a post-mortem diagnosis of SADS were the youngest (23.6 years) (Table ), as described elsewhere . The overall genetic yield of 22.0% observed in our study is consistent with published international studies [ – ]. Genetic testing findings highly correlate with the autopsy diagnosis in all groups. Thus, in general, genetic testing can be expected to identify clearly inherited conditions in about 1/5 of SCD cases. The yield and spectrum of detected variants in our SCD study correspond to those observed in other European cohorts [ , – ]. The only surprising result was the low yield in the ACM group, although all ACM-related genes were tested (Supplementary Table ).
The European autopsy guidelines are not yet fully adopted in the Czech Republic, and the autopsy diagnosis of ACM may not be correct in some cases. Thanks to the ongoing cooperation and multidisciplinary communication, we believe this burden will be overcome soon. The extended NGS panels did not bring a higher diagnostic yield even in clearly familial cases, consistent with other studies . These results of genetic yield in our cohort are crucial for communication with family members so that their expectations for this type of testing are realistic. We have confirmed a marked increase in the genetic diagnostic yield (P/LP) (39%) in cases with a family history of SCD and/or ICC, whereas in cases without family history, the diagnostic yield was only 9% (Fig. ). The effect of a positive FH is reflected in the high diagnostic yield (i.e., P/LP variants only) obtained through genetic testing in affected family members in cases where quality DNA was not available for testing, which reaches 45.8% (Figs. and ). This shows the importance of detailed cardiological screening in surviving relatives, which brings the possibility of genetic testing in affected individuals. We did not find copy-number variants in our SCD cohort, but Sophia Genetics enables their detection, and we have indeed detected them in cases outside this study. The frequent titin ( TTN ) pathogenic findings reflect the known high frequency in familial heart failure and its arrhythmogenic potential and highly correlate with the autopsy findings (Table , Fig. ) . Titin is a giant myofilament that extends from the Z-disk (N-terminus) to the M-band (C-terminus) region of the sarcomere and is now recognized as a major human disease gene. Many titin mutations are linked to cardiomyopathies and neuromuscular diseases . In our study, we identified truncating variants in the filamin C gene ( FLNC ) as a certain molecular cause in three male individuals from the DCM, ACM, and SUDS groups. FLNC is the gene encoding filamin C, an actin cross-linking protein that plays a central role in the assembly and organization of sarcomeres. The gene is widely expressed in cardiac and skeletal muscles, and mutations in FLNC have been associated with skeletal myopathy, as well as hypertrophic, restrictive, and dilated cardiomyopathy . The heterogeneous autopsy findings correlate with the described clinical manifestations of the FLNC gene and also show that arrhythmic complications may precede the development of clear structural changes in the heart muscle [ – ]. In this study, one family with a P/LP variant in the FLNC gene requested to be included in an assisted reproduction and preimplantation diagnosis program for primary prevention of the disease in offspring [ – ]. The inclusion of HCM-phenocopy genes in the NGS genetic panel (Supplementary Table ) allowed the identification of P/LP variants in the GLA , PRKAG2 , and FHL1 genes, increasing the overall genetic diagnostic detection rate. The finding of a common RF variant in the potassium channel gene KCNE1 associated with the hereditary arrhythmia syndrome LQT5 lite is difficult to interpret in the deceased. Based on the available literature, we did not identify it as a clear molecular cause of sudden death [ – ]. In 4 out of 5 families, the LQT5 lite variant segregated along with the cardiological phenotype (QTc prolongation) and complaints. Nevertheless, we communicated the finding to the families and recommended appropriate medical treatment and lifestyle measures .
Detected variants of unknown significance (VUS) are a challenge for clinical interpretation . Nevertheless, we decided to assign the class of VUS* variants (Table ). These variants are rare in the population and located in genes related to inherited cardiovascular diseases, prediction software and practical knowledge support a pathogenic role, and they segregate with the phenotype within the family. In these families, we informed the relatives about the finding and offered all of them cardiological follow-up every 3–5 years; moreover, VUS* carriers are included in more intensive preventive diagnostic programs. We see a great advantage in the centralization of molecular genetic testing, whereby we have ensured uniform methodological procedures and, finally, the assessment of identified DNA variants with the standardized assignment of the corresponding diagnostic criteria according to the ACMG/AMP recommendations, which may otherwise differ among laboratories in molecular genetic practice. Our results lead to primary prevention of SCD in almost 1/3 of relatives at risk, most of them of productive age. In families with a detected genetic variant (P/LP, VUS*, RF), the proportion of relatives at potential risk is even higher (45.8%) (Fig. ). Half of the affected family relatives had already been treated for an ICC; genetic testing thus contributed to their accurate diagnosis. Nevertheless, the other family members were not aware of their health condition and the associated risk of SCD. These findings should contribute to greater awareness among caring professionals of the possibility of an underlying genetic background (i.e., familial disease) in heart diseases and encourage them to offer family cascade screening. Our findings further document that the genetic yield in familial cases is even higher than in the general cohort. Molecular genetic analysis in affected relatives may be used for molecular autopsy if the material from the SCD victims is not of good quality. Nevertheless, screening in genetically negative cases identified 15.9% phenotype-positive relatives, and so clinical examination in this group should be recommended. Our project has significantly improved the communication and collaboration among clinical genetics, molecular genetics, cardiology, and forensic medicine centers in several regions of the Czech Republic. Based on this study, multidisciplinary teams are now being created in most national tertiary medical centers. The study founded the nationwide registry of SCD cases. Our project initiated the creation of Czech national guidelines on autopsy in the case of SCD, which are now in the approval process. Post-mortem genetic analysis in the Czech Republic is feasible and of interest to the included professionals and affected families. The diagnostic genetic yield corresponds to other international cohorts. Family cascade screening should be offered to surviving relatives, as recommended elsewhere. In clearly clinically familial cases, the genetic yield is expected to be higher than in sporadic cases. Awareness of the possibility of familial disease should be increased among caring cardiologists, who should offer family cascade screening in the families of patients with unexplained heart failure and/or ventricular arrhythmias. This established the basis for organized post-mortem analysis with the necessary multidisciplinary teams at national and regional tertiary centers.
The main limitation was the difference in the description of the autopsy findings among referring forensic centers, since in the Czech Republic there were no established standard guidelines for performing the autopsy in suspected SCD cases. We dealt with imprecisely defined post-mortem diagnoses and a lack of material sufficient for molecular genetic testing. The post-mortem macroscopic and microscopic findings were often individually discussed with the forensic specialists, and their classification into the internationally acknowledged categories (SADS, SUDS, or CM) was assigned later, after tedious discussion and long intervals after death. Another challenge we encountered is the low diagnostic yield of 9.1% observed in cases of arrhythmogenic cardiomyopathy, highly discordant from studies of living adult patients with a reported diagnostic yield of 30–60% . This might be due to post-mortem overdiagnosis of ACM in cases presenting a fatty heart. In our experience, obtaining data on body composition from an autopsy case is highly difficult and might be considered when an arrhythmogenic cardiomyopathy case is suspected. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 11 KB) Supplementary file2 (DOCX 12 KB) Supplementary file3 (DOCX 14 KB)
Ultrasound-based artificial intelligence in gastroenterology and hepatology
d6232b0c-feda-4a3b-a933-673929eae21d
9594013
Internal Medicine[mh]
Liver disease causes two million deaths per year worldwide; cirrhosis is the 11th leading cause of death in the world, and liver cancer is the 16th. The prevalence of nonalcoholic fatty liver disease (NAFLD) is 25.0% and is estimated to reach 33.5% by 2030. Gastrointestinal diseases affect an estimated 60 to 70 million American citizens annually. It is reported that pancreatic cancer (PC) is one of the top five causes of death from cancer, and colorectal cancer accounts for 8.5% of cancer-related deaths[ - ]. Therefore, it is of great importance to pay attention to these diseases. In clinical practice, many imaging techniques such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound have played a vital role in the detection and treatment of diseases. Ultrasound, a noninvasive and real-time diagnostic technique, is the most commonly used method for detecting and diagnosing human digestive diseases. However, the interpretation and analysis of ultrasound images depend heavily on the subjective judgment and experience of human experts. Radiologists may make mistakes due to exhaustion when dealing with a large number of images. Artificial intelligence (AI) is defined as computer algorithms created by humans and improved with analogs of the thoughts, judgments, and reactions that take place in the human brain. In recent years, radiologists have increasingly embraced the aid of AI-powered diagnoses. AI can make a quantitative analysis by recognizing the information in images automatically and is widely applied to medical ultrasound images in diffuse liver diseases, focal liver lesions, PC, and colorectal cancer. In this review, we describe the development of AI-based ultrasound in the aforementioned applications. In addition, we also discuss the future opportunities and challenges of AI-based ultrasound. Currently, the algorithms of AI used in medical images mainly include traditional machine learning algorithms and deep learning. Machine learning Machine learning is described as a kind of data science that offers computers the capacity to learn without being programmed with specific rules. It focuses on computer algorithms that learn from a training dataset and make predictions on new data. Machine learning depends primarily on predefined characteristics that display the regular patterns inherent in data acquired from regions of interest with explicit parameters on the basis of expert experience. Then, other medical image features, such as various mass shapes, sizes, and echoes, can be quantified. Radiomics, which belongs to traditional machine learning, is a popular field of study related to the acquisition and assessment of patterns within medical images, including CT, MRI, and ultrasound. These patterns include complicated patterns that are difficult to recognize or analyze by the human eye. Deep learning Deep learning is at the leading edge of AI and is developing rapidly. Deep learning is described as a group of artificial neural network (ANN) algorithms, which include many hidden layers. Namely, deep learning depends on a subset of algorithms that try to model high-level abstractions. Recently, convolutional neural networks (CNNs) have become the preferred type of deep learning architecture in the assessment of medical images. CNNs consist of an input layer, multiple hidden layers, and an output layer (Figure ).
The hidden layers include convolutional layers, pooling layers, connected layers, and normalization layers. Convolutional layers and pooling layers perform feature extraction and aggregation. Diffuse liver diseases Diffuse liver diseases reflect a failure in the metabolic and synthetic processes of the liver. Liver biopsy is the gold standard for the diagnosis of fibrosis and NAFLD. However, liver biopsy is an invasive procedure with many complications, such as hemorrhage, biliary peritonitis, and pneumothorax. In addition, liver biopsy is not feasible for the long-term management of patients with chronic liver diseases. Noninvasive liver imaging methods such as CT, MRI, and ultrasound have been extensively studied. Ultrasound is one of the most common methods for diagnosing liver diseases due to its noninvasiveness, low cost, and real-time capability. Machine learning algorithms based on ultrasound have been applied for the analysis of steatosis and the staging of liver fibrosis. Table shows the application of ultrasound-based AI in diffuse liver disease. Fatty liver diseases: An excess amount of fat in the liver cells is found in fatty liver diseases (FLD). The main causes of FLD include obesity, alcoholism, diabetes, nonalcoholic steatohepatitis, drugs, and toxins. FLD is related to a growing risk of cirrhosis and liver cancer. The most common cause of FLD is NAFLD, which ranges in prevalence from 25% to 45%. Several noninvasive imaging methods such as CT, MRI, and ultrasound can diagnose NAFLD. Ultrasound is the cheapest diagnostic method, with 93% sensitivity when hepatic steatosis is greater than 33%. Conventional ultrasound is commonly used for NAFLD evaluation, but its qualitative nature, operator dependency, and unsatisfactory accuracy limit its application. Moreover, the ultrasound images of fatty liver and early cirrhosis share many common features, making it hard to distinguish the two diseases by the human eye.
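To make the CNN architecture outlined above concrete, the sketch below builds a small convolutional network for a binary B-mode ultrasound classification task (for example, steatosis versus normal liver). It is an illustrative toy model only, not the architecture of any study cited here; the input size, layer widths, and training settings are assumptions.

# Minimal illustrative CNN (TensorFlow/Keras); all layer sizes are assumptions.
import tensorflow as tf

def build_ultrasound_cnn(input_shape=(224, 224, 1), n_classes=2):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        # convolutional + pooling layers perform feature extraction and aggregation
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(),
        # fully connected layers map the extracted features to class scores
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_ultrasound_cnn()
model.summary()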
In recent years, ultrasound-based AI has demonstrated high accuracy for the detection of steatosis and shows excellent reproducibility and reliability. Byra et al created a CNN model to acquire features from B-mode ultrasound images. It was reported that they could assess the amount of steatosis present in the liver with an area under the receiver operating characteristic curve (AUC) of 0.98, and their approach may assist doctors in automatically assessing the amount of fat in the liver clinically. Biswas et al revealed that a deep learning-based algorithm reached superior performance for FLD identification and risk stratification, with 100% accuracy and an AUC of 1.0, when compared with a conventional machine learning system, the support vector machine (SVM; accuracy: 82%, AUC: 0.79), and an extreme learning machine (accuracy: 92%, AUC: 0.92). Deep learning has also been applied to quantitatively evaluate NAFLD. The radiofrequency data of ultrasound contain much more information on hepatic microstructure than gray-scale B-mode images. Han et al developed a deep learning algorithm that used radiofrequency data for NAFLD assessment. The results revealed that the sensitivity, specificity, and positive predictive value (PPV) for NAFLD diagnosis were 97%, 94%, and 97%, respectively. They confirmed that the quantitative analysis of raw radiofrequency ultrasound signals showed potential for identifying NAFLD and quantifying hepatic fat fraction. Liver fibrosis and cirrhosis: Patients with chronic liver disease may have no clinical symptoms for an extended period, or the disease may progress to fibrosis and cirrhosis. The activation of the resting hepatic stellate cell into an activated myofibroblast plays an important role in the progression of liver fibrosis. The activated myofibroblast expresses abundant α-smooth muscle actin and collagen. Cirrhosis, which consists of various nodules and is harder than the normal liver, is the advanced stage of fibrosis. Liver fibrosis and early cirrhosis are confirmed to be partly reversible. Therefore, the precise diagnosis of liver fibrosis is vital for the treatment and management of chronic liver disease patients. In clinical practice, liver biopsy is the gold standard for the diagnosis of liver fibrosis. Various noninvasive modalities such as ultrasound and elastography have been used as alternatives to liver biopsy. Some studies suggest that AI models based on ultrasound and elastography have great potential for the classification of liver fibrosis. AI based on conventional ultrasound: AI based on conventional ultrasound has been applied to improve performance in the diagnosis and grading of liver fibrosis. Yeh et al built an SVM model to analyze liver fibrosis. B-mode images of 20 fresh postsurgical human livers were used to assess the capacity of ultrasound for evaluating the stage of fibrosis. The study indicated that the best classification accuracies for two, three, four, and six classes were 91%, 85%, 81%, and 72%, respectively. The results confirmed that the SVM model may be suggested to assess diverse liver fibrosis stages. Beyond B-mode ultrasound, duplex ultrasound has also been applied to diagnose liver fibrosis. Using an ANN model based on duplex ultrasound, Zhang et al demonstrated that their model achieved an accuracy, sensitivity, and specificity of 88.3%, 95.0%, and 85.0%, respectively.
The ANN model included five ultrasonographic parameters: thickness of the spleen, liver vein waveform, the hepatic parenchyma, liver artery pulsatility index, and hepatic damping index. The study suggested that their ANN model has the potential to diagnose liver fibrosis noninvasively. Studies confirmed that radiomics shows great performance in the grading of liver fibrosis. Using texture analysis of ultrasound liver images, one study found that the accuracies for S0–S4 were 100%, 90%, 70%, 90%, and 100%, respectively. It was reported that deep learning has great potential for liver fibrosis evaluation. Lee et al built a deep CNN and trained a four-class model (F0 vs F1 vs F23 vs F4) to predict METAVIR scores. They used 13608 ultrasound images of 3446 patients who underwent surgery, liver biopsy, or transient elastography to train the deep CNN model. The model achieved a higher AUC of 0.857 for the classification of cirrhosis compared with five radiologists (AUC range, 0.656-0.816; P < 0.05) using the external test set. AI based on ultrasound elastography: Ultrasound elastography has been used to acquire a quantitative assessment of liver tissue stiffness, which is related to the grade of fibrosis. These technologies include strain elastography and shear wave elastography (SWE). Recently, some studies confirmed that AI based on SWE has great value for identifying and staging liver fibrosis. Compared with conventional radiomics, a multiparametric ultrasonic model using machine learning algorithms demonstrated better performance in fibrosis assessment. By quantifying color information from SWE images, Gatos et al created an SVM model that could differentiate patients with liver diseases from controls with an accuracy, sensitivity, and specificity of 87.3%, 93.5%, and 81.2%, respectively. Deep learning has also been applied in the assessment of liver fibrosis. A multicenter study used deep learning radiomics on 2D-SWE ultrasound images for the classification of liver fibrosis. 2D-SWE ultrasound images had higher AUCs of 0.97 for F4, 0.98 for ≥ F3, and 0.85 for ≥ F2 fibrosis when compared with standard 2D-SWE. Deep learning requires a large training dataset. However, it is difficult and expensive to obtain abundant medical images in clinics. One method to solve this problem is the employment of transfer learning (TL), which can enhance performance by transferring knowledge from other domains to the ultrasound domain. A study developed a CNN model with TL radiomics to assess ultrasound images of the gray-scale modality and elastogram modality for accurate liver fibrosis grading. TL in the gray-scale modality and elastogram modality revealed much higher diagnostic AUCs compared with non-TL models. The multimodal gray-scale + elastogram model was confirmed to be the most precise diagnostic model, with AUCs of 0.930, 0.932, and 0.950 for classifying ≥ S2, ≥ S3, and S4, respectively. It was suggested that this TL model had excellent performance in liver fibrosis staging in clinical applications. Focal liver lesion Focal liver lesions (FLLs) are described as an abnormal part of the liver mainly arising from hepatocytes, biliary epithelium, and mesenchymal tissue. Due to its low cost, noninvasiveness, and real-time imaging, ultrasound is the preferred method for the diagnosis of FLLs. Based on this trend, AI models using ultrasound images have advantages over CT and MRI in routine clinical applications. Table shows the application of ultrasound-based AI in FLLs.
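Many of the fibrosis classifiers above and the FLL classifiers below follow the same two-step pattern: extract quantitative (radiomics) features from the ultrasound image, then train a conventional classifier such as an SVM. The sketch below illustrates that workflow with scikit-learn on a synthetic feature matrix; the data, feature count, and kernel choice are assumptions and are not taken from any cited study.

# Illustrative radiomics + SVM workflow on synthetic data (scikit-learn).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))                       # 120 patients x 30 radiomics features
y = (X[:, 0] + X[:, 1] + rng.normal(size=120) > 0).astype(int)  # synthetic labels with weak signal

# Standardize features, then fit an RBF-kernel SVM; evaluate with cross-validated AUC.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")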
The application of AI in the diagnosis of benign and malignant FLLs: Hepatocellular carcinoma (HCC) is the fifth most common malignancy worldwide and is the second leading cause of cancer-related deaths. It is vital to differentiate benign from malignant FLLs at an early stage. AI based on conventional ultrasound: Deep learning based on B-mode ultrasound has been demonstrated to be helpful in the diagnosis of benign and malignant FLLs. A CNN model was used to distinguish benign and malignant FLLs and achieved a higher accuracy than two experts. Yang et al developed a multicenter study to improve the B-mode ultrasound diagnostic performance for FLLs. The ultrasound-based CNN achieved high sensitivity and specificity in detecting FLLs, and it may help less-experienced doctors enhance their judgment in liver cancer diagnosis. AI based on B-mode ultrasound images has also been applied for the diagnosis of primary or secondary malignant liver tumors. A study proposed an SVM-based machine learning approach for discriminating HCC from metastatic liver tumors. The results revealed a classification accuracy of 91.6%, with a sensitivity of 90.0% for HCCs and 93.3% for metastatic liver tumors. AI based on contrast-enhanced ultrasound (CEUS): Recently, CEUS has become a commonly used ultrasound modality for the detection of FLLs. Many studies have indicated that CEUS images have better sensitivity and specificity for the differentiation of malignant and benign tumors compared with B-mode images. One of the advantages of CEUS is that the images can be analyzed quantitatively. The time intensity curve (TIC) is a common quantitative analysis tool for CEUS. Recently, AI based on CEUS images was reported to have great performance for the discrimination of FLLs. Gatos et al created a pretrained SVM algorithm to distinguish benign and malignant FLLs. In this model, a complex segmentation method based on TIC was used to detect lesions and process contours of 52 CEUS images. The accuracy, sensitivity, and specificity were 90.3%, 93.1%, and 86.9%, respectively. Another study using SVM revealed that the sensitivity, specificity, and accuracy of benign and malignant grading were 94.0%, 87.1%, and 91.8%, respectively, while the classification accuracies for HCC, metastatic liver tumor, and benign lesions were 85.7%, 87.7%, and 84.4%, respectively. In addition to TIC, extracting features other than TICs from a region of interest on CEUS images and videos has also been applied in AI. A two-stage multiview learning framework, integrating deep canonical correlation analysis and multiple kernel learning for CEUS-based computer-aided diagnosis, was proposed to identify liver tumors. The deep canonical correlation analysis–multiple kernel learning framework discriminated benign from malignant liver tumors with an accuracy, sensitivity, and specificity of 90.4%, 93.6%, and 86.8%, respectively. The application of AI for the differential diagnosis of FLLs: With the development of AI, AI based on B-mode ultrasound images has shown great performance in the diagnosis of different FLLs. Hwang et al extracted hybrid textural features from ultrasound images and used an ANN to diagnose FLLs. They indicated that the model revealed enormous potential, with a diagnostic accuracy of over 96% among all FLL groups (hemangioma vs malignant, cyst vs hemangioma, and cyst vs malignant). Deep learning has also been applied to the distinction of different FLLs.
Schmauch et al created an algorithm that simultaneously detected and characterized FLLs. Although the amount of training data was relatively small, the average AUCs for FLL detection and characterization were 0.935 and 0.916, respectively. A CNN model was developed and validated for tumor detection and 6-class discrimination (HCC, focal fatty sparing, focal fatty infiltration, hemangiomas, and cysts). This model reached an 87.0% detection rate, 83.9% sensitivity, and 97.1% specificity in the internal evaluation. In the external validation groups, the model achieved a 75.0% detection rate, 84.9% sensitivity, and 97.1% specificity. CEUS also has excellent potential for AI to distinguish different FLLs. An ANN was applied to study the role of TIC analysis parameters in the 4-class discrimination of liver tumors. The neural network had 94.45% training accuracy and 87.12% testing accuracy. The automatic classification process registered 93.2% sensitivity and 89.7% specificity. Căleanu et al reported 5-class classification of liver tumors using deep neural networks with an accuracy of 88%. In this study, deep neural network algorithms were compared with state-of-the-art architectures, and a novel leave-one-patient-out evaluation procedure was presented. All these studies indicated that AI based on conventional ultrasound and CEUS plays a vital role in the detection and distinction of FLLs. The application of AI in the management of HCC patients: Because of the development of new treatments, the management of HCC patients has become much more complicated. Radiomics can offer an accurate assessment of large numbers of image features from medical images. Features that are difficult to detect by the human eye can be detected by machine learning or deep learning. AI models based on radiomics have also been reported to be applicable for the management of HCC, such as the prediction of microvascular invasion (MVI), curative transarterial chemoembolization (TACE) effect, recurrence after thermal ablation, and prognosis. Predicting MVI: MVI is described as the invasion of tumor cells within a vascular space lined by endothelium. It has been proven that MVI is a predictor of early recurrence of HCC and poor survival outcomes. The only way to confirm MVI is via histopathology after surgery. Patients with HCC can benefit greatly when MVI is identified noninvasively and accurately before surgery. The application of AI based on gray-scale ultrasound images and CEUS indicated good performance in predicting preoperative MVI. A study indicated that the radiological features of gray-scale ultrasound images of the gross tumoral area predicted preoperative MVI of HCC with an AUC of 0.81. A CEUS-based radiomics score was built for preoperative prediction of MVI in HCC. The radiomics nomogram revealed great potential in the detection of MVI, with an AUC of 0.731 compared with an AUC of 0.634 for the clinical nomogram. It was indicated that the ultrasound-based radiomics data were an independent predictor of MVI in HCC. Our group created a radiomics model based on CEUS to evaluate MVI of HCC patients before surgery. The model revealed better performance in the primary group (AUC 0.849 vs 0.690) as well as in the validation group (AUC 0.788 vs 0.661) when compared with the clinical model. We confirmed that the portal venous phase, delay phase, tumor size, rad-score, and alpha-fetoprotein level were independent predictors of MVI.
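The nomogram-style models above typically combine a pre-computed radiomics score with clinical variables in a simple multivariable model. Purely as a hedged illustration of that idea, and not the published models themselves, the sketch below fits a logistic regression that combines an assumed radiomics score with tumor size and alpha-fetoprotein level on synthetic data.

# Illustrative combination of a radiomics score with clinical factors (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
rad_score = rng.normal(size=n)                 # assumed pre-computed radiomics score
tumor_size = rng.normal(4.0, 1.5, size=n)      # tumor size in cm (assumed)
afp_log = rng.normal(2.0, 1.0, size=n)         # log10 alpha-fetoprotein (assumed)

# Synthetic MVI labels loosely driven by the three predictors.
logit = 1.2 * rad_score + 0.4 * (tumor_size - 4.0) + 0.5 * (afp_log - 2.0)
mvi = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([rad_score, tumor_size, afp_log])
X_tr, X_te, y_tr, y_te = train_test_split(X, mvi, test_size=0.3, random_state=1)
model = LogisticRegression().fit(X_tr, y_tr)
print("Validation AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 2))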
Predicting curative TACE effect: Pathways participating in important cancer-related progression, such as cell proliferation and angiogenesis, are major goals for the treatment of HCC patients. Additionally, transcription factors and cell cycle regulators are also considered to be interesting for anti-HCC drugs. TACE is a widely used first-line therapy for HCC patients diagnosed at the intermediate stage. The tumor response to the first TACE treatment is highly different and obviously related to the subsequent therapies as well as the patients’ survival. Hence, the exact prediction of HCC responses after the first TACE treatment is vital for patients. The prediction of tumor responses to TACE heavily depends on MRI and serological biomarkers. But these methods achieved unsatisfactory accuracy of prediction. The application of AI based on both B-mode ultrasound and CEUS demonstrated better prediction efficacy. An AI-based radiomics was established and validated to predict the personalized responses of HCC to the first TACE session. The deep learning radiomics-based CEUS model showed better performance compared with the machine learning radiomics-based B-mode model and machine learning radiomics-based time intensity curve of CEUS model with AUCs of 0.93, 0.80, and 0.81, respectively. They suggested that the deep learning-based radiomics could benefit TACE candidates in clinical work. Predicting recurrence after thermal ablation: Thermal ablation has been confirmed to be an available therapy for early-stage HCC patients who are unsuitable for operation or recurrence after surgery. In addition, the recent 2-year recurrence rates of HCC patients who underwent thermal ablation were reported as 2%-18%. The accurate preoperative prediction of thermal ablation outcomes is of great importance for HCC patients. Compared with other imaging modalities, CEUS is radiation-free and has better temporal resolution when revealing the blood supply of the tumor. The application of AI based on CEUS could be performed for the preoperative prediction of thermal ablation outcomes. A radiomics model was created to predict the early and late recurrence of HCC patients who accepted thermal ablation. The combined model including CEUS, ultrasound radiomics, and clinical factors showed better performance for early recurrence with an AUC of 0.89 and for late recurrence prediction with a C-index of 0.77. Predicting the prognoses: Surgical resection (SR) and radiofrequency ablation (RFA) are common curative strategies for HCC patients diagnosed at the early stage. Some studies have compared the long-term survival of RFA and SR for early-stage HCC patients. However, the conclusions were sharply different. Hence, it is necessary to find useful predictive means to select the optimal patients who are suitable for RFA or SR before surgery. AI models based on CEUS had great performance for the prediction of progression-free survival (PFS). A deep learning-based radiomics from CEUS images was built to predict the PFS of SR and RFA for HCC patients. Both SR and RFA models achieved high prediction accuracy of 2-year PFS. They also identified that a higher average probability of 2-year PFS may be acquired while some RFA and SR patients exchange their choices. By utilizing conventional ultrasound images and CEUS, these AI prediction models can be applied in the individualized management of HCC patients. Diffuse liver diseases display a failure in the metabolic and synthesis processes of the liver. 
Liver biopsy is the gold standard for the diagnosis of fibrosis and NAFLD. However, liver biopsy is an invasive procedure with many potential complications, such as hemorrhage, biliary peritonitis, and pneumothorax. In addition, liver biopsy is not feasible for the long-term management of patients with chronic liver diseases. Noninvasive liver imaging methods such as CT, MRI, and ultrasound have therefore been extensively studied. Ultrasound is one of the most common methods for diagnosing liver diseases because of its noninvasiveness, low cost, and real-time capability. Machine learning algorithms based on ultrasound have been applied to the analysis of steatosis and the staging of liver fibrosis. Table shows the application of ultrasound-based AI in diffuse liver disease. Fatty liver diseases: An excess amount of fat in the liver cells is found in fatty liver diseases (FLD). The main causes of FLD include obesity, alcoholism, diabetes, nonalcoholic steatohepatitis, drugs, and toxins. FLD is related to a growing risk of cirrhosis and liver cancer. The most common cause of FLD is NAFLD, which ranges in prevalence from 25% to 45%. Several noninvasive imaging methods such as CT, MRI, and ultrasound can diagnose NAFLD. Ultrasound is the cheapest of these, with 93% sensitivity when hepatic steatosis exceeds 33%. Conventional ultrasound is commonly used for NAFLD evaluation, but its qualitative nature, operator dependency, and unsatisfactory accuracy limit its application. Moreover, the ultrasound images of fatty liver and early cirrhosis share many features, making it hard to distinguish the two diseases by the human eye. In recent years, ultrasound-based AI has demonstrated high accuracy for the detection of steatosis, with excellent reproducibility and reliability. Byra et al created a CNN model to acquire features from B-mode ultrasound images. They reported that the model could assess the amount of steatosis present in the liver with an area under the receiver operating characteristic curve (AUC) of 0.98, and their approach may assist doctors in automatically assessing the amount of fat in the liver clinically. Biswas et al revealed that a deep learning-based algorithm reached superior performance for FLD identification and risk stratification, with 100% accuracy and an AUC of 1.0, compared with a conventional machine learning system, the support vector machine (SVM) (accuracy: 82%, AUC: 0.79), and an extreme learning machine (accuracy: 92%, AUC: 0.92). Deep learning has also been applied to quantitatively evaluate NAFLD. Ultrasound radiofrequency data carry much more information about the hepatic microstructure than gray-scale B-mode images. Han et al developed a deep learning algorithm that used radiofrequency data for NAFLD assessment. The results revealed that the sensitivity, specificity, and positive predictive value (PPV) for NAFLD diagnosis were 97%, 94%, and 97%, respectively. They confirmed that quantitative analysis of raw radiofrequency ultrasound signals has the potential to identify NAFLD and quantify the hepatic fat fraction. Liver fibrosis and cirrhosis: Patients with chronic liver disease may have no clinical symptoms for an extended period, or the disease may progress to fibrosis and cirrhosis. The activation of resting hepatic stellate cells into activated myofibroblasts plays an important role in the progression of liver fibrosis. Activated myofibroblasts express abundant α-smooth muscle actin and collagen.
Cirrhosis, which consists of various nodules and is harder than the normal liver, is the advanced stage of fibrosis. Liver fibrosis and early cirrhosis are confirmed to be partly reversible. Therefore, the precise diagnosis of liver fibrosis is vital for the treatment and management of patients with chronic liver disease. In clinical practice, liver biopsy is the gold standard for the diagnosis of liver fibrosis. Various noninvasive modalities such as ultrasound and elastography have been used as alternatives to liver biopsy. Several studies suggest that AI models based on ultrasound and elastography have great potential for the classification of liver fibrosis. AI based on conventional ultrasound: AI based on conventional ultrasound has been applied to improve the diagnosis and grading of liver fibrosis. Yeh et al built an SVM model to analyze liver fibrosis. B-mode images of 20 fresh postsurgical human livers were used to assess the capacity of ultrasound to evaluate the stage of fibrosis. The study indicated that the best classification accuracies for two, three, four, and six classes were 91%, 85%, 81%, and 72%, respectively. The results suggested that the SVM model may be used to assess diverse liver fibrosis stages. Other than B-mode ultrasound, duplex ultrasound has also been applied to diagnose liver fibrosis. Using an ANN model based on duplex ultrasound, Zhang et al demonstrated that their model reached an accuracy, sensitivity, and specificity of 88.3%, 95.0%, and 85.0%, respectively. The ANN model included five ultrasonographic parameters: thickness of the spleen, liver vein waveform, hepatic parenchyma, liver artery pulsatile index, and hepatic damping index. The study suggested that their ANN model has the potential to diagnose liver fibrosis noninvasively. Studies have confirmed that radiomics shows great performance in the grading of liver fibrosis. Using texture analysis of ultrasound liver images, one study found that the accuracies for S0-S4 were 100%, 90%, 70%, 90%, and 100%, respectively. It has also been reported that deep learning has great potential for liver fibrosis evaluation. Lee et al built a deep CNN and trained a four-class model (F0 vs F1 vs F23 vs F4) to predict METAVIR scores. They used 13608 ultrasound images of 3446 patients who underwent surgery, liver biopsy, or transient elastography to train the deep CNN model. The model achieved a higher AUC of 0.857 for the classification of cirrhosis compared with five radiologists (AUC range, 0.656-0.816; P < 0.05) on the external test set. AI based on ultrasound elastography: Ultrasound elastography provides a quantitative assessment of liver tissue stiffness, which is related to the grade of fibrosis. These technologies include strain elastography and shear wave elastography (SWE). Recently, several studies have confirmed that AI based on SWE has great value for identifying and staging liver fibrosis. Compared with conventional radiomics, a multiparametric ultrasonic model using machine learning algorithms demonstrated better performance in fibrosis assessment. By quantifying color information from SWE images, Gatos et al created an SVM model that could differentiate patients with liver diseases from controls with an accuracy, sensitivity, and specificity of 87.3%, 93.5%, and 81.2%, respectively. Deep learning has also been applied to the assessment of liver fibrosis. A multicenter study used deep learning radiomics on 2D-SWE ultrasound images for the classification of liver fibrosis.
The deep learning radiomics model achieved higher AUCs of 0.97 for F4, 0.98 for ≥ F3, and 0.85 for ≥ F2 fibrosis when compared with standard 2D-SWE. Deep learning requires a large training dataset, but it is difficult and expensive to obtain abundant medical images in the clinic. One way to address this problem is transfer learning (TL), which can enhance performance by transferring knowledge from other domains to the ultrasound domain. A study developed a CNN model with TL radiomics to assess gray-scale and elastogram ultrasound images for accurate liver fibrosis grading. TL in both the gray-scale and elastogram modalities yielded much higher diagnostic AUCs than non-TL models. The multimodal gray-scale + elastogram model was confirmed to be the most precise diagnostic model, with AUCs of 0.930, 0.932, and 0.950 for classifying ≥ S2, ≥ S3, and S4, respectively. It was suggested that this TL model has excellent performance for liver fibrosis staging in clinical applications. Focal liver lesions (FLLs) are abnormal regions of the liver arising mainly from hepatocytes, biliary epithelium, and mesenchymal tissue. Due to its low cost, noninvasiveness, and real-time imaging, ultrasound is the preferred method for the diagnosis of FLLs. On this basis, AI models using ultrasound images have advantages over CT- and MRI-based models in routine clinical applications. Table shows the application of ultrasound-based AI in FLLs. The application of AI in the diagnosis of benign and malignant FLLs: Hepatocellular carcinoma (HCC) is the fifth most common malignancy worldwide and accounts for the second leading cause of cancer-related deaths. It is vital to distinguish benign from malignant FLLs at an early stage. AI based on conventional ultrasound: Deep learning based on B-mode ultrasound has been demonstrated to be helpful in the diagnosis of benign and malignant FLLs. A CNN model was used to distinguish benign and malignant FLLs and achieved a higher accuracy than two experts. Yang et al conducted a multicenter study to improve the diagnostic performance of B-mode ultrasound for FLLs. The ultrasound CNN showed high sensitivity and specificity in detecting FLLs and may help less-experienced doctors improve their judgment in liver cancer diagnosis. AI based on B-mode ultrasound images has also been applied to the diagnosis of primary or secondary malignant liver tumors. One study proposed a machine learning approach using an SVM for discriminating HCC from metastatic liver tumors. The results revealed a classification accuracy of 91.6%, with a sensitivity of 90.0% for HCCs and 93.3% for metastatic liver tumors. AI based on contrast-enhanced ultrasound (CEUS): Recently, CEUS has become a commonly used ultrasound modality for the detection of FLLs. Many studies have indicated that CEUS images offer better sensitivity and specificity for the differentiation of malignant and benign tumors than B-mode images. One of the advantages of CEUS is that the images can be analyzed quantitatively. The time intensity curve (TIC) is a common quantitative analysis tool for CEUS. Recently, AI based on CEUS images was reported to have great performance for the discrimination of FLLs. Gatos et al created a pretrained SVM algorithm to distinguish benign and malignant FLLs. In this model, a complex segmentation method based on TIC was used to detect lesions and process contours of 52 CEUS images.
The accuracy, sensitivity, and specificity were 90.3%, 93.1%, and 86.9%, respectively. Another study using an SVM revealed that the sensitivity, specificity, and accuracy for benign and malignant grading were 94.0%, 87.1%, and 91.8%, respectively, while the classification accuracies for HCC, metastatic liver tumor, and benign lesions were 85.7%, 87.7%, and 84.4%, respectively. In addition to TIC analysis, AI has also been applied to features other than TICs extracted from a region of interest on CEUS images and videos. A two-stage multiview learning framework, integrating deep canonical correlation analysis and multiple kernel learning for CEUS-based computer-aided diagnosis, was proposed to identify liver tumors. The deep canonical correlation analysis-multiple kernel learning framework discriminated benign from malignant liver tumors with an accuracy, sensitivity, and specificity of 90.4%, 93.6%, and 86.8%, respectively. The application of AI for the differential diagnosis of FLLs: With the development of AI, models based on B-mode ultrasound images have shown great performance in the diagnosis of different FLLs. Hwang et al extracted hybrid textural features from ultrasound images and used an ANN to diagnose FLLs. They indicated that the model revealed enormous potential, with a diagnostic accuracy of over 96% among all FLL groups (hemangioma vs malignant, cyst vs hemangioma, and cyst vs malignant). Deep learning has also been applied to the distinction of different FLLs. Schmauch et al created an algorithm that simultaneously detected and characterized FLLs. Although the amount of training data was relatively small, the average AUCs for FLL detection and characterization were 0.935 and 0.916, respectively. A CNN model was developed and validated for tumor detection and 6-class discrimination (HCC, focal fatty sparing, focal fatty infiltration, hemangiomas, and cysts). This model reached an 87.0% detection rate, 83.9% sensitivity, and 97.1% specificity in the internal evaluation. In the external validation groups, the model achieved a 75.0% detection rate, 84.9% sensitivity, and 97.1% specificity. CEUS also shows excellent potential for AI-based differentiation of FLLs. An ANN was applied to study the role of TIC analysis parameters in the 4-class discrimination of liver tumors. The neural network had 94.45% training accuracy and 87.12% testing accuracy. The automatic classification process registered 93.2% sensitivity and 89.7% specificity. Căleanu et al reported 5-class classification of liver tumors using deep neural networks with an accuracy of 88%. In this study, deep neural network algorithms were compared with state-of-the-art architectures, and a novel leave-one-patient-out evaluation procedure was presented. All these studies indicate that AI based on conventional ultrasound and CEUS plays a vital role in the detection and distinction of FLLs. The application of AI in the management of HCC patients: Because of the development of new treatments, the management of HCC patients has become much more complicated. Radiomics can accurately assess large numbers of image features from medical images. Features that are difficult for the human eye to detect can be captured by machine learning or deep learning. AI models based on radiomics have also been reported to be applicable to the management of HCC, such as the prediction of microvascular invasion (MVI), the curative transarterial chemoembolization (TACE) effect, recurrence after thermal ablation, and prognosis.
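Many of the CEUS-based models summarized above begin from quantitative time-intensity curves. As a purely illustrative sketch rather than the pipeline of any cited study, the following Python snippet derives a few commonly reported TIC parameters (peak enhancement, time to peak, wash-in slope, and area under the curve) from a sampled intensity trace; the synthetic bolus curve and all variable names are assumptions made for demonstration.

import numpy as np

def tic_parameters(t, intensity, n_baseline=3):
    # Time-intensity curve (TIC) summary parameters from a CEUS intensity trace
    # sampled at times t (seconds). Baseline is estimated from the first
    # n_baseline samples, assumed to be acquired before contrast arrival.
    t = np.asarray(t, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    baseline = intensity[:n_baseline].mean()
    enhanced = intensity - baseline
    peak_idx = int(np.argmax(enhanced))
    peak_enhancement = float(enhanced[peak_idx])
    time_to_peak = float(t[peak_idx])
    wash_in_slope = peak_enhancement / max(time_to_peak, 1e-6)   # mean rise rate
    # Area under the enhancement curve via the trapezoidal rule
    auc = float(np.sum((enhanced[1:] + enhanced[:-1]) * np.diff(t)) / 2.0)
    return {"peak_enhancement": peak_enhancement, "time_to_peak": time_to_peak,
            "wash_in_slope": wash_in_slope, "area_under_curve": auc}

# Synthetic gamma-variate-like bolus curve, for demonstration only
t = np.linspace(0.0, 120.0, 241)
intensity = 5.0 + 40.0 * (t / 20.0) ** 2 * np.exp(-t / 20.0)
print(tic_parameters(t, intensity))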
Predicting MVI: MVI is described as the invasion of tumor cells within a vascular space lined by endothelium. MVI has been proven to be a predictor of early recurrence of HCC and poor survival outcomes. The only way to confirm MVI is via histopathology after surgery. Patients with HCC can benefit greatly when MVI is identified noninvasively and accurately before surgery. AI based on gray-scale ultrasound images and CEUS has shown good performance in predicting MVI preoperatively. One study indicated that radiological features of gray-scale ultrasound images of the gross tumoral area predicted preoperative MVI of HCC with an AUC of 0.81. A CEUS-based radiomics score was built for the preoperative prediction of MVI in HCC. The radiomics nomogram revealed great potential in the detection of MVI, with an AUC of 0.731 compared with 0.634 for the clinical nomogram. It was indicated that ultrasound-based radiomics data were an independent predictor of MVI in HCC. Our group created a radiomics model based on CEUS to evaluate MVI in HCC patients before surgery. The model showed better performance than the clinical model in both the primary group (AUC 0.849 vs 0.690) and the validation group (AUC 0.788 vs 0.661). We confirmed that the portal venous phase, delay phase, tumor size, rad-score, and alpha-fetoprotein level were independent predictors of MVI. Predicting curative TACE effect: Pathways participating in important cancer-related processes, such as cell proliferation and angiogenesis, are major targets for the treatment of HCC patients. Additionally, transcription factors and cell cycle regulators are also considered attractive targets for anti-HCC drugs. TACE is a widely used first-line therapy for HCC patients diagnosed at the intermediate stage. The tumor response to the first TACE treatment varies widely and is closely related to subsequent therapies as well as patient survival. Hence, accurate prediction of the HCC response after the first TACE treatment is vital for patients. The prediction of tumor responses to TACE has relied heavily on MRI and serological biomarkers, but these methods have achieved unsatisfactory predictive accuracy. AI based on both B-mode ultrasound and CEUS has demonstrated better predictive efficacy. An AI-based radiomics model was established and validated to predict the personalized responses of HCC to the first TACE session. The deep learning radiomics-based CEUS model showed better performance than the machine learning radiomics-based B-mode model and the machine learning radiomics-based CEUS time intensity curve model, with AUCs of 0.93, 0.80, and 0.81, respectively. The authors suggested that deep learning-based radiomics could benefit TACE candidates in clinical work. Predicting recurrence after thermal ablation: Thermal ablation has been confirmed to be a viable therapy for early-stage HCC patients who are unsuitable for surgery or who have recurrence after surgery. The recent 2-year recurrence rates of HCC patients who underwent thermal ablation have been reported as 2%-18%. Accurate preoperative prediction of thermal ablation outcomes is therefore of great importance for HCC patients. Compared with other imaging modalities, CEUS is radiation-free and has better temporal resolution for revealing the blood supply of the tumor. AI based on CEUS can thus be applied for the preoperative prediction of thermal ablation outcomes.
A radiomics model was created to predict the early and late recurrence of HCC patients who underwent thermal ablation. The combined model, including CEUS, ultrasound radiomics, and clinical factors, showed better performance, with an AUC of 0.89 for early recurrence and a C-index of 0.77 for late recurrence prediction. Predicting the prognoses: Surgical resection (SR) and radiofrequency ablation (RFA) are common curative strategies for HCC patients diagnosed at the early stage. Some studies have compared the long-term survival of RFA and SR for early-stage HCC patients; however, their conclusions differed sharply. Hence, it is necessary to find useful predictive means to select, before treatment, the patients best suited to RFA or SR. AI models based on CEUS have shown great performance for the prediction of progression-free survival (PFS). A deep learning-based radiomics model from CEUS images was built to predict the PFS of SR and RFA for HCC patients. Both the SR and RFA models achieved high prediction accuracy for 2-year PFS. They also found that a higher average probability of 2-year PFS might be achieved if some RFA and SR patients exchanged their treatment choices. By utilizing conventional ultrasound images and CEUS, these AI prediction models can be applied in the individualized management of HCC patients. Gastric mesenchymal tumors: The majority of gastric mesenchymal tumors are found incidentally during routine esophagogastroduodenoscopy examinations. The incidence of gastric mesenchymal tumors is uncertain, but the prevalence of subepithelial tumors identified under endoscopy in Korea was reported as 1.7%. Most gastric mesenchymal tumors are gastrointestinal stromal tumors (GISTs), which may metastasize to the liver and peritoneum after surgery. Hence, distinguishing GISTs from benign mesenchymal tumors such as leiomyomas or schwannomas is of great importance in clinical practice. Endoscopic ultrasonography (EUS) is a common method to assess gastric mesenchymal tumors. It helps doctors evaluate the detailed size, shape, origin, and border of the lesions. However, the interpretation of EUS images by endoscopists is subjective and has poor interobserver agreement. Recently, EUS image interpretation using AI has developed rapidly and has been applied to distinguish GISTs from benign mesenchymal tumors. A convolutional neural network computer-aided diagnosis (CNN-CAD) model based on EUS images was developed to assess gastric mesenchymal tumors. The model distinguished GISTs from non-GIST tumors with 83.0% sensitivity, 75.5% specificity, and 79.2% accuracy, and it has the potential to provide diagnostic assistance to endoscopists in the future. Pancreatic diseases: EUS is currently a common tool to diagnose pancreatic diseases in clinical practice. However, the specificity of pancreatic disease diagnosis using EUS images is low and depends heavily on the subjective judgment of endoscopists. Studies have confirmed that AI based on EUS improves diagnostic performance for pancreatic diseases. Recently, AI using EUS images has been applied to the differential diagnosis of pancreatic cancer (PC), the evaluation of intraductal papillary mucinous neoplasms (IPMNs), and pancreatic segmentation. Pancreatic cancer: PC is relatively uncommon, with an incidence of 8-12 per 100000 per year. PC is attributed to hereditary germline or somatic acquired mutations in genes such as tumor suppressor genes and cell cycle genes.
These mutations are also associated with the progression and metastasis of PC. Moreover, telomere shortening, cell turnover, and genomic instability play an important role in the development of PC. Early diagnosis and surgery for PC, especially for lesions less than 1 cm, can achieve a long-term prognosis, with a 5-year survival rate of 80.4%. However, PC is most frequently detected at an advanced stage, and the 5-year survival rate remains as low as 3%-15%. Hence, early detection is vital for the treatment of PC patients. Studies have reported that AI based on EUS has great performance for the diagnosis of PC. AI based on B-mode EUS: AI models based on B-mode EUS have been applied to improve the diagnosis of PC. Norton et al first reported the use of CAD utilizing EUS images in pancreatic diseases in 2001. The study included 14 patients with focal chronic pancreatitis and 21 patients with PC. They showed that the diagnostic sensitivity for the two diseases was 89% and the overall accuracy was 80%. However, this study cannot be regarded as AI-CAD in the current sense, as the number of patients was limited and the resolution of the images was very low. With the development of AI, ANNs and SVMs have presented good performance in the diagnosis of PC. Das et al developed an ANN model to distinguish chronic pancreatitis from PC. The model achieved 93% sensitivity, 92% specificity, 87% PPV, a 96% negative predictive value (NPV), and an AUC of 0.93. Using a multilayered neural network, the study provided the first machine learning results for EUS images of the pancreas. However, the sample size was small, and pathological confirmation was lacking in the chronic pancreatitis and normal pancreas groups. By selecting better texture features from EUS images, including multifractal dimensional features, a quantitative measure of fractality (self-similarity), and complexity, an SVM prediction model was created to identify PC and non-PC patients. The model reached 97.98% accuracy, 94.32% sensitivity, 99.45% specificity, 98.65% PPV, and 97.77% NPV. The study demonstrated that an SVM using EUS images is a useful tool for diagnosing PC and other pancreatic diseases. AI has also been applied to account for age-dependent pancreatic changes on EUS images in PC cases. Ozkan et al proposed a high-performance CAD model applying an ANN to discriminate PC and noncancer patients in three age groups. In the under-40-year-old group, the accuracy, sensitivity, and specificity were 92.0%, 87.5%, and 94.1%, respectively. In the 40- to 60-year-old group, the accuracy, sensitivity, and specificity were 88.5%, 85.7%, and 91.7%, respectively. In the over-60-year-old group, the accuracy, sensitivity, and specificity were 91.7%, 93.3%, and 88.9%, respectively. The overall performance of the model showed an accuracy, sensitivity, and specificity of 87.5%, 83.3%, and 93.3%, respectively. Besides machine learning, deep learning has been applied to B-mode EUS images for the analysis of PC. A CNN model using EUS images was developed for the detection of PC. The sensitivity, specificity, PPV, and NPV were 90.2%, 74.9%, 80.1%, and 88.7%, respectively. The CNN model included six normalization layers, seven convolution layers, four max-pooling layers, and six activation layers. This was the first report of an EUS-CNN application with the potential to detect PC from EUS images. AI based on EUS elastography: Real-time EUS elastography can provide more information about the features of pancreatic masses through strain assessment.
It has been reported that EUS elastography can aid the differential diagnosis of pancreatic lesions; however, its accuracy and reproducibility are unstable. The application of AI improves its performance in the diagnosis of PC. A prospective, blinded, multicenter study using an ANN on EUS elastography was performed in focal pancreatic lesions. The sensitivity, specificity, PPV, and NPV for the diagnosis of PC were 87.59%, 82.94%, 96.25%, and 57.22%, respectively. The study suggested that the ANN model may provide fast and accurate diagnoses in the clinic. AI based on contrast-enhanced EUS: Contrast-enhanced EUS has been used to enhance the detection of pancreatic lesions, and AI based on contrast-enhanced EUS has shown great performance for the diagnosis of PC. An ANN model based on TIC analysis of contrast-enhanced EUS images was designed to distinguish PC from chronic pancreatitis. The study reached 94.64% sensitivity, 94.44% specificity, 97.24% PPV, and 89.47% NPV, suggesting that the model could add diagnostic value to CEUS interpretation and EUS fine needle aspiration results. IPMNs: IPMNs are considered to be precursor lesions of pancreatic adenocarcinoma. Early surgical resection of IPMNs can provide a survival benefit for patients. EUS is often used to assess the malignancy of IPMNs in the clinic. Several predictive techniques have been used to diagnose the malignancy of IPMNs without satisfactory results (70%-80%). Compared with human diagnosis and conventional EUS features, AI via deep learning algorithms was confirmed to be a more accurate and objective approach for the differential diagnosis of malignant IPMNs. Kuwahara et al developed a predictive CNN model using EUS images to detect malignant IPMNs. The model reached 95.7% sensitivity, 94.0% accuracy, and 92.6% specificity; this accuracy was higher than that of a radiologist's diagnosis (56.0%). The authors suggested that AI can be applied to evaluate malignant IPMNs before surgery. Pancreatic segmentation: AI using EUS images has also been applied to pancreatic segmentation. A deep learning-based classification system was created to utilize the "station approach" in EUS of the pancreas. The system obtained 90.0% accuracy in classification and scores of 0.770 and 0.813 for blood vessel and pancreas segmentation, respectively, similar to the results of EUS experts. Thus, this study showed that AI is feasible for station recognition and segmentation of the pancreas.
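The EUS studies above report performance as sensitivity, specificity, PPV, NPV, and accuracy; all of these follow directly from a binary confusion matrix. The minimal Python helper below illustrates the calculation; the toy labels are invented and do not correspond to any cited dataset.

import numpy as np

def diagnostic_metrics(y_true, y_pred):
    # Sensitivity, specificity, PPV, NPV, and accuracy for binary labels,
    # where 1 denotes disease (e.g., PC) and 0 denotes non-disease.
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = int(np.sum(y_true & y_pred))
    tn = int(np.sum(~y_true & ~y_pred))
    fp = int(np.sum(~y_true & y_pred))
    fn = int(np.sum(y_true & ~y_pred))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Invented toy labels (1 = pancreatic cancer) and model calls, for illustration
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
print(diagnostic_metrics(y_true, y_pred))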
Colorectal tumors: Colorectal cancer is the third most common cancer worldwide and accounts for the second leading cause of cancer-related deaths. Moreover, a growing number of patients diagnosed with rectal cancer are under 50 years old. Colorectal cancer is attributed to gene mutations in epithelial cells, involving oncogenes, tumor suppressor genes, and DNA repair genes. The specific molecular mechanisms implicated in this type of cancer may include chromosomal and microsatellite instability. Recently, some researchers have studied tumor deposits (TDs) in rectal cancer. TDs are described as focal aggregates of adenocarcinoma located in the fat surrounding the colon or rectum; they are discontinuous with the primary tumor and are not associated with a lymph node. It has been reported that TD-positive patients have more aggressive tumors, with decreased disease-free survival and overall survival. However, TDs are often diagnosed by pathology only after surgery. Hence, the noninvasive preoperative prediction of TDs is important for rectal cancer patients. EUS is currently a common tool to detect rectal masses, and ultrasound-based radiomics has recently been applied to predict the status of TDs. Chen et al developed an ANN system using ultrasound radiomics and clinical factors to predict TDs. Endorectal ultrasound and SWE examinations were conducted for 127 patients with rectal cancer. The accuracy was 75.0% in the validation group, and the model reached 72.7% sensitivity, 75.9% specificity, and an AUC of 0.743. The study suggested that ultrasound-based radiomics has the potential to predict TDs before treatment. Table shows the application of ultrasound-based AI in gastrointestinal disease.
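Models such as the tumor-deposit predictor above follow a common recipe: combine radiomics features with clinical factors, standardize them, fit a classifier, and report a cross-validated AUC. The sketch below illustrates that recipe under stated assumptions (randomly generated features and labels, and logistic regression standing in for the original ANN); it is not a reproduction of any cited model.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in data: 127 patients, 20 radiomics features plus 3 clinical factors.
# Features and labels are randomly generated here purely for illustration.
X = np.hstack([rng.normal(size=(127, 20)), rng.normal(size=(127, 3))])
y = rng.integers(0, 2, size=127)          # 1 = tumor-deposit positive (toy labels)

# Standardize features, then fit a simple linear classifier
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated probabilities give an honest AUC estimate on a small cohort
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]
print("cross-validated AUC:", round(roc_auc_score(y, proba), 3))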
In recent years, AI models using ultrasound images have developed rapidly. They can offer a more precise and efficient diagnosis and ease the burden on doctors. AI based on ultrasound has been confirmed to be helpful in diffuse liver diseases and FLLs, for example in assessing the severity of NAFLD and the grade of liver fibrosis, distinguishing benign and malignant liver lesions, and predicting the MVI of HCC, the curative TACE effect, and prognoses after thermal ablation. In addition, AI based on EUS has shown great performance in gastrointestinal diseases, such as distinguishing gastric mesenchymal tumors, the differential diagnosis of PC, the evaluation of IPMNs, and predicting the status of TDs in rectal cancer. However, the application of AI based on ultrasound in clinical practice has some limitations. The main reason may be the high variability among radiologists in ultrasound image acquisition and interpretation. Hence, it is necessary to standardize the image acquisition process and the measurement of ultrasonic data during the ultrasound examination. In addition, some studies of AI-powered ultrasound were retrospective and trained on limited data from a single hospital, with potential data selection bias, and the training sets were not large enough. Large multicenter prospective studies are needed to ensure the efficiency and stability of these AI models. Additionally, deep learning needs a large number of images, so it is necessary to establish abundant databases through collaborative efforts. The application of AI based on EUS also has specific limitations: the number of EUS examinations is overwhelmingly low compared with other examinations such as endoscopy and CT, especially in gastrointestinal diseases. In the future, AI based on ultrasound may be used to develop highly accurate and more efficient models for more digestive diseases, such as peptic ulcers, stomach neoplasms, and inflammatory bowel disease. These models may greatly reduce the workload of doctors through automatic identification of disease on radiologic and histopathologic images. Moreover, AI can enable individualized management for patients as well as the prediction of disease progression and complications in the clinic. Additionally, AI may improve distance teaching through remote monitoring and enhance medical services in underdeveloped areas.
Proteomic Analysis of Plasma Exosomes Enables the Identification of Lung Cancer in Patients With Chronic Obstructive Pulmonary Disease
0f1c0bef-77bc-4afe-aa09-1509c748eb0e
11717053
Biochemistry[mh]
Introduction As global public health issues, lung cancer and chronic obstructive pulmonary disease (COPD) pose significant risks to human health. Among the different types of cancer, lung cancer ranks second globally in incidence and first in mortality. In 2020, there were 1.8 million deaths from lung cancer worldwide, accounting for 18% of all cancer deaths. COPD, which has become the third leading cause of death globally, resulted in 3.23 million deaths in 2019, and this number is projected to reach 5.4 million by 2060. Lung cancer and COPD typically occur in elderly individuals and those with a smoking history, with approximately 0.8%–2.7% of COPD patients developing lung cancer annually. However, the causative factors may be unrelated to smoking, and COPD has been confirmed as an independent risk factor for lung cancer even among non-smokers. The risk of developing lung cancer is closely correlated with the severity of airflow obstruction in COPD patients. Furthermore, lung cancer is a significant cause of mortality in patients with coexisting lung cancer and COPD (LC-COPD), accounting for 33% of deaths among COPD patients. Approximately 40% of patients with COPD die within 1 year of receiving a lung cancer diagnosis. Early-stage lung cancer presents with subtle or no obvious clinical symptoms, leading many patients to be diagnosed at an advanced stage, when cough and chest pain appear, with a 5-year survival rate of less than 20%. Low-dose CT screening has become the preferred recommendation of numerous international authoritative medical organizations. For COPD patients who smoke, annual low-dose CT screening for lung cancer is recommended, as for the general population, and this significantly reduces lung cancer mortality for patients with mild-to-moderate COPD. However, low-dose CT screening faces challenges such as radiation exposure, high false-positive rates, and overdiagnosis. Moreover, patients with COPD typically present with symptoms such as cough, wheezing, and dyspnea; thus, in patients who have concomitant early-stage lung cancer, the rates of misdiagnosis and missed diagnosis remain high with low-dose CT screening. In addition, the diagnosis of lung cancer currently relies primarily on invasive biopsies. However, tissue biopsy procedures pose certain limitations and risks in the COPD population. In recent years, serum tumor markers, characterized by their minimally invasive nature, stability, and ability to be detected in the blood before changes appear on imaging, have become a hot topic in lung cancer screening and diagnosis. As a type of liquid biopsy, exosomes (EVs) carry important information from cells, such as various proteins, lipids, DNA, and RNA, and have been demonstrated to participate in the growth, metastasis, and angiogenesis of lung cancer. Furthermore, compared with healthy individuals, lung cancer patients have significantly greater levels of tumor-derived exosomes (TDEs) secreted by cancer cells in their blood. Thus, EVs have been used for the diagnosis, drug resistance assessment, and prognosis of lung cancer, offering promising clinical applications. Studies have shown that the physicochemical properties of proteins in EVs are stable and less affected by the internal environment, and the protein composition is associated with disease onset and prognosis. Proteomic analysis can identify disease characteristics that mRNA-based approaches may not reveal.
As a method for proteomic analysis, liquid chromatography–tandem mass spectrometry (LC–MS/MS) has become a widely used high-throughput detection method in the field of tumor biomarker research, and label-free quantification (LFQ) is a protein quantification technique that does not rely on isotope labeling. In addition, parallel reaction monitoring (PRM) technology is particularly sensitive and specific. Based on the characteristics of EVs and the advantages of proteomics, this study collected peripheral blood samples from COPD patients with lung cancer and from COPD patients, and extracted plasma exosomes (EVs) for proteomic mass spectrometry analysis to preliminarily screen for differentially expressed proteins (DEPs) via bioinformatics analysis. Subsequently, we used PRM technology for further validation to identify candidate protein biomarkers for the screening and diagnosis of lung cancer in COPD patients. Materials and Methods 2.1 Study Design and Tissue Samples We retrospectively collected peripheral blood samples from COPD patients and COPD patients with lung cancer at West China Hospital, Sichuan University, between January 1, 2020 and January 31, 2022. The inclusion criteria were as follows: (I) patients who were ≥ 40 years old, (II) lung cancer diagnosed based on histopathology, and (III) COPD diagnosed through confirmed lung function tests (FEV1/FVC < 70% according to the COPD GOLD guidelines). The exclusion criteria were as follows: (I) patients with metastatic lung tumors or multiple primary lung cancers, (II) patients with other malignant solid tumors, hematologic malignancies, rheumatic immune diseases, acute/chronic infectious diseases, or psychiatric disorders, (III) patients who underwent systemic therapy (including chemotherapy, immunotherapy, and targeted therapy) prior to sample collection, and (IV) patients with incomplete clinical data. This study was approved by the Medical Ethics Committee of West China Hospital of Sichuan University. All participants signed written informed consent forms before inclusion in our study. Plasma EVs were extracted from the collected blood samples for proteomic analysis. LC–MS/MS and LFQ proteomics technology were used for the identification of DEPs in the discovery cohort. LC–MS/MS and PRM were used for targeted validation of candidate proteins associated with COPD with lung cancer. We selected DEPs for validation that showed large fold changes (FC), had no previously reported association with susceptibility to COPD combined with lung cancer, and had an AUC greater than 0.67 with a p value of less than 0.05 in the TCGA database. Comprehensive bioinformatics analysis was conducted on the candidate proteins and their biological interaction partners to characterize their functional relevance. 2.2 Sample Processing and Preparation Blood samples were removed from the −80°C environment and centrifuged at 12000 g for 15 min at 4°C. We then filtered the supernatant through a 0.22 μm microporous membrane and processed it for EV isolation using a PTM-EV kit (PTM, China). The supernatant was treated with a final concentration of 8 M urea and protease inhibitors, followed by sonication for disruption. A BCA kit (Beyotime, China) was used to determine the protein concentration. 5 mM dithiothreitol (DTT) was added to the protein solution at 56°C for 30 min for the reduction reaction. Then, iodoacetamide (IAA) was added to a final concentration of 11 mM, and the mixture was incubated at room temperature in the dark for 15 min.
Subsequently, 100 mM triethylammonium bicarbonate (TEAB) was added to dilute the sample and decrease the urea concentration to below 2 M. Finally, trypsin was added to the protein sample for an initial overnight digestion at a 1:50 trypsin-to-protein mass ratio, followed by a subsequent 4 h digestion at a 1:100 ratio. 2.3 LC–MS/MS and LFQ Analysis The tryptic peptides dissolved in solvent A (0.1% formic acid and 2% acetonitrile) were injected onto a homemade reversed-phase analytical column. Peptides were eluted with solvent B (0.1% formic acid and 90% acetonitrile) at a constant flow rate of 700 nL/min on the EASY-nLC 1200 UPLC system using the following gradient: 4% to 20% over 68 min, 20% to 32% over 14 min, an increase to 80% over 4 min, and a hold at 80% for the final 4 min. The peptides were then analyzed by ultrahigh-performance liquid chromatography (UHPLC) coupled to an Orbitrap Exploris 480 mass spectrometer with an electrospray voltage of 2.3 kV. The full MS scan resolution was set to 60 000 for a scan range of 400–1200 m/z. The MS/MS scan range had a fixed first mass of 110 m/z, with a resolution of 30 000. In data-dependent acquisition (DDA) mode, following the first scan, the top 15 precursor ions with the highest signal intensity were sequentially directed to the higher-energy collisional dissociation (HCD) cell and fragmented using 27% collision energy, and consecutive MS/MS analyses were conducted. The automatic gain control (AGC) target was set to 75% with a maximum injection time of 100 ms, and dynamic exclusion was set to 30 s. 2.4 Database Search and Bioinformatics Analysis The MS/MS data were processed using Proteome Discoverer (v2.4.1.15). The Homo_sapiens_9606_PR_20210721.fasta database (78 120 entries) concatenated with the reverse decoy database was used to search the tandem mass spectra. Trypsin/P was designated as the cleavage enzyme, allowing up to two missed cleavages. The minimum peptide length was set to six amino acid residues, with a maximum of three modifications per peptide. In the first search, the mass tolerance for precursor ions was set to 10 ppm, and in the main search, it was set to 5 ppm. The mass tolerance for fragment ions was set to 0.02 Da. The false discovery rate (FDR) was adjusted to < 1%. Differences between the Ca (COPD with lung cancer) and COPD groups were assessed using the FC. Proteins were defined as upregulated if FC > 1.5 and downregulated if FC < 1/1.5, and DEPs were identified using a significance threshold of p < 0.05. Proteins were classified by Gene Ontology (GO) annotation derived from the UniProt-GOA database, which categorizes proteins into three categories: cellular component, molecular function, and biological process. Functional classification statistics were then performed using Clusters of Orthologous Groups of proteins (COG/KOG). Protein pathways were annotated using the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. Subcellular localization was predicted with WoLF PSORT. The protein–protein interaction (PPI) network from STRING (v.11.0) was visualized with the "networkD3" R package. Fisher's exact test with a p value < 0.05 indicated statistical significance. 2.5 PRM Analysis Tissue collection and processing remained the same, but the trypsin-digested peptides from each sample were separated using an EASY-nLC 1000 UPLC instrument.
After dissolving the trypsin-digested peptides in solvent A, they were eluted using solvent B at a flow rate of 500 nL/min according to the following gradient: an increase from 6% to 25% over 40 min, from 25% to 35% over 12 min, from 35% to 80% over 4 min, and a hold at 80% for the final 4 min. The eluted peptides were then analyzed using a Q Exactive Plus MS instrument with an electrospray voltage of 2.1 kV. Full MS detection was performed over a scan range of 400–905 m/z on the Q Exactive Plus instrument, with the AGC target set to 3E6 and the maximum injection time to 50 ms. For MS/MS, the Orbitrap scanning resolution was set to 17 500, with the AGC target at 1E5, the maximum injection time at 220 ms, and the isolation window at 1.6 m/z. We used the MaxQuant search engine (v.1.6.15.0) and Skyline software version 21.1 to process and analyze the PRM data. Peptide quantification was performed based on the peak area of the fragment ions from their respective transitions. Statistical significance was considered to be achieved when the p value from Student's t-test was less than 0.05. The detailed technical process of PRM is provided in the attachment (Data ).
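To make the DEP criteria from the database-search step concrete (FC > 1.5 or FC < 1/1.5 with a Student's t-test p value < 0.05), the following hypothetical Python sketch shows how such a screen could be computed from a protein-by-sample intensity table; the column names and random data are assumptions and do not reflect the study's actual processing pipeline.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical protein-by-sample intensity table: 575 quantified proteins,
# 6 Ca (COPD with lung cancer) samples and 10 COPD samples.
ca_cols = [f"Ca_{i}" for i in range(1, 7)]
copd_cols = [f"COPD_{i}" for i in range(1, 11)]
data = pd.DataFrame(rng.lognormal(mean=10.0, sigma=1.0, size=(575, 16)),
                    columns=ca_cols + copd_cols)

# Fold change (Ca vs COPD) and a two-sample Student's t-test per protein
fc = data[ca_cols].mean(axis=1) / data[copd_cols].mean(axis=1)
pvals = stats.ttest_ind(data[ca_cols], data[copd_cols], axis=1).pvalue

up = (fc > 1.5) & (pvals < 0.05)          # upregulated DEPs
down = (fc < 1 / 1.5) & (pvals < 0.05)    # downregulated DEPs
print(f"upregulated: {int(up.sum())}, downregulated: {int(down.sum())}")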
Results 3.1 Clinical Characteristics After preliminary demographic statistical analysis and sample quality control processing, our discovery cohort ultimately included 10 patients in the COPD group and 6 patients in the COPD with lung cancer (Ca) group. In the validation cohort, there were six patients in the COPD group and four patients in the Ca group. Their clinical baseline characteristics and tumor pathological information were presented in Table . The two cohorts showed no significant differences in age, sex, body mass index (BMI), or smoking history ( p value > 0.05). 3.2 Identification of Candidate Markers We identified 784 proteins, 575 of which were quantified (Figure , Table ). Biological replicates were validated by principal component analysis (PCA) (Figure ), Pearson's correlation coefficient (PCC), and relative standard deviation (RSD) (Figure ), which showed that the differentiation and sample selection between the Ca and COPD groups were satisfactory. Among the quantified proteins, 86 DEPs showed statistically significant changes between Ca and COPD patients, including 40 upregulated and 46 downregulated DEPs, as visualized in the volcano plot and heatmap (Figure , Table ). According to the functional classification, these DEPs were mainly distributed in the extracellular space (52.33%) (Figure , Table ). GO analysis revealed that DEPs in the Ca group were involved mainly in biological regulation, metabolic processes, and immune system compared to those in the COPD group (Figure , Table ). COG/KOG analysis showed results similar to those of subcellular localization and GO analysis (Figure , Table ). To investigate the functional enrichment of DEPs, we conducted enrichment analysis at three levels: GO classification (Figure , Table ), KEGG pathway enrichment, and protein structural domain enrichment. Subsequently, the genes were divided into four groups based on FC, labeled Q1 to Q4.
To determine the correlation of protein function, we performed enrichment analysis for GO classification, KEGG pathways, and protein domains for each Q group, and then, we conducted cluster analysis. Biological process analysis revealed that upregulated DEPs were related to metabolic processes and cellular cytoskeleton organization (Figure , Table ). In addition, KEGG pathway analysis showed that upregulated DEPs were mainly involved in carbon metabolism, insulin secretion and IL‐17 inflammatory signaling, while downregulated DEPs were mainly associated with ferroptosis (Figure , Table ). For protein structural domains, upregulated DEPs were enriched within protein tyrosine phosphatase (Figure , Table ). The PPI analysis revealed differential protein interaction relationships based on a confidence score > 0.7 (enrichment p value < 0.05) (Figure ). 3.3 PRM Validation Based on the above results, we selected 16 DEPs as candidate proteins to perform PRM validation analysis (Table ). We obtained abundance values for 16 candidate proteins through quantitative data of target peptide fragments to confirm the reliability of our findings. We ultimately validated five DEPs with consistent trends in the Ca and COPD groups (Figure ), including keratin type I cytoskeletal 10 (KRT10, A0A1B0GVI3), serotransferrin (TF, P02787), keratin type II cytoskeletal 1 (KRT1, P04264), keratin type I cytoskeletal 9 (KRT9, P35527), and phosphatidylinositol‐glycan‐specific phospholipase D (GPLD1, P80108).
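As a rough illustration of how the PRM readout is reduced to the group comparison reported above, the sketch below sums fragment-ion peak areas per sample into a peptide-level abundance and applies Student's t-test at p < 0.05. The peak-area values and the use of Python/SciPy are hypothetical; the actual quantification was carried out in Skyline, and only the group sizes (four Ca and six COPD samples) follow the validation cohort described earlier.

```python
import numpy as np
from scipy import stats

# Hypothetical fragment-ion peak areas for one target peptide; each inner
# list holds the transitions measured in one sample (values are made up).
ca_samples = [
    [1.2e6, 8.5e5, 4.1e5],
    [1.0e6, 7.9e5, 3.8e5],
    [1.4e6, 9.2e5, 4.6e5],
    [1.1e6, 8.1e5, 4.0e5],
]
copd_samples = [
    [2.3e6, 1.6e6, 7.9e5],
    [2.0e6, 1.4e6, 7.1e5],
    [2.6e6, 1.8e6, 8.8e5],
    [2.1e6, 1.5e6, 7.4e5],
    [2.4e6, 1.7e6, 8.2e5],
    [2.2e6, 1.6e6, 7.7e5],
]

# Peptide-level abundance = summed peak area of its fragment-ion transitions.
ca_abundance = np.array([sum(s) for s in ca_samples])
copd_abundance = np.array([sum(s) for s in copd_samples])

# Two-sample Student's t-test; significance threshold p < 0.05 as in the methods.
t_stat, p_value = stats.ttest_ind(ca_abundance, copd_abundance)
print(f"fold change (Ca/COPD): {ca_abundance.mean() / copd_abundance.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```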
Discussion Previous studies have used plasma EVs for proteomic analysis to identify early screening biomarkers for lung cancer, some of which are also related to the early diagnosis of COPD, while very few studies have investigated how to detect lung cancer in COPD patients . Therefore, this study focused on exploring biomarkers for COPD combined with lung cancer. Moreover, this method is minimally invasive and has no apparent contraindications, making it a novel detection method particularly suitable for elderly individuals with poor lung function and no indication for low‐dose CT screening. In this study, a panel of five differential plasma exosomal proteins was identified in the COPD with lung cancer group and COPD group. The three proteins expressed at higher levels in the lung cancer group were keratins, namely, KRT1, KRT9, and KRT10. Keratin is a class of cytoskeletal proteins specifically expressed in epithelial cells, and the expression levels influence processes such as cell growth, migration, and invasion, which are hallmarks of cancer metastasis. Circulating tumor cells (CTCs), typically originating from epithelial‐derived tumors, detach from the tumor tissue and enter the bloodstream. During this process, the overexpression of keratin in CTCs can promote the invasiveness and metastasis of tumors, thereby supporting tumor cell detachment and dissemination . Thus, keratins have been established as the most commonly used markers to identify CTCs in various cancer patients, including breast, lung, colorectal, or pancreatic cancer . Additionally, a study revealed a notable positive association between tumor size and the quantities of KRT1, KRT9, and KRT10 in exhaled breath condensate samples from lung cancer patients, which were confirmed to originate from tumor cells . Thus, KRT1, KRT9, and KRT10 from plasma exosomal samples in our study indeed hold potential as screening biomarkers for lung cancer in patients with COPD. One of the above proteins that exhibited increased expression in the COPD group was TF.
TF not only participates in the transport of serum iron but also has been demonstrated to be an established determinant of oxidative stress and is involved in inducing DNA damage and activating numerous important signaling pathways . Ferroptosis has been demonstrated to inhibit tumor growth, while TF can promote ferroptosis via the transferrin receptor (TFRC) , which could explain why TF levels are lower in our lung cancer group. Consistent with our study, a study reported that the serum levels of TF in patients with lung cancer combined with COPD were lower than those in COPD patients . In addition, a meta‐analysis on iron‐related biomarkers and lung cancer risk revealed that the levels of TF were notably lower in lung cancer patients compared to those in healthy control groups , which also confirms the reliability of our results regarding TF. Another protein with lower expression in the lung cancer group is GPLD1, which can regulate the expression of associated proteins by partially cleaving glycosylphosphatidylinositol (GPI) anchored proteins, thereby participating in immune system processes, cell differentiation, and programmed cell death . Previous evidence suggested that decreased GPLD1 in tumor cells is closely associated with the occurrence and progression of hepatocellular carcinoma and colorectal cancer. Another study showed that plasma GPLD1 can serve as a predictive marker for the response of patients with locally advanced rectal cancer to neoadjuvant radiotherapy . Our study was the first to reveal that the expression levels of GPLD1 were significantly lower in COPD with lung cancer patients compared to non‐cancer patients. It is worth noting that the inclusion and exclusion criteria of this study are highly stringent. Older age is a common shared risk factor for both lung cancer and COPD patients . Therefore, we selected patients who were ≥ 40 years old for this study. Hospitalized patients often have multiple comorbidities or complications and may undergo various treatment regimens, all of which can interfere with biomarker expression. Therefore, we excluded patients with complex diseases and those who had received any form of systemic anti‐tumor therapy prior to sample collection. This approach was taken to strictly control the influence of confounding factors and minimize potential biases. The technical methods we employed offer several advantages. First, the LFQ proteomic analysis we used is not limited by the number of labeling channels, making it more widely applicable compared to the traditional tandem mass tag‐based (TMT) proteomic method . Additionally, it does not incur significant costs associated with extensive fractionation and maintains both the depth of proteome coverage and the reliability of identification, making it suitable for detecting samples such as blood and other bodily fluids, compared with the traditional DDA method . Second, as a high‐resolution and high‐accuracy mass spectrometry‐based ion monitoring technique, PRM enables the selective detection of target proteins or peptide segments (such as post‐translationally modified peptide segments). This approach facilitates quantitative analysis of target proteins/peptide fragments and enables precise and specific analysis of complex samples . However, our study has several limitations.
The single‐center design with a relatively small sample size limited the diversity of patients as well as the variability in tumor histology and stage, thereby constraining the reproducibility and generalizability of our findings. In the future, we plan to increase the sample size and incorporate multi‐center validation to enhance the reliability of our conclusions. Additionally, we intend to perform subgroup analyses based on tumor histology, staging, and different COPD pulmonary function statuses to further explore the applicability and comprehensiveness of our findings. Conclusions In conclusion, this study conducted plasma EV proteomics analysis using LFQ and PRM techniques, ultimately identifying five protein biomarkers with significant differences and certain biological relevance between the COPD with lung cancer group and the COPD control group. Our findings may provide some support for future research aimed at identifying lung cancer patients within the COPD population. Additionally, this study offers a foundation for further exploration of the shared mechanisms between COPD and lung cancer. Huohuo Zhang: formal analysis, methodology, investigation, writing – review and editing. Jiaxuan Wu: writing – original draft, validation, formal analysis. Jiadi Gan: methodology, investigation, software. Wei Wang: software, data curation. Yi Liu: investigation, data curation. Tingting Song: software, visualization. Yongfeng Yang: visualization, resources. Guiyi Ji: conceptualization, resources, project administration, supervision, writing – review and editing. Weimin Li: conceptualization, project administration, supervision, writing – review and editing, funding acquisition. This study involving human participants was approved by the Medical Ethics Committee of West China Hospital of Sichuan University (Clinical study registration number: 2019 Trial‐No. 195). Participants provided written informed consent to participate in this study. The authors declare no conflicts of interest. Figures S1–S3. Table S1. Table S2. Table S3. Table S4. Table S5. Table S6. Table S7. Table S8. Table S9. Data S1. Supporting Information.
Confirmation of Heart Malformations in Fetuses in the First Trimester Using Three-Dimensional Histologic Autopsy
07e549ae-43a1-4f9b-9661-5d71b08608b7
10184816
Forensic Medicine[mh]
This was a cohort study of pregnant women who elected first-trimester pregnancy termination in the setting of a fetus with suspected congenital heart disease (CHD) on first-trimester ultrasound examination. Medical termination of pregnancy was performed using mifepristone and misoprostol, according to national guidelines. Before termination, at 12–13 weeks of gestation, a detailed ultrasound examination was performed by a team of experienced maternal–fetal medicine subspecialists using transabdominal ultrasonography following a previously published protocol using color or high-definition directional power Doppler. A transvaginal approach was used when transabdominal imaging was inadequate. Voluson E10 and E8 systems equipped with RM6C and RIC5-9-D convex transducers were used for the ultrasound examinations. After medical termination of pregnancy, fetal autopsy was performed. The fetal hearts were removed and sent to the University of Medicine and Pharmacy Craiova's Research Centre for Microscopic Morphology and Immunology for further evaluation. The tissue was preserved in 10% neutral buffered formalin for 15 days, and paraffin embedding followed the standard protocol. Using a motorized HMB450 rotary microtome, each heart block was sectioned in serial 10-micrometer–thick sections. Sectioned slices were collected with a specific section transfer system on poly-L-lysine–prepared slides for improved adherence and left to dry at 37°C for 1 day. All slides were numbered to preserve slices in the correct sequence. Subsequently, we used the standard histologic protocol for hematoxylin-eosin staining. Special stains (periodic acid-Schiff stain, Masson's trichrome stain, and orcein) were used to confirm specific features, such as fibroelastosis in certain malformations. The stained slides were scanned using a Motic EasyScan scanner at 20× objective and saved in a proprietary format in the Motic Digital Slide Assistant package database. Image resolution was set to 72×72 dpi and quality to 10 using the “Batch image manipulation” plugin in GNU Image Manipulation Program (Fig. ). This enabled faster manipulation in the 3D reconstruction software. We eliminated the slides with tissue roll or rupture during the histology process. The images were imported into Amira Avizo software. The auto align function was used to align slices, and only minor manual adjustments were necessary. We used the “threshold” function with a masking range of 75–160 voxel value for segmentation. Artifacts were removed quickly and easily using the “brush,” “blow,” or “lasso” tools, which significantly improved the overall aspect of the volume. Segmentation was then performed to individualize heart structures using the “brush” tool, and each element of interest was assigned a different color (Fig. ). We used different sections through the reconstructed heart to evaluate the heart structure and highlight the defects. We generally aimed to section the fetal heart rendered volume according to the classic key planes of the ultrasound cardiac sweep (four-chamber view, left and right ventricular outflow tracts). In addition, certain functions can be applied to the volume to strengthen the results. To better visualize the differences between the left and right ventricles of the heart, we used “compute ambient occlusion” and “interactive threshold” to render the cavities' volumes. The 3D reconstructions and histologic slides (when needed) were analyzed and reviewed by a multidisciplinary team that included maternal–fetal medicine subspecialists and pathologists.
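The core of the reconstruction step is simple in principle: aligned slice images are stacked along the cutting axis and segmented by intensity thresholding. The sketch below illustrates that idea with synthetic data in Python/NumPy; the random volume, the voxel dimensions, and the in-plane pixel size are placeholders, and the actual work was performed interactively in Amira Avizo using the 75–160 masking range quoted above.

```python
import numpy as np

# Stand-in for the scanned slides: a stack of 2-D grayscale slices
# (one per 10-micrometer section), ordered as they were cut.
rng = np.random.default_rng(0)
n_slices, height, width = 50, 128, 128
slices = [rng.integers(0, 256, size=(height, width), dtype=np.uint8)
          for _ in range(n_slices)]

# Stack the aligned slices into a single volume (slice index = z axis).
volume = np.stack(slices, axis=0)

# Threshold segmentation with the reported masking range (voxel values
# 75-160); everything outside the range is treated as background.
mask = (volume >= 75) & (volume <= 160)

# Rough physical volume of the segmented tissue, assuming hypothetical
# voxel dimensions (10 um slice thickness; in-plane size is a placeholder).
voxel_mm3 = 0.010 * 0.005 * 0.005  # z * y * x, in millimetres
print(f"segmented voxels: {mask.sum()}")
print(f"approximate segmented volume: {mask.sum() * voxel_mm3:.3f} mm^3")
```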
The ethical norms and good practice in scientific research were followed throughout the study. The research ethics committees of the University of Medicine and Pharmacy in Craiova (no. 27/24.02.2021) and of the Emergency County Clinical Hospital of Craiova (no. 38680/13.09.2021) approved this study. Informed consent was obtained from all of the included patients. Six fetuses with suspected heart malformations were investigated using histologic 3D imaging reconstruction: two with hypoplastic left heart syndrome, two with atrioventricular septal defects, one with an isolated ventricular septal defect, and one with transposition of the great arteries. Turner syndrome and trisomy 21 were detected with genetic analysis in the ventricular septal defect and atrioventricular septal defect cases, respectively. After the segmentation process, we identified the cardiac structures and all of the defects detected by first-trimester fetal echocardiography. Even a small defect, such as the isolated ventricular septal defect (Fig. ), was visualized using the lateral views of the septum from the right and left ventricles in 3D reconstruction (Fig. ). The cases with more severe defects detected on ultrasound examination, such as the atrioventricular septal defect (Fig. ), were straightforward to confirm in 3D histologic reconstruction (Fig. ). An axial plane through the heart reveals the incomplete ventricular and atrial septum. The abnormal anatomy of the atrioventricular valves is evident—only one cusp is visible on each side because of the common atrioventricular valve (Fig. ). Furthermore, in this case, the systematic evaluation of the structures enabled the diagnosis of additional anomalies not detected at the time of the first-trimester ultrasound examination. We noted a bicuspid aortic valve (Appendix 1, available online at http://links.lww.com/AOG/D117 ) and a rare anatomical variant regarding the thymus position in relation to the left brachiocephalic vein. The right lobe of the thymus was located posteriorly to the left brachiocephalic vein, a rare variation that is important to be aware of in the event of thymic interventions (Appendix 2, available online at http://links.lww.com/AOG/D117 ). For the cases in which hypoplastic left heart syndrome was detected (Fig. ), we emphasized the differences between the two ventricles and great vessels through multiple modalities. The fastest way to evaluate the ventricular size discrepancy was to investigate the axial sections of the reconstructed heart (Fig. ), similar and parallel to the four-chamber view plane. Longitudinal long-axis views of the fetal heart highlight the size differences of both the ventricles and the outflow tracts (Appendix 3, part C, available online at http://links.lww.com/AOG/D117 ). The cavities' relative sizes can be further highlighted and compared using the volume modification functions (Appendix 3, part A, http://links.lww.com/AOG/D117 ). Also, a transverse section at the base of the heart illustrates the caliber difference between the normal pulmonary artery and hypoplastic aorta (Appendix 3, part B, http://links.lww.com/AOG/D117 ). The ultrasound scan revealed a nonfunctional but thickened and echogenic left ventricle, suggesting myocardial hypertrophy and endocardial fibroelastosis. This led us to investigate the presence of cardiomyopathy and endocardial fibroelastosis using special histologic stains (Appendix 4, available online at http://links.lww.com/AOG/D117 ).
Ventricular wall hypertrophy was accompanied by increased subendothelial density of the collagen, fibrin, and elastic fibers, consistent with endocardial fibroelastosis. In the transposition of the great arteries case (Fig. ), the main characteristics of the anomaly were evident using histologic 3D imaging reconstruction: parallel vessel alignment (Fig. A) and ventriculoarterial discordance, where the aorta arises from the morphologic right ventricle and the pulmonary artery arises from the morphologic left ventricle (Fig. B). Furthermore, evaluation of the coronary arteries was possible, and we noted the circumflex artery arising from the right coronary artery (Fig. C). Given the high incidence of CHD, a reliable and widely available method to confirm malformations detected prenatally by ultrasonography should be available. In this article, we demonstrate the ability to confirm CHD detected on first-trimester ultrasound examination in fetal specimens after termination of pregnancy or pregnancy loss using histologic 3D imaging reconstruction. The histologic 3D imaging reconstruction protocol used in our study is relatively low cost and employs generic equipment and easily acquirable software. Furthermore, the learning curve of each step is not steep, because the technique is widely used in general pathology practice. The process can be automated, further reducing the time needed to prepare and scan the histologic slides. The studies throughout the literature note that perinatal autopsy performed in the second or third trimester serves not only as an audit for prenatal ultrasound examination findings but also identifies additional anomalies. – The same statement may now be applied to the first trimester using this technique. Histologic 3D imaging reconstruction allowed us to identify the ultrasound-detected anomalies in a small but diverse series of cases (cardiac chambers and conotruncal anomalies) and facilitated the diagnosis of additional findings (hypertrophic cardiomyopathy and fibroelastosis in the hypoplastic left heart syndrome case, bicuspid aortic valve and the anatomical variant of the thymus in the atrioventricular septal defect case, and the branching variation of the coronary arteries in the transposition of the great arteries case) , that could not have been established by ultrasonography at the time of the first-trimester scan. These additional findings may affect counseling of the patient regarding recurrence risk. , Perinatal autopsy complemented with histologic examination remains the gold standard for fetal anatomy assessment, even in the era of high-resolution computed tomography and magnetic resonance imaging. Histologic 3D imaging reconstruction is an important step in the field. It provides additional advantages, including medical teaching – and telemedicine, because the reconstructions can be re-examined by various practitioners. This technique also facilitates remote analysis and reduces the time and costs of transfer of physical pathology samples between institutions. Histologic 3D imaging reconstruction also provides an opportunity to retain specific slices for supplementary special stains, which can aid in evaluating the fetal heart or add valuable information to standard autopsy. , , In the hypoplastic left heart syndrome case, we improved the diagnosis by detecting endocardial fibroelastosis , and characterized the associated hypertrophic cardiomyopathy. 
This finding is important in clinical care, because the literature suggests a genetic component of this disease as well as an association with maternal anti-Ro and anti-La antibodies. , One limitation of this technique is that distortion can occur from one image to another due to tissue shrinkage throughout the process or inhomogeneous relaxation of sections before mounting. – Methods are available to overcome this limitation, – and automation of the slicing and scanning, user-friendly software, and experienced team members can overcome these technical issues. In conclusion, we demonstrate the capacity to confirm fetal heart anomalies using 3D histologic imaging reconstruction of fetal hearts. The findings using histologic 3D imaging reconstruction consistently confirmed the first-trimester ultrasound imaging findings of congenital heart anomalies and, in some cases, identified additional anomalies.
Bioactive Materials in Vital Pulp Therapy: Promoting Dental Pulp Repair Through Inflammation Modulation
da94d219-95d4-4fea-a4ac-1363b4c694d5
11853510
Dentistry[mh]
Dental pulp is enclosed by hard dentin walls , and it is very common for it to be infected by tooth decay, trauma, or other factors. Inflammation of dental pulp caused by such factors can result in intense pain due to the non-expandable nature of the pulp cavity . In the past, treatment involved exposure and subsequent extraction of the pulp, commonly referred to as Root Canal Treatment (RCT). However, with a paradigm shift towards minimally invasive biologic therapy, vital pulp therapy (VPT) and regenerative endodontic therapy (RET) are increasingly important . VPT entails the removal of damaged or infected parts of the pulp tissue, followed by the application of medication or materials to promote repair, ultimately aiming to restore the tooth’s function and esthetics . The advancement of these treatment approaches is strongly correlated with the development of bioactive dental materials. The most critical step of VPT is ensuring the repair of the dental pulp. In the past, pulpal inflammation was considered an undesirable response, often resulting in cell necrosis and treatment failure. However, recent evidence suggests that inflammation plays a crucial role in the process of pulp repair, which is a prerequisite for inducing pulp repair and healing . At the onset of injury, pattern recognition receptors, including Toll-like receptors (TLRs) on the surface of dentinogenic cells and pulp fibroblasts, bind to relevant molecules and initiate an inflammatory cascade response. Then, pulp fibroblasts and inflammatory cells secrete numerous pro-inflammatory cytokines, amplifying the inflammatory process . In the later stages of inflammation, anti-inflammatory cytokines are secreted to terminate inflammation. Notably, some cytokines exert diametrically opposed pro-inflammatory and anti-inflammatory effects through mediating different signaling pathways. When inflammation is reduced to a low level, the tissue microenvironment changes, and the balance shifts towards restorative repair . Dental pulp stem cells (DPSCs) can differentiate in multiple directions, facilitating pulp regeneration and dentin remineralization . In VPT, choosing an appropriate pulp-capping material is crucial for the modulation of the inflammation and repair course. Bioactive materials (BMs) are multifunctional composite materials composed of ceramics, metals, or polymers and are capable of interacting with living organisms and producing specific biological effects . Given the absence of a universally accepted standard definition of the term and the necessity of its usage, we categorize compounds or complexes containing metal ions that are discussed herein as BMs. These substances act on dental and periodontal tissues, release their own components, and induce restorative changes in the organism. Over the past few decades, significant progress has been made in the development of dental BMs with the ability to interact with surrounding dental tissues and stimulate the repair of pulpal and periradicular tissues. Dental BMs, including mineral trioxide aggregates (MTAs), Biodentine, Bioaggregate, and iRoot BP Plus, are mainly based on calcium silicates and are widely used clinically to repair and regenerate damaged pulp tissue. Currently, the biological properties of BMs and their interactions with the pulp in VPT are better understood . However, numerous studies indicate that BMs appear to exert different effects on inflammation during various stages of the pulpal inflammatory response . 
It seems that inducing inflammation rapidly at the initial stage of pulp damage, controlling the severity of inflammation during the process, and inhibiting inflammation at an appropriate time to avoid adverse effects are key factors in promoting the repair of inflamed pulp . Understanding the mechanisms by which BMs influence pulp inflammation and repair will facilitate the development of drugs targeting both pulp repair and inflammation. This review summarizes various mechanisms through which BMs regulate inflammation and repair during VPT. The inclusion criteria were as follows: (I) original articles and (II) human or animal cell culture studies or animal studies. The exclusion criteria were as follows: (I) case study reports, (II) reviews or systematic reviews, (III) commentaries/letters to the editor/expert opinion, and (IV) non–English-language articles. Search Methodology The MEDLINE/PubMed library databases were queried for relevant articles on the topic of applications of bioactive materials in VPT published up to December 2023 (last accessed 31 December 2023). The search terms were the following keywords used in various combinations: “Mineral trioxide aggregates”, “Biodentine”, “iRoot BP Plus”, “pulp capping”, “lithium”, “zinc”, “Strontium”, “Magnesium”, “Silver”, “inflammation”, “molecular mechanism”, “signaling pathway”, “dental stem cell”, “apical papilla stem cell”, and “dental pulp”. An initial literature search using different combinations of the search terms yielded 1112 articles; a flowchart of the review process is provided. Titles and abstracts of these articles were reviewed by 2 independent examiners who excluded nonqualifying publications. In order to enhance the transparency and reliability of the studies, a brief assessment of the quality of the included studies was conducted, and the evidence was categorized into the following three levels based on the GRADE framework: high quality, moderate quality, and low quality. Concurrently, given the heterogeneity of study designs, particular attention was devoted to the randomization of trials, the sample size, the configuration of control groups, and the objectivity of outcome assessment. High quality: randomized controlled trials and repeated experiments; subjects are dental pulp stem cells or other stem cells within the pulp chamber. Moderate quality: small sample-size experiments or non-randomized controlled trials; subjects are other cells with stemness. Low quality: uncontrolled trials; subjects are mismatched cell types or cells of unknown origin, or animal experiments, or trials that do not fully meet the criteria for high and moderate quality. Calcium hydroxide (Ca(OH) 2 ) has traditionally been regarded as the gold standard for VPT.
Due to its high basicity, calcium hydroxide can locally damage pulp tissue, creating an uncontrolled necrotic area that triggers a sustained inflammatory response and may lead to intra-pulpal calcifications . Currently, calcium hydroxide is being replaced by a new generation of materials—calcium silicate-based materials, which exhibit excellent biocompatibility, intrinsic osteoconductive activity, and the ability to induce regenerative responses. They can promote the formation of higher-quality dentin bridges and improve the sealing of pulp-capped sites . MTA, the first bioceramic material used in endodontics, was developed based on Portland cement and is composed mainly of calcium, silicon, aluminum, bismuth, and iron. With its superior biocompatibility and sealing properties, MTA has been widely used in a variety of VPT and RET and has become the benchmark for the development of novel bioceramic materials . Biodentine, a “dentine replacement” material developed by Septodont (Saint-Maur-des-Fossés, France) in 2009, contains tricalcium silicate, calcium carbonate, zirconium oxide, and calcium chloride and is widely recognized as a promising material that exhibits excellent physical and biological properties in VPT and RET, including sealing, dentine formation, and pulp regenerative abilities similar to MTA. In addition, Biodentine has a shorter setting time and is less likely to cause tooth discoloration than MTA . iRoot BP Plus, developed by Innovative Bio Ceramix Inc. (Vancouver, BC, Canada), is regarded as an alternative to MTA. It is composed of tricalcium silicate, zirconium oxide, tantalum pentoxide, dicalcium hydroxysilicate, calcium sulfate, calcium dihydrogen phosphate, and fillers . The literature shows that iRoot BP Plus has been used in various clinical procedures, such as direct or indirect pulp capping and pulpotomy . The available studies, though limited in number, demonstrate the excellent physicochemical and biological properties of the substance. However, further research is necessary to determine its efficacy. In addition to commercialized bioceramic materials, BMs can also include bioactive ions like titanium and strontium , bioactive proteins such as amelogenin and Human β-defensin 4 , and naturally occurring active materials like propolis . Inflammation is a natural defense response to damage to the pulp–dentin complex; it aims to eliminate the initial damage factor and initiate the repair process. However, excessive or persistent inflammation may lead to further damage to the pulpal tissues and disease progression. During the initial stages of pulpal injury, immune cells quickly gather at the affected site. Neutrophils are the first immune cells to arrive, and they eliminate invading microorganisms by releasing enzymes and reactive oxygen species . The inflammatory response is then amplified by the recruitment of macrophages and dendritic cells, which are responsible for the phagocytosis of bacteria, the removal of necrotic tissue, and antigen presentation . The onset, persistence, and resolution of acute inflammation in dental pulp are complex biological processes involving multiple immune cellular and molecular pathways. This primarily includes the following aspects. 5.1. Interleukin Both IL-1α and IL-1β, as members of the interleukin family, are ubiquitously expressed and key pro-inflammatory cytokines.
IL-1α precursors are released as biologically active mediators during cell necrosis, and they bind to IL-1 receptor 1 (IL-1R1), inducing the same pro-inflammatory effects as IL-1β . IL-6 is expressed during acute inflammation and is primarily produced by T cells, monocytes, fibroblasts, and macrophages in response to antigens . It is typically regarded as a pro-inflammatory marker within the first 24 h of dental pulp tissue infection. However, it has also been reported that IL-6 has anti-inflammatory effects . This may be due to the fact that the effects of IL-6 are related to the microenvironment and the timing of its expression. Therefore, the bidirectional effects of IL-6, both pro-inflammatory and anti-inflammatory, are important for the development of inflammatory processes in dental pulp. IL-6 can not only produce regenerative or anti-inflammatory effects through the classical pathway of Mitogen-Activated Protein Kinase (MAPK) signaling but also mediate the pro-inflammatory effects through the trans signaling pathway. IL-8 is produced by cells expressing TLRs, such as dentinogenic cells, neutrophils, and monocytes, but is only released under inflammatory conditions . IL-8 recruits and activates neutrophils, induces superoxide production, and enhances the expression of neutrophil adhesion molecules . It is evident that IL-8 plays a role in the development of inflammation in pulpitis. And the successful detection of IL-6 and IL-8 upregulation has been recognized as a marker of the induction of pulpitis in numerous studies. 5.2. Complement As an important part of the immune system, the complement system plays a key role in the initiation of pulpal inflammation, elimination of bacteria or irritants, and repair of the pulp–dentin complex. In pulpal inflammation, activation of the complement system can amplify the inflammatory response and promote lesion progression . The complement system can be activated via the classical pathway, the mannan-binding lectin pathway, and the alternative pathway, all of which can be induced by pathogens or components of pathogens, such as bacterial lipopolysaccharides. This activation leads to the production of various complement molecules, including C3a, C5a, and membrane attack complex. C3a and C5a are chemotactic factors that attract leukocytes, such as neutrophils and macrophages, to sites of inflammation. They also increase vascular permeability and promote the infiltration of inflammatory mediators . Moreover, it has been demonstrated that C5a stimulates odontogenic differentiation through various pathways in both healthy and inflammatory states of Human dental pulp stem cells (hDPSCs) , while C3a facilitates the mobilization and specific recruitment of DPSCs and dental pulp fibroblasts . 5.3. Cellular Autophagy Cellular autophagy is an intracellular degradation pathway that maintains cellular homeostasis by encapsulating and digesting discarded or damaged organelles and proteins inside the cell. This process can be activated in various cell types, including pulp fibroblasts, dentinogenic cells, macrophages, and lymphocytes, in response to stress and hypoxia in the inflamed pulp. Autophagy maintains DPSC homeostasis by degrading intracellularly damaged organelles and biomolecules . It also reduces oxidative stress and inflammatory signaling by inhibiting the activation of the NF-κB pathway and the NOD-like receptor thermal protein domain associated protein 3 (NLRP3) inflammasome .
Furthermore, it is also able to reduce lipopolysaccharide (LPS)-induced pulp cell pyroptosis . Autophagy can protect odontoblasts during early inflammatory stages of caries . However, excessive autophagy induced by stress conditions can cause and exacerbate tissue damage. In the later stages of inflammation, autophagy contributes to the removal of inflammatory cell debris and promotes tissue repair . 5.4. Macrophages Macrophages can exist in either an M1 or M2 polarization state, depending on the microenvironment. These cells play a dual role in inflammation and tissue repair . During the initiation of pulpitis, macrophages activate the immune response by phagocytosing pathogens and releasing inflammatory mediators such as TNF-α, IL-1β, and IL-6. This pro-inflammatory activity is the primary function of M1-type macrophages. At the same time, M1-type macrophages produce ROS to kill pathogens, but this may also cause damage to the surrounding pulp tissue. During the regression and healing phase of inflammation, macrophages undergo M2-type polarization. In their M2 state, macrophages help clean the site of inflammation by phagocytosing cellular debris and dead cells. They also release anti-inflammatory cytokines, including IL-10 and TGF-β, as well as factors that promote cellular proliferation and tissue repair. This promotes the regression of inflammation and the repair of tissues . 5.5. Molecular Signaling Pathways Signaling pathways are fundamental to life processes, mediating the transmission of extracellular molecular signals across cell membranes to exert their effects intracellularly. Extracellular molecular signals (referred to as ligands) include a wide variety of substances, such as hormones, growth factors, cytokines, neurotransmitters, and small molecule compounds. Signaling pathways related to inflammation, including nuclear factor kappa-B (NF-κB), Wnt, Notch, MAPK, and NLRP3, are activated or inhibited to regulate inflammation directly or indirectly by altering the microenvironment. The NF-κB pathway is crucial in pulpal inflammation as it controls the expression of various inflammatory factors, including TNF-α and IL-1β, which aggravate the inflammatory response and contribute to further damage to pulpal tissue . TNF-α, produced by activated macrophages and T cells, induces the release of inflammatory mediators, recruits lymphocytes and monocytes, and stimulates endothelial cells to express adhesion molecules such as vascular cell adhesion protein 1 (VCAM-1) and intercellular cell adhesion molecule-1 (ICAM-1), as well as secrete chemokines like C-C motif chemokine ligand 2 (CCL2)/monocyte chemoattractant protein 1 (MCP-1) and IL-8 . IL-1β, secreted by macrophages, dendritic cells, and dentin-forming cells in the dental pulp following pathogen recognition or stimulation , enhances the recruitment and activation of neutrophils and macrophages, increasing their phagocytic activity. This creates a positive feedback loop, stimulating the production of more pro-inflammatory cytokines. Additionally, IL-1β increases the permeability of the pulpal vasculature, promoting the diffusion of inflammatory mediators and exacerbating the inflammatory response . MAPKs are upstream components of NF-κB, comprising three families: ERK, JNK, and p38 MAPK . When stimulated by pathogens or activated by inflammatory mediators such as TNF-α and IL-1β, p38 MAPK activates the NF-κB pathway.
This pathway regulates the expression of inflammatory genes, promotes the recruitment and activation of inflammatory cells, and ultimately leads to increased inflammation . The process is regulated by positive feedback . Additionally, the MAPK pathway regulates pulp cell survival and apoptosis, which are pivotal in determining the extent of pulp tissue damage and its subsequent repair .
6.1. MTA 6.1.1. Effects of MTA on Inflammatory Factors It is known that MTA contributes to the long-term reduction of pulpal inflammation and guides the restoration of pulpal tissue . Santos et al. performed total pulpotomy using MTA and Biodentine on five beagles after one week of dentin exposure and took samples for observation after 14 weeks. The results demonstrated a substantial regenerative capacity of the pulp during the long-term restorative process, even in the presence of prior inflammatory conditions . However, numerous studies have demonstrated that MTA’s effects on inflammation are not consistent throughout the inflammatory process, indicating that MTA does not exhibit anti-inflammatory activity at all stages of inflammation . IL-6 and IL-8 were used to indicate the severity of inflammation . Minsun Chung et al. observed that after treating inflamed DPSCs with White MTA for 48 h, the expression levels of IL-6 and IL-8 significantly increased. However, another study reported that following 48 h of LPS treatment, the two markers decreased upon stimulation with Retro MTA . Ciasca et al. observed that the treatment of inflamed Human osteoblast-like cells with ProRoot MTA resulted in the downregulation of IL-1β and IL-6 within 48 h, and a gradual reduction of the inhibitory effect on IL-6 was noted . Nevertheless, it has been demonstrated that IL-6 secretion was not markedly enhanced or suppressed by MTA treatment of Human monocytes for 24 h, despite the notable downregulation of IL-1β . These findings indicate that MTA induces varying inflammatory responses in different cell types. Even within the same cell type, different formulations of MTA elicit distinct inflammatory reactions. In addition, MTA may exert anti-inflammatory or pro-inflammatory effects that are subject to dynamic adjustment according to the time of action throughout the repair process. Moreover, MTA materials themselves may trigger inflammation, and this pro-inflammatory effect becomes more pronounced over time. Some studies have investigated the effect of MTA on healthy DPSCs . The results demonstrated that the pro-inflammatory marker IL-1β exhibited a significant increase within two days, while IL-6 and IL-8 demonstrated varying degrees of upregulation throughout the eight-day observation period. Additionally, the osteogenic marker ALP demonstrated a notable suppression. While a separate study demonstrated that MTA suppressed IL-1β expression in Human monocytes, this inhibitory effect was less pronounced than that observed in the inflammatory environment of a concurrent experiment . These findings indicate that in the absence of an inflammation environment, MTA materials may induce inflammation and even inhibit mineralization.
This propensity is likewise observed in Human neutrophils, Human fibroblasts, Human osteoblast-like cells, murine RAW264.7 macrophage cells, and L929 Mouse fibroblasts. The fate determination of DPSCs plays a crucial role in their future development, which, in turn, influences the success of pulp-capping repair in clinical practice. The pro-inflammatory and mineralization-inhibitory effects of MTA on healthy DPSCs cannot fully explain its clinical application in indirect pulp capping, which aims to promote the formation of reparative dentin. The dentin mineralization-promoting effect of MTA in indirect pulp capping appears to be attributed to its influence on autophagy in hDPSCs.
6.1.2. Effects of MTA on Cellular Autophagy
Two studies regarding the effects of MTA on autophagy in healthy hDPSCs indicate that MTA's influence on cellular autophagy varies at different stages of action. Qiu et al. observed that MTA promoted cell proliferation and inhibited differentiation through the early inhibition of autophagy and activation of the Notch pathway within 24 h. MTA may enhance the repair of damaged pulp by potentially accelerating the proliferation of hDPSCs and shortening the duration needed for these cells to progress into the odontoblastic differentiation phase in clinical practice. However, Kim et al. found that MTA promoted autophagy through the AMPK pathway and induced the differentiation and mineralization of adult dentin cells on days 3, 5, and 7. These results indicate a notable shift in the effect of MTA on autophagy, from initial inhibition to subsequent promotion by the second day. This change may be attributed to the activation of different signaling pathways in varying cellular microenvironments, leading to distinct effects on autophagy and potentially resembling a relay mechanism that promotes the proliferation, differentiation, and mineralization of DPSCs. Additionally, a study in which healthy murine RAW264.7 macrophages were treated with MTA demonstrated that cellular autophagy could be induced within 24 h, which is inconsistent with previous research. This discrepancy may be attributed to differences in autophagy regulatory mechanisms across species. However, there is currently no relevant research on inflammatory DPSCs or other Human cells.
6.1.3. Effects of MTA on Molecular Signaling Pathways
MTA has been observed to induce the activation of signaling pathways in DPSCs. In the absence of inflammation, the activity of the Akt, phospholipase C, and Wnt pathways can be observed following the treatment of cells with MTA for varying periods, from one day to two weeks. The regulation of pulpal inflammation and tooth repair by MTA is significantly influenced by the high involvement of the calcium-sensing receptor (CaSR) and transient receptor potential ankyrin subfamily member 1 (TRPA1). Chen et al. demonstrated that CaSR is expressed in Human dental pulp. It was also shown that CaSR can negatively or positively regulate the MTA-induced mineralization of hDPSCs in a ligand-dependent manner via the phosphoinositide 3-kinase/Akt pathway. J. M. Kim et al. conducted further studies on the relationship between the CaSR and MTA and found that MTA dually regulates extracellular Ca2+ and pH, activating the CaSR and subsequently activating multiple downstream pathways. Among these, Ca2+ mobilization from intracellular stores by the phospholipase C pathway plays an important role in the osteogenic differentiation of hDPSCs by regulating transcriptional activity.
CaSR mainly senses changes in Ca2+, while TRPA1 is the channel through which odontogenic cells detect pH in the extracellular environment. The findings of Kimura et al. indicate that high pH stimulation results in the activation of intracellular Ca2+ mobilization via TRPA1 channel-mediated extracellular Ca2+ influx and intracellular Ca2+ release. Furthermore, under pathological conditions, TRPA1 channel activation directly promotes dentin formation. In addition to the CaSR and TRPA1, Chen et al. also cultured hDPSCs using a range of concentrations of MTA extracts to examine their proliferation and odontogenic differentiation. Their findings indicated that when hDPSCs were cultured in a wide range of concentrations of MTA extracts, genes and proteins related to the Wnt/β-catenin signaling pathway were significantly elevated. This suggests that Wnt/β-catenin signaling is also involved in the odontogenic differentiation of hDPSCs. Moreover, the MAPK pathway has been found to be the pathway most frequently induced by MTA for pulpal osteogenic/odontogenic differentiation. J.-H. Kim et al. demonstrated that the treatment of hDPSCs with MTA and propolis, either alone or in combination, resulted in the phosphorylation of extracellular signal-regulated kinase (ERK) and the upregulated expression of dentin sialophosphoprotein (DSPP) and dentin matrix protein 1 (DMP1). All three subfamily proteins of MAPK signaling (ERK, p38, and JNK) are targets of MTA for the promotion of dentin repair. In addition, Du et al. and Yan et al. co-cultured MTA with Human dental stem cells from the apical papilla (hSCAPs) for periods ranging from three to seven days. The results demonstrated that different concentrations of MTA could promote the odontogenic/osteogenic differentiation potential of hSCAPs through the activation of the p38, ERK, or NF-κB signaling pathways. Furthermore, the NF-κB pathway was activated through the upregulation of inflammatory cytokines. Similarly, Wang et al. observed the activation of the MAPK and NF-κB pathways in Human periodontal ligament stem cells. Additionally, the combined use of MTA and platelet-rich fibrin (PRF) has been shown to synergistically promote the differentiation of hDPSCs into odontoblasts by regulating the bone morphogenetic protein (BMP)/Smad signaling pathway. Yun et al. found that the co-administration of MTA and growth hormone could enhance the secretion of BMP2 and p-Smad1/5/8. However, only a limited number of studies have investigated the activation of signaling pathways in inflammatory hDPSCs induced by MTA. Wang et al. demonstrated that MTA enhanced the LPS-induced proliferation, adhesion, and differentiation of hDPSCs, with the proliferation and adhesion processes occurring via the AKT pathway. However, it is possible that the cell differentiation process may not utilize the same pathway. Previous studies have indicated that the differentiation process of inflammatory hDPSCs may be achieved through the activation of the NF-κB pathway. This is because MTA also has a certain pro-inflammatory tendency when it activates this pathway by acting on healthy pulp tissues. In a study conducted by Y. Wang et al., Rat DPSCs were used to investigate the effects of MTA on odontogenic/osteogenic capacity. The findings indicated that MTA enhanced this capacity at the inflammatory site by activating the NF-κB pathway, which indirectly confirmed the hypothesis. It is noteworthy that Kuramoto et al.
discovered that MTA inhibited NF-κB activity and decreased IL-1α and IL-6 via the calcineurin/NFAT/Egr2 pathway when inflammatory RAW264.7 macrophage cells were incubated with MTA for a period of 5 h. This suggests that the modulatory effects of MTA on certain signaling pathways, such as the NF-κB pathway, may be dynamic in the context of an inflammatory environment. This is in contrast to the findings of most studies, which report a single activating or inhibitory effect. Rather, the effects of MTA on signaling pathways may be context-dependent and vary with the development of inflammation and alterations in the microenvironment. Consistent with previous studies, Y. Wang et al. also discovered that MTA could enhance odontogenic and osteogenic capacity through the activation of the JNK and ERK pathways following the treatment of healthy Rat bone marrow stromal cells with MTA for one week.
6.1.4. Effects of MTA on Macrophages
Furthermore, studies have demonstrated that MTA induced macrophage polarization towards the M2 phenotype, increasing the secretion of IL-10, TGF-β, and VEGF through the Axl/Akt/NF-κB pathway, which, in turn, exerts significant anti-inflammatory effects. This process is associated with a microenvironment of high pH and the gradual release of calcium ions from MTA.
6.2. Biodentine
Similar to MTA, Biodentine can promote the odontogenic/osteogenic differentiation of dental pulp through the MAPK and AKT pathways. Additionally, Luo et al. discovered that Biodentine also plays a role in inducing odontogenic/osteogenic differentiation through the calcium-/calmodulin-dependent protein kinase II (CaMKII) signaling pathway, where CaMKII facilitates this induction by promoting the phosphorylation of Smad1. Currently, numerous studies have investigated the effects of Biodentine on the pulp inflammation response. In healthy hDPSCs, Biodentine inhibited IL-6 secretion for up to 192 h, with a progressive increase in inhibition over time. However, in an inflammatory state, Biodentine unexpectedly promoted IL-6 secretion during the initial 48 h. Nevertheless, the inhibitory effect was observed to resume from the 96 h mark onwards. This pattern is notably distinct from the observed tendency of MTA to moderately promote inflammatory responses in healthy cells at the early stages and to suppress these responses in inflammatory cells thereafter. Furthermore, two additional studies demonstrated that Biodentine consistently promoted IL-8 secretion in both inflammatory and non-inflammatory states throughout the entire eight-day observation period. Previously, both IL-6 and IL-8 were regarded as inflammatory markers. However, the results of these studies indicate that the two cytokines were not always simultaneously up- or downregulated. The variations in the induction of IL-6 and IL-8 may suggest that cellular inflammation during different phases is regulated by distinct immune cell populations. Additionally, in the case of IL-6, Biodentine did not simply promote or inhibit its secretion. This suggests that the multifunctionality of the inflammatory cytokines in pulpal inflammation and the effect of Biodentine on them are also dynamically adjusted, similar to the effects observed with MTA.
Although there is a substantial body of literature indicating that complement, particularly the C3a and C5a fragments, plays a significant role in the initiation of pulpal inflammation and the subsequent reparative regeneration of damaged pulp, few studies have examined the impact of bioceramic materials on complement secretion. A study utilizing Biodentine, TheraCal, and Xeno Ⅲ to incubate injured pulp fibroblasts for 30 min demonstrated that Biodentine had no significant effect on C5a secretion, whereas TheraCal and Xeno Ⅲ, which contain resin components, significantly promoted C5a secretion, with the latter exhibiting a more pronounced effect. Notably, C5a secretion was positively correlated with resin content. This phenomenon can be attributed to the more severe inflammatory response caused by the lower biocompatibility of the resin. In contrast, Biodentine demonstrated no significant promotion or inhibition of C5a secretion under inflammatory conditions, suggesting that the active ingredient in the calcium silicate material may not affect pulpal inflammation through its filling properties. A study investigating Biodentine in the treatment of LPS-stimulated Human macrophages for a period of 24 h observed a reduction in the secretion of the pro-inflammatory cytokines IL-1β, IL-6, and IL-8, accompanied by an increase in the secretion of the anti-inflammatory cytokines IL-10 and TGF-β. This finding suggests that Biodentine may contribute to the polarization of macrophages from the M1 phenotype to the M2 phenotype.
6.3. iRoot BP Plus
Studies indicate that iRoot BP Plus facilitates pulp–dentin complex repair through pathways comparable to those of MTA. Zhang et al. demonstrated that iRoot BP Plus facilitates the migration of hDPSCs and pulp repair through the FGFR-mediated ERK1/2, JNK, and Akt pathways. Lu et al. found that iRoot BP Plus enhanced the osteogenic/odontogenic differentiation potential of bone marrow mesenchymal stem cells (BMSCs) via the MAPK pathway. A study utilizing iRoot BP Plus to treat hDPSCs in an inflammatory state for 24 h revealed a reduction in the secretion of the pro-inflammatory factors IL-1β and IL-6, accompanied by an increase in the secretion of the anti-inflammatory factors IL-4 and IL-10. This finding suggests that iRoot BP Plus is capable of inhibiting inflammation within a relatively short period of time. In addition to the aforementioned autophagy studies on MTA, there are also investigations using iRoot BP Plus on BMSCs. The results demonstrated that autophagy markers were progressively upregulated at 15, 30, and 60 min, indicating that iRoot BP Plus is capable of promoting the osteogenic/odontogenic differentiation of BMSCs through autophagy and that it induces cellular autophagy within a relatively short period of time.
6.4. Biologically Active Ions
Biologically active ions are defined as ions that can interact with biological systems and affect biological processes. Among the most studied are calcium, iron, silicon, zinc, magnesium, lithium, silver, phosphorus, and strontium. These ions have been shown to promote bone regeneration and tissue repair. Besides the main active components of calcium silicate materials (calcium, silicon, iron, and phosphorus), which have been introduced in the earlier sections, this part will focus on the effects of other active ions on the inflammatory response. Currently, the literature on the use of bioactive ions for targeted studies on pulpal inflammation remains limited. Nonetheless, we have identified numerous studies that elucidate the relationship between bioactive ions and odontogenic repair in pulpal treatment. Therefore, we will provide a detailed summary and discussion of these findings.
6.4.1. Lithium Ions
Lithium ions (Li+), as antagonists of glycogen synthase kinase 3 (GSK3), can mitigate the inhibitory effects of GSK3 on the Wnt signaling pathway, thereby indirectly activating the Wnt pathway. Through this mechanism, lithium ions modulate pulpal inflammation and promote pulp repair and healing. Liang et al. synthesized lithium-doped mesoporous nanoparticles (Li-MNPs) to treat hDPSCs. The results demonstrated that Li-MNPs significantly enhanced mineralization and odontogenic differentiation, thereby promoting dentin regeneration both in situ and in vivo. Furthermore, Alaohali et al. replaced sodium ions with lithium in BGs and observed tertiary dentin formation in pulp-capping experiments. Ishimoto et al. employed LiCl for pulp capping and observed the formation of tubular dentin.
6.4.2. Zinc Ions
Zinc ion (Zn2+) compounds such as zinc oxide, typically combined with clove oil (eugenol), have long been used in dental treatments for endodontic diseases. Huang et al. prepared zinc and zinc-containing bioactive glasses (ZnBGs) to treat hDPSCs. The results demonstrated that ZnBG increased the secretion of DSPP and DMP-1, as well as upregulated the mRNA of osteogenic markers and the expression of vascular endothelial growth factor (VEGF). Zhang et al. prepared a bioactive calcium phosphate cement (CPC) containing ZnBG by a sol-gel process and investigated its effects on hDPSCs, demonstrating the activation of odontogenic differentiation and the promotion of angiogenesis via the integrin, Wnt, MAPK, and NF-κB pathways. Among these, integrins, especially integrin α5 and α6, play a pivotal role in the proliferation, migration, and osteogenic/dentinogenic differentiation of hDPSCs.
6.4.3. Strontium Ions
Strontium ions (Sr2+) can replace calcium ions in enamel, enhancing the hardness of enamel, improving the structure and function of dentin, and improving the blood circulation of pulpal tissues. Bakhit et al. applied strontium ranelate (SrRn) to Mouse dental pulp cells (MDPs). Their findings demonstrated that SrRn stimulates the proliferation of MDPs and tooth formation via CaSR-activated PI3K/Akt signaling in vitro and induces osteogenic differentiation and mineralization. The results of another experiment demonstrated that Sr2+ may induce hDPSCs to differentiate into dentinogenic cell-like cells. Additionally, Huang et al. demonstrated that a specific dose of Sr could promote the proliferation, odontogenic differentiation, and mineralization of hDPSCs in vitro through CaSR activation of the downstream MAPK/ERK pathway.
6.4.4. Magnesium Ions
Magnesium ions (Mg2+) primarily function in dentin and cementum. However, recent studies have demonstrated that Mg2+ also functions in the pulp. Kong et al. found that a Mg2+-enriched microenvironment activated the ERK/BMP2/Smads signaling pathway, promoting the odontogenic differentiation of DPSCs. Zhong et al. synthesized Mg-BG to investigate its effects on mineralization, tooth formation, and the anti-inflammatory capacity of hDPSCs. The results revealed an increase in the expression of odontogenic genes, accompanied by a downregulation of inflammatory markers, including IL-4, IL-6, IL-8, and TNF-α.
6.4.5. Silver Ions
In addition to their antimicrobial and dentin tubule-sealing functions, silver ions (Ag+) have been shown to have a positive impact on modulating pulpal inflammation and promoting pulpal repair and healing. Zhu et al. incorporated silver-doped BG into a chitosan hydrogel (Ag-BG/CS) and applied it to inflamed DPSCs and Rat inflamed dental pulp models. The results of the cellular experiments demonstrated that Ag-BG/CS downregulated the expression of IL-1β, IL-6, IL-8, and TNF-α by inhibiting the NF-κB pathway and enhanced the in vitro odontogenic differentiation potential of DPSCs. In vivo experiments further indicated that Ag-BG/CS enhanced the preservation of vital pulp tissue and induced stronger restorative dentin formation compared with MTA. Additionally, the significantly increased phosphorylation levels of p38 and ERK1/2 suggested that Ag-BG/CS enhances pulpal restoration through the MAPK signaling pathway.
6.5. Bioactive Proteins
It is well established that the signaling pathways involved in the repair and healing of the pulp–dentin complex require the transmission or reception of information through a diverse array of proteins. This has inspired researchers to apply these proteins directly to regulate this process. In addition, there are proteins, such as VEGF, that contribute directly to pulp repair and healing. In this paper, we collectively refer to them as bioactive proteins. The most extensively studied bioactive proteins belong to the BMP family. In a study by Nakashima, pulpotomy was performed in experimental dogs with BMP-2 and BMP-4 applied in combination with dentin matrix; dentin formation was significantly promoted, suggesting the potential of these proteins to induce differentiation. Another study found that combining BMP-2 with VEGF significantly enhances the proliferation of hDPSCs. These findings indicate that BMP-2 may enhance the proliferation and differentiation abilities of DPSCs. A study found that BMP-7, in combination with MTA, had no significant effect on cell proliferation compared with MTA alone when treating DPSCs. However, an increase in mineralized nodules and a high expression of DMP-1 and DSPP were observed. Moreover, Liang et al. demonstrated that BMP-7 promoted the migration and odontogenic differentiation of hDPSCs. In conclusion, BMP-7 appears to promote the migration of DPSCs rather than their proliferation. BMP-9 is frequently associated with inflammatory responses, yet its relationship to endodontitis remains unclear. Song et al. studied BMP-9 expression in Rats with endodontic inflammation and in immortalized hDPSCs, using THP-1 cells to assess BMP-9's role. They found that BMP-9 overexpression decreased IL-6 and matrix metalloproteinase 2 (MMP-2) secretion, increased phosphorylated Smad1/5, and reduced phosphorylated ERK and JNK levels. BMP-9 also reduced THP-1 cell migration. These findings indicate that BMP-9 may play a role in the early stages of inflammation, exerting a partial inhibitory effect on its severity. Resolvin E1 is synthesized during the spontaneous regression phase of acute inflammation. Chen et al. utilized 8-week-old SD Rats to model pulp injury and sealed the pulp with collagen sponges impregnated with Resolvin E1 for a period of 4 weeks. The results demonstrated enhanced DMP-1 and DSPP secretion, as well as restorative dentin formation. Furthermore, the effect of heparin on hDPSCs has been investigated, and it has been found that heparin induces osteogenic bioactivity and increases BMP-2 and osteocalcin (OCN).
The impact of a combination of dentin matrix proteins (TDM) and DPSC-derived small extracellular vesicles (sEVs) on the repair of pulp–dentin complexes has also been investigated. The results demonstrated that sEVs enhanced the proliferation and migration of DPSCs. The combination of TDM and sEVs exhibited a synergistic effect on DPSC migration while simultaneously inhibiting their proliferation. In vivo, both TDM and sEV-TDM were observed to promote dentin formation, and odontoblast-like cells were observed. Furthermore, studies have shown that pAsp promotes the secretion of osteopontin (OPN) and DMP-1 and facilitates dentin regeneration in the absence of additional calcium sources.
They found that BMP-9 overexpression decreased IL-6 and matrix metalloproteinase 2 (MMP-2) secretion, increased phosphorylated Smad1/5, and reduced phosphorylated ERK and JNK levels. BMP-9 also reduced THP-1 cell migration . These findings indicate that BMP-9 may play a role in the early stages of inflammation, exerting a partial inhibitory effect on its severity. Resolvin E1 is synthesized during the spontaneous regression phase of acute inflammation. Chen et al. utilized 8-week-old SD Rats to model pulp injury and sealed the pulp with collagen sponges impregnated with Resolvin E1 for a period of 4 weeks . The results demonstrated enhanced DMP-1 and DSPP secretion, as well as restorative dentin formation. Furthermore, the effect of heparin on hDPSCs has been investigated, and it has been found that heparin induces osteogenic bioactivity and increases BMP-2 and osteocalcin (OCN) . The impact of a combination of dentin matrix proteins (TDM) and DPSC-derived small extracellular vesicles (sEVs) on the repair of pulp–dentin complexes has also been investigated . The results demonstrated that sEVs enhanced the proliferation and migration of DPSCs. The combination of TDM and sEVs exhibited a synergistic effect on DPSC migration while simultaneously inhibiting their proliferation. In vivo TDM and sEV-TDM were observed to promote the formation of dentin, and odontoblast-like cells were observed. Furthermore, studies have been conducted showing that pAsp promotes the secretion of osteopontin (OPN) and DMP-1 and facilitates dentin regeneration in the absence of additional calcium sources in dentin regeneration . A notable heterogeneity characterized the design of the studies incorporated within the scope of this review, encompassing in vitro cellular experiments on diverse cell types of Human and animal origin, along with animal model studies employing Rats or Mice . These variations have the potential to influence the comparison and integration of study results. Additionally, the experimental conditions (e.g., the employed inflammation model, the concentration of materials utilized) and outcome metrics (e.g., expression levels of inflammatory factors, biocompatibility assessment) exhibited significant variation across studies. Consequently, although the present analysis offers a comprehensive understanding of the role of bioactive materials in pulpal inflammation modulation and regeneration, readers should exercise caution when directly comparing different studies. In conclusion, with the increasing use of living pulp preservation in clinical practice, it becomes particularly important to investigate the mechanisms that promote the repair and healing of pulp. Thus, the question of how to regulate non-long-term mild or moderate inflammation as a prerequisite for inducing pulp repair and healing has emerged as a significant area of investigation. In vivo inflammation is primarily expressed by pro- and anti-inflammatory cytokines, which are regulated by macrophages, complement, autophagy, and other factors. These cells or proteins regulate the relevant cytokines through signaling pathways, either directly or indirectly, thus altering the inflammatory environment of damaged pulp. This, in turn, initiates pulpal restoration, ultimately leading to pulp healing and dentin formation. To facilitate the regulation of this complex process, BMs that interact with biological systems and produce specific biological effects have emerged. 
This review initially dedicates a section to elucidating the effects of various inflammatory cytokines, signaling pathways, complement, autophagy, and macrophages on inflammation. Subsequently, the BMs are divided into calcium silicate materials, bioactive ions, and bioactive proteins, and their effects on the pulp–dentin complex are discussed individually. The calcium silicate materials, such as MTA, Biodentine, and iRoot BP Plus, demonstrated comparable yet not identical effects on various inflammatory cytokines, including IL-1β, IL-6, TNF-α, and IL-8. Furthermore, the observed effects were not exclusively promotional or inhibitory. The complement components C3a and C5a, which play a pivotal role in the initiation of inflammation, did not show a consistent relationship with the active components of calcium silicate materials in the studies reviewed. Instead, their expression was found to be positively correlated with the resin content of the material. In studies of autophagy, MTA's influence on cellular autophagy varies at different stages of action, whereas iRoot BP Plus is clearly capable of activating autophagy. Regarding the effect of MTA and iRoot BP Plus on macrophages, it can be concluded that these materials promote polarization from the M1 toward the M2 phenotype. Biodentine also appears to facilitate macrophage polarization from the M1 to the M2 phenotype. In terms of signaling pathways, in addition to the MAPK and AKT pathways, Biodentine also activates the CaMKII pathway. MTA is also linked to the NF-κB, BMP/Smad, Wnt/β-catenin, and CaSR pathways. All of the aforementioned signaling pathways have been shown to have a positive effect on the osteogenic/odontogenic differentiation of hDPSCs. In addition to the calcium, silicon, and iron elements present in the active components of calcium silicate materials, bioactive ions—including lithium, zinc, strontium, magnesium, and silver—facilitate the reparative regeneration of the pulp–dentin complex. Among these, Li acts mainly by indirectly activating the Wnt pathway, Zn has been linked to activation of the integrin, Wnt, MAPK, and NF-κB pathways, and Sr correlates with the PI3K/Akt and MAPK/ERK pathways. In Mg-rich microenvironments, ERK/BMP2/Smads signaling is activated by an increase in intracellular Mg2+. Ag promotes pulp repair by inhibiting the NF-κB pathway and activating the MAPK pathway. The most extensively studied bioactive proteins include BMP-2, BMP-7, and BMP-9 of the BMP protein family, in addition to heparin, TDM, and pAsp, all of which have been shown to promote tooth regeneration. However, their antimicrobial properties are slightly weaker than those of the other two classes of BMs. A considerable number of BMs have been investigated with the objective of modulating pulpal inflammation and promoting the restorative healing of damaged pulp. However, despite this research, calcium silicate materials remain the only BMs in routine clinical use. Although they do achieve satisfactory results, they are still quite far from perfectly realizing VPT, namely the functional healing of the pulp and dentin. The principal active components of calcium silicate materials remain a limited number of elements, although an increasing number of modified BMs are being synthesized and studied. Different BMs elicit varying inflammatory responses in dental pulp due to their distinct compositions. Understanding their modes of action will contribute to a broader understanding of the mechanisms involved in induced dentinogenesis.
This knowledge provides clinicians with greater options for selecting the most appropriate BMs for precision therapy based on the specifics of each case. Furthermore, the currently widely used clinical BMs, including MTA, Biodentine, and iRoot BP Plus, have demonstrated efficacy in endodontic restorative healing and hard tissue generation. It is anticipated that the therapeutic effect of VPT will be further enhanced in the future through the incorporation of novel biologically active ions, such as Li and Ag, along with bioactive proteins, including the BMP family. Furthermore, by discussing the dynamic inflammatory regulation exerted by BMs in VPT, we hope to advance the integration of materials science with immunology, molecular biology, and other fields. This integration is expected to result in the development of smarter, novel materials that can sense and respond to changes in the inflammatory microenvironment in real time, thereby enhancing their role in therapy. These insights will help in the development of new materials with specific components, aiding in exploring how newly developed pulp-capping materials shift the balance from inflammation toward repair and healing. This understanding will also lay the foundation for designing future capping materials aimed at influencing these processes. In the future, it may be possible to gradually approach the goal of perfectly realizing VPT by combining a more diverse range of bioactive ions and bioactive proteins.
Clinical application of whole transcriptome sequencing for the classification of patients with acute lymphoblastic leukemia
d67205ce-a337-425d-9d37-1b7f4a52c26b
8330044
Anatomy[mh]
Transcriptome analysis, traditionally by gene expression arrays, has been a well-established diagnostic tool for characterizing and quantifying gene expression profiles and for detecting fusion transcripts for many years. The development of RNA sequencing (RNA-Seq), including polyA-selected and whole transcript sequencing (WTS), has made it possible to broaden the analytical spectrum to study multiple transcriptional events (e.g., chimeric transcripts, isoform switching, expression, etc.) with a single approach. Compared to expression arrays, RNA-Seq offers single-base-pair resolution and considerably less background noise, hence providing a relatively unbiased analysis of the transcriptome. Although guidelines have been established to allow for precise and context-dependent data analysis , no gold standard exists for any of the preprocessing steps or the downstream analyses. Hence, integrating RNA-Seq into the necessarily rigid quality standards of clinical diagnostic workflows is challenging. However, the multifaceted output of the assay can greatly benefit clinical diagnostics, as indicated by various studies [ – ] and reviews . Acute lymphoblastic leukemia (ALL) is a hematological neoplasm that is heterogeneous in its clinical and genetic characteristics . The World Health Organization (WHO) recognizes nine different sub-entities within BCP-ALL with recurrent genetic abnormalities , including four groups characterized by specific translocations that result in the formation of aberrant chimeric transcripts detectable by RNA-Seq fusion calling ( BCR-ABL1 , KMT2A -rearranged, ETV6-RUNX1 , TCF3-PBX1 ). Additional entities are characterized by abnormalities in chromosome number or by partial amplifications: BCP-ALLs with hyperdiploidy, hypodiploidy, or intrachromosomal amplification of chromosome 21 (iAMP21). More recently, further distinct ALL subtypes, including BCR-ABL1-like and ETV6-RUNX1-like ALL, were identified based on their gene expression profiles [ – ]. BCR-ABL1-like ALL was included in the 2017 WHO classification as a provisional entity, based on the treatment and prognostic implications associated with this high-risk subtype . Currently, the diagnosis of ALL patients requires various analyses encompassing morphology, immunophenotyping, molecular analysis of gene fusions and mutations, and detection of numerical and structural abnormalities based on chromosomal banding analysis (CBA) and fluorescence in situ hybridization (FISH) . With WTS, parallel analysis of gene expression profiles, fusion transcripts, and copy number changes becomes feasible, leading to an in-depth characterization of a patient's genetic profile as a basis for disease classification from the data set of a single approach. Nevertheless, studies that comprehensively assess all of these transcriptional events in a clinical setting are scarce. In the current study, we performed detailed WTS analysis in 279 patients with newly diagnosed ALL of B- and T-lineage, to explore the complete diagnostic potential of WTS for the genetic characterization of ALL and its applicability in routine practice. Patients and samples Two hundred seventy-nine patients with newly diagnosed ALL, sent to the MLL Leukemia Laboratory between 03/2006 and 01/2017 for diagnostic work-up, were selected based on sample availability for WTS and WGS. ALL diagnosis was established based on morphology, immunophenotype, and cytogenetics, as previously published [ – ].
The cohort comprised 115 female (41%) and 164 male (59%) patients, with a median age of 49 years (range 0.1–93 years) at diagnosis (Additional file : Table S1). The patients showed a B-cell precursor (BCP-ALL; n = 211) or T-cell precursor immunophenotype (T-ALL; n = 68). For WTS analysis, 64 healthy individuals (45% female, 55% male) were sequenced as controls. CBA, FISH, array-CGH CBA was performed for all 279 cases as previously described . Classification of chromosomal aberrations and karyotypes was performed according to the ISCN 2016 guidelines . The FISH probes used in diagnostic work-up were selected based on recommendations, aberrations detected in CBA, and the availability of probes. Array-CGH analyses were carried out for 123 cases (4x180K microarray slides, Agilent Technologies, Santa Clara, CA). The design was based on UCSC hg19 (NCBI Build 37, February 2009). Library preparation, sequencing, and data preprocessing Library preparation was done as previously described . In brief, genomic DNA and total RNA were extracted from lysed cell pellets of diagnostic bone marrow ( n = 196) or peripheral blood ( n = 83). Two hundred fifty ng of high-quality RNA were used as input for the TruSeq Total Stranded RNA kit (Illumina, San Diego, CA, USA). WGS libraries were prepared from 1 μg of DNA with the TruSeq PCR-free library prep kit (Illumina). For WTS, 101 bp paired-end reads were produced on a NovaSeq 6000 system with a median yield of 68 million clusters per sample. WGS libraries were sequenced on a NovaSeq 6000 or HiSeqX instrument to 90x coverage with 150 bp paired-end reads (Illumina). FASTQ generation was performed applying Illumina's bcl2fastq software (v2.20). Using BaseSpace's RNA-seq Alignment app (v2.0.1) with default parameters, reads were mapped with the STAR aligner (v2.5.0a, Illumina) to the human reference genome hg19 (RefSeq annotation). Reads from WGS libraries were aligned to the human reference genome (GRCh37, Ensembl annotation) using the Isaac aligner (version 03.16.02.19). DNA and RNA-based genotyping The haplotype caller was used to identify the variant allele frequencies of 50 single nucleotide polymorphisms (SNPs) , following the best practice guidelines of GATK4 . The allele concordance score, defined as the ratio between the number of identical alleles and the total number of alleles, was computed for all pairwise comparisons to identify the best WGS match to each WTS profile. WTS coverage of the chromosomal regions of the various SNPs was assessed with the samtools depth command. Only SNPs with at least 5 reads in a patient's WTS data were used for the comparisons. Structural variant and copy number variant detection on WGS data For WGS, no sample-specific normal tissue was available. Sequencing-platform- and gender-specific genomic DNA from a mixture of multiple anonymous donors (Promega, Fitchburg, WI, USA) was used as a normal in a tumor/unmatched normal workflow to call structural variants (SV; aberrations > 50 bp in size) with Manta (v0.28.0). For somatic copy number variations (CNV), GATK4 was used following the Broad Institute's recommended best practices with a panel of normals. Specific gene deletions ( IKZF1, CDKN2A, RB1 ) were identified by matching SV and CNV calls within the respective regions.
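The sample-pairing step described in the genotyping paragraph above reduces to a simple per-SNP comparison. The sketch below is a minimal, illustrative Python version of that logic, not the published pipeline: genotypes and read depths are assumed to be available as plain dictionaries derived from the GATK4 and samtools outputs, and all function and variable names are hypothetical.

```python
# Minimal sketch of WTS/WGS sample pairing by allele concordance.
# Assumed inputs: unphased diploid genotype calls per SNP, e.g. {"rs123": ("A", "G")},
# and WTS read depth per SNP; only SNPs with >= 5 WTS reads are compared.

def allele_concordance(gt_a, gt_b, snps):
    """Ratio of identical alleles to total compared alleles across a SNP panel."""
    identical = total = 0
    for snp in snps:
        a, b = sorted(gt_a[snp]), sorted(gt_b[snp])
        identical += sum(x == y for x, y in zip(a, b))
        total += len(a)
    return identical / total if total else 0.0

def best_wgs_match(wts_genotypes, wts_depth, wgs_genotypes_by_sample, min_depth=5):
    """Return (best matching WGS sample id, concordance score) for one WTS profile."""
    usable = [snp for snp, depth in wts_depth.items() if depth >= min_depth]
    scores = {
        wgs_id: allele_concordance(wts_genotypes, wgs_gt, usable)
        for wgs_id, wgs_gt in wgs_genotypes_by_sample.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]
```

In this scheme, a top-scoring WGS sample that does not belong to the expected patient, or a best score well below the ~0.8 range reported in the results, would flag a potential sample swap or contamination.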
Gene expression analysis Estimated read counts per gene were obtained from Cufflinks 2 (version 2.2.1). Non-expressed genes (< 2 counts) were filtered out. Raw counts were normalized by applying the trimmed mean of M-values method from the edgeR package , producing log2 CPM values. t-SNE plots were generated with the R package Rtsne ( https://github.com/jkrijthe/Rtsne ). Circos plots were generated with RCircos . Venn diagrams were produced with BioVenn . The remaining plots were generated with ggplot2 (ref. ). For the BCR-ABL1-like expression analysis, the median expression profiles of selected genes from 40 BCR-ABL1-positive and 65 BCR-ABL1-negative cases were used as references. Classification was done based on the minimal Euclidean distance. Fusion calling on WTS data Arriba ( https://github.com/suhrig/arriba ), STAR-Fusion , and Manta were selected for fusion calling. All the algorithms were used with default settings, except for STAR-Fusion --min_FFPM, which was set to zero to include all candidate fusion transcripts independent of estimated expression. Fusions were only considered for further analysis if they were called by at least two callers, could be confirmed by WGS, and were not detected in control samples. Putative novel fusions were queried against the Mitelman Database of Chromosome Aberrations and Gene Fusions ( https://mitelmandatabase.isb-cgc.org/ ) and ChimerDB . Copy number inference on WTS data The copy number states of the autosomes were inferred from raw gene counts with the 'import-rna' option of the software package CNVkit . The obtained results were further filtered, and only calls with a weight > 15 were considered. Individual calls were aggregated per chromosome. A copy number state was considered aberrant if the log2 value was > 0.15. Samples with > 3 copy number changes (chromosome gains or losses) were selected as potential low hypodiploidy/near-triploidy and high hyperdiploidy cases. Samples with either loss of ≥5 chromosomes or the specific loss of chromosomes 3, 7, 13, and 17 were assigned to the low hypodiploid/near-triploid group. Cases were categorized as high hyperdiploid if at least 2 of the following chromosomes were gained: 4, 6, 10, 14, 17, 18, and 21. Selected small nucleotide variant analysis The WTS data were evaluated for SNVs in CRLF2 , DUX4 , JAK2 , KRAS , NRAS , PAX5 , and TP53 . Variants were called with the Isaac Variant Caller (version 2.3.13), and only passed variants with a matching call in the WGS data were included. For the WGS data, a gender-matched reference DNA was used for unmatched normal variant calling with Strelka2 (version 2.4.7).
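The chromosome-level heuristics in the copy number inference paragraph translate directly into a small decision rule. The following Python sketch only illustrates those published thresholds; it assumes the CNVkit output has already been aggregated into one log2 ratio per autosome, and treating values below −0.15 as losses is an interpretation of the stated 0.15 cut-off.

```python
# Illustrative ploidy-group assignment from per-autosome log2 copy number ratios
# (assumed input layout: {"1": 0.02, "4": 0.35, ...}); thresholds follow the text above.
LOSS_MARKERS = {"3", "7", "13", "17"}                      # characteristic low hypodiploid losses
GAIN_MARKERS = {"4", "6", "10", "14", "17", "18", "21"}    # characteristic high hyperdiploid gains

def classify_ploidy(log2_by_chromosome, threshold=0.15):
    gains = {c for c, v in log2_by_chromosome.items() if v > threshold}
    losses = {c for c, v in log2_by_chromosome.items() if v < -threshold}
    if len(gains) + len(losses) <= 3:
        return "no ploidy group"                    # fewer than four whole-chromosome changes
    if len(losses) >= 5 or LOSS_MARKERS <= losses:
        return "low hypodiploid / near-triploid"
    if len(GAIN_MARKERS & gains) >= 2:
        return "high hyperdiploid"
    return "no ploidy group"
```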
SNP profiles verify correct WGS/WTS pairing A recently established SNP panel for both DNA- and RNA-based genotyping was used to identify potential sample mix-ups and/or contaminations between the WGS and WTS samples. The allele concordance score (ranging from 0 to 1; Patients & Methods) was used to identify the best matching WGS sample for each WTS profile. For 278/279 cases, the best matching WGS sample belonged to the same patient as the WTS sample, with a minimal allele concordance score of 0.81 (Additional file : Table S2). However, for one of the samples, a substantial number of SNPs showed divergent VAFs between the WTS and WGS datasets, resulting in the elimination of this patient's dataset. Gene expression reliably segregates BCP-ALL from T-ALL patients The samples were classified by WTS data following the classification tree depicted in Fig. . The initial classification step comprised the assignment of the samples to either the T or the B lineage. As expected, the gene expression data could be used to reliably differentiate between BCP-ALL and T-ALL samples based on the expression levels of 14 described markers (Additional file : Fig. S1A, Additional file : Table S3) . Both lineages comprise different subtypes, characterized by the expression of various differentiation markers that define the maturation state. Mapping the sample subtype classification (immunophenotyping data, Additional file : Table S1) to the two groups showed that the clusters within the groups fitted loosely to these subtypes (Additional file : Fig. S1B). However, further subclassification based on gene expression data is rather challenging, and discriminative power of the expression data could be detected only for CD10 (common B-ALL) and CD1A (thymic T-ALL) (Additional file : Fig. S1C). The CD10 and CD1A expression values obtained from WTS correlated well with the percentage of positive cells obtained from immunophenotyping (R² = 0.75; Additional file : Fig. S1D).
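As a toy illustration of the lineage-assignment step, the Python sketch below scores a sample's normalized expression against small B- and T-lineage marker sets. The marker lists here are placeholders chosen for illustration (CD19 for the B lineage and CD3D/CCR9 for the T lineage are mentioned in the discussion); the actual 14-gene panel is in Table S3 and is not reproduced, so this is not the classifier used in the study.

```python
# Toy lineage assignment from log2 CPM expression values of a single sample.
# Marker sets are illustrative placeholders, not the study's 14-gene panel.
B_MARKERS = ["CD19", "CD79A"]          # assumed B-lineage markers
T_MARKERS = ["CD3D", "CD3E", "CCR9"]   # assumed T-lineage markers

def assign_lineage(expression, b_markers=B_MARKERS, t_markers=T_MARKERS):
    """expression: dict gene -> log2 CPM; returns the lineage with the higher mean score."""
    b_score = sum(expression.get(g, 0.0) for g in b_markers) / len(b_markers)
    t_score = sum(expression.get(g, 0.0) for g in t_markers) / len(t_markers)
    return "BCP-ALL" if b_score >= t_score else "T-ALL"
```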
Fusion calling identifies subgroup defining rearrangements with high accuracy Following the segregation of the samples into the two lineages, the BCP-ALL samples were further subclassified by the identification of recurrent risk-stratifying gene fusions. The median number of fusions per patient was 1 (range 0–8). In total, 100 unique fusion transcripts were called in the BCP-ALL cohort. Fourteen fusion transcripts occurred recurrently, while 86 fusion transcripts could only be detected in a single patient. Of these, 56% involved genes located on the same chromosome (48/86; intra-chromosomal fusions) and 44% were caused by structural rearrangements between two different chromosomes (38/86; inter-chromosomal fusions). Based on the results from CBA, 41 BCR-ABL1 , 23 KMT2A-AFF1 , 5 ETV6-RUNX1 , and 4 TCF3-PBX1 fusions were detected in the cohort. The fusion calling based on WTS data identified 97% of these fusions (Table ) with no false positives. In addition, WTS detected three other known fusion partners of KMT2A ( MLLT10 , MLLT1 , and USP2 ), each in a different case, but missed one KMT2A-EPS15 fusion, assigning 76/211 BCP-ALL samples to their respective subgroups. For 15% of these samples an additional fusion transcript was called. Except for WDR37-TBRG1 , which co-occurred with the reciprocal of the KMT2A-MLLT10 fusion transcript and involved the same chromosomes, but with breakpoints further apart, all of the additional fusion transcripts were intra-chromosomal (Additional file : Table S4). Broadening the spectra of fusion transcripts In addition to the subtype defining rearrangements, among the BCP-ALLs we identified well characterized fusions involving ZNF384 ( n = 8) and PAX5 ( n = 3), and two fusion transcripts containing NUTM1 : BRD9-NUTM1 , which has been described in infant ALLs , and the novel fusion CHD4-NUTM1 . We also identified one case with an EBF1-PDGFRB fusion, which arose from an interstitial 5q33 deletion (WGS data), and another with a TCF3-HLF fusion transcript. Known fusion transcripts in the T-ALL cohort mainly involved MLLT10 ( n = 3) and genes encoding proteins of the nuclear pore complex ( n = 5). Interestingly, we also identified recurrent read-through events, such as MTAP-ANRIL ( n = 15), RCBTB2-LPAR6 ( n = 12), P2RY8-CRLF2 ( n = 3), and DLEU2-SPRYD7 ( n = 3), in both groups. Even though these fusions themselves are most likely not biologically active, MTAP-ANRIL has been detected in melanoma patients in association with the deletion of the tumor suppressor genes CDKN2A/B , RCBTB2-LPAR6 indicates a partial RB1 loss as part of a larger deletion , and DLEU2-SPRYD7 indicates the deletion of the miR-15a/16–1 cluster (Fig. ). The deletions were confirmed by WGS SV and CNV calls in the respective patients. Since WTS is not limited to the detection of already known chimeric transcripts, we also identified in total 57 putative novel fusion transcripts (Additional file : Table S4). Although the potential therapeutic consequences and functions are yet to be determined, multiple genes associated with cancer or implicated in non-hematologic malignancies were found as fusion partner genes in our dataset (e.g. CHD4 , HOXA7 , FOXO3 ).
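The two-of-three caller consensus described in the methods, and evaluated in these results, can be expressed compactly. The sketch below is a simplified illustration and not the study's pipeline: fusion calls are reduced to (5' gene, 3' gene) pairs, which ignores breakpoint-level matching, and the WGS confirmation and control-sample filters are passed in as precomputed sets.

```python
# Simplified two-of-three consensus filter for fusion candidates.
# Each caller's output is reduced to a set of (gene5, gene3) pairs (an assumption);
# wgs_supported and control_fusions are precomputed sets of the same pairs.
from collections import Counter

def consensus_fusions(arriba, star_fusion, manta, wgs_supported, control_fusions):
    votes = Counter()
    for caller_calls in (arriba, star_fusion, manta):
        votes.update(set(caller_calls))            # each caller votes once per fusion
    return {
        fusion
        for fusion, n_callers in votes.items()
        if n_callers >= 2                          # called by at least two callers
        and fusion in wgs_supported                # confirmed by WGS
        and fusion not in control_fusions          # absent from control samples
    }
```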
CNV calling based on WTS data identifies relevant ploidy groups For the remaining 134 BCP-ALLs with no subtype defining rearrangements, CNV calling was performed based on the WTS data to identify relevant ploidy groups for further subclassification (Fig. ). Here, the CNVkit algorithm was used to identify patients with high hyperdiploidy or low hypodiploidy/near-triploidy (Patients and Methods). The algorithm correctly identified 17 (94%) low hypodiploid/near-triploid ALLs and 12 (80%) high hyperdiploid ALLs as defined by WGS, arrayCGH, and FISH (Additional file : Table S5). One case was misclassified as high hyperdiploid but is most likely a near-triploid ALL, according to the WGS data. The algorithm missed 1 hypodiploid/near-triploid ALL with a low blast count (20%), 3 high hyperdiploid ALLs, 1 near-haploid ALL, and 1 iAMP21 ALL. However, the resolution of the algorithm might be too low to reliably detect iAMP21. We thus analyzed the expression of DYRK1A and CHAF1B , which have recently been associated with iAMP21-positive ALLs . The expression of both genes was indeed heightened in the iAMP21 case (Additional file : Fig. S2A). Based on this classification, 103 (49%) BCP-ALL samples of our cohort had no established abnormalities and are further referred to as BCP-ALL 'other'. BCR-ABL1-like signature identification by WTS A compilation of the various published gene lists [ – ] was used to test their ability to differentiate between BCR-ABL1-positive and BCR-ABL1-negative cases in our cohort. A final list of 26 genes with the highest variation between BCR-ABL1-positive and BCR-ABL1-negative cases and the reference profiles of 41 BCR-ABL1-positive and 65 BCR-ABL1-negative cases (Additional file : Table S6) were used to classify the 103 BCP-ALL 'other' cases into the BCR-ABL1-like and non BCR-ABL1-like groups, based on minimal distance. Twenty-eight cases were classified as BCR-ABL1-like and the remaining 75 as non BCR-ABL1-like. Recently, a targeted RNA-Seq panel of 38 genes was published to identify adult BCP-ALL patients with BCR-ABL1-like characteristics . The application of this gene panel identified 30 BCR-ABL1-like cases, of which 28 (93%) were concordant with the group classification by the list of 26 genes. Hence, the concordant subset of 28 samples was assigned to the BCR-ABL1-like subtype (Additional file : Table S7).
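The minimal-Euclidean-distance assignment used above is essentially a two-class nearest-centroid classifier. The Python sketch below illustrates that idea under stated assumptions: expression profiles and the two median reference profiles are plain gene-to-log2 CPM dictionaries, and `signature_genes` stands in for the 26-gene list of Table S6, which is not reproduced here.

```python
import math

def euclidean_distance(profile, reference, genes):
    """Euclidean distance over the signature genes between a sample and a reference centroid."""
    return math.sqrt(sum((profile[g] - reference[g]) ** 2 for g in genes))

def classify_bcr_abl1_like(profile, ref_abl1_positive, ref_abl1_negative, signature_genes):
    """Assign a BCP-ALL 'other' sample to the closer median reference profile.
    profile / references: dict gene -> log2 CPM (assumed layout)."""
    d_pos = euclidean_distance(profile, ref_abl1_positive, signature_genes)
    d_neg = euclidean_distance(profile, ref_abl1_negative, signature_genes)
    return "BCR-ABL1-like" if d_pos < d_neg else "non BCR-ABL1-like"
```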
Characteristics of BCR-ABL1-like and non BCR-ABL1-like cases There were no significant differences in baseline characteristics such as age, gender, and ALL phenotype between BCR-ABL1-like and non BCR-ABL1-like patients (Additional file : Table S1), except for an elevation of white blood cell counts in BCR-ABL1-like patients (59.03 × 10⁹/L vs 25.18 × 10⁹/L, P = 0.025). Among the BCP-ALL 'other' cases, CRLF2 showed a clear bimodal expression (Additional file : Fig. S2B), with a significantly higher CRLF2 expression in the BCR-ABL1-like group as compared to the non BCR-ABL1-like cases (logFC 5.17, P < 0.0001). The high CRLF2 expression could be linked either to the occurrence of a CRLF2 rearrangement or to CRLF2 mutations. Only one case with a high CRLF2 expression was assigned to the non BCR-ABL1-like group. In addition, a significant enrichment of JAK2 mutations (mainly c.2047A > G) could be observed in the BCR-ABL1-like group (42% vs 0%, P < 0.001), whereas the non BCR-ABL1-like group carried a higher proportion of NRAS / KRAS (28% vs 4%, P = 0.007), PAX5 (c.239C > G, 8% vs 0%, P = 0.12), and TP53 (8% vs 0%, P = 0.12) mutations (Additional file : Table S7). Fusions involving PAX5 , CRLF2 , and tyrosine kinases were exclusively found in the BCR-ABL1-like group. All samples with detected NUTM1 , HLF , and ZNF384 fusion transcripts were assigned to the non BCR-ABL1-like group and, hence, could be further subclassified based on these genetic alterations. WGS data showed that 34% of the BCP-ALL 'other' cases harbored a deletion in IKZF1 , and, as expected, these deletions were significantly more common in the BCR-ABL1-like group (61% vs 24%, P < 0.001). A similar trend could be observed for RB1 deletions (WGS data, 18% vs 4%, P = 0.019). In contrast, deletions of the tumor-suppressor gene CDKN2A (WGS data) were fairly common in both groups (32% vs 44%) and were not enriched in BCR-ABL1-like or non BCR-ABL1-like cases (Additional file : Table S7). A multi-modal approach is superior to a classification based on gene expression profiles alone Most genetic alterations in ALLs are also associated with specific gene expression profiles, providing the basis for expression-based classification approaches such as ALLSorts ( https://github.com/Oshlack/AllSorts ). Hence, for BCP-ALL patients, we compared our results from the multi-modal approach to the ALLSorts classifier (see Patients & Methods; Additional file : Table S8). The ALLSorts classifier returns a matrix with per-sample probabilities for each subtype. For the comparison, only the highest subtype probability was considered for each sample. The ALLSorts predictions were grouped into unclassified (= BCP-ALL 'other'; probability < 50%), low confidence (50–80% probability), medium confidence (80–90% probability), and high confidence (probability > 90%) calls. For the fusion transcript and ploidy based WHO subgroups, the ALLSorts classifier achieved an overall accuracy of 86%, compared to 97% for our stepwise approach. The ploidy groups had the highest number of false negative calls, and less than 50% of the high hyperdiploid cases were called with high confidence by ALLSorts (Fig. A). Although the single iAMP21 case could be identified by gene expression as mentioned above, it was not identified as such by ALLSorts. The ALLSorts classifier also made 8 false positive calls with different confidence levels, compared to zero false positive calls for the fusion calling (Fig. B). It was also evident that the overlap of assigned class labels between ALLSorts and the multi-method approach dropped from 89% to 26% with decreasing probability values (Fig. C). Due to the higher number of false negative calls, ALLSorts assigned more cases to the BCP-ALL 'other' group compared to the multi-modal approach (113 vs 103; Fig. D). The approaches agreed on 26 of the BCR-ABL1-like cases, while ALLSorts misclassified 3 BCR-ABL1 cases as BCR-ABL1-like. ALLSorts classified 15 patient profiles as DUX4 -rearranged (Fig. D). However, neither DUX4 fusion transcripts nor DUX4 expression (WTS data) nor IGH-DUX4 structural variants (WGS data) could be identified in those cases. Nevertheless, compared to BCP-ALL 'other' cases not classified as DUX4 -rearranged, an overexpression of DUX4 target genes such as PCDH17 (logFC 7.51, P < 0.0001), PDGFRA (logFC 5.65, P < 0.0001), and AGAP1 (logFC 5.52, P < 0.0001) could be observed. ALLSorts correctly identified 6 samples with a PAX5 c.239C > G mutation. However, one case of PAX5 c.239C > G was missed, and in one case the additional high hyperdiploidy was not detected. Both cases were correctly identified by the stepwise approach.
ALLSorts correctly identified all cases harboring a ZNF384 or NUTM1 fusion transcript and one case with a HLF fusion transcript as detected by the multi-modal approach. One case was labeled as a MYC/BCL2 double-hit BCP-ALL by ALLSorts, but solely carried a MYC translocation (WGS data).
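The confidence binning used for the ALLSorts comparison above is simple thresholding of the top per-sample probability. The Python sketch below illustrates it under assumptions: `allsorts_top` maps each sample to its highest-probability subtype call (in reality ALLSorts returns a full probability matrix), and `multimodal_labels` holds the labels from the stepwise approach; the data layout and names are hypothetical.

```python
# Illustrative grouping of top ALLSorts probabilities into the confidence bins
# used above, plus a concordance check against the multi-modal (stepwise) labels.
def confidence_bin(probability):
    if probability < 0.50:
        return "unclassified (BCP-ALL 'other')"
    if probability < 0.80:
        return "low confidence"
    if probability < 0.90:
        return "medium confidence"
    return "high confidence"

def compare_calls(allsorts_top, multimodal_labels):
    """allsorts_top: dict sample -> (subtype, probability) of the best ALLSorts call;
    multimodal_labels: dict sample -> subtype from the stepwise approach."""
    comparison = []
    for sample, (subtype, probability) in allsorts_top.items():
        comparison.append({
            "sample": sample,
            "allsorts_subtype": subtype,
            "confidence": confidence_bin(probability),
            "concordant": subtype == multimodal_labels.get(sample),
        })
    return comparison
```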
Genetic aberrations in ALL are structurally diverse and are currently detected by a variety of diagnostic assays. The aim of this study was to compile a diagnostic workflow to establish whole transcriptome RNA sequencing as a reliable, comprehensive, and efficient assay for ALL diagnostics. We demonstrated that typical genetic alterations can be identified with high accuracy, while at the same time the unbiased assessment of the transcriptome also allows the identification of potentially new targets in patients in whom these genetic aberrations are absent. Our results further suggest that careful selection of the algorithms for each molecular type is beneficial for accurate sample classification. We demonstrated that samples could efficiently be classified in a stepwise approach (Fig. ). As previously shown , BCP-ALLs were characterized by homogeneous CD19 gene expression, whereas T-ALLs could be identified by CD3D and CCR9 expression. Multiple entity-defining fusion transcripts are known in BCP-ALLs ( BCR-ABL1 , KMT2A-AFF1 , TCF3-PBX1 , ETV6-RUNX1 ), and their reliable detection is mandatory for every diagnostic workflow.
The applied fusion calling procedure identified 97% of the fusions detected by gold standard techniques, which is in line with previous RNA-Seq studies in pediatric ALL cohorts that reported detection rates between 91 and 97% [ , – ]. The number of true positive calls can be increased by considering the overlap of different callers , while simultaneously reducing the number of false negative ones. In this study, we used three different algorithms, with only 68% of the risk-stratifying fusion events being called by all three algorithms (Additional file : Table S4), advocating the combined approach. Two fusion transcripts were missed, most likely due to low fusion transcript expression; such lowly expressed fusions are very difficult to detect and cannot be rescued by this approach. Here, only a greater sequencing depth could solve this issue. Besides subtype-defining rearrangements, other previously described translocations could be identified, including 8 rearrangements involving ZNF384 , which were recently described to constitute a new molecular subtype of BCP-ALL 'other' with a good response to prednisone and conventional chemotherapy [ – ]. Our study showed that WTS is especially beneficial for identifying cytogenetically cryptic events (e.g. EP300-ZNF384 , PAX5 fusion transcripts, SET-NUP214 , etc.) and unknown or diverse fusion partners in an unbiased and cost-effective way. In addition, we identified multiple recurrent read-through events indicative of gene deletions that are frequent in ALL (e.g. CDKN2A/B , RB1 , MIR15A/16–1 ), which were exclusively called by STAR-Fusion and arriba. To the best of our knowledge, two of these events have not been described before, whereas the MTAP-ANRIL fusion has been identified in a melanoma cohort in the context of CDKN2A/B deletions . CDKN2A/B deletions have been associated with poor prognosis, and it has been suggested that they be declared an additional B-ALL subgroup . Moreover, WTS identified 57 putative novel fusions, with the majority occurring only in a single patient, similar to the findings in a study of pediatric ALL . As all these fusions were detected only once, their putative role in ALL pathogenesis and their diagnostic and prognostic potential have to be determined by combining data from several studies. However, fusion transcripts involving genes with a large number of pseudogenes (e.g. DUX4 ) or highly variable genomic regions (e.g. the IGH gene locus) are still challenging to detect with most fusion calling algorithms, but this might be improved by their continuous optimization. Estimating abnormalities involving chromosome number plays a major role in ALL classification and prognostication. While ALL with high hyperdiploidy is associated with a favorable prognosis, ALL with low hypodiploidy shows a poor outcome . Due to the interplay of multiple regulating factors, inferring copy number changes from WTS data is rather challenging . In our study, the determination of ploidy groups had the highest error rate, missing 5 cases compared to the results from WGS, arrayCGH, and FISH. However, CBA missed 4 low hypodiploid/near-triploid cases due to low in vitro proliferation, which were identified based on WTS data and confirmed by WGS. The resolution of the applied algorithm was too low to identify the iAMP21 case or to reliably detect single gene deletions. While in the case of iAMP21 the gene expression could be used for the classification, the same did not hold true for gene deletions, as mentioned in a previous study .
Here, the analysis of isoforms and differential transcript usage might provide the needed insights, but these analyses were out of the scope of this work. In addition, a larger set of iAMP21-positive cases is needed to prove the validity of CHAF1B and DYRK1A gene expression as biomarkers for the presence of iAMP21, since our cohort included only one such case. BCR-ABL1-like ALLs are one of the most relevant new subgroups due to the potential benefit of treatment with tyrosine kinase inhibitors, as further underlined by the poor outcome for this ALL subtype on conventional treatment strategies . False negative results are rare, and the actual risk and clinical impact in such cases is unknown . Various gene lists have been published in the literature [ – ] for gene expression profiling, with only partial overlap between the lists and the resulting classifications. For WTS, we identified a list of 26 genes to detect BCR-ABL1-like cases. The majority of these genes (65%) were also present in the recently published list of a targeted RNA-Seq panel , resulting in an overlap of 93% in classification results. We only characterized 13% of our BCP-ALL cohort as BCR-ABL1-like cases, which is below the typical range of 24–33% [ , , ]. However, 67% of the patients from our BCP-ALL cohort fall into the adult or elderly age group, and it has been shown that the frequency of BCR-ABL1-like cases declines with age, with an incidence of just 7 to 20% in adult and elderly patients . In line with published data, cases with CRLF2 rearrangements and IKZF1 deletions were significantly more common among BCR-ABL1-like cases . One case with an IGH-CRLF2 rearrangement and high CRLF2 expression was not classified as BCR-ABL1-like ALL. In pediatric ALL, it has been reported that 5 to 10% of patients with CRLF2 -rearranged ALL have distinctly different gene-expression profiles without the kinase-activated signature . In the BCR-ABL1-like subgroup, we also identified 3 patients harboring a P2RY8-CRLF2 fusion, which is associated with poor prognosis in children , and 3 patients with fusions involving PAX5 . Cases harboring PAX5 or CRLF2 fusions have been proposed as an independent subgroup in BCP-ALL . In our cohort, 42% of BCR-ABL1-like cases carried a JAK2 mutation, which is comparable to previous studies that reported mutated JAK2 in 27–57% of cases [ , , ]. Among the non BCR-ABL1-like subgroup, we identified various cases with PAX5 (c.239C > G) mutations, along with cases harboring ZNF384 , HLF , or NUTM1 rearrangements, all of which have recently been identified as new BCP-ALL subgroups . The ALLSorts algorithm also identified a DUX4 transcriptional signature in 15 cases, but no indication of DUX4 fusion transcripts could be found in these cases; such fusions have predominantly been described in pediatric and AYA (adolescent and young adult) ALL , and our cohort included only a small number of young patients. However, it is well known that fusion transcripts involving DUX4 are difficult to detect with standard fusion calling pipelines, and gene expression profiling might be superior in these instances. Further comparison between the gene expression profile-based ALLSorts classifier and our stepwise approach showed good concordance for high confidence calls (Fig. C).
However, our approach applies optimized algorithms for the different molecular types, resulting in an overall more precise classification, with superior performance for the identification of ploidy groups and a reduced number of false positive calls. In summary, our study demonstrates that WTS can be used to reliably classify ALL patients with a single assay and is superior to conventional methods in cases that lack entity-defining genetic abnormalities. With the decrease in sequencing costs, the integration of WTS into the routine diagnostics of ALL patients seems feasible; however, it requires the definition of standardized quality parameters and data analysis workflows to enable reproducibility and comparability between laboratories. Additional file 1: Figure S1. Gene expression of the B-cell and T-cell lineage. t-SNE plot of gene expression of selected marker genes (a-c, perplexity: 25). d) Correlation between CD10 / CD1A gene expression and CD10+/CD1A+ cells as determined by immunophenotyping. Colors correspond to lineage, immunophenotypic subtype or expression as indicated by the plot legends. Additional file 2: Figure S2. Expression of selected genes. a) CHAF1B and DYRK1A expression of BCP-ALL patients ( n = 104) without risk-stratifying fusions or abnormal chromosome number. b) CRLF2 expression of BCP-ALL 'other' patients ( n = 103). Additional file 3: Table S1. Patient data. Additional file 4: Table S2. Allele concordance scores between WTS and WGS SNP profiles. Additional file 5: Table S3. Gene expression of known lineage markers. Additional file 6: Table S4. List of called fusion transcripts in both lineages. Additional file 7: Table S5. Copy number variations called by CNVkit for WTS data. Additional file 8: Table S6. Expression of BCR-ABL1-like signature genes. Additional file 9: Table S7. Characteristics of BCR-ABL1-like and non BCR-ABL1-like cases. Additional file 10: Table S8. Comparison to ALLSorts algorithm.
A paradigm shift in pharmacogenomics: From candidate polymorphisms to comprehensive sequencing
eef32253-0a17-44f8-b3b8-dfc38a5b93d2
9805052
Pharmacology[mh]
INTRODUCTION Pharmacogenetics is a scientific discipline with a long history. The first description of interindividual differences in adverse event risk after ingestion of fava beans dates back to around 510 BC . However, it would take more than two millennia until those differences were linked to heritable factors. Since the beginning of the 20th century, progress in the field drastically accelerated, with important milestones including the concept of “inborn errors of metabolism,” the coining of the term “pharmacogenetics” and the identification of an ever-increasing number of functionally relevant polymorphisms in drug-metabolizing enzymes, such as TPMT, CYP2D6, CYP2C19 and NAT1/2. These findings were enabled using forward genetics approaches, i.e., the identification of patients with abnormal drug reactions followed by their genetic interrogation. , Later, the emergence of genome-wide association study (GWAS) designs facilitated the further identification of significant pharmacogenetic biomarkers, including CYP2C9 rs1057910 for phenytoin-related severe cutaneous adverse reactions, SLCO1B1 rs4363657 for statin-induced myopathy and NUDT15 p.R139C for thiopurine-induced early leukopenia. The increasing number of pharmacogenetic associations identified by diverse methodologies across a multitude of different labs entailed considerable heterogeneity in variant nomenclature and reporting, which hampered further progress. To increase comparability between studies and the accessibility of reports for non-experts, a systematic star allele nomenclature system was established with the aim of simplifying the names of these well-characterized pharmacogenetic alleles. The first consolidated online database for cytochrome P450 (CYP) star alleles was established in 1999, hosted by Karolinska Institutet; it provided a summary of alleles and their associated effects and facilitated rapid online dissemination of new alleles. More recently, the resource was transitioned into the Pharmacogene Variation Consortium website. While it is estimated that 20% to 30% of interindividual variability in drug response results from genetic factors, commonly interrogated polymorphisms could explain around 70% to 80% of such variations. , The origin of the remaining so-called “missing heritability” remains unclear. The increasing capability of sequencing methods revealed the tremendous complexity of pharmacogenomic variation and identified a plethora of rare variants with unknown functional effects. These rare variations are plausible candidates to contribute, at least in part, to this missing heritability. In this review, we discuss experimental and computational advances for pharmacogenomic variant identification and interpretation. We furthermore highlight current roadblocks and future opportunities for how these might improve clinical decision-making to refine personalized medicine. ADVANCES IN GENETIC AND GENOMIC PROFILING METHODS THAT ENABLE PGx Genetic profiling technologies underwent impressive developments over the last decades. Conceptually, pharmacogenomic profiling methods can be divided into (i) panel-based approaches that interrogate individual candidate variations and (ii) sequencing-based approaches that comprehensively interrogate predefined genomic areas and can also identify novel variations. Panel-based approaches are most commonly used in clinical PGx.
These methods rely on PCR or mass spectrometry to identify candidate variants and can vary in scope from the interrogation of one or a few variants up to several million variants. Mass spectrometric methods are typically cost-effective for mid-throughput applications, testing up to 36 markers in 384 individuals. In contrast, arrays are highly variable between different models in their gene coverage and their inclusion of copy number variations (CNVs) and mitochondrial mutations, with current genome-wide arrays covering between 240 000 and 4.1 million variants. , Furthermore, a growing number of pharmacogene-specific arrays is available that comprise “only” a few hundred to a few thousand variants; however, as these are focused exclusively on genes involved in pharmacokinetics, pharmacodynamics and drug safety, their coverage of clinically relevant pharmacogenetic variation is nevertheless denser than that of genome-wide arrays. As such, the selection of a genotyping array should be done in coordination with the scope of the research question at hand. Irrespective of the choice of genotyping array, all panel approaches have in common that they only cover limited predefined sets of variants. Consequently, such methods cannot identify variations at genomic positions not covered by the array. This limits the utility of array-based approaches to clinical genotyping of variants with unknown functional consequences and to pharmacogenetic GWASs that aim to find genetic markers for drug-related phenotypes. To comprehensively profile the pharmacogenomic landscape, including rare and novel variations, sequencing of the relevant loci is required. In the past three decades, sequencing methods have developed from a low-throughput technology that could profile around 1000 bases per day to massively parallelized next-generation sequencing (NGS) or short-read platforms that allow for the generation of around 1 Tb of sequence per day on a single state-of-the-art instrument, which constitutes a 10⁹-fold increase. , While NGS has been a major catalyst for pharmacogenomic research in recent years, short-read sequencing methods cannot accurately profile complex or repetitive genetic loci, which include multiple genes of high pharmacogenomic relevance, such as CYP2B6 , CYP2D6 and HLAs . Long-read sequencing methods, also referred to as “third generation sequencing,” aspire to overcome these technological limitations. While short-read sequencing is based on the release of pyrophosphate upon extension of a nascent DNA strand, which typically results in read lengths of 100–600 bp, long-read sequencing relies on the monitoring of polymerase activity on single template molecules in real time, resulting in reads that commonly exceed 10 kb. For a detailed overview of the technological basis of long-read sequencing, we refer the interested reader to excellent reviews on this matter. , Long-read sequencing facilitates the exact identification of CNVs and structural rearrangements and has already demonstrated considerable advantages compared to short-read methods for the profiling and phasing of complex pharmacogenomic loci. Both short-read and long-read sequencing have contributed to the identification of pharmacogenomic variant and allele distributions at the population scale. , These projects have resulted in the identification of tens of thousands of different single-nucleotide variations (SNVs), indels and CNVs.
This pharmacogenomic landscape and current approaches for its functional interpretation are discussed in the following sections. ETHNOGEOGRAPHIC PHARMACOGENOMIC DIVERSITY Evaluation of pharmacogenomic variability between human populations is receiving increasing interest. Over the last two decades, studies have pinpointed numerous clinically relevant single-nucleotide polymorphisms (SNPs) and CNVs with distinct ethnogeographic frequency profiles. Some well-studied population-specific variations in CYP2D6 , CYP2C19 and HLA-B are illustrated below. Individuals of European descent are more likely to carry the loss-of-function variants CYP2D6*3 and *4 , whereas the decreased-function allele CYP2D6*10 is the main cause of decreased CYP2D6 activity in East Asia. In contrast, the gain-of-function variations CYP2D6*1xN and *2xN are most abundant in Oceania, East Africa and the Middle East. Increased CYP2C19 activity due to the CYP2C19*17 allele is frequent in Europe (MAF = 23.1%), the Middle East (MAF = 22.8%) and Africa (MAF = 20.9%) but very rare in East Asia (MAF = 0.7%). Interestingly, CYP2D6 and CYP2C19 allele frequencies not only differ between major populations but can also vary remarkably between relatively close ethnogeographic groups. For instance, within Europe, frequencies of the inactive CYP2C19*2 allele range from 8% in the Czech Republic to 21% in Cyprus, while CYP2D6*4 varies from 10% in Finland to 33.4% on the Faroe Islands. The resulting functional differences at the population scale emphasize the potential utility of leveraging ancestry information for pharmacological treatment decisions. Besides pharmacokinetic (PK) gene variability, specific variants in HLA genes that constitute established risk factors for severe or life-threatening drug hypersensitivity reactions, including Stevens–Johnson syndrome (SJS), toxic epidermal necrolysis (TEN), drug reaction with eosinophilia and systemic symptoms (DRESS) and maculopapular eruption (MPE), show pronounced ethnogeographic differences. The most clinically established case concerns the association of HLA-B*15:02 with carbamazepine and oxcarbazepine hypersensitivity. HLA-B*15:02 is highly prevalent in Asian populations, with allele frequencies of up to 22%, whereas it is almost absent outside of Asia, resulting in population-stratified recommendations for pre-emptive genotyping in the labels of these drugs. Similarly, the frequency of the HLA-B*58:01 allele, which is associated with allopurinol-induced SJS/TEN/DRESS, is substantially higher across Asia and Africa, suggesting that genotyping of HLA-B*58:01 in these populations might be considered before initiating therapy for the treatment of gout. Importantly, however, ancestry information is not sufficient to accurately guide pharmacological treatment. As such, ethnicity can only serve as a weak-at-best proxy of an individual's genotype in the absence of additional data and cannot depict the uniqueness of an individual's pharmacogenetic makeup. In this context, we find it important to highlight the recent policy statement by the American Academy of Pediatrics (AAP) on the “Elimination of Race-based Medicine”. Specifically, the authors of the white paper state that “race is a social, not a biologic, construct, and the use of race as a proxy for factors such as genetic ancestry is scientifically flawed”.
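To illustrate how such allele-frequency differences translate into phenotype frequencies, the expected proportion of individuals homozygous for a single no-function allele can be approximated under Hardy-Weinberg equilibrium. The short sketch below is purely illustrative and not part of any cited analysis: it considers only one allele (CYP2D6*4) and ignores compound heterozygosity with other reduced- or no-function alleles, so it understates true poor-metabolizer frequencies.

```python
# Illustrative calculation only: expected frequency of homozygous carriers of a
# single no-function allele under Hardy-Weinberg equilibrium (q squared). Real
# poor-metabolizer frequencies also depend on compound heterozygosity with other
# no-function alleles, which this toy example ignores.
def homozygote_frequency(allele_frequency: float) -> float:
    """Expected fraction of individuals homozygous for the allele."""
    return allele_frequency ** 2

for population, q in [("Finland (CYP2D6*4 ~ 10%)", 0.10),
                      ("Faroe Islands (CYP2D6*4 ~ 33.4%)", 0.334)]:
    print(f"{population}: expected *4/*4 frequency ~ {homozygote_frequency(q):.1%}")
```

Under these assumptions, the expected *4/*4 frequency rises from roughly 1% to roughly 11% between the two populations, which illustrates why population-scale allele-frequency differences can matter clinically even though they cannot replace individual genotyping.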
It is therein underlined that the inclusion of race as a guide for therapeutic decision-making in many current clinical algorithms or practice guidelines is rather inferred than adequately supported by solid epidemiologic evidence, which calls assertions of equitable care into question. In an effort to correct these inaccuracies, medical guidance that incorporates race assignment is under re-examination and reconsideration not only by the AAP but also by pharmacogenetic expert groups, such as the Clinical Pharmacogenetics Implementation Consortium (CPIC). In summary, it has become increasingly clear that population studies cannot inform about an individual's genetic fingerprint with sufficient accuracy to guide the selection of appropriate pharmacotherapy. From a scientific perspective and abstracting from practical constraints, we argue that it is therefore time for population pharmacogenomic advice to be complemented, if not superseded, by genomic evaluations at the level of the individual for an equitable and true personalization of medicine. PHARMACOGENOMIC VARIABILITY BEYOND WELL-CHARACTERIZED POLYMORPHISMS Pharmacokinetic (PK) genes are involved in drug absorption, distribution, metabolism and excretion (ADME). Notably, these genes are commonly under low evolutionary pressure, at least in part due to the lack of endogenous substrates, and thus harbour a large repertoire of genetic variants. Using large-scale pharmacogenomic sequencing data of clinically relevant PK genes, we and others have identified more than 69 000 variants, of which common variants with minor allele frequencies (MAF) ≥ 1%, including the well-characterized star alleles, accounted for less than 2%. , , , Across the 57 members of the human CYP gene family, sequencing data from 6503 individuals revealed 4025 SNVs that resulted in amino acid alterations, of which 93% were rare with frequencies <1%. , Furthermore, more recent consolidated large-scale sequencing data from 141 614 unrelated individuals identified 6016 exonic variants in the eight clinically most important CYP genes alone, 98.8% and 96.8% of which were rare with frequencies below 1% and 0.1%, respectively. Surprisingly, other important PK gene families, such as the ABC , SLC and SLCO transporters, carried similar numbers of rare genetic variations , , (Table ). However, while CYP genes harbour >30 common decreased- and loss-of-function alleles, deleterious variations in drug transporters were generally rare. While PK variations are predominantly studied in the germline genome, variants in pharmacodynamic (PD) genes are commonly interrogated in oncology, where treatments are available that specifically target certain somatic mutations. As of the writing of this review, 74 somatic pharmacogenetic biomarkers in oncological drug targets are recognized by the U.S. Food and Drug Administration (FDA), and we refer the interested reader to recent reviews on the topic. , Curiously, with notable exceptions, PD germline variability has received considerably less attention. Among the well-characterized PD associations are links between VKORC1 variants and warfarin response, between CFTR variants and drug selection for the treatment of cystic fibrosis, and between variations in β-adrenergic receptors and response to anti-asthmatics. The landscape of PD gene variability has only recently been analysed comprehensively.
In G-protein coupled receptors (GPCRs), which constitute the targets of 34% of approved drugs, sequencing has identified 14 192 missense variants, covering approximately 25% of all nucleotide positions across the entire GPCRome. Further drug target sequencing projects showed that rare variants are predominant also in other PD genes, with around 800 000 genetic variants identified across all FDA-approved drug targets (98.1% of which were rare with MAF < 1%). , , FUNCTIONAL INTERPRETATION OF RARE PHARMACOGENOMIC VARIANTS Given these vast numbers of identified variations in both PK and PD genes, their correct functional interpretation is of high importance if those variants are to be used to improve clinical decision-making. Heterologous variant expression in cell lines, such as HEK293 cells, followed by functional assays using appropriate endpoints is considered the gold-standard method to evaluate pharmacogenetic variant function. In addition, epidemiological association studies can provide another layer of evidence in determining variant impact on patients. However, these approaches are not suitable for the comprehensive interrogation of the pharmacogenomic variability landscape, for multiple reasons. First, in vitro assays are generally low throughput, and testing hundreds of thousands of rare variants using conventional assays would require excessive financial resources. Second, experimental assays are time-consuming and require trained laboratory staff, which makes them unsuitable for rapidly delivering variant function results at the point of care. And third, epidemiological analyses require sufficiently large sample sizes to yield statistically significant results; however, obtaining sufficiently large numbers of carriers is difficult or even impossible for rare variants, as impractical numbers of individuals would need to be screened. Given this preamble, it is not surprising that computational predictions have emerged as the go-to method to assess the function of otherwise uncharacterized variants. Computational methods are often specialized for different variant classes (missense, synonymous, frameshift, etc.) or types of functional impact (structural alterations, splice effects, effects on gene regulation, etc.) and consider a variety of features and parameters, including sequence conservation, structural stability and functional genomic data, to derive their classifications (Figure ). The arguably most commonly used prediction methods are SIFT, PolyPhen-2 and CADD. Readers are referred to recent reviews for a detailed discussion of variant effect prediction principles and a comprehensive overview of currently available computational tools. , Importantly, however, these computational methods generally underperform on pharmacogenetic variant sets (Figure ). The main reason is related to the critical dependency of machine learning-based methods on the quality of training datasets. With very few notable exceptions discussed below, computational algorithms use pathogenic, that is, disease-associated, variants as positive training sets and common variants with frequencies >5% to 10%, which are not likely to be pathogenic, as negative sets. However, pathogenicity and variant deleteriousness are different concepts. While they overlap in genes associated with genetic disease, pharmacogenes are rarely associated with diseases, and thus, deleterious pharmacogenetic variants are rarely pathogenic.
As discussed above, multiple deleterious pharmacogenomic variants, including CYP2C19*2 , CYP2D6*4 and CYP3A5*3 , are very common in the general population, resulting in misclassifications already during model training. Related to the focus of computational methods on pathogenicity rather than deleteriousness, and based on the assumption that conserved genomic regions are more important for organismal fitness, sequence conservation constitutes the most commonly used key parameter for variant effect predictions. However, many pharmacogenetic loci are only poorly conserved, and even deletion of the entire gene body of a pharmacogene can be relatively common (allele frequencies of CYP2D6 deletions are 1% to 6%). To overcome these limitations, computational methods have been developed that were specifically trained on pharmacogenes. Using 337 experimentally characterized variants across 44 pharmacogenes as a training dataset, we optimized the performance of 18 partly orthogonal machine learning algorithms and integrated the best-performing tools into an ensemble score termed the ADME Prediction Framework (APF). Notably, APF achieved 93% accuracy when predicting loss-of-function and neutral pharmacogenomic variants and outperformed conventional variant predictors in five-fold cross-validations. Furthermore, unlike most other methods that provide only binary classifications or risk propensities, APF provided scores that are significantly correlated with enzyme activity (R² = 0.9, p = 2.9 × 10⁻⁵), opening possibilities for quantitative assessments of variant impact. Notably, APF also performed well on predictions for DPYD despite the fact that no DPYD variations were utilized for model training. In contrast, APF performance on the disease-associated drug transporter SLC10A1 (NTCP) was not higher than that of other algorithms. Similar to APF, another machine learning-based model was recently developed with good performance in prioritizing NGS-derived pharmacogenomic variants. Besides those prediction methods applicable to the entire pharmacogenome, several gene-specific predictors have been developed. The DPYD-specific variant classifier DPYD-Varifier was trained using in vitro functional data of 156 missense DPYD variants and achieved 85% predictive accuracy. Recently, a convolutional neural network approach has been used to build prediction tools for CYP2D6 . By leveraging CYP2D6 long-read sequencing data, the model predicted CYP2D6 function on a continuous scale and demonstrated performance superior to conventional predictions based on diplotype/phenotype categories or gene activity scores. Overall, computational tools constitute versatile and effective means to rapidly evaluate the function of uncharacterized or novel pharmacogenomic variants. However, while their performance has improved considerably in recent years, it remains questionable whether their accuracy is currently sufficient to warrant their use for clinical applications. With the increasing availability of experimental data for model training and advances in machine learning, computational approaches hold promise to further improve, thereby paving the way for the clinical implementation of sequencing-based PGx. IMPLEMENTATION AND PRECISION MEDICINE It is evident that NGS technologies can offer much broader information on pharmacogenetic variability compared to a panel of selected variants with established functional impact.
In the clinic, the use of such genetic information aspires to introduce a paradigm shift from traditional prescribing to genome-considerate precision drug prescription (Figure ). Utilizing variations for which actionable information is available can provide a first step, while the further inclusion of uncharacterized or private variations based on NGS aspires to provide additional possibilities for treatment individualization. It is critical to consider, though, that the clinical implementation of NGS can only add value if there are rules and frameworks in place regarding how to handle novel or even unique variants for which the function is only predicted based on computational models rather than experimentally established using in vivo PK data. As such, extensive clinical validations are required that carefully scrutinize whether NGS can add value to the patient and the healthcare system. This is particularly true if NGS data are intended to alter the therapeutic regimen for a given patient. We thus do not envision that novel uncharacterized variations can directly guide prescribing in the near future. However, we believe that lower-intensity interventions, such as increased monitoring frequency and surveillance for carriers of rare, putatively deleterious but otherwise uncharacterized variations, might be a viable way forward that could add value for patients already in the short term without leaving the boundaries of established prescribing practices. Importantly though, already today, PGx-based dosing is subject to cautious interpretation in clinical practice, as the overall relationship between diplotypes and concrete dose advice depends on parallel clearance pathways, concomitant drug treatment with possible drug–drug interactions and intolerability issues arising from PD variability. From an immediate clinical perspective, we discuss several important considerations in the following. Issues that need to be addressed include (1) which patients will qualify for a broader pharmacogenomic investigation, (2) how these patients will best be informed about the underlying purpose of the PGx investigation and its corresponding implications, (3) how secondary findings will be managed, (4) who will be responsible for data management and interpretation of the results within healthcare, (5) how PGx data should be presented in a clinically useful format, (6) whether the necessary turnaround times for effective decision support can be achieved, (7) how we can ensure that important findings are utilized at the point of care and (8) how to deal with novel or even patient-unique genetic variants without any functional correlate. For the discussion of further issues related to reimbursement, privacy and PGx education, we refer the interested reader to previous reviews. , , While many of these items are general, we do acknowledge that some aspects are country- and healthcare system-specific, and the discussion of those is provided from a Swedish perspective. Patient selection for NGS . This complex question involves the organization of patient care in different therapeutic areas, medical needs, and the priority of PGx with regard to different treatment regimens within the allocation of limited healthcare resources. At present, precision medicine and the broader use of genomics have focused primarily on cancer treatment, and the ambition to include all or at least the majority of cancer patients has been expressed in several countries, including Sweden.
Utilizing these data for pharmacogenetic interpretations would be justified for the simple reason that the data are already available and that these patients are likely to benefit from the respective results during the years to come within highly specialized care. However, it is currently difficult to integrate sequencing into established routines outside of oncology due to the lack of downstream analytics. Notably, in geriatrics or psychiatry, the potential value of PGx characterization may be even higher due to the increased frequency of polypharmacy. , , Understanding the purpose of the PGx investigation . Patient education and empowerment constitute important issues in pharmacogenetic testing. It will not be possible to carry out laboratory analyses that the patient never approved or understood the purpose of. As such, selection of specific genetic variants for pharmacogenetic panel testing will be inappropriate, as neither the physician nor the patient can be expected to understand the details and limitations of the conducted tests or the relevance of the generated results. Thus, a more paedagogic approach might be to perform pharmacogenetic testing using predefined strategies for specific umbrella terms, such as “metabolic drug elimination capacity” with regard to drug-metabolizing enzymes or “drug hypersensitivity profile” with regard to immune-mediated events and corresponding HLA markers. Incidental or secondary findings . The issue of how to handle incidental genetic findings of potential relevance to disease or disease prognosis is always important when it comes to broader genetic investigations, and we refer to more detailed discussions elsewhere. In principle, two classes of incidental findings can be distinguished. The first class is related to the pleiotropic effects of some pharmacogenes. Examples are testing of UGT1A1 to predict irinotecan response, which can reveal carrier status of variants causing Crigler–Najjar syndrome, or tests of VKORC1 variability to guide warfarin dosing, which can return secondary findings regarding the risks of familial coagulopathies. The second class is a consequence of testing strategies that evaluate not only a given locus of interest but potentially the entire pharmacogenome, exome or genome. While targeted pharmacogenomic sequencing rarely overlaps with analyses of strong markers of disease risk or prognosis, the likelihood of incidental findings increases if whole-exome or whole-genome sequencing (WES or WGS) is employed. As such, there need to be clear guidelines in place as to how to manage secondary findings with consideration of patient preference. Data management . Based on regulatory and ethical arguments, patients are expected to have full access to their personal healthcare data. However, with the increasing use of advanced diagnostics and data-rich analyses, this might turn out to be practically difficult. The analytical results of such tests will only become relevant for the individual patient after extensive processing and translation into functional consequence, the latter being a suitable task for the discipline of clinical pharmacology. Nevertheless, it needs to be clearly defined who owns the data in the individual case. PGx result reporting . For the clinical implementation of pharmacogenetic tests, it is imperative that results are integrated into electronic medical records in a format that is transparent and easily understandable for clinical staff who might not be PGx experts.
It is important that test reports follow established guidelines for nomenclature and result reporting. This should include a list of the investigated genes and allelic variants as well as the translation of the genetic findings into predicted phenotypes and corresponding clinical interpretations for the individual patient. A table summarizing the investigated genes, the detected variants and the predicted individual activity of different metabolic pathways, as compared to the general population or average patient, should help the responsible physician understand whether the patient might be at an increased risk of non-response or toxicity at standard doses. In this respect, international work on consensus guidelines on how to interpret and quantify the impact of different variants is important. , Given the life-long relevance of PGx results, we recommend that the PGx profile of a given patient be kept in a separate, dedicated folder of the patient record rather than as a single post in a consecutive list of laboratory results, which is unfortunately common practice today, at least in the leading medical centres of Sweden. Turnaround times . Sample preparation, sequencing and data analysis typically entail turnaround times of a few weeks. As a consequence, the use of NGS for the pre-emptive guidance of personalized prescribing is not realistic for acute cases. Notably, however, substantially faster turnaround times, down to three business days, have recently been reported for NGS-based testing of molecular panels in a community hospital setting, raising hopes that application to sub-acute cases might become realistic in the near future. It is important to emphasize that once a pharmacogenomic profile has been generated for a given patient, this information will be rapidly available on future occasions, meaning potential access even on an acute basis. For example, for a patient initially subjected to pharmacogenomic profiling for oncological treatment, genotype data should later in life be at hand to help guide, even on an acute basis, treatment with antiplatelet agents after the placement of coronary artery stents. Clinical decision support systems . No single professional can learn to manage and apply in practice the differential impact of many PGx variants on numerous prescription drugs. Analogous to the situation with drug–drug interactions, drug–gene interactions are ideal candidates for database-driven clinical decision support tools used at the point of care. The system should provide warnings if drugs or dosages are prescribed to a given patient that are contrary to current pharmacogenomic guidelines. In this context, it is critical to note that guidelines from different regulatory agencies feature notable discrepancies, and achieving evidence-based consensus is important to enable their efficient use in the clinic. , Importantly, decision support should not only utilize genetic information but should integrate such information with other patient-specific data of relevance for drug treatment, such as PK drug–drug interactions, body weight and kidney function. Novel genetic variants . As described in detail in previous sections, NGS can be expected to uncover pharmacogenetic variations for which no functional data based on epidemiological or experimental evaluations exist.
Computational models that predict the functional correlate may be the principal way forward, by making it possible to flag carriers of variations with putatively deleterious impacts for intensified follow-up and, if applicable, a recommendation for therapeutic drug monitoring. In addition to these direct clinical considerations, it is of paramount importance to determine whether sequencing for pharmacogenetic applications constitutes an efficient allocation of healthcare resources. In such health economic evaluations, the costs and patient effects of sequencing-guided therapy are compared to the standard of care. These analyses can be conducted from two different perspectives. First, it can be evaluated whether sequencing is cost-effective for guiding the treatment of the condition the respective patient was diagnosed with. Alternatively, the frame of the cost-effectiveness evaluation of sequencing can be extended to include the entire lifetime of the patient. However, while more accurate, the latter drastically increases the complexity of the evaluation due to the added uncertainty. While most studies that evaluated the economics of pharmacogenomic interventions concluded that testing was cost-effective, it is important to note that these studies focused exclusively on the genotyping of candidate variations. Furthermore, economic calculations are highly sensitive to healthcare system-specific parameters and thus require resource-intensive modelling efforts for each country separately. However, recently developed generic models hold promise to facilitate such analyses. To date, no trials have been published that evaluate the cost-effectiveness of pharmacogenomic sequencing outside of oncology. Prospective clinical trials that evaluate the cost-effectiveness of NGS coupled to computational variant predictions are thus of critical importance to provide patient benefits without overburdening the healthcare system. CONCLUSION The development of sequencing methods in the past 20 years has facilitated the discovery of tens of thousands of rare pharmacogenomic variants. Consideration of this complexity beyond well-characterized polymorphisms promises to eventually improve the personalization of pharmacogenetic recommendations. However, to leverage its added value, routines and workflows are required that establish if, when and how such data can be utilized to guide clinical decisions. In this context, computational methods provide versatile and rapid means to interpret the functional impact of previously uncharacterized pharmacogenomic variations. However, before NGS can be meaningfully used for clinical applications, rigorous trials are required that evaluate whether current tools are sufficiently accurate to cost-effectively improve patient care. Even after extensive clinical trials, the pre-emptive generation of NGS data for clinical applications appears at present unrealistic outside of life-threatening diseases that are associated with high healthcare costs. This includes various cancers but could also include certain genetic diseases for which genetic information can guide therapy. However, the more widespread availability of NGS data, for example, generated via business-to-consumer sequencing outside of direct medical indications or in the context of oncological therapy, entails that such data can increasingly be repurposed for or applied to less costly diseases or applications where pharmacogenomic information can add value, such as guiding the prescribing of psychiatric medications.
While multiple hurdles need to be overcome, it thus seems realistic to envision a future clinical context where broad PGx data will be easily accessible and incorporated into clinical decision‐making, especially regarding the determination of starting doses for drugs with clear pharmacogenetic associations, as well as for the identification of patients that require more intense monitoring. YZ and VML are co‐founders and shareholders of PersoMedix AB. In addition, VML is CEO and shareholder of HepaPredict AB. EE is vice‐chair of the Genomic Medicine Sweden Pharmacogenomics work package, supported by grants from The Swedish Innovation Agency. The other authors declare no conflicts of interest.
A comparative analysis of GPT-3.5 and GPT-4.0 on a multiple-choice ophthalmology question bank: A study on artificial intelligence developments
87dd2810-33b5-45db-972b-045610904159
11809821
Ophthalmology[mh]
The medical industry is among the many fields where artificial intelligence (AI) has shown increasing promise. In recent years, doctors have frequently used artificial intelligence to assist them in diagnosis, treatment, and research . In the past, AI has been utilized to identify different retinal pathologies, such as age-related macular degeneration and diabetic retinopathy . The literature also shows how AI can be helpful in conditions other than retinal pathologies . The large language model (LLM) Generative Pretrained Transformer 3 (GPT-3) produces human-like text. It was trained on a vast corpus of text (more than 400 billion words) from the internet, which included webpages, books, and articles . The LLM-based chatbot ChatGPT (OpenAI, San Francisco, CA, USA) has caused a paradigm shift in the application of artificial intelligence in medicine . Trained on online resources available up to September 2021, GPT-3.5 is an improved version of GPT-3 (2020) trained on a wide range of parameters . In March 2023, OpenAI unveiled GPT-4, a new-generation LLM that outperforms GPT-3.5 and performs at a human level across various academic benchmarks . Text-based LLMs can potentially improve medical diagnosis and interpretation. The OphthoQuestions question bank, the Basic and Clinical Sciences Course (BCSC) Self-Assessment Programme, and FRCOphth examinations have previously been used to test the effectiveness of these models, particularly in ophthalmology . The performance of LLMs in ophthalmology question answering has not yet been sufficiently analyzed, although studies on their performance exist . This study presents a comparative analysis of GPT-3.5 and GPT-4.0 on a multiple-choice ophthalmology question bank, OphthoQuestions ( www.ophthoquestions.com ), a popular examination preparation resource. Ophthalmologists frequently consult this multiple-choice question bank, particularly when studying for board examinations, as such resources have been linked to improved performance on the standardized Ophthalmic Knowledge Assessment Programme (OKAP) examination taken by ophthalmology residents in the United States and Canada. Exploring OphthoQuestions In January 2024, using a personal account on OphthoQuestions ( www.ophthoquestions.com ), 520 questions were selected from the 4,551 available OphthoQuestions items. Because the performances of GPT-3.5 and GPT-4.0 on the multiple-choice question bank were compared, questions that did not contain visual data, such as clinical, radiological, or graphic images, were preferred, since the GPT-3.5 model cannot analyze visual data. These questions were not available to the general public, meaning there was no chance that they had previously been indexed in the ChatGPT training data set or any search engine. The researcher generated 40 random questions from each of the 13 ophthalmology sub-specialties. These subgroups included general medicine, fundamentals, clinical optics, cornea, uveitis, glaucoma, lens and cataract, pathology and tumors, neuro-ophthalmology, pediatrics, oculoplastics, retina and vitreous, and refractive surgery. Study Design The researcher manually entered the content of the text-based questions into the program. A new chat was opened for each question. Then, the statement “You should choose one of the following options” was written. Questions containing visual elements such as clinical images or medical photographs were not included in our evaluation as ChatGPT-3.5 could not analyze them.
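The questions were entered manually through the ChatGPT web interface; for readers who wish to script a comparable workflow, a minimal sketch using the OpenAI Python client is shown below. This is not the procedure used in the study: the model identifiers, prompt assembly, and question data structure are illustrative assumptions only.

```python
# Hypothetical sketch of an automated version of the manual querying protocol
# described above (one fresh chat per question, followed by the fixed instruction
# sentence). Model names and the option format are assumptions, not study details.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask_mcq(question_text: str, options: list[str], model: str = "gpt-4") -> str:
    """Send one multiple-choice question in a new conversation and return the reply."""
    prompt = (
        question_text + "\n"
        + "\n".join(f"{chr(65 + i)}) {opt}" for i, opt in enumerate(options))
        + "\nYou should choose one of the following options."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],  # a new chat for every question
    )
    return response.choices[0].message.content

# Example call with placeholder content:
# answer = ask_mcq("Question stem ...", ["Option A", "Option B", "Option C", "Option D"])
```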
This study assessed the gross accuracy in correctly completing a series of multiple-choice questions (MCQs). ChatGPT was considered to have given a “correct” answer for scoring purposes when it selected the option suggested by the answer key for a given question. On the other hand, an answer was considered “incorrect” if it did not match the answer key's essential suggestion, if the platform failed to identify any option when asked further, or if the third attempt was incorrect in the case of conflicting duplicate answers. The answers were then checked against the answer key by the researcher, and the correct answers were analyzed according to subgroups and overall. A conservative analysis strategy was adopted, preferring not to set thresholds similar to those in other studies. Instead, it was assessed whether the performance of GPT-4.0 differed from that of GPT-3.5 . Statistical analysis To analyze categorical variables, Fisher’s exact test and the chi-square (χ²) test were used to compare the number of correct responses of GPT-4.0 and GPT-3.5. The Kolmogorov-Smirnov test was used to assess the data’s normality. The accuracy and compliance rates were reported as percentages. The accuracy across the thirteen distinct subspecialties was also compared using chi-square analysis. A P-value of less than 0.05 was regarded as statistically significant. Analyses were performed using SPSS, version 25.0 (SPSS Inc., Chicago, IL, USA).
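As a minimal, hedged sketch of the overall two-by-two comparison described above (correct versus incorrect answers for each model), the test could be run as follows; the counts used are the overall totals reported in the Results below, and the subgroup tables and exact test configuration behind the published p-values are not reproduced here.

```python
# Minimal sketch of the overall 2x2 comparison of correct vs. incorrect answers;
# the counts are the overall totals reported in the Results (408/520 for GPT-4.0
# and 333/520 for GPT-3.5). This is not guaranteed to reproduce the published
# statistics, which may have been computed with a different configuration.
from scipy.stats import chi2_contingency, fisher_exact

table = [[408, 520 - 408],   # GPT-4.0: correct, incorrect
         [333, 520 - 333]]   # GPT-3.5: correct, incorrect

chi2, p_chi2, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-square = {chi2:.2f} (dof = {dof}), p = {p_chi2:.3g}")
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_fisher:.3g}")
```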
Overall, GPT-4.0 correctly answered 408 of 520 questions (78.46%; 95% CI [70, 88%]) and GPT-3.5 correctly answered 333 of 520 questions (64.15%; 95% CI [53, 74%]). GPT-4.0 answered statistically significantly more questions correctly than GPT-3.5 (p = 0.0195). ChatGPT-4.0 showed a statistically significant difference compared with ChatGPT-3.5 in giving correct answers in all subgroup analyses (p < 0.05). In the subgroup analyses, pathology and tumors was the group with the largest difference in the percentage of correct answers. In contrast, the groups with the smallest difference in correct answers were the retina and vitreous, and neuro-ophthalmology sections. GPT-3.5 performance was significantly variable across the 13 subspecialties (p = 0.034). GPT-4.0 showed more consistent results across subspecialty groups than GPT-3.5, with no significant differences (p = 0.078). At the same time, GPT-3.5 had the highest percentage of correct answers in fundamentals (74%) and the lowest in pathology and tumors (53.0%). GPT-4.0 showed the highest percentage of correct answers in general medicine (88%) and the lowest rate of correct answers in clinical optics (70%). The accompanying table shows the number and percentage of correct answers given by GPT-4.0 and GPT-3.5. This research provides promising new evidence of ChatGPT’s ability to handle complex clinical and medical information, and of the development and consistency of artificial intelligence algorithms. AI chatbot technology has developed rapidly and is being used increasingly in e-society. ChatGPT, in particular, has become one of the fastest-growing computer applications in history, gaining 100 million active users in just 2 months . Integrating AI into clinical practice and medical education has grown in popularity recently. Recent research indicates that the newest LLM versions exhibit a promising problem-solving capacity . With its widespread use, ChatGPT has been the subject of many studies; for example, one study reported its relative success on a sample United States Medical Licensing Examination (USMLE) Step 1 and Step 2 Clinical Knowledge assessment, achieving the passing threshold of approximately 60% . The effectiveness of artificial intelligence has also been studied in another board exam: in a study of the efficacy of artificial intelligence in the European Ophthalmology board exam, GPT was reported to show superior success by answering 6188 of 6785 questions correctly . Very few studies in the literature compare the performance of GPT-3.5 and GPT-4.0 against each other in ophthalmology . In one of these studies, GPT-4 was tested on two 260-question multiple-choice sets from the Basic and Clinical Science Course (BCSC) Self-Assessment Program and OphthoQuestions question banks. The top-performing GPT-4 model was also contrasted with GPT-3.5 and past human performance. Antaki et al.
found that GPT-4 significantly outperformed GPT-3.5 on simulated ophthalmology board-style exams, similar to the findings presented in this study . In another study evaluating the ability to answer ophthalmology-related questions at different levels of ophthalmology education, GPT-4.0 was found to perform significantly better than GPT-3.5 (75% vs 46%, p < 0.01) . In a relatively recent study, Moshirfar and colleagues evaluated GPT-4.0, GPT-3.5, and human responses to 467 questions from the StatPearls question bank and obtained accuracies of 73.2%, 55.5%, and 58.3%, respectively. Although it is not appropriate to directly compare that study and the study presented here, Moshirfar et al. found that GPT-4.0 answered a higher percentage of questions correctly than GPT-3.5, similar to the results in this study . This study found that GPT-4.0 answered more questions correctly than GPT-3.5, and the difference between the two groups was statistically significant (78.46% vs. 64.15%; p = 0.0195). ChatGPT-4.0 showed a statistically significant difference compared with ChatGPT-3.5 in giving correct answers in all subgroup analyses (p < 0.05). In the subgroup analyses performed in this study, GPT-3.5 performance was significantly variable across the 13 subspecialties (p = 0.034), whereas GPT-4.0 showed more consistent results across subspecialty groups, with no significant differences (p = 0.078). This result indicates that the GPT-4.0 algorithm is statistically more successful than GPT-3.5 on the ophthalmology question bank. Finally, the statistically significant advantage of GPT-4.0 over GPT-3.5 in this study should be considered alongside algorithm developments in the coming years, especially for online examinations, since the use of artificial intelligence is an increasing threat to test integrity. Thus, protocols such as mandatory proctoring should be considered. Limitation of Study The first limitation of this study was that image- or video-based questions, which could not be easily analyzed by the freely available ChatGPT-3.5, were not evaluated. This should be considered a limitation that might affect the results. Furthermore, the questions included in the study were not categorized as easy, medium, or complex. Even though the questions were chosen randomly, this factor should also have been considered statistically. The results of this study point to the potential for AI, and ChatGPT in particular, to positively contribute to medical education and practice. Moreover, the success of AI on this multiple-choice question bank could pave the way for greater integration of AI technology into medical education and continuing professional development. In the coming years, ChatGPT’s proficiency in clinical management and decision-making should be supported by further studies demonstrating that it can be a valuable resource for ophthalmologists and other medical professionals seeking information and guidance on complex cases.
Furthermore, in the study presented here, ChatGPT-4.0 was statistically more consistent and accurate than ChatGPT-3.5. AI technology, especially in ophthalmology, should be seen as a complement to, rather than a replacement for, medical professionals.
Field experiments show no consistent reductions in soil microbial carbon in response to warming
f9352f51-2af3-419a-9fcd-e57bd6d35330
10899254
Microbiology[mh]
Methodology overview According to Patoine et al. , MBC showed a significant decreasing trend from 1992 to 2013, which was almost entirely attributed to climate change, with little contribution from land cover change. They further concluded that the climate contribution was dominated by increasing temperature rather than by the change in precipitation (their Supplementary Figs. and ). This conclusion is in line with their Supplementary Fig. and Supplementary Fig. , which show a clear decrease in MBC with increasing annual temperature, but no clear trend, or only a very slight increasing one, in MBC with increasing precipitation. Given these pieces of evidence, we decided to focus on the temperature effect on MBC in this analysis. Here, we focus on testing three hypotheses: (1) The MBC response to warming reported by Patoine et al. should be detectable using field warming experiments, which have been widely adopted to examine how MBC responds to temperature increases. (2) Similarly, we hypothesize that the response should also be detectable in in-situ long-term MBC measurements affected by interannual temperature changes. (3) Given that the Random Forest model used by Patoine et al. to predict MBC change during 1992–2013 was trained using largely static observations of MBC stock across spatial gradients, and that a clear spatial pattern of MBC stock exists across different climatic gradients (their Fig. ), we hypothesize that the conclusion of Patoine et al. might be subject to the space-for-time substitution (SFT) effect, in which case the predicted reduction over time could be an artifact of decreasing MBC stocks with increasing temperature over spatial gradients. To test the first two hypotheses, we compiled observations from field warming experiments and in-situ long-term measurements from the literature. To test the third, we repeated the Random Forest model training followed by prediction of MBC change for 1992–2013 following the same method as Patoine et al. , but used bootstrapping sub-sampling to obtain variations in both the predicted MBC change rate and the spatial slope between MBC and temperature, and further examined how the predicted MBC change rate responds to the derived spatial slope. Analysis using field warming experiment data A systematic, reproducible workflow was followed to ensure the suitability and completeness of the field warming experiment data included in this study (Supplementary Fig. ). Laboratory-controlled warming experiments were excluded because they represent field conditions less realistically. Peer-reviewed articles on soil warming effects on soil microbial biomass were collected from a literature search using “soil warm” and “microbial biomass” as keywords in ScienceDirect ( https://www.sciencedirect.com/ ), China National Knowledge Infrastructure (CNKI, https://www.cnki.net/ ), Google Scholar, and papers cited in previous review studies. By applying the inclusion criteria (Supplementary Fig. ), a total of 130 paired MBC measurements from both control and warming sites in 69 papers were collected (Fig. ).
To evaluate how MBC responds to soil warming, the effect of warming on MBC was calculated for each pair of measurements using the natural log-transformed response ratio (LN(RR)): $$\mathrm{LN(RR)}=\ln\left(\mathrm{MBC}_{t}\right)-\ln\left(\mathrm{MBC}_{c}\right) \qquad (1)$$ where MBC_t and MBC_c represent MBC from the warming and control treatments, respectively; the response ratio (RR) was natural-log transformed, a common practice to make it better approximate a normal distribution . As LN(RR) appeared larger for intermediate warming levels than for either low or high warming magnitudes, potential effects of warming magnitude on LN(RR) were examined using a quadratic fit between LN(RR) and warming magnitude (adjusted R² = 0.23, p < 0.01, Supplementary Fig. ). The MBC response to soil warming was also examined in detail by separating all field-warming observations into different groups of warming magnitude (<1 °C, 1–2 °C, 2–3 °C, 3–4 °C, and 4–5 °C). A random-effects model was used to obtain the overall effect of warming on MBC and test its statistical significance (Fig. ). Funnel plots and the “metabias” method from the ‘meta’ package in R were employed to investigate potential publication bias for each warming-magnitude group (Supplementary Fig. ). If a funnel plot showed significant asymmetry (i.e., p < 0.05 in the “Egger” test of the “metabias” method), an iterative “trim-and-fill” method was used to remove the most extreme publication(s) from either the left or the right tail of the funnel plot until it became symmetric, to fill in imputed missing publication(s), and then to compute a new effect size for the MBC response to warming. The impacts of warming duration on MBC responses were examined similarly by grouping observations into durations of <3 years, 3–6 years and 6–30 years. Analysis using in-situ long-term MBC measurements We initially searched the MBC datasets used by Patoine et al. and used in a systematic analysis by Xu et al. for in-situ long-term MBC measurements, but found only one study (Supplementary Table and Supplementary Fig. ) meeting our criteria. A subsequent systematic search in ScienceDirect, CNKI, and Google Scholar using the search terms “long-term soil microbial biomass carbon” and “soil microbial biomass carbon interannual variability” retrieved another five studies which met our criteria (Supplementary Table ). For each site, annual temperatures corresponding to the observation years were retrieved from the WorldClim dataset using the recorded site location information, and a linear relationship between the observed MBC and annual temperature was fitted to examine its response to changes in temperature (Fig. ). Testing the space-for-time substitution (SFT) effect in Patoine et al. According to the SFT hypothesis described above, greater predicted reductions in global MBC are to be expected when the approach of Patoine et al. is applied to subsets of the observation data that have steeper negative spatial slopes between MBC and temperature. Bootstrapping sub-sampling was used to verify this hypothesis: (1) 500 MBC observations were randomly drawn (with replacement) from the original MBC dataset of Patoine et al. ( n = 762); this sub-sampling was repeated 200 times. Following the method described in Patoine et al. , a Random Forest model was trained on each sub-sample and was then used to predict global MBC for 1992–2013.
For each sub-sample, the slope between MBC and annual temperature was also derived using a simple linear regression. Finally, the relationship between the predicted MBC change rate and the slope of MBC against temperature was examined. (2) Similar to (1), but the dataset for sub-sampling was the dataset of Patoine et al. combined with the MBC observations from the control treatments of the field-warming dataset ( n = 762 + 106). Only MBC observations reported in units that could be converted to mmol kg⁻¹ were used, resulting in 106 measurements. The same procedure as used by Patoine et al. was then followed to derive soil MBC stocks. In both tests, following Patoine et al. , the environmental variables annual temperature, soil organic carbon, soil pH, precipitation, soil clay content, soil sand content, land cover, soil nitrogen content, NDVI, and elevation were used as predictor variables in the Random Forest modeling. Values of these variables corresponding to the 106 control MBC measurements were extracted from the same global datasets used by Patoine et al. based on site geolocations. To restrict predictions to those spatial grid cells where the coverage of environmental variables allows a high-confidence prediction of MBC, the spatial coverage analysis was performed for each bootstrapping sub-sample (for both n = 762 and n = 762 + 106) following the approach of Patoine et al. (i.e., the ‘Mahalanobis distance’ approach and the ‘dissimilarity index’ approach). The results obtained by using different layers of valid pixels for model prediction for the different bootstrapping sub-samples are shown in Fig. . An alternative approach, using a single shared layer of valid pixels containing only the collocating valid pixels of all 200 bootstrapping sub-samples, yielded similar results (Supplementary Fig. ).
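Two hedged sketches of the analyses described in this section are given below; both use placeholder data rather than the compiled observations and are intended only to make the computational steps concrete. The first illustrates the response-ratio meta-analysis of the field-warming data (Eq. (1) followed by random-effects pooling); the published analysis used the ‘meta’ package in R, whereas this sketch hand-codes a DerSimonian-Laird estimate in Python.

```python
# Minimal sketch: pool LN(RR) effect sizes with a DerSimonian-Laird random-effects
# model. `ln_rr` and `var_ln_rr` are placeholder values, not the 130 compiled
# field-warming observations.
import numpy as np

ln_rr = np.array([0.05, -0.12, 0.02, -0.30, 0.10])         # ln(MBC_t) - ln(MBC_c) per study
var_ln_rr = np.array([0.010, 0.020, 0.015, 0.050, 0.030])  # per-study sampling variance

w = 1.0 / var_ln_rr                                         # fixed-effect weights
mean_fe = np.sum(w * ln_rr) / np.sum(w)
q = np.sum(w * (ln_rr - mean_fe) ** 2)                      # Cochran's Q
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(ln_rr) - 1)) / c)                 # between-study variance

w_re = 1.0 / (var_ln_rr + tau2)                             # random-effects weights
mean_re = np.sum(w_re * ln_rr) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled LN(RR) = {mean_re:.3f}, 95% CI = [{mean_re - 1.96 * se_re:.3f}, {mean_re + 1.96 * se_re:.3f}]")
```

The second sketch outlines the bootstrap test of the space-for-time substitution effect: each iteration re-trains a Random Forest on a sub-sample of site observations, records that sub-sample's spatial MBC-temperature slope, and records the MBC change predicted between two gridded snapshots. All data are synthetic, the predictor set is a simplified subset of the variables listed above, and the spatial-coverage filtering steps are omitted.

```python
# Hedged sketch of the bootstrap space-for-time test with synthetic placeholder data.
import numpy as np
import pandas as pd
from scipy.stats import linregress
from sklearn.ensemble import RandomForestRegressor

predictors = ["temperature", "precipitation", "ph", "clay", "ndvi", "elevation"]
rng = np.random.default_rng(42)

def fake_frame(n: int) -> pd.DataFrame:
    return pd.DataFrame(rng.random((n, len(predictors))), columns=predictors)

obs = fake_frame(762)                                       # stand-in for the site observations
obs["mbc"] = 5.0 - 3.0 * obs["temperature"] + rng.normal(0.0, 0.5, len(obs))
grid_1992 = fake_frame(1000)                                # stand-in environmental grids
grid_2013 = grid_1992.assign(temperature=grid_1992["temperature"] + 0.05)

def one_bootstrap(seed: int, n_sample: int = 500) -> tuple[float, float]:
    sub = obs.sample(n=n_sample, replace=True, random_state=seed)
    slope = linregress(sub["temperature"], sub["mbc"]).slope     # spatial MBC-temperature slope
    rf = RandomForestRegressor(n_estimators=200, random_state=seed, n_jobs=-1)
    rf.fit(sub[predictors], sub["mbc"])
    change = rf.predict(grid_2013[predictors]).mean() - rf.predict(grid_1992[predictors]).mean()
    return slope, change

slopes, changes = map(np.array, zip(*(one_bootstrap(i) for i in range(200))))
print(linregress(slopes, changes))  # is a steeper negative spatial slope linked to a larger predicted decline?
```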
To test the initial two hypotheses, we compiled observations from field warming experiments and in-situ long-term measurements from the literature. To test the third one, we repeated the Random Forest model training followed by prediction of MBC change for 1992–2013 following the same method as Patoine et al., but used bootstrapping sub-sampling to obtain variations in both the predicted MBC change rate and the spatial slope between MBC and temperature, and further examined how the predicted MBC change rate responds to the derived spatial slope. A systematic, reproducible workflow was followed to ensure the suitability and completeness of the field warming experiment data included in this study (Supplementary Fig. ). Laboratory-controlled warming experiments were excluded because they are less representative of real-world conditions. Peer-reviewed articles on soil warming effects on soil microbial biomass were collected from a literature search using "soil warm" and "microbial biomass" as keywords in ScienceDirect ( https://www.sciencedirect.com/ ), China National Knowledge Infrastructure (CNKI, https://www.cnki.net/ ), Google Scholar, and papers cited in previous review studies. By applying the inclusion criteria for an article (Supplementary Fig. ), a total of 130 paired MBC measurements from both control and warming sites from 69 papers were collected (Fig. ).
Molecular Mechanisms of
dcf52a70-3a6c-4b88-9019-6540a02c79d6
11812009
Biochemistry[mh]
Introduction Reactive oxygen species (ROS), such as superoxide radical (•O 2– ), singlet oxygen ( 1 O 2 ), hydroxyl radical (•OH), and hydrogen peroxide (H 2 O 2 ), are highly reactive molecules with unpaired valence electrons or unstable bonds that are produced as byproducts of aerobic metabolism. Multiple enzymes or intracellular chemicals are able to cope with excess ROS. However, imbalances in ROS homeostasis can lead to oxidative stress, which is associated with aging and various diseases, including cancer, − inflammatory diseases, and neurological disorders. − However, ROS also play several beneficial roles when present at moderate levels. They act as intracellular signaling molecules, , participate in hormonal biosynthesis, and contribute to immune responses by exhibiting antimicrobial activity. Under physiological conditions, the cellular ROS level is maintained by dynamic equilibrium, balanced by several mechanisms of constant ROS production and elimination. In excess, ROS cause oxidative modifications of all major cellular macromolecules, such as lipids, proteins, DNA, and carbohydrates, leading to alteration of their biological function, increasing mutagenesis, and finally leading to cell death. , Due to their toxic properties, these highly reactive molecules are used as host antimicrobial strategies against a variety of pathogens. , An arsenal of host immune cells (neutrophils and macrophages) phagocytose pathogens and trigger an oxidative burst—the rapid induction and release of ROS molecules—against exogenous pathogens. − Pathogens, however, employ a variety of mechanisms to counter and evade the immune response using different strategies such as enhancing antioxidant/ROS-detoxification pathways. Investigating the pathogen’s ability to face oxidative stress by identifying its key players in ROS-detoxification pathways may shed light on the pathogen’s survival strategy and pathomechanism and potentially lead to the development of new therapeutic options. This study focuses on Acanthamoeba castellanii , a free-living amoeba that occupies diverse habitats such as soil or water environments, but also causes partly severe infections in humans. On the one hand, it is the causative agent of Acanthamoeba keratitis, , a rare but serious ocular infection that can lead to visual loss. On the other hand, it can cause granulomatous amebic encephalitis, an often fatal brain and spinal cord infection that typically occurs in immunocompromised individuals. , The fact that this organism can both live freely and become an opportunistic pathogen, causing two very different diseases in humans, reinforces the fact that A. castellanii possesses a remarkable ability to adapt to considerable environmental changes. It has a number of mechanisms that protect it from oxidative stress, including mitochondrial energy-dissipating systems, catalase, superoxide dismutase, and both thioredoxin and glutathione systems. Recently, oxidative stress-induced transcriptional changes of key enzymes involved in the thioredoxin and glutathione systems of A. castellanii have been described, highlighting the complexity of the amoeba’s redox system. The aim of this study was to investigate the ability of A. castellanii to counteract the oxidative stress induced by various ROS-inducing agents. Proteomic analysis, together with RT-qPCR, was used to obtain a comprehensive view of the response to these challenging conditions, with the ultimate goal of identifying A. 
castellanii key players in the defense against oxidative stress. Methods 2.1 Growth Analysis A. castellanii cells, strain Neff (ATCC 30010) (1 × 10 5 cells) were cultivated in a 12-well plate aerobic culture flask at 27 °C in PYG medium (0.75% yeast extract, 0.75% proteose peptone, and 1.5% glucose) supplemented with different concentrations of various ROS-inducing agents: sodium nitroprusside (SNP) (Merck, USA) (10 μM; 100 μM; 1 mM), H 2 O 2 (10 μM; 100 μM; 250 μM; 1 mM), phenethyl isothiocyanate (PEITC) (Merck, USA) (6.25 μM; 12.5 μM; 15 μM; 25 μM; 50 μM), or rotenone (Merck, USA) (25 μM; 50 μM). Cells supplemented with 0.45% or 1% ethanol, respectively, were used as a control. Cell density was measured every 24 h for 3 days (except for the first 19-hour time point) using a GUAVA EasyCyte 8HT flow cytometer (Merck, USA) after fixation with 2% paraformaldehyde. 2.2 Oxyblot Oxidative damage to proteins caused by selected ROS-inducing agents was determined by immunoblot detection of carbonyl groups using the OxyBlot protein oxidation detection kit (Merck Millipore, USA) according to the manufacturer’s protocol. Briefly, 20 μg of protein sample in PBS was treated with 6% SDS and incubated with 2,4-dinitrophenylhydrazine. The dinitrophenylhydrazone-derivative residues were detected by a specific primary antibody in conjunction with a secondary antibody (provided in the kit) and visualized using the enhanced chemiluminescence system Amersham Imager 600 (GE Healthcare Life Sciences, USA). 2.3 Sample Preparation for LC-MS/MS A. castellanii cells were grown in 25 cm 2 aerobic culture flasks at 27 °C in PYG medium supplemented with 100 μM SNP, 250 μM H 2 O 2 , 6.25 μM PEITC, or 50 μM Rotenone, respectively, for 2 or 8 h, in four biological replicates. After the incubation, cells were washed twice (1200 g, 10 min, 4 °C) with phosphate-buffered saline (PBS) containing a protease inhibitor cocktail (Merck, USA), phosphatase cocktail II+III, and components to preserve the acetylation state of proteins: 40 μM Trichostatin A, 1 mM EX-527, 400 mM Nicotinamide, and 200 mM Sodium Butyrate (all Merck, USA). Pellets were then resuspended in RIPA buffer (ThermoFisher, USA) containing the respective inhibitors (described above). Subsequently, the samples were vigorously pipetted in and out to ensure cell lysis, followed by centrifugation at 14,000 g for 15 min at 4 °C. The resulting supernatant was carefully transferred to a fresh tube, and the protein concentration of the samples was determined using the BCA kit (Sigma-Aldrich, USA). The samples were stored at −80 °C until their next use. 2.4 Samples Preparation for Proteomic Analysis Six times the volume of cooled acetone (−20 °C) was added to the sample volume containing 10 μg of protein extracts. The vortexed tubes were incubated overnight at −20 °C and then centrifuged for 10 min at 11,000 rpm and 4 °C. The protein pellets were dissolved in buffer (8 M urea; 25 mM NH 4 HCO 3 ). The samples were then digested overnight at 37 °C by sequencing grade trypsin (enzyme:sample ratio 1:20; Promega, USA). The digested peptides were loaded and desalted on Evotips (Evosep One, Denmark) according to the manufacturer’s instructions. 2.5 LC-MS/MS Analysis Samples were analyzed on a timsTOF Pro 2 mass spectrometer (Bruker Daltonics, Germany) coupled to an Evosep One system (Evosep, Denmark) operating with the 30 samples/day method developed by the manufacturer. 
Chemicals for the method, MS-grade Acetonitrile (ACN), H 2 O and formic acid (FA) were from Thermo Chemical (USA). Briefly, the method is based on a 44-minute gradient and a total cycle time of 48 min with a C18 analytical column (0.15 × 150 mm, 1.9 μm beads, ref EV-1106, Evosep, Denmark) equilibrated at 40 °C and operated at a flow rate of 500 nL/min. H 2 O/0.1% FA was used as solvent A and ACN/0.1% FA as solvent B. The timsTOF Pro 2 was operated in DDA PASEF (Data-Dependent Acquisition that uses Parallel Accumulation Serial Fragmentation) mode over a 1.3 s cycle time. Mass spectra for MS and MS/MS scans were recorded between 100 and 1,700 m / z . The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD052473. 2.6 Data Analysis from LC-MS/MS Peak Online X software (build 1.6, Bioinformatics Solutions Inc.) was used to search for proteins against the Acanthamoeba castellanii database (UniProt release 2022_01, 14979 entries). The parent mass tolerance was set to 20 ppm with a fragment mass tolerance of 0.05 Da. Semispecific tryptic cleavage was selected, and a maximum of 2 missed cleavages was allowed. Half disulfide bridge (C) was set as a fixed modification. Oxidation (M) and deamidation (NQ) were set as possible variable modifications. The maximum number of variable modifications per peptide was limited to 3. Identifications were filtered based on a 1% false discovery rate (FDR) threshold at both the peptide and protein group levels. Another search was performed to screen the oxidative status of cysteines. In this search, half disulfide bridge (C), cysteine oxidation to cysteic acid (C), cysteinylation (C), glutathione disulfide (C), oxidation or hydroxylation (C), dihydroxylation (C), oxidation (M), and deamidation (NQ) were all set as variable modifications. Protein identifications were only considered if at least two unique identified peptides were present within a single protein. Multivariate statistics on protein measurements were performed using Qlucore Omics Explorer 3.7 (Qlucore AB, SWEDEN). A positive threshold of 1 was set to allow a log2 transformation of abundance data for normalization; i.e., all abundance data values below the threshold are replaced by 1 before transformation. The transformed data were finally used for statistical analysis, i.e., the evaluation of differentially present proteins between two groups using a bilateral Student’s t -test and assuming equal variance between groups. A p -value better than 0.05 was used to filter out differential candidates. 2.7 Western Blot To confirm the results of the proteomic analysis, the native expression of thioredoxin reductase (ACA1_398900) was visualized using a purified rabbit polyclonal antibody. SDS-PAGE and Western blotting were performed according to standard protocols in a Mini Protean Tetra Cell (Bio-Rad, USA). Blots were developed with peroxidase-conjugated goat antirabbit secondary antibody (A9169, Merck, USA). The signal was detected on an Amersham Imager 600 (GE Healthcare Life Sciences, USA) using an Immobilon Forte Western HRP substrate (Merck, USA). 2.8 PCR Primer Efficiency Study Two pairs of primers for each gene of interest (GOI) were designed using Primer3 and initially tested in conventional PCR using genomic DNA. All primers were synthesized by Microsynth. 
Then, standard curves were generated with 5 points of 10-fold serial dilutions of RNA to calculate the primer efficiency (E) and the correlation coefficients (R²). Efficiency was calculated according to the formula E = (10^(−1/slope) − 1) × 100. The primer pair with the better efficiency in RT-qPCR was selected for further experiments. 2.9 RNA Extraction and Quantitative Real-Time PCR (RT-qPCR) The RNA was isolated using an innuPREP RNA Mini Kit 2.0 (Analytik Jena, Germany) following the manufacturer's protocol. The concentration and purity of RNA were measured with a NanoDrop spectrophotometer ND1000 (NanoDrop Technologies, USA). All RNA samples were diluted to 10 ng/μL using nuclease-free water and stored at −80 °C until use. RT-qPCR was performed in a CFX96 thermocycler (Bio-Rad, USA) using the Luna Universal One-Step RT-qPCR kit (E3005L, New England BioLabs, USA). The reaction mixture (20 μL per reaction) contained 10 μL of Luna Universal One-Step Reaction Mix 2x, 1 μL of Luna WarmStart RT Enzyme Mix 20x, 400 nM of each primer, and 50 ng of RNA (5 μL of 10 ng/μL). The RT-qPCR profile included a reverse transcription step at 55 °C for 10 min, an initial denaturation step at 95 °C for 1 min, followed by 40 cycles of denaturation at 95 °C for 10 s and extension at 60 °C for 60 s; a melting curve was performed at the end of the run by stepwise (0.5 °C per 5 s) increases in temperature from 60 to 95 °C. All experiments were carried out in two technical and three biological replicates. The relative expression of target genes was normalized using the formula described by , with the 18S rRNA gene and the hypoxanthine-guanine phosphoribosyltransferase (HPRT) gene as reference genes (RG). Statistical analysis was performed with GraphPad Prism 9 (GraphPad Software Inc., USA). To determine statistical significance among the investigated groups, one-way analysis of variance (ANOVA) was performed. A statistical difference was considered significant when p < 0.05. 2.10 ABC Transporter Localization The gene was subcloned into the pTN plasmid, which allows expression of N-terminally GFP-tagged proteins, and transfected into the A. castellanii cells according to the published protocol (dx.doi.org/10.17504/protocols.io.s4regv6). Live cell microscopy was done to visualize the GFP signal using a Leica TCS SP8 WLL SMD-FLIM microscope (Leica, Germany) equipped with an HC PL APO CS2 63x/1.20 water objective (excitation 488 nm, emission 498–551 nm). Acquired pictures were processed using Fiji software.
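A small numerical illustration of the two calculations just described (primer efficiency from the dilution-series slope in Section 2.8, and reference-gene-normalized relative expression in Section 2.9) is sketched below. All Cq values are invented, and the Pfaffl-style efficiency-corrected ratio is only an assumed stand-in for the normalization formula cited in the text.

```python
# Illustrative only: primer efficiency from a 10-fold dilution series,
# E = (10**(-1/slope) - 1) * 100, and an efficiency-corrected expression ratio
# against the geometric mean of two reference genes (assumed Pfaffl-style).
import numpy as np
from scipy.stats import linregress

log10_input = np.log10([100.0, 10.0, 1.0, 0.1, 0.01])   # RNA input per reaction (ng)
cq = np.array([16.1, 19.5, 22.9, 26.3, 29.8])            # assumed standard-curve Cq values
fit = linregress(log10_input, cq)
efficiency = (10 ** (-1.0 / fit.slope) - 1.0) * 100.0
print(f"slope = {fit.slope:.2f}, R2 = {fit.rvalue ** 2:.3f}, E = {efficiency:.0f}%")

def relative_expression(cq_goi, cq_goi_ctrl, cq_refs, cq_refs_ctrl, e_goi, e_refs):
    """Ratio of a gene of interest to the geometric mean of the reference genes."""
    goi = e_goi ** (cq_goi_ctrl - cq_goi)                 # efficiency-corrected delta-Cq
    refs = [e ** (c - t) for e, c, t in zip(e_refs, cq_refs_ctrl, cq_refs)]
    return goi / float(np.exp(np.mean(np.log(refs))))

# hypothetical Cq values: treated vs. control for one GOI and the two reference genes
ratio = relative_expression(24.0, 26.5, [14.9, 21.2], [15.0, 21.0], 1.98, [1.95, 2.01])
print(f"relative expression = {ratio:.2f}")
```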
Results 3.1 A Comparative Analysis of Proteome Responses to Various ROS-Inducing Agents Label-free proteomic analysis was employed to investigate the response of A. castellanii to different conditions of ROS induction at the protein level. Several known ROS-inducing agents were selected to induce oxidative stress in the cells: H 2 O 2 , phenethyl isothiocyanate (PEITC), and rotenone, as well as a source of both nitrosative and oxidative stress: sodium nitroprusside (SNP). The concentrations of the agents were selected based on the growth analysis (see Figure S1 ), and the ability of these agents to induce oxidative damage to proteins of A. castellanii at the selected concentrations was confirmed using the Oxyblot Protein Oxidation Detection Kit ( Figure S3 ). Two incubation points of 2 and 8 h were set to determine the immediate and prolonged response of the A. castellanii proteome to the selected conditions. Overall, 3,375 proteins were identified by label-free proteomic analysis. The differential proteomic response to the selected ROS-inducing agents is visualized with the number of upregulated and downregulated proteins in each condition shown as a percentage, with the total number of proteins identified in the proteomic analysis considered as 100%. The number of identified proteins under the different conditions exhibited minimal variation, with a difference of less than 5%. Additionally, fewer than 15 proteins were unique to each condition. The highest number of significantly changed proteins was identified under PEITC conditions at both incubation time points, whereas the lowest total number of significantly changed proteins was identified in the rotenone treatment. Changes in the proteome were evident as early as 2 h of incubation with each agent, and the number of proteins with increased levels was greater after 8 h in all conditions, with the smallest change for H 2 O 2 . On the other hand, the number of proteins with reduced levels was lower after 8 h than after 2 h, except for incubation with SNP.
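These up/down counts come from the per-protein comparison described in Section 2.6 (values floored at 1, log2 transformation, equal-variance Student's t-test, p < 0.05). The snippet below sketches that test on a randomly generated abundance matrix; the matrix dimensions, replicate numbers, and effect sizes are invented for illustration and do not reproduce the study's data.

```python
# Sketch of the per-protein differential test: floor at 1, log2 transform,
# equal-variance two-sample t-test of treated vs. control replicates.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
control = rng.lognormal(mean=10, sigma=1, size=(4, 3375))            # 4 replicates x proteins
treated = control * rng.lognormal(mean=0.05, sigma=0.2, size=(4, 3375))

def differential(a, b, alpha=0.05):
    a = np.log2(np.maximum(a, 1.0))          # positive threshold of 1 before log2
    b = np.log2(np.maximum(b, 1.0))
    t, p = ttest_ind(b, a, axis=0, equal_var=True)
    fold = b.mean(axis=0) - a.mean(axis=0)   # log2 fold change per protein
    return np.where(p < alpha)[0], fold

hits, log2fc = differential(control, treated)
up = int(np.sum(log2fc[hits] > 0))
print(f"{hits.size} proteins change at p < 0.05 ({up} up, {hits.size - up} down)")
```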
To validate the observed changes, a specific antibody against thioredoxin reductase (TrxR-S, ACA1_398900) was used to confirm its increased expression in cells incubated in the presence of H 2 O 2 by Western blot ( Figure S4 ). 3.1.1 Candidate Gene Approach Analysis of Proteomic Data In order to explore the cellular response to oxidative stress that is common to all studied conditions, we created a Venn diagram from the list of significantly upregulated proteins (531) ( and S2 ). Four proteins: oxidoreductase (ACA1_362830), NADPH-dependent FMN reductase (ACA1_175790), a protein from the glutathione transferase family (ACA1_099220), and phosphatase (ACA1_057530) had elevated levels under all selected ROS-inducing conditions after 2 and/or 8 h of incubation, suggesting their key role in coping with oxidative stress in A. castellanii . No protein showed reduced levels in all four conditions, and, in general, the overlap of downregulated proteins between the different conditions was smaller than that of upregulated proteins. We also analyzed the effect of oxidative stress on post-translational cysteine modifications, which, consistent with the upregulation of numerous glutathione transferases, are indeed affected with the highest changes after SNP (at 8 h) and PEITC treatment ( Table S2 ). 3.2 Correlation between Gene Expression and Protein Level Changes To determine whether the increase in protein levels under oxidative stress occurs at the level of gene expression, we performed RT-qPCR with 3 selected proteins affected in all conditions. The relative gene expression with different treatments at 2 and 8 h is shown in . After 2 h, the relative expression of oxidoreductase (OR) and glutathione transferase (GST) increased after treatment with PEITC and rotenone, while the effect was lower with the latter compound. There was no increase in the relative expression of phosphatase (PHO). Interestingly, the relative expression obtained for the three GOIs was generally lower after 8 h of treatment and the increase of OR and GST was significant only after treatment with PEITC. Similar to the 2-h treatment, no increase in the relative expression of PHO was observed after 8 h. 3.3 ABC Transporter Among the most affected proteins, we identified an ABC transporter (ACA1_352460) that was up-regulated upon treatment with rotenone and PEITC, and whose levels were elevated after both 2 and 8 h of incubation. Members of this family of proteins in eukaryotes are mostly effluxers. Analysis using the HHpred tool clearly predicts that the amoeba homologue is a pleiotropic drug resistance protein, and because of the importance of this family of proteins in microbial drug resistance we decided to focus further on this transporter. To support the hypothesis that it is a cellular efflux transporter, we determined its cellular localization by expressing it with a GFP tag. As shown in , the protein is mainly localized to the plasma membrane, as expected. 3.4 Sparse Partial Least-Squares Discriminant Analysis of Proteomic Data Next, we aimed to identify the most predictive and discriminative features in our data in order to classify the samples. This step is essential to determine whether upregulated proteins in our data set show systematic features along with other proteins or whether they show this trend simply by chance. Thus, following the candidate-gene approach, we aimed to search for patterns on a global scale and whether the detected proteins above can be corroborated using sPLS-DA. 
First, we normalized data with normalyzerDE and, from all the performed normalizations, we selected VSN normalization, which provided the lowest within-group variation. Next, we used Sparse Partial Least Squares Discriminant Analysis (sPLS-DA) and Area Under Curve Analysis (AUC) to find potential sources of variation in our data. AUC analysis is based on the selectivity and specificity of sPLS-DA and represents the probability that the sPLS-DA model will rank the positive examples higher than the negative examples. In A, we clearly see that, after 2 h of cultivation, PEITC had a strong influence upon separation from controls ( X -axis, AUC = 1, p = 0.002), and similarly, H 2 O 2 samples diverged from controls ( Y -axis, AUC = 1, p = 0.02). However, Rotenone and SNP overlapped with controls. After 8 h of cultivation, all conditions significantly diverged from controls in either the first or second dimension. For example, in the first component ( X -axis) PEITC significantly diverges from the others (AUC = 1, p = 0.0025), while the second component ( Y -axis) discriminates H 2 O 2 from controls and all other conditions (AUC = 1, p = 0.0025), as well as PEITC (AUC = 1, p = 0.0025), SNP (AUC = 0.97, p = 0.005), and Rotenone (AUC = 0.84, p = 0.04), which are above zero on the Y -axis B. To find out the biological relevance of these changes, we extracted loadings from sPLS-DA ( C,D) and, interestingly, three of the previously mentioned proteins (ACA1_175790, ACA1_362830, ACA1_352460) responsible for the differentiation of PEITC samples were also detected ( B and are highlighted in C. This is additional evidence that these proteins are the markers of oxidative stress mentioned above. On a global scale, however, there is a detectable small effect of many proteins in combination rather than a strong effect of a few. For example, ACA1_325510 (actin bundling protein), ACA1_296720 (histone deacetylase 1, putative), and ACA1_210670 (universal stress domain containing protein) contribute to the separation of Rotenone samples (2nd comp. on the Y -axis, D). Similarly, the separation of PEITC samples along the Y -axis is driven by ACA1_218690 (1,2-dihydroxy-3-keto-5-methylthiopentene dioxygenase), which is annotated as an enzyme that catalyzes two different reactions between oxygen and the acireductone and depends upon the metal bound in the active site. In addition, this analysis shows that the overall phenotype changes at the proteome level, and not just selected proteins, under the four types of oxidative stress.
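The sPLS-DA and AUC analysis above was performed with dedicated tools (normalyzerDE and sPLS-DA in R). The fragment below is only a rough Python stand-in for the same idea, a two-component PLS model scored by cross-validated ROC AUC on synthetic control-versus-PEITC protein profiles, so the data, group sizes, and model choice are all illustrative assumptions rather than the published pipeline.

```python
# Rough stand-in: PLS-based discrimination of two sample groups, scored with ROC AUC,
# then inspection of component loadings to see which features drive the separation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 500))             # 4 control + 4 treated samples x proteins
X[4:, :25] += 1.5                         # a small set of proteins shifts under treatment
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # 0 = control, 1 = PEITC

pls = PLSRegression(n_components=2, scale=True)
scores = cross_val_predict(pls, X, y, cv=4).ravel()   # cross-validated sample scores
print("AUC =", roc_auc_score(y, scores))

pls.fit(X, y)
top = np.argsort(np.abs(pls.x_loadings_[:, 0]))[::-1][:10]
print("highest-loading proteins on component 1:", top)
```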
Discussion Understanding the cellular response to oxidative stress is of particular importance in parasites, as they encounter oxidative stress during host invasion. To gain a broad insight into the mechanisms by which Acanthamoeba combats oxidative stress, we used four different sources for its induction: H 2 O 2 , as the most direct and commonly used generator of oxidative stress/damage; rotenone and PEITC, which cause it indirectly through more complex/metabolic mechanisms; and SNP, which induces nitrosative stress, closely related to oxidative stress. The most significant changes were observed after treatment with PEITC, suggesting a complex effect on the cell. The changes were more pronounced after 8 h of treatment, except for H 2 O 2 , probably due to its instability. The overlap of downregulated proteins between the different conditions was less evident than that of upregulated proteins, indicating that the specific response to oxidative stress is more directed toward upregulation of defense pathways, and the decrease in protein levels is more the result of their degradation, disrupted metabolism, and decreased cellular fitness. Correlations between the proteomic data and the qPCR results for genes encoding proteins upregulated in all four conditions were only partial, which is not unexpected and highlights the importance of the proteomic approach in studying the cellular response to stress. Among the proteins repeatedly shown to be upregulated, we observed known proteins involved in defense against oxidative stress, such as several glutathione transferases, peroxiredoxin (ACA1_027750), and a member of the universal stress protein superfamily (ACA1_210670). More importantly, we identified four proteins that are upregulated regardless of the source of oxidative stress, and we believe that the function of these proteins deserves further rigorous investigation: a glutathione transferase (ACA1_099220), an NADPH-dependent FMN reductase (ACA1_175790), a phosphatase (ACA1_057530), and an oxidoreductase (ACA1_362830). The induction of glutathione transferase is consistent with the pluripotent role of the glutathione system in cellular detoxification and protection against oxidative stress.
The role of NADPH-dependent FMN reductase in oxidative stress defense can be hypothesized because NADPH is the principal reductant for the thioredoxin and glutathione systems. While the function of the identified phosphatase is difficult to propose due to the high variability of phosphatase functions and substrates, AlphaFold structure prediction of the induced oxidoreductase revealed a close homology to the human quinone oxidoreductase PIG3 ( Figure S5 ). Contrary to a role in oxidative stress defense, this enzyme has been shown to be associated with ROS generation in human and plant cells. It would therefore be beneficial in the future to biochemically characterize this protein, considering both its function in the Acanthamoeba oxidative stress response and the role of PIG3 in critical processes of human cells: the response to DNA damage and p53-mediated apoptosis. Putting the results of our study in the context of previous research, we can conclude that Acanthamoeba employs a wide range of oxidative stress counteracting machineries, some of whose components are inducible at the transcriptomic/proteomic or enzymatic level. These include, in particular, the thioredoxin and glutathione systems, mitochondrial energy-dissipating systems, and enzymes such as catalase and superoxide dismutase. Importantly, we have also identified an ABC transporter whose levels are increased upon incubation with rotenone and PEITC. Given that these two compounds are organic molecules, and given the cellular localization of the transporter, it is very likely that this protein plays a critical role in drug efflux in A. castellanii . Interestingly, isothiocyanates (including PEITC) have been shown to interact with a number of ABC transporters. Therefore, further research in our laboratory is currently focused on studying its specificity and role in drug resistance. To summarize, our study has provided a comprehensive insight into the oxidative stress response of the facultatively pathogenic amoeba Acanthamoeba castellanii and identified several key players in its defense system. This will enhance our understanding of the mechanisms by which Acanthamoeba can successfully evade the immune system and may also lead to the identification of new chemotherapeutic strategies, given the potential of redox-active antiparasitic agents.
How to Set Up a Molecular Pathology Lab: A Guide for Pathologists
aa4ed504-6721-4c90-a45d-8e6dc6918d9d
10510618
Pathology[mh]
The effects of diseases on the human body were first documented by the ancient Egyptians; however, the concepts of organ-specific disease and anatomic pathology began to evolve only in the last few centuries, and alterations at the tissue and cellular level gained attention following the invention of the microscope. The molecular pathology era began with the integration of molecular tests into pathology practice, especially in the diagnosis of solid tumors and hematological malignancies, as part of the advances in the molecular sciences that followed the completion of the Human Genome Project in the early 2000s. Pathologists now play a critical role in morphomolecular assessment in this new era of medicine and therefore have a critical impact on the preanalytical phase of molecular testing. Given that pathologists combine molecular tests with conventional pathological evaluation methods, pathology laboratories should be designed and operated in accordance with the requirements of molecular testing procedures. Adequate space, appropriate equipment, and qualified personnel are required to establish a molecular pathology laboratory. While the specifics of the requirements may vary depending on the spectrum of tests that will be performed, several basic criteria need to be fulfilled for standardization. In this paper, the criteria required to establish a molecular pathology laboratory will be reviewed.

Required Physical Conditions

One of the most important points to consider when designing a molecular pathology laboratory is to create a plan to prevent contamination. Polymerase chain reaction (PCR)-based methods are especially susceptible to contamination. The ability to obtain a large number of copies from a very small amount of the target sequence with PCR provides an important diagnostic advantage, but it also leads to false results in case of contamination. False positive results may occur due to contamination from sample to sample, transport of amplicons from a previous amplification of the same target, cross-contamination of different reactions prepared simultaneously, and contamination of the reagents with DNA templates. In real-time PCR methods, the analysis is completed at the same time as the PCR reactions finish, using fluorescence-based detection techniques. Since the PCR products do not need to be reprocessed, the reaction tubes or closed plates are not opened and amplicon transport does not occur. Laboratories using real-time PCR methods therefore have a lower risk of contamination. The main procedures performed in a molecular pathology laboratory using PCR-based methods are pre-PCR procedures (sample preparation, PCR preparation) and post-PCR procedures (performing PCR and post-PCR analysis). It is critical to perform these operations in areas separated as "clean" and "dirty". The "clean" area is where all pre-PCR procedures (such as microdissection, DNA/RNA extraction, and PCR preparation) are performed, and the "dirty" area is where all post-PCR products (amplicons) are processed. Staff and researchers should keep all reagents, materials, and equipment used in these areas separate at all times and never move them back from the dirty area to the clean area. Contamination is significantly reduced by physically separating the clean and dirty areas and by performing pre-PCR and post-PCR activities in separate rooms.
Therefore, planning at least two separate rooms is essential when designing a molecular pathology laboratory. It is recommended to perform sample preparation steps such as nucleic acid isolation in the pre-PCR laboratory, and to perform PCR reactions and other post-PCR procedures in the post-PCR laboratory. However, if there is sufficient space, four separate rooms are recommended for an ideal molecular pathology laboratory: for reagent preparation, sample preparation, the PCR step, and the post-PCR steps. Each room must have its own equipment, protective clothing, and consumables, and there should be no material/equipment transport between the rooms. The requirements for laboratory design may vary according to the method used. For example, as mentioned previously, three rooms may be ideal in a laboratory where a real-time PCR method is applied, since post-PCR analysis is not necessary in the real-time PCR method. The reagent preparation room is where reagent stocks are prepared and then divided into a certain number of small usable parts (aliquoted), and where the reaction mixes are prepared. This room should be free of any biological materials such as DNA/RNA extracts, PCR products, etc. The sample preparation room is where the nucleic acid isolation is performed and the isolated samples are added to the PCR reaction mixes. This room is also called a "low copy" room, as the number of copies has not yet been amplified by PCR. Ideally, it is recommended to perform the steps of nucleic acid isolation and addition of isolated samples to the PCR reaction mixes in separate rooms, but these two steps are usually performed in the same room in different areas/compartments, since most laboratories do not have sufficient space. Preparation of the PCR reactions in a laminar flow biosafety cabinet ensures that the area remains clean. The amplification (PCR) room is where the PCR devices are located and the amplification steps are performed, and the post-PCR room is where the analysis of PCR products by methods such as gel electrophoresis, sequencing, and nested PCR is carried out. These two rooms constitute the contaminated ("dirty") rooms, and no equipment or materials used in these rooms should be used in other rooms. These rooms are also called "high copy" rooms. In PCR applications such as real-time PCR, where a single-stage PCR reaction is sufficient and tubes containing PCR products do not need to be opened, PCR devices can be placed in the post-PCR room. However, in laboratories using PCR applications such as nested PCR, where multiple PCR reaction steps are required and the tubes must be opened, PCR devices should be placed in a separate room/area. In the amplification phase, the primary and secondary PCR steps (if any) should be separated according to the physical layout of the laboratory, preferably in separate rooms. If this is not possible, they should be performed in separate compartments and on separate PCR devices. Next-generation sequencing applications also include one or more PCR amplification steps, which are similarly recommended to be performed in separate rooms/areas. Various recommendations about minimum room sizes can be found in international guidelines. For example, according to the space planning criteria of the United States Military Health System Pathology and Clinical Laboratories guide, the reagent preparation room should be at least 120 sq ft (approx. 11.1 m2) and the amplification room should be 240 sq ft (approx. 22.3 m2) in size.
The relevant guide published by the Republic of Turkey Ministry of Health is detailed below.

Workflow

The workflow in the molecular pathology laboratory must be unidirectional, from the clean area to the dirty area. When laboratory personnel and researchers need to move from dirty rooms to clean rooms, laboratory coats, gloves and all other protective equipment should be changed and hands should be washed. No material should be carried from the dirty room to the clean room. To prevent the passage of personnel from the dirty room to the clean room, it is appropriate to have separate personnel working in each room or to perform pre-PCR and post-PCR procedures on different days. There are automated molecular pathology platform systems that provide an automatic one-way workflow and that isolate nucleic acid from the sample, combine the isolated DNA with amplification reagents, and perform the analysis; their use is becoming increasingly common. The rooms and workflow that should be present in an ideal molecular pathology laboratory are shown in the accompanying figure. If all operations have to be performed in a single room, separate compartments/benches are required for the reagent preparation, sample preparation, PCR and post-PCR stages. The rule of unidirectional workflow from the clean compartments to the dirty compartments must be followed. If possible, sample preparation should be carried out in a laminar flow biosafety cabinet equipped with UV light. In the absence of separate rooms, a timetable should be established in which the pre-PCR and post-PCR steps are performed at different times of the day.

Ventilation

Circulating air between pre- and post-PCR laboratories is an important source of contamination in laboratories where techniques detecting very small amounts of DNA/RNA are used. Each laboratory should be ventilated separately and the air pressure must be adjusted separately. At positive pressure, the air pressure inside the room is higher than the air pressure outside, preventing the entry of unwanted substances from outside. Negative pressure, on the other hand, allows air to enter the room and prevents air from migrating to the surrounding rooms/laboratories. The doors must be kept closed to maintain the negative pressure. There should be slight positive pressure in the pre-PCR laboratory to prevent the entrance of contaminated air from outside, while the post-PCR laboratory should have slight negative pressure to keep the air in and thus prevent the escape of amplicons from the completed PCR samples. The ventilation of the pre-PCR and post-PCR laboratories should be connected to different air channels and vented out at different locations.

Ultra-Violet (UV) Irradiation

UV rays, which cause DNA damage, are useful for eliminating contaminating DNA that may be introduced during addition of the DNA template. UV light can therefore be used to sterilize the pre-PCR laboratory. Since this method is based on cross-linking of thymidine residues, the base sequence of the target region plays a role in its success. In addition, the hydration status of the DNA has a significant effect on its UV resistance. As dry-state DNA is more resistant to UV light, UV light is less effective in preventing contamination on dry laboratory surfaces. If UV light is going to be used on master mixes for decontamination, care must be taken to ensure that the dNTPs and enzymes are not damaged by the UV light.
The UV light source can be placed on the laboratory ceiling or bench and can be activated by a device on the exit door as the last person leaving the laboratory closes the outer door. If UV lights are used, UV-induced ozone must be removed by ventilation. Deposits accumulate on the glass of the bulb during irradiation due to the precipitation of oxidation products, and this reduces the effectiveness of the UV system. These deposits should be removed monthly and the performance of the UV bulbs must be strictly monitored. The physical conditions required in molecular laboratories in Turkey are defined by the “Guideline for Physical Infrastructural Standards of Medical Laboratories applying Molecular Tests” published by the Republic of Turkey Ministry of Health. According to this guideline, molecular diagnostic laboratories should have at least two, preferably three rooms, each with a minimum area of 15 square meters, physically separated from each other to allow unidirectional workflow (from preamplification to postamplification) and preferably with separate ventilation systems. These rooms are defined in parallel with those recommended in the literature and international guidelines: a ‘pre-amplification laboratory’ where sample acceptance and nucleic acid extraction are performed, an ‘amplification laboratory’ where target amplification methods are applied, and a ‘post-amplification laboratory’ where post-amplification analysis methods such as electrophoresis and DNA sequence analysis take place. To prevent contamination, the guideline points out the need for a clean and preferably separate airflow, preparation and storage of all reagents and chemicals in their own areas, and the use of separate devices and materials for each laboratory area (freezers, refrigerators, cabinets, centrifuges, water baths, vortex mixers, pipettes, pencils, timers, all kinds of consumables, etc.). It is emphasized that if only two rooms can be reserved for the molecular diagnostic laboratory, preamplification procedures and amplification/postamplification analyses should be performed in separate rooms. For each laboratory, the guideline also requires a seamless, non-porous floor covering, hand-washing sinks, temperature and humidity monitoring, UV irradiation systems (on counter tops and/or room ceilings) used during non-working hours, sufficient storage space, a sufficient number of grounded electrical outlets and uninterruptible power supplies (UPS, generator, etc.) for the laboratory devices, and placement of laboratory equipment to allow unidirectional workflow.

What to Do to Avoid Intermixing and Contamination of Samples

Molecular laboratory tests are generally quite sensitive and specific, providing very precise results. Even so, false positive or false negative results may sometimes occur. Control mechanisms that include verification of the primer and probe sequences, checking and confirming whether the test conditions are optimal, and the use of negative controls should be employed to reduce false results. While amplification is generally a part of the molecular diagnostic method, current nucleic acid amplification methods are very sensitive, with the capability of detecting even a single molecule. Although this seems to be an advantage, it should be kept in mind that a contaminating DNA molecule may also be amplified, causing false positive results.
Therefore, prevention of contamination must be a priority in molecular pathology laboratories. Cross-contamination is one of the sources of error and contamination in the pathology laboratory; it may occur at any stage of tissue processing, such as during macroscopic and/or microscopic evaluation or during DNA extraction, and may cause false positive or negative results. Microorganisms (viruses, bacteria, etc.) may also be transferred from one case to another during these processes. Immunohistochemical staining with ABH blood group antibodies, microdissection, and microsatellite instability analysis can be performed to prevent and detect cross-contamination. As samples from different patients are processed in the same area with recurrent use of several instruments (e.g., microtome blade, water bath) in pathology laboratories, precautions such as using a new blade for each sample, washing the blade with DNA decontamination solution, and/or sectioning an empty paraffin block between samples (the “sandwich model”) are used by various laboratories to reduce the risk of cross-contamination. However, cross-contamination rates have been reported to be around 3% (0-8.8%) despite these precautions. Amplified DNA from positive reactions in a previous test, released when the reaction tubes are opened after amplification, is a source of contamination for subsequent tests. Amplification reactions are also exposed to contamination from other patients’ samples and from target-containing plasmids. Samples may contaminate the laboratory environment during pipetting, and the risk of contamination increases if multiple samples are run together. Positive controls included in the test are also sources of contamination risk. Clothing, laboratory waste and/or uncleaned benches may contain contaminating nucleic acids. To prevent and control contamination, appropriate physical conditions, architectural structure and design, meticulous application of laboratory techniques and environmental control protocols, and a workflow plan are essential. Using separate areas or rooms for the pre-amplification, amplification and post-amplification stages, with separate ventilation systems, is an efficient way to prevent contamination, as discussed in detail in the “Physical Conditions” section above. The risk of contamination is lower in closed-system devices. Only the personnel in charge should be present in the test area. Every area/room should have its own equipment, including laboratory coats and pipettes. Minimizing aerosolization while opening tubes is also necessary to prevent transport between samples. Cleaning before and after each procedure should be carried out using nucleic acid removers; for example, washing with freshly prepared 10% bleach and rinsing with 70% ethanol can be performed. In this way, both biologically hazardous substances and nucleic acids that may be sources of contamination can be removed. Contamination can also be prevented biochemically: if some or all of the thymidine in the reaction products is replaced with uracil during amplification, treating subsequent reactions with enzymes such as uracil DNA glycosylase, which degrade uracil-containing DNA, destroys any carried-over amplicons. In addition, contamination can be prevented or reduced by discarding tubes unopened at the last stage, using positive-displacement pipettes, not talking during tests such as PCR, and regular exposure of laboratory devices to UV radiation.
Aliquoting the reagents for each run is another precaution for the prevention of contamination. Patient samples and positive controls should be added to the reaction tubes last to reduce the risk of nucleic acid carry-over. If positive controls are going to be used, the lowest dilutions should be preferred. Water or DNA-free buffers can be used as negative controls to detect and monitor contamination. The control tube should contain all materials for all stages of the test, like any other sample. A positive result in the negative control indicates the possibility of contamination. The adequacy of chemical sterilization (such as uracil glycosylase protocols) can be checked by incorporating a small amount of amplicon into a negative control. Surface and equipment contamination can be checked by swabbing the laboratory surfaces where the test is carried out with damp filter papers; a positive result in the swab sample indicates the presence of contamination. In addition, higher-than-expected positivity rates of a given test may indicate contamination. As RNA is more reactive than DNA and is vulnerable to RNases, which are present in all cells, prevention of RNase contamination is very important in RNA-based molecular tests. RNases are resistant to metal chelating agents and can persist even after prolonged boiling or autoclaving. The most common sources of exogenous RNase contamination are contaminated buffers and automatic pipettors. In addition, all laboratory surfaces and glassware can be contaminated with RNase from the laboratory personnel’s skin, hair, etc. Wearing gloves and changing them frequently during all stages of the test, using separate laboratory equipment for RNA-based tests, aliquoting small amounts of buffers, using RNase inhibitors (DEPC, etc.), and using RNase-free solutions and tubes are among the laboratory precautions to prevent RNase contamination. Preparation of a documented action plan in case of contamination is recommended. Most laboratories quarantine and/or destroy all contaminated reagents and consumables.

Equipment

Equipment and appliances may differ slightly between molecular pathology laboratories based on their testing profile. Nevertheless, a detailed inventory list is usually required to set up a molecular pathology laboratory and provide standardized testing. It should also be noted that equipment and consumables used in the routine pathology setting must be available in a molecular pathology laboratory as well. It is also important to budget for service contracts for maintenance and repair. Unidirectional workflow should be taken into consideration when organizing and placing the equipment. The list of appliances and equipment required in a molecular pathology laboratory provided by the Republic of Turkey Ministry of Health is also a useful guide. Instructions for calibration and maintenance should be kept in the laboratory as written guidelines. Calibration guidelines must include the schedule for calibration (e.g., daily, monthly, etc.), instructions describing the steps of the calibration procedure, calibration material specifications, preparation and storage conditions, troubleshooting and documentation methods, maintenance guidelines, the schedule for maintenance, instructions for performing maintenance, and troubleshooting guidelines.
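As an illustration only (this structured format is not mandated by any of the guidelines cited here, and the field names are hypothetical), the elements that a written calibration and maintenance guideline should cover could be captured in a simple record such as the following:

```python
from dataclasses import dataclass, field

@dataclass
class CalibrationRecord:
    """Minimal sketch of the elements a written calibration/maintenance
    guideline should document (hypothetical field names)."""
    instrument: str                   # e.g. a thermal cycler or pipette
    calibration_schedule: str         # e.g. "daily", "monthly"
    procedure_steps: list[str]        # steps of the calibration procedure
    calibration_material: str         # calibration material specifications
    storage_conditions: str           # preparation and storage conditions
    maintenance_schedule: str         # schedule for preventive maintenance
    troubleshooting_notes: list[str] = field(default_factory=list)
    documentation_method: str = "signed paper log"  # how results are recorded

example = CalibrationRecord(
    instrument="Thermal cycler (hypothetical)",
    calibration_schedule="monthly",
    procedure_steps=["Run temperature verification plate", "Record block uniformity"],
    calibration_material="Certified temperature verification kit",
    storage_conditions="Kit stored at room temperature, protected from light",
    maintenance_schedule="Preventive maintenance annually by the vendor",
)
```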
Quality Assessment

Quality management is essential in all steps of pathology evaluation (i.e., pre-analytical, analytical and post-analytical) and is a very important component of molecular pathology practice. However, as the details of quality management are beyond the scope of this review, only the basic principles are mentioned here. There are several regulatory guidelines, including standard operating procedure manuals, to be followed to set up and/or manage a molecular pathology laboratory. The presence of errors affecting the accuracy of results in a molecular pathology laboratory should be checked regularly under the supervision of the pathologist (“quality control”; QC) to prevent or minimize erroneous reports and to provide confidence that quality requirements will be fulfilled (“quality assurance”; QA). Turn-around time and test result statistics should be checked and validated using standard validation studies. For internal quality assessment (IQA), the use of control materials is recommended (see previous sections for details). In addition to the internal precautions mentioned in the previous sections, external quality assessment (EQA) must also be performed at given intervals for specific types of tests, as it is the most critical stage of quality management. EQA, a measure of laboratory performance, has been shown to help improve molecular pathology laboratories, and EQA programs are key elements of a laboratory’s QA framework. Regular participation in EQA is needed to verify and improve the quality of testing, as molecular pathology EQA schemes score both the report and the test result. As part of an EQA scheme, participants receive test samples and their results are then reviewed to check for errors. The reports are scored and the participating laboratories gain the opportunity to improve their service. Laboratories across Europe are also required to have accreditation. Accreditation is a process in which an authorized independent body officially recognizes that the laboratory is competent to perform certain tasks, and it may be considered the most effective system for QA, as compliance with ISO standards is checked by accreditation bodies. Both accreditation and participation in EQA are recognized as effective and important tools to improve the accuracy and reliability of molecular testing.
In conclusion, if the results obtained by molecular diagnostic tests are inaccurate due to any of the factors mentioned here, serious problems may arise that adversely affect diagnosis and treatment decisions. As molecular diagnosis has a major role in treatment decisions, especially for cancer patients, the management of the molecular pathology laboratory is of utmost importance. The authors declare no conflict of interest. No funding to declare.
ALLTogether recommendations for biobanking samples from patients with acute lymphoblastic leukaemia: a modified Delphi study
27bb5790-44f6-49c4-b9db-e24e8db7381a
11920285
Cytology[mh]
The ALLTogether consortium has implemented a standard treatment protocol, ALLTogether1, for acute lymphoblastic leukaemia (ALL) in children, adolescents, and young adults across Northern and Western Europe (NCT03911128), expecting to enrol around 6430 patients over a 5-year period. Establishment of this collaborative consortium has opened avenues for a multitude of scientific collaborations, particularly through the biobanking of samples from patients enrolled in the clinical trial. Biobanks play a crucial role in cancer research and personalized medicine, as emphasized by the International Agency for Research on Cancer (IARC), which states that biobanks serve as a cornerstone for the advancement of three expanding areas within biomedical science: (i) molecular and genetic epidemiology (investigating the interplay between genetic and environmental factors in cancer causation in both the general population and familial contexts), (ii) molecular pathology (aiming to develop molecular-driven methods for classifying and diagnosing various cancers) and (iii) pharmacogenomics/pharmacoproteomics, which seeks to elucidate the relationship between an individual patient's genetic makeup or observable characteristics and their response to drug therapies. To facilitate ALLTogether scientific research projects, the consortium has strongly recommended that at least 50% of samples from ALLTogether1 study patients who consented to storage of biomaterial should be reserved for collaborative ALLTogether studies, ensuring equitable access to biobanked resources across the consortium and the opportunity to study rare ALL subtypes in the context of a uniformly treated patient population. The ALLTogether1 protocol contains several sub-studies that require the biobanking of various samples (leukaemic and non-leukaemic) from peripheral blood (PB), bone marrow (BM) and cerebrospinal fluid (CSF) at critical time-points: diagnosis, day 15, post-induction therapy (day 29), post-consolidation therapy (day 71) and relapse. The ALLTogether Scientific Committee is in charge of evaluating collaborative research proposals requiring ALLTogether biobanked samples. As ALLTogether research projects inherently involve more than one regional study group of the consortium, and therefore require samples from several national biobanks, developing guidelines seemed crucial. To address this, the ALLTogether consortium established a Biobank Committee, comprising national representatives from each country participating in the trial. The mission of the Committee is to outline recommended practices for biobanking materials from ALL patients, with the aim of enhancing the quality of stored materials for research purposes, promoting uniformity to facilitate joint analysis of rare subtypes, and contributing to improved patient treatment through research enabled by the biobanks. Current recommendations in oncology are not disease-specific. The absence of specific recommendations for leukaemia further emphasizes the necessity of developing recommendations for biobanking across the consortium. To achieve this objective, the ALLTogether Biobank Committee undertook a modified Delphi study involving a panel of national experts from a range of disciplines. The purpose of the survey was to establish best practice for all aspects of biobanking for ALL, free from constraints such as funds or storage space.
Writing group and expert panel

The ALLTogether Biobank Committee comprises representatives from the countries or study groups participating in the clinical trial (Belgian Society of Paediatric Haematology Oncology (BSPHO), Nordic Society of Paediatric Haematology and Oncology (NOPHO), Société Française de lutte contre les Cancers et leucémies de l’Enfant et de l’adolescent (SFCE), Co-operative study group for childhood acute lymphoblastic leukemia (CoALL), The Paediatric Haematology/Oncology Association of Ireland (PHOAI), Princess Máxima Center for Pediatric Oncology (PMC), Grupo Português de Leucemias Pediatricas, member of the Sociedade de Hematologica e Oncologia Pediatrica (GPLP-SHOP), Sociedad Española de Hematología y Oncología Pediátricas (SEHOP) and The United Kingdom ALL Group (UKALL)). Within the Committee, a dedicated writing group was tasked with developing questions for the Delphi survey. This group comprised 9 members from 6 countries (Belgium, Finland, Ireland, Portugal, The Netherlands, and The United Kingdom) with diverse roles, including clinicians, researchers, laboratory scientists, and biobank coordinators. The panel of experts ( n = 53) consisted of national experts (3 from Portugal; 4 each from Germany and Ireland; 5 each from Belgium, Finland, Spain and Sweden; 6 from The Netherlands; and 8 each from France and The United Kingdom), nominated by their ALLTogether consortium regional representatives, including members of the Biobank Committee who did not participate in drafting the surveys. National experts represented biobank technical staff as well as users from a range of disciplines, including laboratory technicians, biobank coordinators, clinicians and biomedical scientists.

Survey design

To derive consensus-based recommendations, a modified version of the Delphi method [ – ] was employed. The modification consisted of predefined items and options elaborated by the writing group instead of an open-ended questionnaire. The voting process by the expert group was facilitated by the digital survey and reporting platform Webropol. The study encompassed two Delphi rounds (Fig. ). Round 1, split over questionnaires 1A and 1B, comprehensively covered all aspects of biobanking. Round 1A of the survey comprised questions relating to the BM aspiration procedure, sampling, viable cell storage, and DNA and RNA storage. Round 1B questions covered plasma, serum, CSF, germline material, sample processing data and quality monitoring. Round 2 revisited and refined some previous questions to clarify issues arising during the first voting rounds and to arrive at consensus. Participants were asked by email to participate through a web link to the survey or a QR code and were urged to answer only the questions relevant to their expertise. The voting was anonymous and independent per survey, and answers were visible only to the moderators of the survey (AT and JL). The anonymized answers were available to all writing group members (Supplementary Table ).

Consensus scoring

For Rounds 1A and 1B, the Level of Agreement (LoA) score was assessed on a Likert scale of 1 (completely disagree) to 5 (completely agree). Participants had the option to abstain from answering if they lacked a particular opinion or expertise. The abstention rate was calculated per recommendation by dividing the number of participants with no opinion on the recommendation by the number of participants who answered the question.
The LoA score was calculated by dividing the number of participants answering agree/strongly agree (4/5) or disagree/strongly disagree (1/2) by the total number of participants answering the recommendation. Consensus was considered achieved when ≥66% of the participants either agreed/strongly agreed (4/5) or disagreed/strongly disagreed (1/2). For Round 2, direct questions were posed to seek consensus, with Yes/No/No opinion options as well as multiple-choice questions to gain further insight into preferred methods. Participants were encouraged to provide comments alongside their responses, as these serve as a valuable source of information and advice. The questions and answers per participant for the three surveys are presented in Supplementary Table .
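As a worked illustration of the scoring rules described above (not part of the published study; the function names and example votes are hypothetical, and the LoA is interpreted here as the larger of the two voting directions), the calculations can be expressed as follows:

```python
def loa_score(votes: list[int]) -> float:
    """Level of Agreement: proportion of respondents voting agree/strongly agree (4/5)
    or disagree/strongly disagree (1/2), taking the larger of the two directions."""
    answered = [v for v in votes if v in (1, 2, 3, 4, 5)]
    if not answered:
        return 0.0
    agree = sum(v >= 4 for v in answered)
    disagree = sum(v <= 2 for v in answered)
    return max(agree, disagree) / len(answered)

def abstention_rate(no_opinion: int, n_answered: int) -> float:
    """Participants with no opinion divided by the number who answered the question."""
    return no_opinion / n_answered if n_answered else 0.0

def consensus_reached(votes: list[int], threshold: float = 0.66) -> bool:
    """Consensus: at least 66% agreed/strongly agreed or disagreed/strongly disagreed."""
    return loa_score(votes) >= threshold

# Hypothetical item: 30 answers, 24 of them 4 or 5 -> LoA = 0.80, consensus reached.
example_votes = [5] * 14 + [4] * 10 + [3] * 3 + [2] * 2 + [1] * 1
print(round(loa_score(example_votes), 2), consensus_reached(example_votes))  # 0.8 True
```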
Biobanking infrastructures

The Biobank Committee initially gathered information on the national or study group biobanks within the consortium, focusing on their structures, stored sample types, sample processing protocols, and sample processing data. This revealed a diverse range of structures among the 10 countries involved in 9 study groups (Finland and Sweden are part of the NOPHO study group) (Supplementary Table ). Most biobanks are centralized within countries (7/10), while three countries have multiple biobank locations. In countries with multiple biobanks, these biobanks are attached to a diagnostic laboratory. Central biobanks are either affiliated with a diagnostic laboratory (3/7) or operate as a separate entity (4/7). The types of stored material vary across countries. The only commonality identified in current practice is the storage of viable frozen cells, which can be used for functional analyses and for DNA, RNA and protein extraction.

Delphi survey

The panel was asked to answer questions in 7 categories related to the biobanking of ALL samples. The participation rates of the three successive surveys were 70% (37/53), 64% (34/53) and 60% (32/53), with a similar distribution of panel member expertise (Fig. ). Among all items of Rounds 1A and 1B, 48 out of 63 (76%) reached a positive consensus, while 15 out of 63 (24%) did not reach consensus. Subsequently, the writing group reformulated the latter questions and developed 24 new questions (Round 2) to enhance clarity and to obtain additional information on the reasons for the non-consensus. To this end, questions with multiple options, along with the possibility to add comments, were implemented. The revised version obtained an affirmative consensus for 18 items, while 6 items still did not reach consensus. The result for each item is summarized in Table and discussed below per category, highlighting additional comments provided by the panel. The geographical spread of responders is illustrated in the accompanying figure.

Procedure and collection

Ensuring a high quality of stored samples for research starts with obtaining high-quality fresh samples.
Participants strongly agreed on the importance of avoiding haemodilution when large BM sample volumes are required; consensus was reached on reinserting the needle at a different angle through the original puncture site. No agreement was reached on the preferred type of sample collection tubes, with most votes going to sodium heparin and ethylenediaminetetraacetic acid (EDTA). This in part reflects the BM aspiration procedure detailed in the trial protocol (Supplementary Method ), in which different tubes were deemed acceptable depending on local or national practices. Also in Round 2, where the question was reformulated to ask about the best type of tube for processing viable-frozen cells for functional studies, no consensus was reached. We therefore conclude that different types of anti-coagulant tubes (sodium heparin, EDTA or acid citrate dextrose (ACD)) are used to obtain viable frozen cells, depending on the preferences and processing procedures of the local or national biobanks. For instance, heparin has been found to interfere with nucleic acid amplification assays, which is mitigated through density gradient centrifugation and subsequent cell washing steps. If a research study involves the production of immortalised cell lines derived from PB lymphocytes, it is advised to prioritise ACD anti-coagulation. Despite EDTA posing challenges for cytogenetic analysis, it can provide blood fractions suitable for a wide range of DNA-based and protein assays. Importantly, the panel agreed that the type of collection tube should be noted, as well as the time between sampling and processing, to allow researchers to evaluate whether the biobanked material is suitable for their research application.

Biobanking of cells

The panel was asked about the desired time-points for collecting BM/PB cells, methods for cell processing, and storage conditions. The panel agreed that cells should be collected at diagnosis, relapse, end of induction and end of consolidation. Additionally, the panel agreed on storing infiltrated extramedullary samples, such as pleural and pericardial fluids, at diagnosis and relapse. Experts did not feel that other time-points were useful for routine biobanking. If RNA and/or DNA are not extracted by the biobank, participants agreed on prioritizing the storage of viable cells rather than keeping cell pellets for future extraction. The panel agreed that all cells not used for diagnostic purposes should be biobanked. Furthermore, if fewer than 5 million cells are recovered after density gradient centrifugation and separation, participants also supported storage as viable cells, since this leaves all options open for functional studies such as drug sensitivity testing and patient-derived xenografts. This is valuable information, leading to the strong recommendation that biobanks should prioritize the storage of viable cells. On the number of cells to store per vial, participants frequently suggested not exceeding 20 million cells, both for optimal recovery and to allow multiple applications. Example algorithms for balancing the number of vials against the number of cells per vial are provided (Supplementary Method ); an illustrative sketch is also given below.
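The following is a purely illustrative sketch of such a balancing rule. It is not the algorithm from the ALLTogether1 Supplementary Method; the 20 million-cell ceiling comes from the panel's suggestion, while the 5 million-cell lower bound and the even-fill strategy are assumptions made for the example.

```python
import math

MAX_CELLS_PER_VIAL = 20_000_000   # ceiling suggested by the panel
MIN_CELLS_PER_VIAL = 5_000_000    # assumed lower bound for a separately useful aliquot

def plan_vials(total_cells: int) -> list[int]:
    """Distribute the available cells over as few vials as possible without
    exceeding the per-vial ceiling, filling the vials evenly."""
    if total_cells <= 0:
        return []
    if total_cells < MIN_CELLS_PER_VIAL:
        # Low-yield sample: store everything as a single vial of viable cells.
        return [total_cells]
    n_vials = math.ceil(total_cells / MAX_CELLS_PER_VIAL)
    base, remainder = divmod(total_cells, n_vials)
    return [base + (1 if i < remainder else 0) for i in range(n_vials)]

print(plan_vials(4_000_000))    # [4000000] -> one vial, keeps all options open
print(plan_vials(35_000_000))   # [17500000, 17500000] -> two evenly filled vials
print(plan_vials(70_000_000))   # four vials of 17,500,000 cells each
```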
Concerning the processing of BM aspirate or PB samples to obtain cells, there was agreement on the use of density gradient centrifugation. The attitude towards dealing with visible erythrocyte contamination in the cell pellet after density gradient centrifugation and separation did not reach consensus. Some participants considered that minimising the number of protocol steps, so as not to lose cells, is more important than purity, or that visible erythrocyte contamination is too subjective a criterion. This is in sharp contrast to others, for whom this step is part of routine practice to avoid interference from haemoglobin in colorimetric assays (including MTT) and from debris in flow cytometry measurements. Moreover, DNA and RNA from (immature) red blood cells, and DNA sticking to the surface of mature red blood cells, interfere with DNA and RNA quantification. After density gradient centrifugation, the panel agreed that cell viability should be assessed but did not reach consensus on a single preferred technique; flow cytometry and trypan blue were equally accepted. There was strong agreement on implementing a consensus-based protocol to standardise cryopreservation procedures. The ALLTogether1 laboratory manual contains a standard operating procedure for cryopreservation of viable cells needed for an ALLTogether1 protocol-related sub-study (Supplementary Method ). This procedure was optimized for the recovery of viable lymphoblasts after thawing; the most critical step is that the cell suspension and freezing medium must be cold (ice water, 0–4 °C) and the cryovials pre-cooled in a −20 °C freezer to minimize the toxic effect of DMSO. Laboratories that implemented this procedure reported improved viability upon thawing, from typically <30% to >80% (personal communication to Judith Boer from the Máxima Biobank and the LBL2018 protocol Biobank).

Biobanking of plasma and serum

Currently, plasma and serum from BM/PB samples are stored in the majority of ALLTogether study group biobanks (Supplementary Table ) and could be useful for the study of cytokines, for example. Experts agreed that plasma and serum should be stored at diagnosis, relapse, end of induction and end of consolidation. Experts did not consider other time-points critical within the current trial strategy. However, the majority of experts abstained from statements 3.6 and 4.6 (abstention rates of 59% for plasma and 63% for serum) in Table , possibly suggesting a lack of experience with research on this material. Some experts suggested that additional time-points might be valuable, particularly in the context of cell-based immunotherapies to evaluate response. One expert proposed storing plasma monthly after consolidation to enable the discovery of biomarkers predicting relapse.

Biobanking of cerebrospinal fluid (CSF)

Optimizing treatment of the central nervous system (CNS) remains a challenge in ALL. Research focusing on CNS-ALL is needed to provide a better understanding of disease biology as well as putative drug targets and biomarkers. CSF-Flow, a sub-study within the ALLTogether1 trial, aims to identify biomarkers that enable accurate prediction of relapse risk. Hence, the establishment of a robust CSF biobank is deemed indispensable. Experts reached consensus that CSF should be stored at diagnosis and relapse, and at follow-up regardless of infiltration at diagnosis. The day 15 and day 29 time-points are obligatory as part of a sub-study in the current trial. Experts agreed that after centrifugation, both the supernatant and the pellet should be stored. Storage of the pellet (as non-viable cells) would allow for DNA, RNA, or protein extraction. There was agreement not to biobank CSF at other follow-up time-points.
There was no clear consensus on the storage method; however, 86% of answering experts considered −80 °C acceptable, while 55% preferred liquid nitrogen. Multiple experts considered both options, possibly reflecting the dependence on the biological questions posed. Supplementary Table summarises CSF storage practices across the consortium. Biobanking of germline material Whole exome sequencing (WES) and whole genome sequencing (WGS) are frequently used in research for the detection of genomic aberrations and are starting to be used as routine diagnostic tools. DNA isolated from germline material is critical for discrimination between somatic and germline variants. Gathering information across the consortium showed that one country has successfully implemented skin biopsies alongside WES/WGS into routine workflows, minimizing patient burden while ensuring timely results. The panel agreed on the necessity to store germline material, with the preferred source being a skin biopsy (77% agreement). Remission samples (<1% leukaemic cells) were also deemed acceptable, while buccal swabs were considered the least preferred option. The writing group suggested that direct DNA extraction from skin biopsy, without fibroblast culture, would be the preferred option to expedite WGS/WES analysis by avoiding culture time, notwithstanding the risk of low-level contamination by leucocyte-derived DNA. However, the panel found several sources of germline DNA equally suitable, including direct DNA isolation or fibroblast culture from skin biopsy, remission PB or BM. This likely reflects divergent local practices among the participants, such as WGS/WES not being routine diagnostics or skin biopsy not being a procedure performed in every centre. Indeed, information gathered from national biobanks indicated that this source of material is not routinely stored across the consortium. Skin biopsy with cultured fibroblasts is the preferred method for confirmation of germline predisposition to haematological malignancies. Cell culture offers advantages such as increased DNA yield and enables cell storage for potential functional studies, but takes a few weeks and can fail in rare cases. For transplanted patients, the panel agreed (68% agreement) that DNA of the stem cell donor should be retrieved and stored in the biobank. Participants highlighted the requirement of donor consent for storage. Quality and data monitoring The experts agreed (73%) on the necessity for biobanks to obtain ISO accreditation to ensure high sample quality. The standard ISO 20387:2018 on “Biotechnology — Biobanking — General requirements for biobanking” provides guidelines and requirements for the establishment, operation, and management of biobanks. This standard outlines procedures related to quality management, sample collection, storage, retrieval, and transportation to ensure the integrity and traceability of biological specimens. ISO/TR 22758:2020 “Biotechnology - Biobanking - Implementation guide for ISO 20387” provides support for implementing the requirements of ISO 20387:2018. Currently, only one ALLTogether-affiliated biobank has obtained ISO accreditation. The survey did not include questions specifically addressing infection control monitoring, such as Mycoplasma infection, since ISO 20387:2018 does not require specific protocols.
Mycoplasma is a well-known contaminant in established cell culture settings, altering the phenotypic and functional characteristics of cells in vitro, but has not been described in short-term cultures of primary cells. Within the consortium, biobanks handling primary patient samples do not have a Mycoplasma testing protocol in place. The experts reached strong agreement (91%) on the recommendation for internal quality monitoring of biobanked material. Suggestions on how to perform this included at least a yearly review of nucleic acid integrity (DIN/RIN values) and cell viability pre-freezing and post-thawing. The panel strongly agreed (97% agreement) that a valuable method to assess the quality of biobanked samples would be to request feedback from researchers receiving cryopreserved samples on the number of recovered cells and the viability percentage after thawing. An example of an annual report form from VIVO Biobank in the UK can be found at https://vivobiobank.org/researchers/applying . Another example, a satisfaction survey template developed by the Biobanking and BioMolecular Resource Research Infrastructure – European Research Infrastructure Consortium (BBMRI-ERIC), can be found at https://www.bbmri-eric.eu/services/quality-management/ . The Biobank Committee initially gathered information on national or study group biobanks within the consortium, focusing on their structures, stored sample types, sample processing protocols, and sample processing data. This revealed a diverse range of structures among the 10 countries involved in 9 study groups (Finland and Sweden are part of the NOPHO study group) (Supplementary Table ). Most biobanks are centralized within countries (7/10), while three countries comprise multiple biobank locations. In countries with multiple biobanks, these biobanks are attached to a diagnostic laboratory. Central biobanks are either affiliated with a diagnostic laboratory (3/7) or operate as a separate entity (4/7). The types of stored material vary across countries. The only commonality identified in current practice is the storage of viable frozen cells that can be used for functional analyses and for DNA, RNA and protein extraction. The panel was asked to answer questions in 7 categories related to biobanking of ALL samples. The participation rates of the three successive surveys were 70% (37/53), 64% (34/53) and 60% (32/53), with a similar distribution of panel member expertise (Fig. ). Among all items of Round 1A and 1B, 48 out of 63 (75%) reached a positive consensus, while 15 out of 63 (24%) did not reach consensus. Subsequently, the writing group reformulated the latter questions and developed 24 new questions (Round 2) to enhance clarity and obtain additional information on the reasons for the non-consensus. To this end, questions with multiple options, along with the possibility to add comments, were implemented. The revised version obtained an affirmative consensus for 18 items, while 6 items still did not reach consensus. The result for each item is summarized in Table and discussed above per category, highlighting additional comments provided by the panel. The geographical spread of responders is illustrated in .
The establishment of the ALLTogether consortium has created a unique framework for scientific collaboration between study groups across Northern and Western Europe. Successful translation of basic scientific studies into improved patient outcomes relies on access to large and well-annotated cohorts. Establishing best practice for biobanking for ALL is crucial to ensure storage of valuable biospecimens across the ALLTogether consortium. Using a modified Delphi method, we provided a consensus document on best biobank practices within the consortium. The surveys resulted in several strong recommendations (summarised in Fig. ). First, biobanks should allocate effort and resources to store all viable cells from diagnosis, relapse (PB, BM and extra-medullary samples) and follow-up time-points post-induction and post-consolidation, enabling retrospective extraction of DNA, RNA, and (phospho-)proteins if not initially performed. Various cell storage scenarios could be considered, balancing the number of vials and the number of cells per vial needed for different applications: small vials (5–10 million cells) for backup diagnostics, RNA/DNA/protein isolation or generation of patient-derived xenografts (PDX); medium-sized vials (10–20 million cells) for most experiments, including single cell sequencing; and larger or multiple vials for functional studies with multiple culture conditions, e.g. drug library screening. Using proven cryopreservation methods, a cell recovery rate between 50% and 80% can be achieved, which contributes to optimal use in research. In addition to cell storage, the collection and storage of plasma and serum is highly recommended. A third strong recommendation of the survey is the storage of CSF (supernatant and pellet) at diagnosis, relapse and the follow-up time-points day 15 and post-induction. This facilitates biomarker research on CNS-ALL, which is the focus of a sub-study of the ALLTogether1 trial. While this approach preserves an uncontaminated supernatant for future studies, it limits the scope of research requiring viable cells. However, since leukaemic cells often adhere to stroma, studying soluble biomarkers such as leukemic-derived vesicles, secreted proteins and metabolites, or cell-free DNA may prove more relevant for understanding CNS disease characteristics. Given the current trial strategy, experts did not consider additional collection time-points essential (statements 2.3, 3.6 and 4.6 for cells, DNA, RNA, serum and plasma). Without a clearly defined strategy and research plan, experts felt that collecting additional samples, such as after end of treatment, exclusively for biobanking would not be justified, even if they could provide valuable information, for example regarding long-term treatment effects and potential late toxicities.
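As a purely illustrative sketch of the vial-balancing logic described above (the thresholds, tier labels and function below are hypothetical assumptions and do not reproduce the algorithm in the Supplementary Method), a biobank might split a recovered cell yield into vials as follows:

# Illustrative sketch only: split a recovered cell yield (in millions of cells) into vials
# following the tiers described above (small 5-10M for backup, medium 10-20M for most
# experiments, any remainder stored rather than discarded). Thresholds are hypothetical.
def allocate_vials(total_cells_millions):
    vials = []
    remaining = total_cells_millions
    if remaining >= 5:                      # reserve one small back-up vial first
        vials.append(("small_backup", 5))
        remaining -= 5
    while remaining >= 15:                  # fill medium vials for most downstream work
        vials.append(("medium", 15))
        remaining -= 15
    if remaining > 0:                       # keep any remainder as viable cells
        vials.append(("remainder", round(remaining, 1)))
    return vials

for yield_m in (4, 18, 65):
    print(yield_m, "million cells ->", allocate_vials(yield_m))

Such a scheme keeps every vial under the suggested 20 million-cell ceiling while still storing small yields (below 5 million cells) as viable cells, in line with the panel's recommendation.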
Other significant recommendations emphasize the importance of biobanks receiving feedback from researchers on sample quality and on the data generated from these samples. Quality metrics would help biobanks review their practices and also inform future researchers using the same samples. Another important aspect of user feedback is the acknowledgement of biobank resources in publications and presentations, as recognising the valuable contribution of biobanks helps to ensure their sustainability. In addition, the value of a specimen increases significantly with the amount of information available about it. Our expert panel recommended that biobanks document the methods applied to biobanked samples, such as sequencing, enabling data sharing with other researchers when original materials are unavailable. This approach enhances transparency and facilitates broader access to valuable research resources and findings. The Biobank Committee's future endeavours also involve incorporating PDX samples generated by ALLTogether research projects. These samples serve as valuable preclinical cancer models, crucial for translational research aimed at validating the therapeutic effectiveness of compounds or exploring novel therapeutic strategies, thus advancing precision medicine initiatives. Six statements from the survey failed to reach a consensus, despite attempts at reformulation or clarification in the second round. These related to questions about collection tubes, cell processing for cryopreservation, and the source of germline DNA. The rate of panel members giving ‘no opinion’ was above 20% for these questions, suggesting that specific expertise is required to answer them. While our survey explored many aspects of essential materials and time-points for biobanking of ALL, other crucial aspects of biobanking deserve attention, such as ethics, legal considerations, operations (small, centralised or multisite biobanks), and financial sustainability. These elements were beyond the scope of this specific investigation and may require a different range of specialties represented on the panel. Ultimately, while the survey aimed to establish harmonized guidelines, we recognize the need for flexibility in their application to ensure feasibility and equity across diverse healthcare systems. In high-resource settings, biobanks may have the infrastructure, funding, and expertise to implement the full suite of recommendations, including advanced protocols such as the collection of skin biopsies for germline material. In contrast, in low-resource settings, logistical and financial barriers may necessitate prioritizing less resource-intensive practices, such as the use of remission samples for germline material when WES/WGS is not yet routinely implemented. Addressing these disparities will require tailored strategies and ongoing collaboration between biobanking networks. Our recommendations could serve as a roadmap for identifying critical gaps in current practices and building a case for additional funding or resources. Engaging stakeholders, including hospital administrations, policymakers, and funding bodies, to highlight the importance of biobanking in advancing research and improving patient outcomes may further support the implementation of these practices even in challenging settings. In the context of precision and personalized medicine, it is important for biobanks to transition towards a patient-centred approach.
The ALLTogether consortium has actively engaged Patient and Public Involvement (PPI) representatives to ensure their meaningful contribution and feedback on various aspects, including research projects, sample availability, clinical unmet needs, and ethical and legal considerations surrounding the clinical trial. Several national biobanks within the consortium have Patient Representatives integrated into their committees. The comparison of current practices of ALLTogether consortium-related biobanks facilitated knowledge sharing and fostered collaboration among biobanks. The Delphi survey across the consortium resulted in recommendations that could serve as guidelines for initiating or updating procedures based on consensus, to facilitate collaborative research studies on samples from patients with ALL. Supplemental information Appendix 1: List of Members of the Biobanking Committee of ALLTogether consortium Supplemental Table 1: Questions and answers per participant of the three surveys (Excel)
Targeting pan-essential pathways in cancer with cytotoxic chemotherapy: challenges and opportunities
1434a6c1-1e05-46c5-941f-bd0c01e93374
10435635
Internal Medicine[mh]
Cytotoxic chemotherapy, also referred to as conventional or classical chemotherapy, is a central pillar of cancer treatment and has been for many decades. These therapies, particularly when used in combination, have high clinical impact and continue to transform the lives of cancer patients on a daily basis. This can sometimes be unappreciated within the pre-clinical cancer research community, in which we often present these therapies as a group of non-specific cellular poisons, especially when contrasted with new molecularly targeted therapies in this era of precision oncology. There are of course spectacular success stories that motivate an ever-expanding list of drugs to enter clinical studies, but we have a disproportionately high failure rate of these new therapies. Whilst the reasons for this are many and increasing efforts are focused on understanding this, it is important to note that most agents that do successfully progress through clinical testing do not actually displace standard-of-care therapies. Instead, they are used in combination with standard-of-care, or as an option for those patients who have exhausted previous treatment options. We should also not neglect that new agents are expensive, and many healthcare systems and patients will not have access to them, relying instead on the more affordable alternative of conventional chemotherapeutic agents. A recent survey asked frontline oncologists across the globe for a list of the top 20 cancer medicines deemed essential to their practice, and of these, 12 were cytotoxic agents. This is not written as a criticism of developing new molecules or therapeutic modalities, as this is vital, but rather as an argument to reassess the importance of cytotoxic chemotherapies and, in particular, the importance of focusing research efforts upon improving our basic understanding of how these drugs work. Knowledge gained could provide the scientific basis for refining the use of these therapies, which could have immediate clinical impact worldwide. Furthermore, an argument could be made that one factor contributing to the disproportionately high failure rate of new targeted therapies is a misunderstanding of how current standard-of-care therapies, principally cytotoxic chemotherapies, work as medicines. Understanding the mechanistic principles underlying successful treatments, even those which are half a century old, is critical for improving new therapies. Thus, in this article, I wish to reaffirm the importance of improving our basic understanding of these therapies, together with highlighting several unanswered questions and current approaches that can be employed towards refining their use. Cytotoxic chemotherapy is a broad term which encompasses an array of anticancer drugs with activities in many distinct malignancies. Often, this term is used to associate therapies with non-specific mechanisms of action, so-called “dirty drugs”, considered to be generally cytotoxic to highly proliferative tissues, with this being the main basis of the therapeutic window. We now know that this is a vast oversimplification, as will be discussed below; however, this (mis)information is constantly perpetuated in our research literature and in the information given to cancer patients.
What these agents have in common is that they were identified somewhat empirically (although phenotypic rationales for cancer selectivity existed), using then state-of-the-art approaches, and were shown to have effects in pre-clinical cancer models before progressing (often rather quickly) into patients; however, their molecular mechanism of action was not defined at that time. This is often discussed in contrast to the traditional view of targeted therapies, in which development starts with a (hopefully) validated target in mind, which is far from trivial, and molecules are subsequently identified to modulate this target in cancer cells; ideally, the target and anti-cancer mechanism are therefore defined from the outset (informing mechanism-based use of the therapy). This is despite a substantial number of new therapies resulting from phenotypic drug discovery. Looking from our current standpoint, many decades after the identification and subsequent introduction of many chemotherapeutics into the clinic, we have vastly increased our understanding of their pharmacology together with the fundamentals of cancer biology. We now know that many of these agents have distinct molecular mechanisms of action, targeting known cancer cell dependencies or perturbing essential biochemical pathways, which is also why these agents have more recently been referred to as targeted cytotoxic therapy. Antimetabolites, a group of chemotherapies that were amongst the first to show clinical success in treating cancer, are an excellent case study for highlighting the distinct molecular mechanisms of action of chemotherapeutics in targeting well-established cancer cell dependencies (even within a single drug class). Nucleotide biosynthesis can be regarded as a non-oncogene addiction of cancer cells, as DNA building blocks (deoxynucleoside triphosphates, dNTPs) are required to fuel elevated genome duplication and repair. Antimetabolites, which are synthetic mimics of nucleosides or folate, can be potent and specific inhibitors of enzymes within these pathways and thus starve cancer cells of specific substrates for DNA synthesis. Additionally, many of these compounds can directly perturb DNA metabolism through distinct molecular mechanisms, for instance by slowing/stalling the DNA synthetic reaction, inducing lesions to trap cancer cells in futile DNA repair cycles, or by causing DNA–protein crosslinks. Thus, this family of compounds, via distinct mechanisms, effectively exploits two valid anticancer targets: DNA precursor metabolism and genomic integrity. Similar examples can be taken from other classes of cytotoxic chemotherapy, with genomic integrity being a common target. Alkylating agents can induce a spectrum of base modifications which are metabolized and repaired in distinct ways, whilst platinum-based agents crosslink DNA strands to inhibit normal DNA function, both allowing exploitation for tumor cell killing. Topoisomerase poisons trap the enzymes responsible for releasing torsional strain during DNA metabolism on the DNA molecule during catalysis, inducing strand breaks coupled with a DNA–protein crosslink, a potent cytotoxic lesion. Specifically, topoisomerase I is the cellular target of camptothecins, which act at the level of the DNA–topoisomerase I complex and, through stabilization of this complex, stimulate DNA cleavage. Topoisomerase II poisons (e.g., anthracyclines) act by inhibiting the religation of DNA–topoisomerase II complexes, whereas others induce their formation.
To summarize, these agents induce distinct DNA lesions that are metabolized by cancer cells in specific ways (via replication, transcription, and/or repair), dependent upon cellular context, with specific cellular outcomes. However, despite the detailed molecular mechanisms of action we have mapped out over several decades of research, there is much left to be understood which can impact upon the clinical use of these therapies, from further refinement of those mechanisms to, more broadly, how these therapies work as medicines. The name for this section is taken from a thought-provoking perspective article which highlights a largely unanswered question in the cancer research community: how do cytotoxic chemotherapies work as medicines? As overviewed in the previous section, we now have a detailed understanding of the molecular underpinnings of the mechanism of action for many of these therapies from work in pre-clinical cancer models (which continues to be refined through state-of-the-art experimental approaches, exemplified in refs ), but what is the relationship of this understanding to their mechanism of action as medicines? The “proliferation rate paradox” questions the widely perpetuated view that cytotoxic chemotherapies are effective anti-cancer medicines owing to their ability to target highly proliferative tissues, which cancers are typically thought to be, thus offering a therapeutic window owing to slower-growing non-malignant tissues. Non-malignant tissues often responsible for dose-limiting toxicities of cancer drugs can be the bone marrow and gut crypts, those tissues with the highest cellular turnover, with doubling times for the bone marrow, for instance, reported to range from 17 h to 3 days. However, tumor doubling times, which vary widely depending upon tumor type (and can vary at different locations within the same tumor), can range from one week to over a year. This discrepancy is by no means a new discovery; as pointed out by Mitchison, studies in the 1970s began asking the same question and assembled data to provide answers. And over the subsequent years, several articles have highlighted this problem again and compiled evidence from various sources. Tumor doubling rates will be a measure of both cell proliferation and cell death, but it is clear that tumors can be very slow-growing, and analysis of the proportion of S-phase cells in tumors also supports this, as does the striking array of cell-cycle phases observed in a recent analysis of multiple human cancers. This is potentially further supported by the limited clinical utility of nucleotide-derived positron emission tomography (PET) tracers to visualize tumors (as these are dependent upon replicating cells), as highlighted by Yan and colleagues. In addition, a recent analysis of publicly available data from the Human Protein Atlas of Ki67 staining (a proliferation marker) in normal and malignant tissue also underscored that dose-limiting non-malignant tissues can have a higher proliferative index than malignant counterparts. This is one of the suggested reasons for the failure of some cell-cycle-targeted therapies (such as mitotic kinase inhibitors) in clinical studies, as, if drugs kill solely based upon proliferation rate, on-target dose-limiting toxicity will prevent efficacy.
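To make explicit how proliferation and cell death jointly set the observed doubling time, a minimal exponential-growth relation (a standard textbook formulation, not a formula taken from the cited studies) is:

\[ N(t) = N_0\, e^{(k_p - k_d)\,t}, \qquad T_d = \frac{\ln 2}{k_p - k_d} \]

where \(k_p\) and \(k_d\) are the per-cell proliferation and death rates. A tumor in which a high \(k_p\) is nearly balanced by \(k_d\) can therefore show a volume doubling time of months despite containing many cycling cells, whereas bone marrow, with far lower net cell loss, doubles in days.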
Considering the above information, if cytotoxic chemotherapies do kill cells based solely upon proliferation rates, how would they achieve selectivity for malignant versus non-malignant tissue (especially if cell-cycle kinase inhibitors cannot)? Rather than solely proliferation rate, there are likely several reasons why cytotoxic chemotherapies can achieve therapeutic windows, overviewed in Fig. , and perhaps (to some extent) these reasons will be specific to individual classes of cytotoxic chemotherapy owing to their distinct molecular mechanisms of action. DNA repair and cell-cycle checkpoint proficiency have been suggested to account for the therapeutic window of some cytotoxic chemotherapies. A common target for many of these drugs is the DNA molecule, with different drugs inducing different DNA lesions; cells thereby rely on distinct repair pathways to remove the resulting DNA damage in an attempt to restore genomic integrity. Cancer cells can often be defective in repair pathways, which, although it fuels cancer development through acquisition of mutations and genomic rearrangements, also renders these cells differentially sensitive to DNA damaging agents compared to their non-malignant counterparts. Within pre-clinical research there are a plethora of examples, including thiopurine cytotoxicity being dependent upon proficient mismatch repair, inactivation of PrimPol-mediated DNA damage tolerance sensitizing to inter-strand crosslinking agents, and dependency upon proficient homologous recombination (HR) repair to fix cytotoxic DNA double-strand breaks caused by replication of chemotherapy-induced DNA lesions. Clinical evidence is particularly strong for the effectiveness of platinum agents in HR-defective (e.g., BRCA-mutant) cancers. Cancer cells also have higher levels of replication stress, and thus agents which exacerbate this (as many cytotoxic chemotherapies do) could selectively kill cancer cells over non-malignant cells. With regards to antimetabolites, differences between normal and cancer cell metabolism have been suggested to be responsible for the clinical success of these therapies. It is well established that cancer cells rewire their metabolism to support biomass production, which can be dependent upon several factors, and it is possible these changes render cells selectively sensitive to antimetabolite therapies. This is consistent with the initial phenotypic observations that spurred the development of some of these drugs (e.g., increased uptake and use of uracil from the medium by cancer cells being the basis of 5-fluorouracil development), and more recent examples also exist. For instance, a recent study highlighted that a subset of lung cancers are dependent upon pyrimidine salvage pathways, which explained why pre-clinical models were not sensitive to inhibition of de novo pyrimidine synthesis via inactivation of the enzyme DHODH (as de novo synthesis and salvage are the complementary pathways that can supply pyrimidine nucleotides). The pyrimidine analogs fluorouracil and gemcitabine can be effective in these cancers, and these therapies are prodrugs, requiring activation by these same pyrimidine salvage pathways. An extension of this argument was recently made by Yan and colleagues, who pointed out that many of the cytotoxic chemotherapies used in the clinic are actually prodrugs.
In addition to antimetabolites, this includes methylating agents, nitrogen mustards, and platinum-based agents (for a thorough drug list see ref ). These compounds thus undergo intracellular activation to exert their antitumour properties, and, as highlighted by the authors, many of the pathways or conditions needed to bioactivate these compounds can be elevated in certain cancer types, which could potentially account for a therapeutic window. However, other cytotoxic chemotherapies, such as the microtubule-targeting agent paclitaxel, are not prodrugs but can also successfully treat solid malignancies before encountering dose-limiting toxicities, which, intriguingly, targeted mitotic kinase inhibitors cannot. The discrepancy between the utility of paclitaxel versus targeted mitotic kinase inhibitors can potentially be explained by the molecular mechanism of action of paclitaxel in tumors, where concentrations are much lower than what is typically utilized in cell culture experiments, and rather than inducing mitotic arrest (which is the desired phenotype for mitotic kinase inhibitors), it is chromosome mis-segregation on multipolar spindles (without mitotic arrest) that is the clinically relevant phenotype. However, why paclitaxel can target malignant tissue over its non-malignant counterparts still appears to be an unanswered question. Whilst all reasons discussed thus far have been tailored to the distinct molecular mechanisms of these therapies, Letai and colleagues provided a broader explanation that can encompass all (cytotoxic) chemotherapy, termed mitochondrial or apoptotic priming, first reported over a decade ago. Apoptotic priming is based upon the principle that most chemotherapies kill cells via the intrinsic mitochondrial-dependent pathway of apoptosis, and different tissues (both malignant and non-malignant) have a different propensity to execute this pathway owing to how close the mitochondria within those tissues are to this apoptotic threshold (i.e., mitochondrial outer membrane permeabilization, MOMP). Accordingly, it was shown that the mitochondria within chemosensitive tissues, whether malignant or non-malignant, were closer to this apoptotic threshold (i.e., primed), whilst mitochondria in tissues known to be typically chemoresistant were further from this threshold (i.e., not primed), explaining differential sensitivity and thus the therapeutic window. This phenomenon also explained why some tumors are generally chemosensitive regardless of the therapy used, childhood acute lymphoblastic leukemia being a prime example, as these malignant cells are closer to the apoptotic threshold, whilst others are generally chemoresistant. A pan-essential gene can be defined as a gene which, if lost, results in a loss of cell fitness or death in multiple normal tissues or cell lineages, and such genes can now be readily identified through publicly available genome-wide CRISPR knockout screens. Whilst several cytotoxic chemotherapies can be potent inhibitors of known pan-essential genes, such as methotrexate targeting dihydrofolate reductase (DHFR) or gemcitabine covalently inhibiting ribonucleotide reductase (RNR), these therapies can often be polypharmacologic, and many target a metabolic process more broadly (such as perturbing the DNA synthetic reaction in a particular way). Thus, it is perhaps more accurate to consider these therapies as ones which target pan-essential pathways.
With this in mind, a recent perspective article discussed in depth the pitfalls of developing cancer drugs targeting pan-essential genes (such as the cell-cycle inhibitors discussed above), and many lessons learnt were presented that should be applied to the development of future therapies. Given the similarity of targeting pan-essential genes to cytotoxic chemotherapies targeting pan-essential pathways, which was noted by the authors, one point that was not discussed was that much of the knowledge and approaches outlined by the authors could also be applied to conventional cytotoxic chemotherapies. For instance, several key features of successful targeted therapies (i.e., those with a high therapeutic index) were listed, and a number of these features can be found in pre-clinical or clinical data regarding cytotoxic chemotherapies, highlighting potential avenues to refine their use. Lineage-restricted therapies are those which target a particular cell lineage regardless of whether it is malignant, a prime example being B-cell-targeted therapies such as Bruton's tyrosine kinase (BTK) inhibitors, which successfully treat B-cell malignancies whilst also killing normal B cells. The antimetabolite nelarabine, a guanosine analog, was developed following the observation that build-up of the metabolite deoxyguanosine triphosphate (dGTP) was selectively toxic to T cells. This selectivity was shown to be the case for nelarabine too, and this therapy is now approved for use in relapsed and refractory T-cell malignancies. Synthetic lethality was another key feature of high-therapeutic index therapies, the concept in which loss of two complementary pathways is required for cell killing, the quintessential example being the use of PARP inhibitors in BRCA-mutated cancers. This feature has also been reported with several chemotherapeutics, although it is best described as hypersensitivity. BRCA-mutated cancer models are also hypersensitive to the antimetabolite 6-thioguanine, owing to this compound inducing cytotoxic DNA damage during DNA synthesis requiring HR repair. Furthermore, 6-thioguanine could overcome platinum and PARP inhibitor resistance in pre-clinical models. This was also shown to be the case for alkylating agents. With regards to “BRAF-like” colon cancer, genome-wide shRNA screening revealed a selective vulnerability during mitotic progression when compared to non-BRAF-like colon cancer, which can be successfully targeted with the microtubule poison vinorelbine. Another example comes from analysis of exceptional responders that identified that tumors with a defective DNA damage response display synthetic lethality with temozolomide. Use of predictive biomarkers, which allow focused use of a treatment in those patients with a high probability of response, is another feature of high-therapeutic index therapies. In addition to the synthetic lethal examples outlined above, harnessing knowledge on chemotherapy metabolism can also be advantageous. Prodrugs like antimetabolites require activation by a cascade of enzymes to elicit their anticancer effect, and these drugs are also subject to catabolic processes, all of which impacts the efficacy/toxicity balance, as has been discussed in detail. Recent examples include the nucleotide hydrolases SAMHD1 and NUDT15, which can convert the active metabolites of several nucleoside analogs into inactive forms.
In the case of SAMHD1 and the deoxycytidine analog cytarabine, which is standard-of-care in acute myeloid leukemia, this has implications for treatment efficacy, whilst in the case of NUDT15, enzyme variants are associated with increased toxicity following thiopurine treatment, offering the basis for dose individualization. Another key feature of high-therapeutic index therapies is the ability to exploit differential surface-antigen expression, and whilst cytotoxic chemotherapy alone is unable to do this, antibody–drug conjugates that utilize cytotoxic payloads do offer this advantage. For example, derivatives of the topoisomerase poison camptothecin and microtubule inhibitors have been used as payloads on multiple antibody–drug conjugates, offering the potential to target these chemotherapeutics to specific cellular subsets. Despite many open questions within this field (offering potential for optimization), there is evidence that this therapeutic modality can offer improved efficacy over cytotoxic chemotherapy. There are also efforts to broadly target cytotoxic chemotherapies to cancer cells in an antigen-independent manner, for instance by exploiting consequences of the Warburg effect. Several strategies have been employed to optimize chemotherapy treatments and increase their therapeutic window, as reviewed by Chang et al. These include schedule optimization (for instance on–off dosing strategies), the use of supportive medicine to mitigate treatment side effects, optimization of chemotherapy formulation to shift drug distribution towards the tumor, innovative drug combinations, and personalizing dosing based upon body surface area, weight, and renal function. These approaches, in some cases with several decades of documented use, have been successful in increasing the chemotherapeutic window. However, there is much room to exploit the molecular mechanisms of these therapies further, and to match this with the molecular characteristics of the patient and their cancer, akin to efforts for therapies targeting pan-essential genes. Thus, it is tempting to speculate that additional advances can be made, and one key component of this will be to improve our basic understanding of how these therapies work. This information provides the basis of hypotheses that can be retrospectively evaluated using clinical data, if available, given the widespread use of these agents, or alternatively new clinical studies could be established. For instance, exploiting large-scale pharmacogenomic datasets, whether pan-cancer or disease-focused, in which panels of cancer cell lines thoroughly characterised with various omic technologies (transcriptome, proteome, metabolome, etc.) are subject to an array of drug perturbations, can be a powerful approach to identify correlates of drug efficacy. Expansion of these datasets to encompass more advanced near-patient models, such as those employed in functional precision medicine efforts, is a particularly exciting prospect, especially when combined with controls relevant to understanding the therapeutic window (i.e., those tissues which are associated with dose-limiting toxicities). Coupling pharmacogenomic datasets with data derived from genome-wide CRISPR loss-of-function screens can also allow deciphering of drug mode of action, although the polypharmacologic nature of some cytotoxic chemotherapies may complicate this.
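As a minimal sketch of the kind of analysis such pharmacogenomic datasets enable (the file names and column layout below are hypothetical placeholders, not a real dataset or a published pipeline), one could rank genes by the correlation between their expression and drug response across a cell-line panel:

# Minimal sketch: rank genes by correlation between expression and drug response
# (e.g., area under the dose-response curve) across a cell-line panel.
# Input files and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

expr = pd.read_csv("expression_by_cell_line.csv", index_col=0)  # rows: cell lines, columns: genes
resp = pd.read_csv("drug_response.csv", index_col=0)["auc"]     # rows: cell lines, one AUC column

shared = expr.index.intersection(resp.index)                    # keep cell lines present in both tables
expr, resp = expr.loc[shared], resp.loc[shared]

results = []
for gene in expr.columns:
    r, p = pearsonr(expr[gene], resp)                           # expression vs response correlation
    results.append((gene, r, p))

correlates = pd.DataFrame(results, columns=["gene", "pearson_r", "p_value"])
print(correlates.sort_values("p_value").head(20))               # candidate predictive biomarkers

In practice such a naive screen would need correction for multiple testing and for confounders such as tissue of origin and proliferation rate, but it illustrates how omic profiles and drug-response measurements can be joined to nominate candidate biomarkers.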
Altogether, these studies facilitate identification of putative predictive and pharmacodynamic/pharmacokinetic (PD/PK) biomarkers, which can be interrogated in subsequent focused studies. Given many of these therapies are prodrugs and subject to large inter-individual variability in PK, identification of PD/PK biomarkers could facilitate dosing and schedule optimisation, which can be an important aspect of refining treatments. Furthermore, these efforts could allow identification of potential therapeutic targets to enhance treatment efficacy, for instance by targeting those factors associated with treatment resistance. Findings from such studies will undoubtedly require validation, and here it is key that methods and models appropriate to the therapy and malignancy in question are used, together with more complex modes of drug testing. For instance, measurement of ATP in cell lysate is frequently used as a proxy for quantifying viable cells following drug exposure; however, if drug treatment alters cell size or affects ATP metabolism, data from these readouts will be misleading. Additionally, drug response is typically characterised following 3-day continuous exposure; however, more complex dose-scheduling may be warranted, especially with a focus upon pharmacologically relevant drug doses. Employing these approaches with a thorough assessment of drug efficacy readouts (not just IC50 values) can also yield valuable information, and it is important to account for possible confounding factors such as differing proliferation rates. Coupling such approaches with single-cell multi-parameter readouts, as recently exemplified, can be particularly powerful in characterizing drug responses in single cells, yielding information-rich datasets. These focused approaches can yield unexpected and clinically relevant biology of cytotoxic chemotherapies. For instance, analysis of long-term single-cell responses of cisplatin-exposed cells found an unexpected relationship between proliferation rate and cell killing, with highly proliferative cells being more likely to arrest than die, whilst the opposite was observed for slowly proliferating cells. Similarly, coupling super-resolution microscopy with a clickable analog of the antimetabolite cytarabine also revealed an unexpected relationship between replication and drug toxicity, finding that drug-resistant cells can incorporate more of this analog into genomic DNA whilst sensitive cells incorporate less. Given the complexity of (cancer) biology and its interaction with small molecules, especially those which require metabolism (as many cytotoxic chemotherapies do), hypothesis-free unbiased approaches will be key in furthering our molecular understanding of these clinically used therapies. In addition to harnessing large-scale pharmacogenomic datasets, discussed above, this includes approaches such as pooled whole-genome CRISPR screens to identify therapy resistance and sensitization factors. These can interrogate many cytotoxic agents within the same cell models, identifying both common and drug-specific pathways, although as the models used will not always be relevant to all drugs evaluated, findings specific to the biology of cancer subsets will be omitted. Alternatively, cell models representing specific malignancies can be used and drugs relevant for that cancer screened, yielding more disease-relevant datasets.
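As a minimal illustration of the point above about looking beyond a single IC50 value (the data points and the four-parameter Hill model here are hypothetical and illustrative, not taken from any cited study), one can fit a dose-response curve and report several complementary metrics side by side:

# Minimal sketch: fit a four-parameter log-logistic (Hill) model to viability data and
# report IC50 together with residual viability at high dose (Emax) and a coarse AUC,
# since drugs with similar IC50s can differ markedly in these other metrics.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, bottom, ic50, slope):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # hypothetical doses (uM)
viab = np.array([0.98, 0.95, 0.90, 0.75, 0.55, 0.40, 0.32, 0.30])  # hypothetical relative viability

(top, bottom, ic50, slope), _ = curve_fit(hill, conc, viab, p0=[1.0, 0.2, 1.0, 1.0], maxfev=10000)

log_dose = np.log10(conc)
auc = float(np.sum((viab[1:] + viab[:-1]) / 2 * np.diff(log_dose)))  # trapezoidal AUC over the log-dose range

print(f"IC50 = {ic50:.2f} uM, residual viability (Emax) = {bottom:.2f}, AUC = {auc:.2f}")

Reporting Emax and AUC alongside IC50 captures whether a drug merely slows growth or actually kills the population at achievable concentrations, which a potency value alone cannot distinguish.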
Identification of drug-resistant alleles is often considered the gold standard for identifying a drug's target, and although the polypharmacology associated with some cytotoxic agents could complicate data interpretation, methods developed to identify drug-resistant alleles have been utilised successfully with cytotoxic agents under pre-clinical and clinical investigation. Whilst these approaches have not been extensively used on the cytotoxic chemotherapies currently used in the clinic, examples exist, such as their use with thioguanine, which successfully identified a previously known key metabolic enzyme. This highlights that, with this family of compounds, key drug-resistance alleles could be those encoding enzymes catalyzing an early metabolic step required for drug activation, in line with the identification of deoxycytidine kinase in CRISPR screening efforts against multiple nucleoside analogs. Chemoproteomics is another powerful approach to gain insight into the molecular mechanisms of drugs. Thermal proteome profiling, exploiting the simplicity of a thermal shift assay but on a proteome scale, allows unbiased mapping of protein stability and abundance changes within the proteome of drug-exposed cells, which was recently exemplified with the cytotoxic agent 5-fluorouracil, uncovering new biology associated with this decades-old therapy. Another key unbiased approach to defining the molecular mechanism of therapeutics is morphological cell profiling, or cell painting, when applied to drug-exposed cells. The power of this technology in deciphering drug mechanism is particularly strong when it is applied to libraries of compounds; however, as discussed previously for CRISPR screening, this can prevent the use of disease-specific models and may hinder identification of disease-specific biology relevant to the drug's molecular mechanism. This can of course be overcome by focusing research efforts on models representing malignancies of interest, if this is the purpose of the study. Knowledge gained with the approaches discussed above will inform the rational design of drug combinations, which is key, ideally combining therapies with monotherapy efficacy and a high therapeutic index. Combining therapies serves multiple purposes: it can be used to enhance cancer cell killing, reduce treatment toxicity, and/or prevent the onset of treatment resistance; combinations of agents are how cancer is successfully treated. Although much focus in pre-clinical research centers upon finding synergistic combinations, there are data clearly arguing that this is not necessary for clinical benefit, and instead research efforts should focus on combining independently active drugs with resistance mechanisms that do not overlap, to maximize anti-cancer efficacy within the context of tumor heterogeneity. Combinations with immuno-oncology therapies are also an important avenue being explored. Although cytotoxic chemotherapy has long been considered immunosuppressive, there are increasing data supporting that the efficacy of these therapies involves activation of antitumour immune responses. Improving our understanding of clinically active therapeutics is key to rationally refining their use and improving patient responses, whether in a patient population or in efforts to individualize treatments.
When considering the high attrition rate in current oncology drug development, coupled with the knowledge that most new therapies do not displace standard-of-care treatments and that the high financial burden of new therapies often prevents worldwide use, it is clear that cytotoxic chemotherapies are going to remain an important component of cancer therapy for many years to come. It is, thus, important to focus research efforts upon these tried-and-tested therapies. As outlined here, these are not a group of non-specific cellular poisons killing cells based solely upon proliferation rate, but a diverse group of anticancer agents with distinct molecular mechanisms that target pan-essential pathways in cancer cells. The more we learn about these therapeutics and the unappreciated intricacies of their modes of action, the more the line between cytotoxic chemotherapies and subsequently developed targeted agents becomes blurred, revealing a broad spectrum of clinically active agents, all of which should be taken full advantage of. By furthering our knowledge of the molecular mechanisms underpinning the activity of these compounds, and the relationship of this to the factors that dictate the chemotherapeutic window (Fig. ), we can continue towards the refinement and optimisation of the clinical use of these therapies. Furthermore, understanding the mechanistic principles of current therapies could also provide a strong foundation for the development of new effective therapies. For example, with the knowledge that the clinical success of paclitaxel is owing to dysregulation of mitosis but without actual mitotic arrest, new antimitotic agents could be developed with this goal in mind, perhaps overcoming previous clinical failures in this area. Another example could be embracing the prodrug strategy that is abundant in clinically successful conventional agents but underexplored with new therapeutics.
Intraepithelial CD15 infiltration identifies high-grade anal dysplasia in people with HIV
8bd0e0cd-8bb9-447c-bb29-390376fae0e3
11383605
Anatomy[mh]
Anal cancer is considered infrequent in the general population. However, in selected populations, such as men who have sex with men (MSM) with HIV, anal cancer occurs at rising rates and is currently one of the most common non-AIDS-defining cancers. Infection by high-risk human papillomavirus (HR-HPV) at the squamocolumnar transition zone is considered the main etiological agent of anal cancer. Persistent HPV infection is able to induce a series of changes in the transitional epithelium that lead to the development of low-grade squamous intraepithelial lesion (LSIL), which can progress to high-grade squamous intraepithelial lesion (HSIL), considered the direct precursor of invasive anal cancer. Anal SILs are histologically identical in people with HIV (PWH) and uninfected individuals; however, they are more prevalent and more likely to persist and progress to anal cancer in the former group, even among those in whom combination antiretroviral therapy (cART) maintains viral suppression and induces immunological recovery. Multiple factors related to the local interaction and potentiation between HIV and HPV may explain this increase in prevalence and associated pathology in PWH, including oncogenic effects and an overall impact on local immunity. Of particular importance may be the persistent depletion of CD4 + T cells from the mucosal compartments in PWH who have been treated during chronic infection, which may create a more favorable microenvironment for precancerous lesions to develop and progress. In this sense, altered cell-mediated immunity has been associated with increased HPV infection and disease, while immune responses orchestrate regression of HPV-related lesions. Screening and treating HSIL have recently been demonstrated to be effective for cancer prevention in PWH. However, a reliable biomarker that indicates the risk of developing anal cancer has not yet been identified, and even classifying intermediate SIL lesions is still challenging. Most studies aiming to expand the understanding of anal dysplasia progression and identify potential biomarkers have focused on the genes and/or proteins involved in HPV-mediated carcinogenesis. In contrast, studies focusing on the local immune microenvironment surrounding anal lesions are scarce, although disturbances in the local microenvironment may play a critical role in the development of anal cancer precursors. Thus, phenotyping the immune landscape surrounding dysplastic lesions could provide new insights into the immunopathology of these persistent infections, which in turn may allow the identification of new biomarkers. In light of the limited data available about the immune microenvironment that differentiates normal epithelium from anal dysplastic lesions and the limitations of diagnostic tools for HSIL, we conducted a study to evaluate immunological subsets in the anal mucosa of MSM with HIV who participated in an anal screening program. The main goal of this study was to characterize the immune environment where lesions develop to identify biomarkers that can contribute to diagnosing HSIL. Based on the pathological diagnosis, we observed divergent trends in resident lymphocyte populations and in myeloid-derived suppressor cells and neutrophils. Ultimately, the epithelial infiltration of CD15 + neutrophils associated with pathology provides a biomarker of interest for assisting HSIL diagnosis and future immunological interventions. Cohort characteristics. 
The discovery cohort comprised 47 cART-treated MSM with HIV, with a total of 54 anal samples. All analyses, including flow cytometry on fresh samples, were conducted simultaneously during screening in this cross-sectional study. Anal samples were subsequently classified based on histological analyses as normal ( n = 24 samples, including 21 individuals), LSIL ( n = 24 samples, including 20 individuals), and HSIL ( n = 6 samples, including 6 individuals). In 7 of these individuals, we had concomitant paired samples, wherein 1 was classified as normal and the other as LSIL. The validation cohort included 54 MSM with HIV, with a total of 57 anal samples classified as normal ( n = 12 samples, including 12 individuals), LSIL ( n = 25 samples, including 22 individuals), and HSIL ( n = 20 samples, obtained from 19 individuals). Of note, 8 of these patients were also included in the discovery cohort, though with different samples (in terms of time point and/or localization). and show a summary of the participant characteristics related to HIV and other relevant parameters from both cohorts. Expression of CD103 in resident memory lymphocytes is diminished in pathological samples. To study immune populations located in the anal biopsies, after selecting live single CD45 + cells, we delineated 3 major subsets: T lymphocytes, NK cells, and specific myeloid populations. The flow cytometry gating strategy used for all samples is shown in the supplemental material (available online with this article; https://doi.org/10.1172/jci.insight.175251DS1 ). The median count of viable hematopoietic CD45 + cells retrieved from each biopsy sample is presented in and , which indicates a consistent trend toward increased CD45 + infiltration in pathological samples compared with normal ones. Of note, this value was extracted from the acquisition of the entire sample by flow cytometry, without normalization by sample weight or determination of the absolute count after digestion, and thus has limited accuracy. Regarding the analyses of T cells derived from anal biopsies, displays 2 representative samples showing the frequency of CD8 – (which were >97% CD4 + CD3 + T cells, ) and CD8 + CD3 + T cell subsets analyzed in normal and HSIL biopsies. In these subsets, we determined lymphocyte activation by HLA-DR expression and tissue residency by CD69 combined with CD103 expression . Although the overall frequency of CD8 – or CD8 + T lymphocytes, out of live CD45 + cells, did not vary significantly among the different groups, the analyses of CD8 + T cells in paired samples from the same individual, in which both normal and LSIL biopsies were available, revealed a higher total frequency in LSIL samples compared with normal samples ( P = 0.031, ). In contrast, the fraction of CD8 + resident memory T cells (T RM ) expressing CD103 + decreased with increasing pathology, showing a trend when comparing normal and HSIL samples ( P = 0.08, ). Indeed, when considering pathological samples as a single group, the trend for CD8 + T RM expressing CD103 + remained ( P = 0.063, ). However, this difference was lost when displayed as the percentage of total CD8 + T cells . Further, CD8 – T RM CD103 + cells were significantly lower in pathological samples compared with nonpathological biopsies when analyzed as the percentage of CD45 + live cells ( P = 0.024, ) and of CD8 – T cells ( P = 0.036, ). 
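The comparisons reported in this and the following sections follow the nonparametric framework described in Methods: Kruskal-Wallis with Dunn's post hoc test across the 3 histological groups, Mann-Whitney U for normal versus pooled pathological samples, and the Wilcoxon signed-rank test for paired normal/LSIL biopsies. The authors performed these analyses in GraphPad Prism; the short Python sketch below is not part of the study and uses hypothetical per-sample frequencies, but it illustrates the same three comparisons with SciPy (Dunn's post hoc correction is omitted for brevity).

```python
# Hypothetical example data: frequency of CD8- TRM CD103+ (% of live CD45+ cells)
# per biopsy, grouped by histological diagnosis. Values are illustrative only.
from scipy import stats

normal = [8.1, 6.4, 9.3, 7.7, 5.9, 8.8]
lsil   = [5.2, 4.8, 6.1, 3.9, 5.5, 4.4]
hsil   = [3.1, 2.7, 4.0, 2.2]

# 3-group comparison (normal vs LSIL vs HSIL), as used for unpaired analyses
h_stat, p_kw = stats.kruskal(normal, lsil, hsil)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, P = {p_kw:.3f}")

# 2-group comparison: normal vs pathological (LSIL + HSIL pooled)
u_stat, p_mw = stats.mannwhitneyu(normal, lsil + hsil, alternative="two-sided")
print(f"Mann-Whitney U (normal vs pathological): U = {u_stat:.1f}, P = {p_mw:.3f}")

# Paired comparison for individuals contributing both a normal and an LSIL biopsy
paired_normal = [7.7, 6.2, 8.4, 5.9, 7.1, 6.6, 8.0]
paired_lsil   = [5.1, 5.8, 6.3, 4.2, 6.9, 4.8, 6.1]
w_stat, p_wx = stats.wilcoxon(paired_normal, paired_lsil)
print(f"Wilcoxon signed-rank (paired): W = {w_stat:.1f}, P = {p_wx:.3f}")
```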
To disentangle the effect of the grade of dysplasia from the effect of having confounding factors such as age, nadir CD4, time on cART, or the presence of HR-HPV on the frequency of T RM subsets, we separated the pathological and nonpathological samples in a post hoc analysis into 2 groups based on these factors . Overall, the median frequency of CD8 – or CD8 + T RM CD103 + in normal biopsies was higher than in pathological samples in all comparisons. Further, differences in the frequency of CD8 – T RM CD103 + between these 2 groups of samples remained statistically significant for the group with nadir CD4 below 350 cells and for both subsets when only considering samples without HR-HPV16/18 genotypes . Thus, while CD8 + T cell infiltration appeared to be associated with an LSIL diagnosis, the proportion of T RM expressing CD103 + was reduced as the level of dysplasia progressed. This reduction was more pronounced for CD8 – T RM lymphocytes and was not affected by age, level of nadir CD4, time on cART, or the presence of HR-HPV16/18 genotypes. NK cells expressing CD56 are perturbed in pathological samples. We then analyzed the frequency of CD3 – lymphocytes based on their expression of CD16 or CD56, as the major NK subsets in tissue samples, as shown in the representative examples . For these analyses we considered CD16 – CD56 + NK, CD16 + CD56 + NK, and CD16 + CD56 – NK subsets individually and also all together, referred to as total NK cells. Overall, a larger percentage of CD16 + CD56 – NK cells tended to accumulate in pathological samples compared with normal samples ( P = 0.038; ). For each NK cell subset, we also analyzed the expression of markers associated with residency in tissue (CD69 and CD103) or with cellular activation (HLA-DR) . In terms of HLA-DR expression, pathological biopsies showed significantly higher percentages of this molecule in total NK cells and in the CD56 + CD16 – NK cell fraction compared with normal biopsies ( P = 0.030 and P = 0.024; , respectively). Further, both pathological and normal samples showed very high expression of CD69 in NK cells, primarily within the CD56 + NK subset (regardless of CD16 expression); however, this expression was higher in normal samples compared with pathological samples ( P = 0.034 and P = 0.041; , respectively). Last, when we analyzed CD103 expression together with CD69, we detected lower proportions of these markers in total NK cells from HSIL samples in comparison with normal or LSIL samples ( P = 0.042 and P = 0.019, ). We additionally determined the effect of age, nadir CD4, time on cART, or presence of HR-HPV on the differences observed in the NK subsets based on pathology . Most differences observed between normal and pathological biopsies were maintained, in particular the increase in HLA-DR expression and the decrease in CD69 associated with dysplasia, though statistical significance was limited in most comparisons of the post hoc analyses . These results indicate that CD16 + CD56 – NK cells and overall HLA-DR expression are augmented during pathology, while the expression of residency markers CD69 and CD103 is compromised in dysplastic environments. Overall, CD56 + NK subsets appeared to be most affected by these changes, which seemed not to depend on age, nadir CD4, time on cART, or presence of HR-HPV16/18 genotypes. Potentially suppressive myeloid cell subsets are augmented in anal dysplasia. 
We additionally determined the frequency of several myeloid subsets, including a subset of potentially immune-tolerant cells, called myeloid-derived suppressor cells (MDSCs), and mature neutrophils (CD15 + CD16 + , ) . Of note, MDSCs were defined as CD11b dim CD33 + myeloid cells with either an HLA-DR – CD14 – or an HLA-DR lo CD14 + phenotype , as shown . The percentage of MDSCs expressing CD14 increased with pathology: the HSIL group showed a median of 43.85 (IQR: 34.03–54.70), the LSIL group showed a median of 27.30 (IQR: 16.45–36.93), and the normal group had a median of 23.35 (IQR: 16.05–39.23). However, due to high variability within the normal samples, statistical significance was not achieved, with only a trend observed when compared to the HSIL group ( P = 0.082, ). Still, out of the 6 individuals with paired normal and LSIL samples, 5 of them exhibited an increase in CD14 + MDSCs in the LSIL sample compared with the nonpathological sample . Regarding the evaluation of CD15 + CD16 + neutrophils, we detected a gradual increase in their percentage associated with the severity of the SIL. There was a statistically significant difference between normal and HSIL samples ( P = 0.047, ), and this association became stronger when all pathological samples were grouped together and compared with normal samples ( P = 0.012, ). Post hoc analyses considering the effect of age, nadir CD4, time on cART, or presence of HR-HPV on the pathology-related differences in myeloid subsets were also performed . Most differences observed between normal and pathological biopsies regarding CD14 + MDSCs were lost, although they did not seem to be driven by these factors either, while CD15 + CD16 + neutrophils were strongly increased in pathological samples from patients aged 45 years or less, with nadir CD4 levels equal to or below 350 cells, with less than 10 years of cART, and without HPV16/18 genotypes . Together, increased proportions of CD14 + MDSCs and, especially, CD15 + CD16 + neutrophils were found to be associated with dysplasia. Considering that the frequency of the CD15 + CD16 + myeloid subset appeared to be the best flow cytometry–derived parameter for classifying pathology, we additionally assessed the level of correlation between this parameter and other immune subsets and clinical parameters. The frequency of this subset out of the total myeloid fraction did not show any correlation with age, CD4 nadir, CD4/CD8 ratio, or the number of years under viral suppression for each individual . Nevertheless, there was a moderate negative correlation between this subset and the total frequency of CD8 – T RM ( r = –0.41, P = 0.005) as well as CD8 + T RM ( r = –0.58, P < 0.001) in the same samples, the frequency of CD3 – CD56 – expressing CD69 (regardless of CD16 expression; r = –0.43, P = 0.013), and the frequency of myeloid cells expressing high levels of HLA-DR ( r = –0.40, P = 0.004). In contrast, a positive correlation was observed between the total frequency of CD3 – CD16 + NK cells (regardless of CD56 expression) and CD15 + CD16 + myeloid cells ( r = 0.33, P = 0.036, ). CD15 epithelial staining as a complementary biomarker for diagnosis. Based on our results, we selected CD103 and CD15 molecules for further validation of our findings through immunohistochemistry . Our main objective was to determine their diagnostic value as individual pathology markers in comparison with p16, which is currently recommended to support the diagnosis of HSIL in the appropriate morphology context . 
To this end, we obtained a new set of 57 archived tissue sections as the validation cohort, which did not show differences in clinical parameters between groups, except for HR-HPV genotypes . CD103- and CD15-positive cells were individually counted within the epithelium or the underlying stroma. Unexpectedly, the average CD103 count within the epithelium and stroma of HSIL biopsies was higher compared with normal samples . These findings suggested that other subsets beyond T cells, such as NK cells and CD15 + neutrophils, as previously reported , could potentially exhibit a higher frequency of CD103 expression in association with pathology. Indeed, the subsequent analysis of CD103 expression within the neutrophil CD15 + CD16 + subset obtained from the flow cytometry data demonstrated an overall increase of this molecule in the pathological samples compared with the normal samples ( P = 0.046, ), suggesting their epithelial location. Actually, in line with these results, immunohistochemistry analyses evidenced an increase in CD15 counts in the epithelium and stroma of HSIL samples compared with both normal and LSIL samples ( P = 0.0001 and P = 0.039, respectively, for the epithelium, and P = 0.0042 and P = 0.074, respectively, for the stroma; ). Since the validation cohort showed significant differences in the presence of HR-HPV associated with pathology , we also performed a post hoc analysis separating by the presence or not of HPV16/18 genotypes, as well as the clinical parameters analyzed beforehand for the discovery cohort. These analyses evidenced that HSIL samples consistently had increased numbers of CD15 in the epithelium and, less so, in the stroma, compared with LSIL and normal samples . Regarding the presence of HPV16/18 genotypes, differences between HSIL and LSIL or normal biopsies were more obvious when these genotypes were absent ( P = 0.019 and P = 0.002, respectively, for epithelium; and P = 0.029 and P = 0.022, respectively for stroma; ). Considering that when these genotypes were present only 2 samples remained normal, statistical significance was lost, but trends of lower CD15 detection in normal samples compared with pathological samples remained . Further, in our study, p16 staining correlated with HSIL diagnosis with a sensitivity of 65% and a specificity of 93% (AUC 0.798, ). In comparison, a threshold of more than 5 positive CD15 cells in the epithelium had a sensitivity of 80% and a specificity of 71% (AUC 0.762, ). Importantly, the combination of both biomarkers, meaning a threshold of more than 5 positive CD15 cells and a positive p16 staining, showed a sensitivity of 95% and a specificity of 68% (AUC 0.813, ). Considering that the majority of lesions diagnosed as HSIL in the validation cohort underwent subsequent treatment, we aimed to determine the predictive value of CD15 staining regarding the response to treatment. Out of all the pathological samples that had a follow-up biopsy performed at the same previous site (19 out of the 20), 3 samples remained classified as HSIL, 10 samples showed a decrease in severity to LSIL, and 6 samples completely responded to treatment and were classified as normal biopsies. When comparing the quantification of CD15-positive cells in the epithelium between the samples that completely responded to treatment (regressed to normality) and those that remained as an HSIL diagnosis or decreased to an LSIL, a trend toward lower numbers of this biomarker in the pathological samples that regressed was observed . 
Indeed, it is noteworthy that samples negative for p16 (highlighted as triangles in ) were observed in all groups with different treatment outcomes. This observation suggests that quantifying CD15 in the epithelium could potentially serve as a more reliable indicator for predicting the response to treatment, pending further validation. Last, to verify the findings from the immunohistochemistry analyses and relate them to the original flow cytometry data obtained, we performed immunofluorescence (IF) analyses in an additional small subset of biopsies from the initial discovery cohort. Thus, we performed costaining of CD4 and CD103 cells and of CD15 and CD66b (a common marker to identify neutrophils, ref. ) cells and quantified single and double-positive cells in the epithelium and the lamina propria. These analyses showed that the median counts of CD4 + and CD103 + positive cells in both epithelial and stromal areas were higher for pathological compared with nonpathological samples ( P = 0.022 and P = 0.025, respectively; ). However, when calculating the proportion of double CD4 + CD103 + from the total CD4 counts, pathological samples showed, in general, low percentages . In addition, quantification of double CD15- and CD66b-positive cells verified higher levels of double-positive cells located in the epithelium and stroma of pathological compared with normal biopsies ( P = 0.001 and P = 0.002, respectively; ), but in this case, the proportion of CD15 + CD66b + cells with respect to CD15 + cells remained in the high range in association with dysplasia . These results indicate that there is an overall increase or infiltration of immune cells in pathological areas, already suggested by the high median count of viable hematopoietic CD45 + cells retrieved from pathological biopsies ( and ). Indeed, single CD4 + T cell quantification in tissue slides from pathological samples by IF, which was markedly increased in pathological samples ( P = 0.001, for both epithelium and stroma; ), was also accompanied by a higher CD8 – CD3 + event count median in the biopsies from the flow cytometry data, with a median of 975 (IQR: 381–2,036) for the HSIL, of 381 (IQR: 173–860) for the LSIL, and of 240 (IQR: 137–815) for normal samples. Consequently, there was a significant negative correlation within individual samples between the frequency of CD103 + CD8 – T RM out of the total live CD45 + fraction measured by flow cytometry and the quantification of CD4 + CD103 + cells ( r = –0.78, P = 0.003; ). The explanation to this apparent contradiction is the difference between these techniques in terms of quantification as well as other phenotypic markers included to identify subsets. Thus, while an overall CD4 + count increase is observed and quantified by techniques that provide absolute numbers, such as IF, the proportion of cells that express CD103 out of this subset is low in pathological samples, shown by IF and also flow cytometry, in which CD69 was concomitantly assessed to identify the proportion of T RM out of CD45 + live cells. In contrast, there was a positive correlative trend between the frequency of CD15 + CD16 + cells out of the myeloid fraction and the quantification of CD15 + CD66b + cells ( r = 0.55, P = 0.055; ), which was significant when considering CD15 + positive cells only ( r = 0.68, P = 0.012; ). In summary, the findings from immunohistochemistry and IF verified the presence of CD15 + neutrophils associated with dysplasia in the anal mucosa. 
Moreover, the identification of these cells in the epithelium serves as a valuable pathological marker in this context.
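To make the diagnostic metrics reported above concrete, the sketch below (not part of the study; all counts, p16 calls, and HSIL labels are hypothetical) shows how sensitivity and specificity can be derived for the >5-cell epithelial CD15 threshold, for p16, and for a combined rule, read here as "positive if either marker is positive", which is an assumption about how the two markers could be combined; for a single binary classifier, the AUC equals the mean of sensitivity and specificity.

```python
# Hypothetical per-sample data: epithelial CD15 counts, p16 status, and the
# histological ground truth (HSIL vs not). Numbers are illustrative only.
import numpy as np

cd15_counts  = np.array([2, 0, 7, 12, 4, 9, 1, 15, 3, 6, 0, 11])
p16_positive = np.array([0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1], dtype=bool)
is_hsil      = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1], dtype=bool)

def sens_spec(prediction: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Sensitivity and specificity of a binary prediction against the truth."""
    sens = (prediction & truth).sum() / truth.sum()
    spec = (~prediction & ~truth).sum() / (~truth).sum()
    return sens, spec

cd15_positive = cd15_counts > 5              # threshold used in the text
combined = cd15_positive | p16_positive      # assumed "either marker positive" rule

for name, pred in [("CD15 > 5", cd15_positive),
                   ("p16", p16_positive),
                   ("CD15 > 5 or p16", combined)]:
    sens, spec = sens_spec(pred, is_hsil)
    auc = (sens + spec) / 2  # AUC of a single binary classifier
    print(f"{name:>16}: sensitivity {sens:.0%}, specificity {spec:.0%}, AUC {auc:.2f}")
```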
Persistent infections share immunological features with the tumor environment, where the balance between effector mechanisms and suppressive or inflammatory populations is disrupted. In this sense, the anal SIL may be of particular interest, since it may combine a persistent viral infection with a tumor microenvironment. However, detailed assessment of relevant resident or infiltrated immune subsets within affected dysplastic areas has largely been missing for transitional anal tissue. Overall, we identify a potentially enriched immunosuppressive environment associated with pathological samples. Importantly, our findings highlight CD15 as an immunological marker that could contribute to an improved diagnosis of HSIL. Effective resident immunity, including T RM subsets, plays essential roles in controlling persistent infections. E6-specific CD4 + T cell responses may be associated with recent HSIL regression. In contrast, skewing of HPV-specific T cells from an effector Th1 to a Th2 profile or increased expression of programmed cell death 1 in infiltrating CD8 + T cells in patients with venereal warts may suggest suppressed effector immunity. Further, CD69 + CD103 + T RM -like cells accumulate in various human solid cancers, where they have been associated with improved disease outcome and patient survival. In anal dysplastic lesions, overall CD8 + T cell infiltration or expansion has been reported, which concurs with our observation of a higher frequency of CD8 + T lymphocytes in LSIL samples compared with concurrent normal mucosa from the same individual. However, expression of CD103 within this compartment was lower in dysplastic compared with nonpathological samples. In this sense, a persistent depletion of CD4 + T RM phenotypes from the mucosal compartments has been reported in PWH who have been treated during chronic infection. Considering that CD4 + T RM promote the development of CD103-expressing CD8 + T RM in certain tissues, their generation may also be compromised in these patients. Still, in our cohort, other aspects associated with the dysplastic environment may have a greater impact, since all PWH included were cART treated during the chronic phase. In fact, factors like transforming growth factor-β (TGF-β) availability, which is essential for CD103 expression and T RM development; epithelial dysfunction; and chronic antigen exposure may affect CD103 expression. While we could speculate that TGF-β signaling is affected within pathological areas, our data showed the opposite for mature neutrophils, which displayed higher levels of CD103 expression in those areas, with more retention within the epithelium. Thus, other mechanisms, such as impaired CD38 signaling, autocrine secretion, or the availability of TGF-β1 for T cells, could be at play. NK cells are also known for their key role in viral and tumor clearance, including resident memory NK cells. High expression of canonical markers, such as CD69 and CD103, in CD56 + NK cells identifies resident memory NK cells in specific tissues, such as the liver, lung, or uterus. In anal tissue, the expression of CD69 within the CD56 fraction of nondysplastic samples was generally over 90%. However, HSIL and LSIL biopsies presented lower proportions of CD69 + CD56 + NK cells, and HSIL samples showed lower proportions of total CD69 + CD103 + NK cells, suggesting again that the shrinkage of the resident lymphocyte effector compartment may contribute to the lack of control of the nascent dysplasia. 
In contrast, HLA-DR expression and a high proportion of CD16 + NK cells were associated with pathology. HLA-DR indicates activation in several lymphocyte subsets, and an accumulation of HLA-DR–expressing NK cells at sites of inflammation has been reported. Regarding CD16 + NK cells, this subset includes CD56 – CD16 + NK cells, which have been shown to expand during viral infections to form an anergic population with impaired cytotoxic activities. Our results are somewhat consistent with NK cell deficiency affecting CD56 populations, which renders patients more susceptible to HPV and herpes simplex virus infection and HPV-related diseases. In agreement, a general decrease of CD56 + NK cells has been associated with cervical dysplasia in HPV/HIV-coinfected women, while presence of CD56 + cells has been associated with increased overall survival in squamous cell carcinoma of the oropharynx, independent of HPV. Engaging effector mechanisms may be of particular importance in individuals infected with various persistent viruses, such as HPV and HIV, which exploit host immune modulation mechanisms to induce immune tolerance and limit viral clearance. Indeed, the local inflammatory state generated by chronic infection, including molecules like granulocyte colony-stimulating factor, could induce accumulation of undesired suppressive cells, such as MDSCs, as reported. Although previous studies suggest that myeloid cells might exert an immunosuppressive effect in HPV-induced malignancies, the mechanisms responsible for the various immune-related defects observed in these patients remain unclear. Furthermore, the so-called mature CD15 neutrophils, identified by high expression of CD16 and expression of CD66b, may also play a controversial role. They have been linked to inflammatory conditions, with increased numbers in patients with periodontitis or vaginitis. However, they might suppress T cells even in the context of inflammation, exerting strong T cell immunosuppression, including in tumor microenvironments. The fact that CD15 + granulocytic MDSCs and neutrophils share expression of CD15, CD16, and CD66b indicates that only functional assays would confirm their immunosuppressive properties, and future research in this area is warranted. Still, both CD14 + MDSCs and CD15 + CD16 + mature neutrophils are known to be key hallmarks of tumor inflammation and immune suppression, subsets that are also involved in chronic infections. Thus, the fact that we observed a gradual increase of these subsets from normal to HSIL samples suggests that an immunosuppressive environment may favor dysplasia progression. Indeed, a link between systemic amplification of myeloid cells and the detrimental effects of these cells on CD8 + T cell activation and recruitment into the tumor microenvironment has been proposed. Importantly, immunohistochemistry and IF analyses verified the infiltration of CD15 + neutrophils in the anal mucosa associated with dysplasia, showing the potential value of this marker for pathology staging. Substantial disagreement exists among experienced pathologists in diagnosing SIL by H&E morphology, which is the gold standard. In this sense, addition of p16 immunohistochemistry increases interobserver agreement, yet discrepancy remains considerable regarding intermediate lesions. Thus, it is crucial to identify additional markers that can help minimize the misdiagnosis of HSIL and avoid unnecessary treatments. 
Moreover, the identification of reliable markers is essential for accurately identifying individuals with precancerous lesions who are at risk of disease progression. In our study, the determination of CD15 and p16, which shared similar technical complexity, since they were both detected by immunohistochemistry, exhibited a similar capacity to reliably detect dysplasia. Thus, in cases in which p16 was negative, the determination of epithelial CD15 staining, based on the established threshold, could help differentiate between HSIL and LSIL. Further, the fact that differences were stronger in samples negative for HR-HPV genotypes provides additional value to follow up these patients with elevated numbers of CD15 in their epithelium. It should also be noted that we observed an inverse association between the epithelial infiltration of CD15 in the HSIL samples and the response to treatment, which was not observed with p16. Of note, while we did not include patients with anal cancer, other works have highlighted the importance of neutrophils and CD15 expression in cancer as biomarkers of progression and response to treatment . Thus, future larger studies should aim to validate the utility of CD15 staining as a complementary measurement for the diagnosis of L/HSIL, or even as a prognostic marker, in particular if this marker can be eventually assessed by noninvasive techniques. It is important to note that this study is limited by the number of samples, in particular within the HSIL group in the discovery cohort, which was restricted by the complexity of the analyses and the impossibility of preselecting samples based on the degree of dysplasia. However, CD15 results were verified in the validation cohort, which included more homogeneous groups of samples. Of note, as another limitation, 8 patients had different time point samples in both cohorts. Besides, because dysplasia development takes years to progress, we lack the longitudinal analyses that would inform on the actual predictive value of these markers regarding lesion evolution to cancer. Future studies will address the function and interactions between these resident immune cells to define key populations in anal cancer precursor progression. In summary, our results expand current knowledge of mucosal immunity in anal dysplasia. The identification of CD15 as a potential complementary biomarker for HSIL diagnosis suggests its potential application in improving diagnostic tools and may have implications for the development of targeted immunotherapeutic strategies for this condition. Sex as a biological variable. This study involved MSM with HIV. Only MSM with HIV were included because they are the group with the highest risk of anal cancer and in whom screening for anal dysplasia is recommended. Since our study focused on characterizing the immunological environment where lesions develop to identify biomarkers that may contribute to the diagnosis of anal dysplasia, it was advisable to start with the highest risk group. Further studies would be necessary to determine if the findings are applicable to women or other groups of persons at risk. Study design and patient cohorts. The Anal Dysplasia Unit at the University Hospital Vall d’Hebron (HUVH, Barcelona, Spain) was created in May 2009 and attends more than 1,000 MSM with HIV. Anal screening includes anal liquid cytology and HPV determination, a high-resolution anoscopy (HRA) and, when necessary, anal biopsies, as previously described . 
Patients undergoing anal biopsies as part of the screening program were invited to participate in the study, with the following inclusion criteria: patients on cART, with HIV viral suppression, and without any anal sexually transmitted disease or treatment for HSIL in the last 6 months. Patients were included prospectively for the initial immunological and histological analyses, while for the validation of the results by immunohistochemistry, patients were recruited retrospectively from available histological samples. Sample collection. Cytology was obtained by introducing a Dacron swab 3–5 cm into the anal canal and softly rotating it. The swab was introduced into 20 mL of PreservCyt/ThinPrep Pap test solution (Cytyc Iberia S.L.) and shaken for 30 seconds. This sample was used to carry out the cytological analysis and HPV testing. Single or multiple anal biopsies were taken from individual patients in the same screening session if HRA revealed an abnormal area or in areas that were previously treated to determine treatment efficacy. For a single biopsy, an immunological and histological study was carried out simultaneously. An expert pathologist classified samples using the terminology and morphological criteria published in the Lower Anogenital Squamous Terminology project: benign, LSIL, and HSIL. HPV detection. DNA was extracted from cytology-derived cell suspensions using the QIAamp Viral DNA minikit (QIAGEN). Specific papillomavirus sequences were amplified with the CLART Genomic HPV-2 protocol in accordance with the manufacturer's instructions. HPV genotypes 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 68, 73, and 82 were considered high risk. Immunological cell phenotyping by cytometry. Fresh anal tissue samples of ≈8 mm 3 were collected in antibiotic-containing RPMI 1640 medium. Samples were enzymatically digested with 5 mg/mL collagenase IV (Gibco, Thermo Fisher Scientific), and the resulting mononuclear cell suspension was washed twice and stained for viability with Live/Dead Aqua (Invitrogen, Thermo Fisher Scientific) at room temperature for 30 minutes in PBS. Cells were then washed with PBS and surface-stained using a 13-color flow cytometry panel. After fixation, all events were acquired using a BD LSRFortessa flow cytometer, and data were analyzed with FlowJo vX.0.7 software (TreeStar). We established a minimum of 1% of CD45 + cells from the total stored events as well as additional minimums per subset count to consider the sample for the analyses: 100 events for CD3 + T lymphocytes, 50 events for CD3 – lymphocytes, and 100 events for myeloid cells. Immunohistochemistry. We analyzed CD103-positive (Abcam ab129202) and CD15-positive (Ventana 05266904001) cells by immunohistochemistry to assess their value as pathology markers in archival specimens from anal biopsies obtained from the validation cohort. Formalin-fixed, paraffin-embedded anal samples of 3 μm sections were deparaffinized, rehydrated, and stained using optimal dilutions of monoclonal antibodies. The staining was performed following the protocol of the ultraView Universal DAB kit for Ventana Benchmark ultra. Mononuclear cells with a dark brown cytoplasmic signal were recorded as positive cells. Since intensity of the staining was homogeneous, the H score (an indicator of the intensity and proportion of the biomarker identified) was not used. 
Positive cells within the squamous epithelium and underlying stroma of the whole sample were manually counted using light microscopy (Olympus BX43) at ×40 original magnification by 2 independent pathologists. Sections were examined, avoiding lymphoid follicles of the stromal areas when present. The average number of positive cells from a median of 3 fields was reported for all the markers except for p16 (Ventana 05266904001), which was considered positive or negative based on the staining at the nuclear level of the squamous epithelial cells. IF. Formalin-fixed, paraffin-embedded tissue slides underwent overnight deparaffinization at 65°C, followed by xylene and ethanol dilutions and fixation in 10% neutral buffered formalin. For CD103 (Abcam ab129202) and CD4 (Ventana 05552737001) staining, slides were placed in pH 9 antigen retrieval at 95°C for 30 minutes. After cooling, slides were blocked with 1× Block Opal buffer for 10 minutes, followed by incubation with the CD103 antibody (1:200) for 1 hour after TBS/Tween washing. Subsequently, 1× Opal Anti-Mouse + Rabbit HRP was applied and incubated for 45 minutes at room temperature. This allowed the Opal signal to be generated, using a 1:100 dilution of Opal Fluorophore 520 reagent in 1× Plus Manual Amplification Diluent, according to the manufacturer's instructions (AKOYA Biosciences, NEL811001KT). CD103 fluorophore stripping was performed for 30 minutes at 95°C and pH 9, simultaneously working as the antigen retrieval step for CD4. Washes and incubations were performed as described before. However, the working dilution for CD4 was 1:25, and Opal Fluorophore 690 was used for signal generation. After CD4 stripping (performed as for CD103), the slides were counterstained with DAPI (1:1,000) for 7 minutes, washed, and mounted with Fluoromount-G (Invitrogen, Thermo Fisher Scientific). Staining with CD66b (Abcam ab197678) and CD15 (Abcam ab241552) was conducted as described above, with retrieval buffers at pH 6 and an antibody dilution of 1:25 for both markers. Images were initially captured in ×20 original magnification fields using a wide-field multidimensional Thunder microscope (Leica) for subsequent analyses. Quantification of epithelial and stromal single- and double-positive cells per sample was performed using ImageJ (NIH), in which binary masks for each marker were previously established. Confirmatory analyses and images at ×25 and ×40 original magnification were taken with a confocal microscope, ZEISS LSM 980, at a resolution of 2,048 × 2,048 pixels. Statistics. Comparisons were performed between the 3 histological groups (normal, LSIL, and HSIL) as well as between 2 groups (normal versus pathological samples, which combined LSIL and HSIL samples) to increase statistical power. Statistical analysis was conducted using GraphPad Prism software. All tests were 2-sided and did not assume a normal distribution. Nonparametric Kruskal-Wallis test with Dunn's post hoc test for multiple comparisons and Mann-Whitney U test or χ 2 test were used for the unpaired analyses of 3 and 2 groups, respectively. For patients with paired normal and LSIL samples, we employed the Wilcoxon signed-rank test. Sensitivity, specificity, and the area under the receiver operating characteristic curve of potential biomarkers to detect HSIL were also determined in the validation cohort. Study approval. Written informed consent for sample collection and use of information available in the medical records was obtained from all patients included. 
This study was performed in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (PR(AG)240/2014) of the HUVH. Data availability. All data associated with this study are present in the paper or the supplement, and raw data are included in the file.
We established a minimum of 1% of CD45 + cells from the total stored events as well as additional minimums per subset count to consider the sample for the analyses: 100 events for CD3 + T cell lymphocytes, 50 events for CD3 – lymphocytes, and 100 events for myeloid cells. We analyzed CD103-positive (Abcam ab129202) and CD15-positive (Ventana 05266904001) cells by immunohistochemistry to assess their value as a pathology marker in archival specimens from anal biopsies obtained from the validation cohort. Formalin-fixed, paraffin-embedded anal samples of 3 μm sections were deparaffinized, rehydrated, and stained using optimal dilutions of monoclonal antibodies . The staining was performed following the protocol of the ultraView Universal DAB kit for Ventana Benchmark ultra. Mononuclear cells with a dark brown cytoplasmic signal were recorded as positive cells. Since intensity of the staining was homogeneous, the H score (an indicator of the intensity and proportion of the biomarker identified) was not used. 
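The group comparisons and classifier evaluation described in the Statistics paragraph above, applied to per-sample positive-cell counts such as these, can be sketched in R. This is a minimal illustration only, not the authors' actual analysis (which was run in GraphPad Prism): the `ihc` data frame, its column names and values, and the choice of the `FSA` and `pROC` packages are assumptions made for the example.

```r
# Minimal sketch (assumed data layout): one row per biopsy with the
# histological group and hypothetical CD103+/CD15+ cell counts per field.
library(FSA)   # dunnTest() for Dunn's post hoc comparisons (assumed package choice)
library(pROC)  # ROC curve / AUC

ihc <- data.frame(
  group = factor(c("normal", "LSIL", "HSIL", "normal", "LSIL", "HSIL"),
                 levels = c("normal", "LSIL", "HSIL")),
  cd103 = c(12, 25, 48, 9, 30, 52),   # hypothetical counts
  cd15  = c(3, 8, 21, 2, 10, 19)
)

# Kruskal-Wallis across the three histological groups, then Dunn's post hoc test
kruskal.test(cd103 ~ group, data = ihc)
dunnTest(cd103 ~ group, data = ihc, method = "bonferroni")

# Two-group comparison: normal versus pathological (LSIL + HSIL combined)
ihc$status <- factor(ifelse(ihc$group == "normal", "normal", "pathological"))
wilcox.test(cd103 ~ status, data = ihc)

# Sensitivity, specificity and AUC of the marker for detecting HSIL
roc_cd103 <- roc(response = ihc$group == "HSIL", predictor = ihc$cd103)
auc(roc_cd103)
coords(roc_cd103, x = "best", ret = c("threshold", "sensitivity", "specificity"))
```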
ABM, CM, NM, and AAG performed tissue processing and flow cytometry analyses. ABM, JC, and SL performed histology, immunohistochemistry, and IF analyses. JB, AC, JNGP, and VF collected samples and patient data. ABM, CM, MJB contributed to data analyses and discussion. JB and MG conceived and supervised the study and wrote the manuscript. All authors contributed to refinement of the study protocol and approved the final manuscript.
Biomarker patterns and mechanistic insights into hypothermia from a postmortem metabolomics investigation
198f2d27-fd98-4af7-81d2-29f6a11803d9
11329508
Forensic Medicine[mh]
Determining the ultimate trigger of death involves navigating a complex web of physiological, environmental, and contextual factors, a multifaceted puzzle that demands comprehensive exploration and understanding. In most countries, unnatural or unexpected deaths must be reported to the police, which will then request a forensic autopsy. Typically, a forensic autopsy implies a careful dissection of all internal organs and a thorough external examination of the body, and most often also includes toxicological and microscopical analysis of samples collected during the autopsy. Hypothermia, characterized by a critical reduction in core body temperature caused by extended exposure to low temperature, often outdoors, can present significant challenges in differentiating it from other causes of death, particularly when signs of external trauma or coexisting medical conditions are present – . In typical cases of fatal hypothermia, signs of undressing at the scene, stress ulcerations in the gastric mucosa (Wischnewski spots), frost erythema in the skin and immuno-positivity for heat shock protein 70 in podocyte cell nuclei in the kidneys can be seen – . However, these findings may be absent, which in part can depend on the ambient temperature and the length of the exposure. Moreover, conventional postmortem examinations, relying only on structural macroscopic and microscopic changes, may fail to provide conclusive evidence regarding the cause of death in suspected hypothermic cases , . Recent advancements in the field of metabolomics, a branch of systems biology concerned with the comprehensive analysis of endogenous metabolites within biological systems, offer an intriguing approach to unraveling the intricate metabolic alterations associated with hypothermia-related deaths. Postmortem metabolomics stands as a promising frontier in biomarker discovery, presenting an opportunity to unearth novel biological markers that could significantly enhance both clinical practice and investigations into causes of death – . In cases of complex conditions like hypothermia, where definitive biomarkers are lacking, postmortem metabolomics holds significant promise in providing valuable insights and enhancing diagnostic capabilities . By analyzing the composition of low-molecular weight molecules present after death, postmortem metabolomics provides a unique opportunity to uncover the pathophysiological changes that occurred leading up to an individual's demise. This method allows us to delve into the metabolic alterations postmortem, potentially unraveling the intricate pathways associated with hypothermia-induced fatalities. The primary objective of our research is to discern distinct biomarker patterns associated with hypothermia, enhancing the accuracy of its identification during postmortem examinations. Additionally, our study seeks to elucidate the mechanistic underpinnings of these biomarker patterns within physiological pathways, aiming to enhance our comprehension of the biological mechanisms underlying hypothermia. Study population and data selection All autopsy cases admitted between late June 2017 and November 2020 at the Swedish National Board of Forensic Medicine, aged 18 or older, and that underwent toxicological screening in femoral blood using high-resolution mass spectrometry, were considered for inclusion in this study (n = 17,011). Case information was extracted from the Swedish Forensic Medicine database . 
During the study period, we considered cases in which hypothermia was stated as the primary cause of death by the responsible pathologist, with no hospital visits prior to the fatalities and no signs of an apparent putrefaction process. Controls were selected from a pool of 3089 femoral blood samples from deceased subjects. The selected causes of death included cardiovascular diseases (e.g., acute myocardial infarction and acute pulmonary heart disease), cerebrovascular diseases (e.g., subarachnoid hemorrhage and intracerebral hemorrhage), aortic rupture, traumatic injuries (e.g., skull fractures, subdural hemorrhage, injury of the thorax), and effects of external causes such as strangulation and drowning. The ICD-9 codes associated with these causes of death were 410K, 415B, 430, 431, 441A, 441B, 441D, 800K, 852M, 861L, 900L, 933, 992X, 994B, 994K, 994N, and 994W (the suffixes are according to the Swedish ICD-9 codes, but some are specific to Swedish forensic pathologists to allow for a better specification of the different medical conditions). The controls were selected based on similarity with the study group, primarily considering sex and age. The distribution of causes of death among the controls is detailed in Supplementary Table . The final dataset comprised 150 hypothermia cases and 278 matched controls to be used for metabolite pattern and marker identification. To evaluate the performance of the markers and to simulate a real-world application, a test group was created by pseudo-randomly selecting the first 10 males and 10 females from each month within the inclusion period. This test set consisted of 667 cases after excluding individuals under the age of 18, cases lacking available toxicological screening data, cases admitted to emergency care before their demise and any cases previously included as a hypothermia or control case. The hypothermia cases and matched controls were randomly divided into a training set (3/4) and a validation set (1/4). The training set was employed for creating and refining the multivariate model, while the validation set was used for evaluation and validation of the model. Institutional review board statement This study was approved by the Swedish Ethical Review Authority (Dnr 2019-04530). Due to the retrospective nature of the study, the need for informed consent was waived by the Swedish Ethical Review Authority. All methods were carried out in accordance with relevant guidelines and regulations. Data acquisition and metabolomics analysis UHPLC-QToF data from the selected postmortem cases, obtained during drug screening in femoral blood, together with multivariate analysis, were used to identify postmortem biomarkers. In short, blood samples were prepared and analyzed according to a standardized procedure described elsewhere . Each sample was prepared by protein precipitation including an addition of three internal standards (amphetamine-D8, diazepam-D5 and mianserin-D3). All samples were injected on a UHPLC-ESI-QToF system. Separation was performed on a C18 column using gradient elution (Supplementary Fig. ). MS data were collected in positive mode and the total acquisition time for each sample was 12 min. Each analytical run included a blank whole blood sample containing the three internal standards, analyzed at the beginning and at the end of each run. 
An acceptable run showed absolute areas over 1.2 × 10⁶, 1.4 × 10⁶ and 1.6 × 10⁶ for amphetamine-D8, diazepam-D5 and mianserin-D3, respectively, a retention time deviation of at most ± 0.1 min and a mass accuracy deviation of at most ± 5 ppm. The raw LC/MS data from the selected autopsy cases were exported to mzData files using MassHunter. The postmortem metabolomics analysis was conducted using the 'XCMS' package in R (4.1.2), which integrates the 'CAMERA' package for feature annotation, as previously described . In XCMS, the centWave algorithm was used for feature detection with the following parameters: Δm/z of 30 ppm, minimum peak width of 3 s, maximum peak width of 30 s and signal-to-noise threshold of 3 with the noise variable set to 500. Retention time correction was performed using the Obiwarp function, and for the grouping an m/z width of 0.05, a bandwidth of 3 and a minimum fraction of 0.6 were used. Data preprocessing and multivariate analysis The training set was normalized in Excel using probabilistic quotient normalization, log transformed, scaled to unit variance and subjected to multivariate analysis using SIMCA 17.0.2 (Umetrics, Umeå, Sweden). Features with a retention time < 60 s or > 660 s were excluded. Principal component analysis (PCA) was used to give an overview of the data, enabling identification of outliers and observation of trends. In addition, partial least squares (PLS) models for age, sex and BMI were created to investigate systematic differences in the metabolic profiles. Orthogonal partial least squares discriminant analysis (OPLS-DA) was used to identify variables contributing to group classification between hypothermia and control cases. Model complexity was reduced by stepwise removal of non-contributing features, using variable importance in projection (VIP) plots for visualization and variable selection. The overall goal was to retain a practical and efficient classification model with as few variables as possible. Experimental reproducibility was assessed by examining the score plots from the principal component analysis (PCA), by cross-validation of the OPLS-DA model on the training set, and by external validation of the OPLS-DA model using a validation set to assess the predictability of the multivariate model. False positives and false negatives were investigated in depth, together with a test set of randomly selected control cases, in order to elucidate the usability and predictability of the final model. Features in the final model were identified and annotated by matching molecular weight (± 5 ppm) and retention time against an in-house database and the online Human Metabolome Database ( https://hmdb.ca ). All features were also uploaded to the functional analysis module of MetaboAnalyst (version 6.0), which is applicable to untargeted metabolomics data. The basic assumption is that putative annotation at the individual compound level can collectively predict changes at functional levels as defined by metabolite sets or pathways . Statistical differences among the three study groups for both annotated and non-annotated metabolites were validated through univariate analysis via the Kruskal–Wallis test, with subsequent Bonferroni correction to compensate for effects of multiple comparisons (SPSS, ver. 29.0, IBM). 
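The XCMS processing step described above can be expressed compactly with the current XCMS3 interface. The sketch below is illustrative only: it uses the parameter values stated in this section, but the function calls follow the modern findChromPeaks/adjustRtime/groupChromPeaks API rather than the exact (older) workflow used for the original processing, and the file names and sample classes are placeholders.

```r
# Illustrative XCMS3 sketch using the parameters reported above
# (30 ppm, 3-30 s peak width, S/N 3, noise 500, Obiwarp alignment,
#  grouping with m/z width 0.05, bw 3, minimum fraction 0.6).
library(xcms)
library(MSnbase)

files  <- c("case_001.mzML", "case_002.mzML")   # placeholder file names
groups <- c("hypothermia", "control")           # placeholder sample classes

raw <- readMSData(files, mode = "onDisk")

peaks <- findChromPeaks(
  raw,
  param = CentWaveParam(ppm = 30, peakwidth = c(3, 30),
                        snthresh = 3, noise = 500)
)

aligned <- adjustRtime(peaks, param = ObiwarpParam())

grouped <- groupChromPeaks(
  aligned,
  param = PeakDensityParam(sampleGroups = groups,
                           binSize = 0.05, bw = 3, minFraction = 0.6)
)

feature_table <- featureValues(grouped, value = "into")  # feature intensity matrix
```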
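Probabilistic quotient normalization followed by log transformation and unit-variance scaling, as described for the training set, can be written in a few lines of base R. This is a generic sketch of the method, not the spreadsheet/SIMCA workflow actually used; the matrix X (samples in rows, features in columns) is a placeholder.

```r
# Probabilistic quotient normalization (PQN), log transform and
# unit-variance scaling of a feature matrix X (samples x features).
set.seed(1)
X <- matrix(rexp(20 * 5, rate = 0.1), nrow = 20, ncol = 5)  # placeholder data

# 1. PQN: divide each sample by its median quotient to a reference spectrum
reference <- apply(X, 2, median)            # median spectrum as reference
quotients <- sweep(X, 2, reference, "/")    # feature-wise quotients
dilution  <- apply(quotients, 1, median)    # per-sample dilution factor
X_pqn     <- sweep(X, 1, dilution, "/")

# 2. Log transform (small offset avoids log of zero in sparse data)
X_log <- log(X_pqn + 1e-9)

# 3. Unit-variance scaling (mean-centred, divided by column standard deviation)
X_uv <- scale(X_log, center = TRUE, scale = TRUE)
```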
Demographic overview and data processing Table provides a demographic overview of the cases selected for the primary study groups and the test set. 
Notably, no statistically significant differences were found in sex, BMI, and known PMIs (p > 0.05) between the hypothermia cases and their matched controls. Even though there were no statistical differences between medians, there was a noticeable difference in age distribution. In addition, it is important to highlight that a considerable portion of the hypothermia cases had unknown PMIs. When the time from last observation until the body was found was used as the PMI, differences did indeed emerge. Particularly for the randomly selected controls, significant differences were evident, as they were not matched meticulously with the study groups, resulting in marked demographic disparities. Mass spectra data were processed using XCMS to compile a comprehensive list of chromatographic peaks with specific accurate masses and retention times, termed features. After the exclusion of features with a retention time of < 60 s or > 660 s, this selection resulted in 2526 features being available for multivariate modeling. Multivariate modeling and model evaluation When applying supervised OPLS-DA analyses, we successfully distinguished the study groups based on metabolic features. The OPLS-DA model demonstrated statistical significance, with R2 = 0.83 and Q2 = 0.67, along with a CV-ANOVA p-value of < 0.001. After stepwise removal of features from this model, the final OPLS-DA model contained only 44 unique features. This model exhibited a high goodness-of-fit with predictive performance comparable to that of the first model, reporting R2 = 0.73 and Q2 = 0.68, along with a CV-ANOVA p-value of < 0.001 (Fig. A). In the training set, the model correctly classified 93% of the 322 samples, with a sensitivity of 89% and a specificity of 95% for the hypothermia cases. However, 21 samples were misclassified: 15 false negatives and 6 false positives. Notably, one of the false negatives had drowning as the primary cause of death and should have been categorized as a drowning case from the beginning; apart from that, no clear trend was observed among the false negatives. Among the false positives, four cases had drowning, and two had subarachnoid hemorrhage listed as the primary cause of death. Two of these six had hypothermia listed as a contributing cause of death. To further evaluate the model's predictability, the remaining 106 autopsy cases were utilized as an external validation set. Each autopsy case was predicted and classified using the final model with a threshold determined by the true and false positive rates from the training set. The predicted score plot and the ROC curve for the 106 cases in the validation set are shown in Fig. . In the validation set, the model accurately classified 94% of the samples, with a sensitivity of 92% and a specificity of 96%. However, six samples were misclassified: three false negatives and three false positives. No discernible pattern was observed for the false negatives, but all three false positives were drowning cases. To assess the model's applicability in a real setting and to identify any potential differential causes of death exhibiting a similar metabolite pattern to hypothermia, we examined 667 randomly selected control samples. The predicted score plot for these randomly selected controls is displayed in Fig. . Since hypothermia, as expected, represents a low proportion of the autopsy cases, the vast majority of the samples were correctly predicted as controls, as shown in the density plot in Fig. C. 
Nevertheless, among the samples, 72 (11%) had a predicted score (tPS) above the threshold of 3. This threshold corresponds to achieving a sensitivity of 75% in identifying hypothermia cases in the validation set. These 72 autopsy cases were categorized into nine classes based on their primary cause of death: ketoacidosis, brain injury, drowning, drug intoxications, hanging, heart and cardiovascular diseases, pneumonia, other causes of death and an unknown cause of death. The prevalence of cases predicted as hypothermia was roughly similar to or lower than 11% for potential differential diagnoses such as brain injury (4%), drowning (7%), drug intoxication (7%), and heart and cardiovascular diseases (9%) (Table ). Notably, as many as 17 out of 25 ketoacidosis cases in the random control set were misclassified as hypothermia, suggesting that the model encounters challenges in distinguishing between ketoacidosis and hypothermia (Fig. D). It is important to note that, among these 72 cases, five had hypothermia listed as a contributing cause, while none of the cases with tPS < 3 had hypothermia as a contributing cause. Metabolite identification and pathway analysis In-house and online public database matching led to the identification of the 44 features that discriminate the hypothermia group, resulting in the putative metabolite identifications listed in Table . These identified metabolites include carnitines, stress hormones, NAD metabolites, purine metabolites, and known biomarkers of renal dysfunction. To provide a visual representation of the changes in the hypothermia cases, six specific metabolites—three upregulated and three downregulated across multiple pathways—are depicted as boxplots in Fig. , highlighting distinctive differences between the three groups. For the functional analysis in MetaboAnalyst, all 2256 features were uploaded. MetaboAnalyst identified 230 empirical compounds in the dataset, and the following pathways exhibited a combined p-value, based on the mummichog and GSEA algorithms, of less than 0.05: C21-steroid hormone biosynthesis and metabolism, vitamin B3 (nicotinate and nicotinamide) metabolism, carnitine shuttle, arginine and proline metabolism, androgen and estrogen biosynthesis and metabolism, and vitamin B12 (cyanocobalamin) metabolism, as shown in Fig. B. 
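How a predicted-score threshold such as tPS > 3 translates into sensitivity and specificity can be illustrated with a short R sketch. This is a generic illustration, not the SIMCA workflow used in the study; the predicted scores and class labels below are placeholders, and the pROC package is an assumed choice for the ROC calculation.

```r
# Generic sketch: choosing a predicted-score threshold on a training set
# and applying it to new cases, as done with the tPS threshold of 3 above.
library(pROC)
set.seed(42)

train <- data.frame(
  hypothermia = rep(c(TRUE, FALSE), times = c(40, 80)),
  tPS         = c(rnorm(40, mean = 4), rnorm(80, mean = 0))  # placeholder scores
)

roc_train <- roc(response = train$hypothermia, predictor = train$tPS)
auc(roc_train)

# Pick the largest threshold that still gives at least 75% sensitivity
all_pts <- coords(roc_train, x = "all",
                  ret = c("threshold", "sensitivity", "specificity"))
threshold <- max(all_pts$threshold[all_pts$sensitivity >= 0.75 &
                                   is.finite(all_pts$threshold)])

# Apply the threshold to new (e.g., validation or random-control) cases
new_cases <- data.frame(tPS = rnorm(10, mean = 1))           # placeholder scores
new_cases$predicted <- ifelse(new_cases$tPS > threshold, "hypothermia", "control")
```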
Hypothermia, a potentially life-threatening condition characterized by a dangerously low body temperature, has long been a subject of scientific inquiry , , , . Its complex pathophysiology has intrigued researchers for years, leading to investigations into the metabolic dysfunctions it induces in search of potential biomarkers , , . Over the years, several biomarkers have been suggested, such as 3-hydroxybutyric acid, cortisol, and arginine. Recognizing that any single marker might be too unspecific, our approach aims to identify a pattern capable of classifying hypothermia cases with high sensitivity and specificity and to explore the potential for incorporating these biomarkers into forensic screening methods. Furthermore, potential biomarkers could also enhance our understanding of the physiological responses during hypothermia and might hold promise for clinical applications. These biomarkers could serve as valuable tools for monitoring and potentially treating hypothermia cases in a clinical setting, thereby advancing our capacity to manage and mitigate the impact of this condition. Postmortem metabolomics as a screening tool for hypothermia In the realm of metabolomics, the validation of multivariate models is of paramount importance. Even so, a significant proportion of metabolomics investigations rely exclusively on cross-validation. In this study, we employed a three-set design encompassing a training set, a validation set, and a test set. The final model thereby underwent evaluation not only through cross-validation but also via external validation on unseen samples. Furthermore, testing on randomly selected samples provided insights into the model's real-world performance. This approach provided a robust foundation for the comprehensive validation and evaluation of the applicability of postmortem metabolomics. The final model exhibited remarkably high predictive power, as demonstrated by both cross-validation and external validation, with sensitivity and specificity exceeding 90%. A noteworthy aspect of our findings is the limited number of metabolites (n = 44) required to achieve this impressive predictive capability. In cold environments, hypothermia could have significantly contributed to death, even if it is not explicitly mentioned on death certificates. 
However, in the test set only 1 out of 14 drowning cases was classified as hypothermic. The interplay between drowning and hypothermia presents a diagnostic challenge, as both conditions might share overlapping metabolic profiles. Moreover, we employed a test set to investigate whether other causes of death shared a similar metabolomic profile with hypothermia. This test set validated the model's high sensitivity by classifying all cases where hypothermia was a contributing cause of death. Notably, in the test set, 17 out of 25 ketoacidosis cases were classified as hypothermia cases, while the remaining 8 cases teetered on the borderline of being classified as hypothermia cases. The correlation between hypothermia and ketoacidosis is intriguing, given that conditions known to induce ketoacidosis often serve as triggers for secondary hypothermia . Secondary hypothermia often occurs in the context of underlying clinical conditions or concurrent medications that affect the body's ability to maintain its internal core temperature (e.g., malnutrition, or underlying conditions such as diabetes or alcohol misuse that impair central thermoregulation), which are also known to cause ketoacidosis – . It could therefore be argued that the test set showed no potential differential diagnosis, as the ketoacidosis cases might be hypothermic as well. However, differentiating different types of ketoacidosis represents a crucial area for future research in postmortem metabolomics. As different types of ketoacidosis (e.g., due to diabetes, alcohol, starvation) can exhibit distinct metabolic profiles, comparing these with the metabolic signatures of hypothermia could uncover specific biomarkers unique to each condition. For example, metabolites related to alcohol metabolism, such as ethyl glucuronide, may help differentiate alcoholic ketoacidosis from other forms. Similarly, markers of nutritional status and stress response could be informative in cases of starvation and hypothermia, respectively. Even so, the model's predictive power in the validation set and the limited range of differential diagnoses demonstrated in the test set support the potential of postmortem metabolomics as a screening tool for hypothermia. Metabolic changes and affected pathways during hypothermia When exposed to cold conditions, the body often initiates a stress response, leading to an increase in cortisol production as it attempts to maintain core body temperature and adapt to the cold environment, which might be why we see upregulated levels of cortisol and the observed C21 hormone response in the functional analysis. The rise in cortisol levels might reflect the biological stress response to cold, and cortisol has been suggested as a marker for cold exposure . Another significant mechanism during hypothermia-induced stress is vasoconstriction, where peripheral blood vessels constrict to minimize heat loss. This change in blood flow can lead to reduced renal perfusion and glomerular filtration rates, possibly explaining the observed alterations in metabolic markers related to renal function, such as N-methyl-2-pyridone-5-carboxamide (2PY), N-methyl-4-pyridone-3-carboxamide (4PY), phenylacetylglutamine, and hippuric acid , . As the body fights the cold, its metabolic rate significantly increases. This increased metabolism is an energy-intensive process aimed at generating heat and preserving core body temperature. Thermogenesis consumes NADH, which may explain the observed patterns in nicotinamide metabolism. 
Metabolites including 1-(beta-D-ribofuranosyl)-1,4-dihydronicotinamide (a precursor of nicotinamide), s-adenosylmethionine (SAM, a vital co-substrate in the nicotinamide pathway), and the end products 2PY and 4PY showed a pattern in which the precursor and co-factor are downregulated while the waste products are upregulated. This pattern aligns with observations in living subjects , . Furthermore, thermogenesis in brown adipose tissue could account for the accumulation of end products from the Krebs cycle and β-oxidation, such as hippuric acid, phenylacetylglutamine, and hydroxybutyric acid. Additionally, there is a consensus in the literature regarding increased levels of blood ketone bodies, including β-hydroxybutyrate, acetone and isopropyl alcohol, and increased cortisol levels , , , . Moreover, the increase in β-oxidation in brown adipose tissue may also contribute to the elevated levels of circulating acylcarnitines, and might be why the carnitine shuttle appears affected. Interestingly, a model restricted to cases aged 70 or younger (Supplementary Fig. ) outperformed the model in Fig. . This might be explained by the amount and activity of brown adipose tissue (BAT), which is expected to decline with age . As BAT is important in energy homeostasis and thermogenesis, the metabolome differences between groups are expected to be greater in younger individuals than in older individuals. Furthermore, acylcarnitines have been proposed as a trigger and a fuel source for brown fat thermogenesis . To conclude, these results align with findings from a previous targeted metabolomics study on forensic hypothermia cases by Rousseau and colleagues in 2019 . Our investigation into the metabolomic profile differences between hypothermia cases and control cases revealed distinct variations in several metabolites, indicating potential biomarkers for accurate identification. A summary of affected metabolites and their relation to thermogenesis and renal dysfunction is found in Fig. . Notably, the study identified key metabolic pathways associated with hypothermia pathophysiology, shedding light on underlying mechanisms. Additionally, the observed differences, especially in metabolites linked to specific pathways, present promising avenues for developing targeted treatments or interventions. These findings not only hold diagnostic implications for hypothermia but also offer insights into potential therapeutic approaches. Understanding the altered metabolic pathways could pave the way for treatment strategies aimed at mitigating the effects of hypothermia and improving patient outcomes. Potential insights and limitation Postmortem metabolomics presents a novel avenue for exploring potential biomarkers that offer insights into the mechanisms of states or diseases. This approach provides an opportunity to investigate aspects that might be unfeasible to explore in clinical settings due to practical or ethical constraints. It is essential to highlight the clinical implications of these findings. Beyond revealing the potential to probe disease mechanisms using postmortem samples, an approach potentially more ethical than clinical investigations and closer to actual human conditions than animal models, the results underscore the possibility of identifying crucial markers for various diseases or conditions. However, it is important to mention that our analytical method was primarily optimized for forensic toxicological screening, which has implications for the width of metabolome coverage. 
Expanding the screening to include various chromatographic conditions and both positive and negative ionization modes could potentially unveil more markers related to hypothermia. It is therefore important not to overinterpret the metabolite changes when relating them to the mechanisms of hypothermia. However, the decision to utilize the current forensic toxicological screening method was guided by the aim of creating a practical and efficient classification model. A simple and straightforward model, employing as few metabolites as necessary for prediction, was considered more important than unraveling the mechanism behind hypothermia. It is likely that further refinements of the data may provide additional insights into the mechanisms underlying hypothermia. In the context of postmortem metabolomics, little is known about factors such as postmortem interval, postmortem degradation, and postmortem redistribution and their influence on the metabolome . However, to mitigate these issues, we only included autopsy cases showing no putrefaction, aiming to minimize the potential impact of these factors on the metabolome, and no apparent differences in postmortem interval were observed between the study groups. Having said that, examination of decomposed samples is important to find out whether the results obtained can be applied to such cases. We recognize the analytical method's limitations and the need for further research to elucidate the mechanisms underlying hypothermia and the impact of postmortem factors on the metabolome. 
In conclusion, our study's use of a three-set design, its strong predictive capabilities, and the intriguing metabolite correlations with the mechanisms of hypothermia highlight the potential of postmortem metabolomics. This study serves as evidence that postmortem metabolomics could offer a means to delve into the mechanisms underlying critical states or diseases, which might hold relevance beyond forensic applications. Supplementary Information.
Metabolite quantification data based on
7fe60d95-5a50-4a33-9c36-5765a0f07cb6
11559240
Biochemistry[mh]
To obtain an integrated vision of fruit development and related metabolism, we studied several species with the same approach. This dataset is part of a larger study of eight fleshy fruit species aiming at studying the regulation of fruit metabolism during fruit development by combining several omics. During this project, special care has been taken to produce quantitative data whenever possible, on fruits sampled at a range of developmental stages before and during ripening, as already done for tomato . Such quantitative data are of special interest for analyses and meta-analyses aiming at showing common or species-dependent regulations. For metabolomics, when certain precautions are taken during extraction, spectra acquisition and processing, and using calibration, proton nuclear magnetic resonance ( 1 H-NMR) profiling provides absolute quantification data of major metabolites expressed, for instance, as mg or mmol on a fresh weight or dry weight basis. The present quantitative data of the major metabolites of both eggplant ( Solanum melongena L.) and pepper ( Capsicum annuum L.) fruit based on monodimensional (1D) 1 H-NMR spectra have not been published in a research paper yet. When complemented with other metabolomics data and with other omics, they can be used for a systems biology study to decipher and improve fleshy fruit quality . Plant material The plants were grown under conditions of commercial production at Sainte-Livrade-sur-Lot (South-West France, 44° 23′ 56″ N, 0° 35′ 25″ E, 50-m altitude). Pepper plants, C. annuum cv Gonto (Clause Vegetable Seeds, Portes-lès-Valence, France), were grown under a plastic tunnel at 1.8 plants/m² density in a sandy-loam soil with drip fertirrigation (2 to 3 irrigations per day during 15–20 min with a flow rate of 4.5 mm/h/m²). Eggplant plants, S. melongena cv Monarca (Rijk Zwaan, Aramon, France), were grown in a plastic greenhouse at 1.2 plants/m² density in coco-fiber substrate with drip fertirrigation (irrigation every 60 to 80 min from 8:30 a.m. to 5:30 p.m. triggered by a solarimeter during 6–7 min with a dripper flow rate of 2 L/h, five drippers for three plants and an electrical conductivity sensor for fertilizer monitoring). From anthesis to ripe-fruit harvest, the mean, minimum and maximum daily-mean temperatures were 22.9, 17.0 and 28.9 °C for pepper in the tunnel and 22.8, 17.3 and 27.6 °C for eggplant in the greenhouse, respectively. Fruits were harvested at 10 or 11 stages of development (Data files 1–2, and , respectively): anthesis, growth stages, maturation start, ripening stages and ripe. Fruit harvests started on June 20th and May 27th and ended on October 5th and August 11th 2016 for pepper and eggplant, respectively. Each stage was identified with its corresponding number of days after anthesis (DPA). Five biological replicates were collected for each stage of development, with a minimum of 12 fruits per replicate for the first two stages and four fruits for the other stages (Data files 3–4 ). For the first two stages, the entire ovary or fruit was sampled as their rapid dissection was not feasible. For the following stages, the seeds were discarded to study the edible fleshy part of the fruit. Then, for pepper, samples were dissected from about one third of the fruit around the equatorial region, and for eggplant from about one fourth of the fruit on the pedicel side. Pepper fruit pericarp or eggplant fruit mesocarp (pericarp without peel) was rapidly dissected. 
All samples were immediately frozen in liquid nitrogen, stored at −80 °C before cryogrinding (Spex Genogrinder 2010, Fisher Scientific, Illkirch, France) and lyophilization (Dura Dry MP Freeze Dryer, Warminster, PA, USA), and then NMR-based analysis. Proton NMR profiling data Polar compounds were extracted from 25 ± 1 mg lyophilized powder with an ethanol–water series. NMR analyses of major polar compounds were performed on pH-adjusted lyophilized extracts as previously described with minor modifications (Data file 5). Briefly, absolute quantification of individual metabolites was achieved using a 500-MHz Avance-III NMR spectrometer (Bruker Biospin, Wissembourg, France) and external calibration with calibration-range solutions. The NMR spectrometer was equipped with a 5-mm inverse probe and an autosampler (Bruker Biospin, Karlsruhe, Germany). 1H-NMR spectra were acquired with a single pulse (zg) sequence, 64 scans, a 2.73-s acquisition time, a 90° pulse angle, a 25-s recycle delay and a fixed receiver gain for each species. The resulting free induction decays (Data sets 1–2) were processed with the NMRProcFlow tool using the variable-size bucketing module for peak integration. Metabolites were assigned according to published data, previous work on a mixture of stages of development, and additional 1D and 2D NMR experiments including COrrelation SpectroscopY (COSY), Heteronuclear Multiple Bond Correlation (HMBC), Heteronuclear Single Quantum Correlation (HSQC), 1D selective gradient COSY and TOtal Correlation SpectroscopY (TOCSY) (1D 1H annotation table and 2D spectra in Data sets 3–4). For pepper, two unknown compounds were partially identified: Unknown_1, a trans-4-hydroxyproline-like compound, and Unknown_2, a hydroxycinnamic-acid-containing compound (Data set 5). The assignment of the singlet at 3.05 ppm from Unknown_1 to malonate or creatine was ruled out, in disagreement with previous studies. A resonance group was selected for the quantification of each metabolite (assignment description in Data files 6–7, localization on representative 1D spectra in Data files 8–9). Metabolite contents were determined using the calibration curves and the dry matter contents of the samples and expressed on a fresh weight basis. This resulted in the quantification of 24 and 27 metabolites in pepper and eggplant, respectively (Data files 10–11, overview with principal component analyses (PCA) in Data files 12–13). Nineteen metabolites were determined in both pepper and eggplant, including three soluble sugars, five organic acids and nine free amino acids. These common metabolites allowed common changes during development to be seen for the two species, together with their main compositional differences (PCA in Data file 14). The strategy for spectra, data and metadata deposit combines a national repository (recherche.data.gouv, https://recherche.data.gouv.fr/en ) for 1D and 2D spectra, and an institutional data management system based on FAIR principles (ODAM, https://inrae.github.io/ODAM/ ) for pepper ( https://pmb-bordeaux.fr/dataexplorer/?ds=FR17PP009 ) and eggplant data ( https://pmb-bordeaux.fr/dataexplorer/?ds=FR17EP006 ).
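To make the quantification step above concrete, the short sketch below shows how a bucketed resonance integral could be converted into a metabolite content on a fresh weight basis using an external calibration curve and the sample dry matter content. It is a minimal, hypothetical illustration, not the published pipeline: the function name, argument names and numerical values are all assumptions.

```python
# Minimal sketch (not the published pipeline): convert an integrated 1H-NMR
# resonance into a metabolite content expressed per gram of fresh weight.
# All names and numbers below are illustrative assumptions.

def metabolite_content_fw(peak_integral: float,
                          calibration_slope: float,   # integral units per µmol per contributing proton (from calibration-range solutions)
                          protons_in_signal: int,     # number of protons contributing to the integrated resonance
                          extract_fraction: float,    # fraction of the whole extract placed in the NMR tube
                          powder_mass_dw_mg: float,   # lyophilized powder used for extraction (mg dry weight)
                          dry_matter_content: float   # g dry weight per g fresh weight of the sample
                          ) -> float:
    """Return metabolite content in µmol per g fresh weight."""
    # Amount of metabolite in the NMR tube, normalised by the number of protons
    amount_in_tube_umol = peak_integral / (calibration_slope * protons_in_signal)
    # Scale up from the aliquot in the tube to the whole extract
    amount_in_extract_umol = amount_in_tube_umol / extract_fraction
    # Express per g dry weight, then convert to a fresh weight basis
    per_g_dw = amount_in_extract_umol / (powder_mass_dw_mg / 1000.0)
    return per_g_dw * dry_matter_content


# Illustrative example with invented values for a glucose resonance
content = metabolite_content_fw(peak_integral=152.3,
                                calibration_slope=10.0,
                                protons_in_signal=1,
                                extract_fraction=0.5,
                                powder_mass_dw_mg=25.0,
                                dry_matter_content=0.08)
print(f"{content:.2f} µmol/g FW")
```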
Due to its currently limited sensitivity, 1H-NMR profiling of polar extracts allowed the absolute quantification of major compounds only, mostly primary metabolites, but including the major soluble sugars and organic acids crucial for fruit taste. Intermediates of central metabolism such as sugar phosphates, and specialized metabolites such as glycoalkaloids, phenolics and isoprenoids, should be determined, as relative or absolute contents, using dedicated protective solvent extraction and complementary analytical strategies based on liquid chromatography coupled to mass spectrometry or to tandem mass spectrometry.
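As a usage illustration of the deposited quantitative tables, the snippet below sketches the kind of PCA overview mentioned above (Data files 12–14). The file name and column layout are assumed for the example and do not correspond to the actual deposited files.

```python
# Illustrative only: a PCA overview of a metabolite quantification table.
# The file name and column layout (samples in rows, metabolites in columns,
# plus 'species' and 'dpa' annotation columns) are assumptions.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

table = pd.read_csv("metabolite_contents_fw.csv")   # hypothetical file
annotations = table[["species", "dpa"]]
values = table.drop(columns=["species", "dpa"])

# Standardise each metabolite before PCA so that abundant compounds
# (e.g. sugars) do not dominate the first components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(values))

result = annotations.assign(PC1=scores[:, 0], PC2=scores[:, 1])
print(result.head())
```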
RETRACTED: Thymoquinone and Curcumin Defeat Aging-Associated Oxidative Alterations Induced by D-Galactose in Rats’ Brain and Heart
e8cb742a-e3b1-4497-89fb-a9393e273c01
8268720
Anatomy[mh]
Aging is a deteriorative process that occurs mainly due to oxidative stress, leading to numerous oxidative stress-associated diseases because of the accumulation of reactive oxygen species (ROS) and reduced antioxidant capability . A D-galactose (D-gal)-induced aging model is a commonly utilized model to investigate anti-aging drugs . When D-gal accumulates in the body, it can react with the free amines of amino acids in proteins and peptides, forming advanced glycation end products (AGEs) . Consequently, AGEs interact with specific receptors (RAGE) in many cell types and induce the activation of the downstream nuclear factor kappa-B (NF-κB), and other signaling pathways, resulting in ROS generation, which could accelerate the aging process . The elevated ROS and reactive nitrogen species (RNS), including superoxide anion and nitric oxide, lead to cellular damages in protein, lipid, and DNA that are able to favor the development of different diseases, including tumors, neurodegenerative disorders, aging, and an inflammatory processes [ , , , ]. Natural compounds act as preventive antioxidant agents against different age-associated alterations . Thymoquinone (TQ) is an active compound of Nigella sativa seeds with diverse biological activities such as antioxidant, antitoxic, anti-inflammatory, antidiabetic, and anticancer activities ( ) [ , , , , ]. TQ showed a positive elevation in liver glutathione levels and enhancement of total oxidant status of blood in a rat model with carbon tetrachloride-induced hepatotoxicity . Also, TQ protects cardiac muscles against diabetic oxidative stress by upregulation of nuclear factor-erythroid-2-related factor 2 (Nrf2), which improved the antioxidant potential of the cardiac muscles and alleviated the inflammatory process . Moreover, TQ alleviates the testicular damage in diabetic rats through its powerful antioxidant and hypoglycemic effects . Additionally, TQ shows a regenerative potential for treating damaged peripheral nerves . Curcumin (Cur) is a yellow pigment obtained from Curcuma longa , commonly used as a spice and food-coloring agent ( ). It has preventive or putative therapeutic properties because of its anti-inflammatory, antioxidant, anti-aging, and anticancer potential [ , , , , , , ]. For the antioxidant potential of both TQ and Cur, we investigated their anti-aging potential either alone or in combination against the oxidative alterations in rats’ brains and hearts induced by the D-gal-aging model. 2.1. Biochemical Parameters revealed no significant changes in serum glucose and creatinine levels and alanine aminotransferase (ALT, EC 2.6.1.2) activity between all groups. In contrast, rats in the D-gal+TQ+Cur group exhibited a significant reduction in aspartate aminotransferase (AST, EC 2.6.1.1) ( p < 0.05) activity and urea ( p < 0.05), and uric acid ( p < 0.001) level compared with the D-gal. D-gal+TQ and D-gal+Cur groups also revealed a marked lower uric acid than the control, vehicle, and D-gal groups. 2.2. Histopathological Assessment of the Rat’s Liver Negative control and vehicle groups showed normal hepatic architecture ( ). On the other hand, the D-gal group revealed hydropic degeneration, the central veins were dilated and congestive, and there was an accumulation of inflammatory cell infiltrations ( ). The D-gal+TQ group showed an improved hepatic structure with a lower pyknotic nuclei than the D-gal group ( ). D-gal+Cur group revealed a relatively normal hepatic structure as the negative control group ( ). 
The D-gal+TQ+Cur group treated with the combination showed the best protection of the hepatic architecture ( ). Statistical analysis of hepatic lesion scores showed that the animals treated with D-gal had significantly higher hepatic necrosis and hepatic vacuolation scores than rats in the control group. However, compared with rats in the D-gal group, the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur groups showed significantly reduced hepatic lesion scores ( ). 2.3. Histopathological Assessment of the Rat’s Spleen Negative control and vehicle groups showed the normal splenic architecture ( ). In contrast, the D-gal group showed marked alterations within the white and red pulp, including depletion of the red pulp component and deformity of the white pulp ( ). The D-gal+TQ group showed improvement of the white pulp architecture ( ). The D-gal+Cur group revealed a relatively normal splenic structure ( ). The D-gal+TQ+Cur group showed the best protection of the splenic architecture ( ). Statistical analysis of splenic lesion scores showed that the animals treated with D-gal had significantly higher splenic red pulp depletion and splenic nodule deformity scores than rats in the control group. However, compared with rats in the D-gal group, the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur groups showed significantly reduced splenic lesion scores ( ). 2.4. Histopathological Assessment of Rat’s Kidney Negative control and vehicle groups showed the normal renal architecture ( ). On the other hand, the D-gal group revealed congestion of glomerular and intertubular capillaries, degenerative and necrotic changes of renal tubules, and intratubular eosinophilic proteinaceous materials ( ). The D-gal+TQ group showed improvement of renal structure with lower necrosis than the D-gal group ( ). The D-gal+Cur group revealed a relatively normal renal structure, comparable to the negative control group ( ). The D-gal+TQ+Cur group showed the best protection of the renal architecture ( ). Statistical analysis of renal lesion scores showed that the animals treated with D-gal had significantly higher renal necrosis and congestion scores than rats in the control group. However, compared with rats in the D-gal group, the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur groups showed a significant reduction in the renal lesion scores ( ). 2.5. Histopathological Assessment of Rat’s Cerebellum Negative control and vehicle groups showed the normal cerebellar architecture consisting of uniform molecular, granular, and Purkinje cell layers ( ). However, the D-gal group showed loss and necrosis of Purkinje cells in the Purkinje cell layer, neurons in the granular layer, and neurons in the molecular layer ( ). The D-gal+TQ group showed an increased number of Purkinje cells in the Purkinje cell layer with fewer pyknotic nuclei than the D-gal group ( ). D-gal+Cur showed a nearly normal cerebellar structure, comparable to the negative control group ( ). D-gal+TQ+Cur revealed the highest protection against D-gal ( ). Statistical analysis of cerebellar lesion scores indicated that the animals administered D-gal had a markedly ( p < 0.001) higher cerebellar necrosis score than the rats in the control group. On the other hand, compared with the rats in the D-gal group, the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur groups showed a marked ( p < 0.001) reduction in the cerebellar lesion scores ( ). In D-gal+TQ+Cur, the extent of cerebellar necrosis was significantly decreased compared with the D-gal+TQ ( p < 0.001) and D-gal+Cur ( p < 0.01) groups. 2.6. 
Immunohistochemistry Assessment of Cerebellum Negative control and vehicle groups showed negative caspase 3 reactions in all cerebellar layers ( A,B), while the D-gal group showed the highest caspase 3 responses in all cerebellar layers ( C). D-gal+TQ showed a reduced allocation of caspase 3 reacted nuclei compared to the D-gal group ( D). D-gal+Cur showed a shallow distribution of caspase 3 in the nuclei ( E). The lowest caspase reaction could be seen in D-gal+TQ+Cur ( F). A significant ( p < 0.001) elevated expression of caspase 3 was revealed in the nuclei of the cerebellar layers in the D-gal group compared with the control rats by Statistical analysis caspase 3 allocations. In the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur treated rats, the expression of caspase 3 was significantly ( p < 0.001) lowered ( G). A significant reduction ( p < 0.001) in caspase 3 was recognized in D-gal+TQ+Cur compared with D-gal+TQ and D-gal+Cur. In the rat cerebellum, the staining by immunohistochemistry of calbindin showed the highest calbindin reaction in the Purkinje cells of negative control and vehicle groups ( A,B). However, in all cerebellar layers, the D-gal group revealed a negative calbindin reaction ( C). In D-gal+TQ, a more elevated number of positive calbindin Purkinje cells was displayed than in the D-gal group ( D). A moderate number of positive calbindin Purkinje cells was revealed in D-gal+Cur ( E). D-gal+TQ+Cur revealed the highest number of positive calbindin Purkinje cells ( F). In the D-gal group, a significant ( p < 0.001) lowering in the expression of the number of positive calbindin Purkinje cells was revealed by statistical analysis of the number of positive calbindin Purkinje cells compared with the control rats. In the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur treated rats, the low number of positive calbindin Purkinje cells was significantly ( p < 0.001) raised ( G). In rat cerebellum, the staining by immunohistochemistry of ionized calcium-binding adapter molecule 1 (IBA1) revealed a low number of microglia in negative control and vehicle groups ( A,B). However, in all cerebellar layers, the D-gal group showed the highest microglia ( C). In D-gal+TQ, a lower number of microglia was revealed than in the D-gal group ( D). A moderate number of microglia was shown in D-gal+Cur ( E). D-gal+TQ+Cur revealed the lowest number of microglia ( F). In the D-gal group, a significant ( p < 0.001) lowering in the expression of the number of microglia by statistical analysis reduced the number of microglia compared with the control rats. In the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur treated rats, the low number of microglia was significantly ( p < 0.001) raised ( G). 2.7. Histopathological Assessment of Rat’s Hippocampus Negative control and vehicle groups showed normal hippocampal architecture ( ). However, in the D-gal group, the necrosis of dentate gyrus neurons was intensive. Also, the layers and number of hippocampal cells were lowered with disordered cells and most cells were shrunken, with pyknosis in nuclei ( ). D-gal+TQ showed lower necrotic hippocampal cells than the D-gal group ( ). A relatively normal hippocampal structure was revealed in D-gal+Cur compared with the negative control group ( ). The highest protective effects on hippocampal architecture were seen in D-gal+TQ+Cur ( ). A significant ( p < 0.001) more elevated hippocampal necrosis scores in the D-gal group were revealed by statistical analysis of hippocampal lesion scores than the control group rats. 
On the other hand, in the D-gal group, D-gal+TQ ( p < 0.01), D-gal+Cur ( p < 0.001), and D-gal+TQ+Cur ( p < 0.001) groups, rats showed a marked lowering in the hippocampal lesions score ( ). Hippocampal necrosis in D-gal+TQ+Cur was markedly ( p < 0.001) decreased compared with D-gal+TQ and D-gal+Cur. 2.8. Immunohistochemistry Assessment of Hippocampus Negative control and vehicle groups revealed a negative reaction for caspase 3 in the hippocampus ( A,B), while the strongest caspase 3 reaction was revealed in the D-gal group ( C). A lower caspase 3 distribution was shown in D-gal+TQ than the D-gal group ( D). A very low distribution of caspase 3 nuclei was revealed in D-gal+Cur ( E). The weakest caspase 3 reaction was seen in D-gal+TQ+Cur ( F). In the D-gal group, a marked ( p < 0.001) elevated expression of caspase 3 was revealed by statistical analysis of the number of caspase 3 nuclei in the hippocampal layers with the control rats. In the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur treated rats, the high caspase 3 expression was markedly ( p < 0.001) lowered ( G). Caspase 3 levels in D-gal+TQ+Cur were markedly ( p < 0.001) decreased compared to for D-gal+TQ and D-gal+Cur groups. Immunohistochemical staining of rat hippocampal dentate gyrus with calbindin revealed strong calbindin reaction negative control and vehicle groups ( A,B). However, a reduced calbindin reaction was seen in the D-gal group ( C). A more elevated number of positive calbindin cells D-gal+TQ was revealed than the D-gal group ( D). D-gal+Cur revealed a moderate number of positive calbindin cells ( E). D-gal+TQ+Cur revealed the highest number of positive calbindin cells ( F). In the D-gal group, a marked ( p < 0.001) reduction in the expression of positive calbindin cells was revealed by statistical analysis of positive calbindin cells compared with the control rats. In the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur treated rats, the reduction in expression of positive calbindin cells was markedly ( p < 0.001) elevated ( G). Hippocampal calbindin was significantly ( p < 0.001) increased in D-gal+TQ+Cur compared to in D-gal+TQ and D-gal+Cur. Immunohistochemical staining of rat hippocampal dentate gyrus by ionized calcium-binding adapter molecule 1 (IBA1) revealed a low number of microglia in negative control and vehicle groups ( A,B). However, in the D-gal group, the highest number of microglia in all layers was shown ( C). A lower number of microglia was revealed in the D-gal+TQ than the D-gal group ( D). A moderate number of microglia was shown in D-gal+Cur ( E). D-gal+TQ+Cur revealed the lowest number of microglia ( F). In the D-gal group, a marked ( p < 0.001) reduction in the number of microglia was revealed by statistical analysis of the number of microglia compared with the control rats. In the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur treated rats, the reduction in the number of microglia was markedly ( p < 0.001) elevated ( G). Hippocampal IBA1 content was significantly ( p < 0.001) increased in D-gal+TQ+Cur compared to in D-gal+TQ and D-gal+Cur. 2.9. Histopathological Assessment of Rat’s Heart Negative control and vehicle groups showed normal cardiac architecture ( ). However, the disarrayed necrotic myofibers were demonstrated in the D-gal group ( ). D-gal+TQ showed reduced necrosis and improvement of cardiac myocytes ( ). A relatively normal cardiac structure as a negative control group was revealed in D-gal+Cur ( ). The best protection of cardiac architecture was shown in D-gal+TQ+Cur ( ). 
A marked ( p < 0.001) higher cardiac necrosis score was declared by statistical analysis of cardiac lesion scores than the rats in the control group or in the D-gal group. On the other hand, D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur groups revealed a marked ( p < 0.001) lower cardiac lesion score than the rats D-gal group ( ). D-gal+TQ+Cur exhibited a considerable ( p < 0.001) reduction in heart tissue necrosis compared with D-gal+TQ and D-gal+Cur. 2.10. Immunohistochemistry Assessment of Heart In negative control and vehicle groups, rat cardiac sections stained immunohistochemically by Bcl2 revealed the highest reaction of Bcl2 in cardiac myocytes ( A,B). However, a weak Bcl2 response in most cardiac myocytes was shown in the D-gal group ( C). A stronger Bcl2 reaction was revealed in the D-gal+TQ group ( D). The D-gal+Cur group indicated a moderate Bcl2 response ( E). D-gal+TQ+Cur showed the strongest Bcl2 reaction ( F). A marked ( p < 0.001) lowered Bcl2expression was revealed in the D-gal group by statistical analysis of Bcl2 distribution in the cardiac myocytes compared with the control rats. A reduced Bcl2 expression in the D-gal group was significantly ( p < 0.001) increased in the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur treated rats ( G). Cardiac Bcl2 content was markedly ( p < 0.001) elevated in D-gal+TQ+Cur compared to in D-gal+TQ and D-gal+Cur. A negative caspase 3 reactions in all myocytes were revealed in negative control and vehicle groups ( A,B). In contrast, the strongest caspase 3 reaction in all cardiac myocytes was revealed in the D-gal group ( C). In D-gal+TQ, a less reduced caspase 3 distribution was shown than the D-gal group ( D). A very low caspase 3 distribution was revealed in D-gal+Cur ( E). The weakest caspase 3 reactions could be seen in D-gal+TQ+Cur ( F). In the D-gal group compared with the control rats, A marked ( p < 0.001) a high expression of caspase 3 by statistical analysis of caspase 3 distribution. The increased caspase 3 expression markedly reduced in the D-gal+TQ ( p < 0.01), D-gal+Cur ( p < 0.001), and D-gal+TQ+Cur ( p < 0.001) treated rats ( G). Cardiac caspase 3 content was markedly ( p < 0.001) decreased in D-gal+TQ+Cur compared to in D-gal+TQ and D-gal+Cur. 2.11. Effect of Thymoquinone and Curcumin on the Aging-Altered Genes in the Brain A marked increase in the expression of TP53 was revealed in D-gal ( p < 0.001), D-gal+TQ ( p < 0.05), and D-gal+Cur ( p < 0.001) in comparison with the control group. In the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur, the TP53 were markedly downregulated ( p < 0.001) compared with the D-gal group. Besides, TP53 in the D-gal+Cur group was markedly ( p < 0.05) elevated compared with the D-gal+TQ+Cur combination group ( A). The illustrated data in B showed significant ( p < 0.001) upregulation of p21 in D-gal compared with the control, while D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur groups, the p21 relative gene expression was markedly ( p < 0.001) downregulated compared with the D-gal. BCL2 relative expression was markedly ( p < 0.001) downregulated in D-gal compared with the control, while in comparison with the D-gal, the BCL2 expressions were significantly ( p < 0.001) upregulated in the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur groups ( C). 
In contrast, in the D-gal group, the relative expressions of Bax ( D) and CASP-3 ( E) were markedly ( p < 0.001) upregulated in comparison with the control, while in the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur groups, the expression levels of Bax and CASP-3 were markedly ( p < 0.001) downregulated in comparison with the D-gal group ( C). 2.12. Effect of Thymoquinone and Curcumin on the Aging-Altered Genes in Heart There was a significant upregulation of the TP53 relative expression levels in D-gal ( p < 0.001) and D-gal+Cur ( p < 0.05) in comparison with the control. In the D-gal+TQ ( p < 0.01) and D-gal+TQ+Cur ( p < 0.001) groups, TP53 was markedly downregulated in comparison with the D-gal ( A). In B, the data revealed significant upregulation of p21 in D-gal ( p < 0.001) and D-gal+Cur ( p < 0.05) in comparison with the control. Also, it was significantly upregulated in the D-gal+TQ ( p < 0.05) and D-gal+TQ+Cur ( p < 0.001) groups compared with the D-gal. BCL2 expression was significantly ( p < 0.001) downregulated in the D-gal group and significantly ( p < 0.001) upregulated in D-gal+TQ+Cur in comparison with the control ( C). 
Compared with the D-gal, BCL2 was markedly upregulated in the D-gal+TQ ( p < 0.01), D-gal+Cur ( p < 0.01), and D-gal+TQ+Cur ( p < 0.001) groups. In the D-gal+TQ ( p < 0.01) and D-gal+Cur ( p < 0.01) groups, BCL2 was markedly downregulated in comparison with the D-gal+TQ+Cur group. The data in D showed marked upregulation of Bax expression in D-gal ( p < 0.001) and D-gal+Cur ( p < 0.05) in comparison with the control. In the D-gal+TQ ( p < 0.001), D-gal+Cur ( p < 0.01), and D-gal+TQ+Cur ( p < 0.001) groups, Bax was significantly downregulated in comparison with the D-gal. A marked ( p < 0.001) increase of CASP-3 relative expression levels was revealed in D-gal compared with the control and vehicle groups. In contrast, it was significantly ( p < 0.001) downregulated in the D-gal+TQ, D-gal+Cur, and D-gal+TQ+Cur groups in comparison with the D-gal. Aging (senescence) is the gradual loss of organ and tissue function with time. These age-associated functional losses result from the accumulation of oxidative damage to macromolecules (proteins, DNA, and lipids) caused by ROS and RNS. Senescent cells accumulate during aging and have been implicated in promoting various age-related diseases. Senescence inducers lead to upregulation of p53, which elevates the cyclin-dependent kinase inhibitor p21 (WAF1/CIP1), mainly mediating G1 growth arrest. The mechanism by which D-gal induces aging is well recognized and is based on the generation of ROS and RNS that induce inflammation and apoptosis in different body cells. In the current study, we assessed the aging and apoptotic markers following D-gal administration and the protective role of TQ, Cur and their combination. D-gal significantly upregulated p21 and TP53, leading to aging-associated oxidative alterations in brain and heart tissues. Upregulation of p21 and p53 has likewise been reported in mouse and rat models treated with D-gal. Moreover, western blot results revealed upregulation of the p53/p21 signaling pathway in mice’s hippocampus. TQ, Cur, and their combination significantly downregulated the increased expression of p21 and TP53 caused by D-gal. Also, TQ is responsible for apoptosis induction in colorectal cancer by inhibiting p53-dependent CHEK1. Some rationales suggest that curcumin’s anti-aging function is due to its ability to postpone cellular senescence in the cells building the vasculature. Aging in humans and higher animals is accompanied by an elevated mitochondrial ROS level, which induces apoptosis and lowers the number of functioning cells. D-gal-stimulated brain aging exhibited changes in cognitive function and brain mitochondria. Also, hypertrophy of the myocytes and myocyte loss are characteristic of aging in the mammalian heart. During heart failure and normal heart aging, necrosis and apoptosis mechanisms are involved in myocyte cell loss. In the present study, D-gal induced necrosis and apoptosis of brain and heart tissues, as monitored by upregulation of apoptotic markers ( CASP-3 and Bax genes and caspase 3 protein) and downregulation of antiapoptotic markers ( Bcl2 gene and protein). In the same context, D-gal induced significant decreases in the Bax/Bcl-2 ratio and caspase-3 in the brains of mice and rats. Also, D-gal markedly lowered the Bax/Bcl-2 ratio and caspase-3 in aging rats’ cardiomyocytes. In contrast, TQ and Cur attenuated the necrotic and apoptotic alterations of rats’ brains and hearts, especially their combination. Similarly, Abulfadl et al. 
stated that TQ prevented D-gal/AlCl3-induced cognitive decline by promoting synaptic plasticity and cholinergic function and suppressing neuronal apoptosis oxidative damage and neuroinflammation in rats. Also, curcumin minimized the alterations induced in Purkinje cells and cleaved caspase-3 expression in rats due to D-gal. Calcium has a pivotal role in the neurodegeneration process and has an essential role as an intracellular signaling mediator . Therefore, multiple injury pathways meet to stimulate an extra increase in intracellular calcium levels, inducing a series of caspases leading to the apoptosis onset. So, the calcium homeostasis maintenance within neurons is essential to their health, including many mechanisms . Calbindin is a calcium-binding protein that protects neurons against damage caused by excessive Ca 2+ elevation . Thus, in the current investigation, D-gal induced reduced brain calbindin in rats leading to activation of caspase 3 that induced apoptosis of brain tissue, while TQ, Cur, and their combinations attenuated the calbindin reduction due to D-gal. There is no published article regarding the influence of D-gal or TQ on brain calbindin expression. At the same time, curcuminoid submicron particle consumption inverted spatial memory deficits and the loss of calbindin in the hippocampus of the Alzheimer’s disease mouse model . IBA1 is a cytoskeleton protein localized only in macrophages and microglia . The IBA1 expression is upregulated in stimulated microglia after ischemia , peripheral nerve injury , and many brain diseases . In the present investigation, we recognized a marked elevation in IBA1 expression in brain tissue. Similarly, D-gal significantly increased the neuroinflammatory marker, IBA1, in the mouse brain . Conversely, TQ and Cur and their combinations significantly reduced the elevated IBA1 in response to D-gal. 4.1. Ethics Statement Faculty of Veterinary Medicine Ethics Committee, University of Damanhour, Egypt endorsed this investigation (DVM-034-20, January 2020), according to the NIH Guide for the Care and Use of Laboratory Animals. 4.2. Experimental Design Forty-eight adults male Wistar rats (120 ± 20 g) were purchased from the Center of Medical Research and Services, Alexandria University, Egypt and housed in standard laboratory conditions with a 12 h light/dark cycle. Drinking water and food pellets were provided ad libitum. The ingredients of the basal diet are listed in . After 10 days, the rats were randomly allocated into six groups ( n = 8 per group) in three replicates each, including control group; raised on distilled water administered by gavage and basal diet along with a subcutaneous injection of physiological saline solution (0.9%), vehicle group; raised on corn oil was administered with gavage and basal diet along with a subcutaneous injection of physiological saline solution (0.9%) for 42 days, D-gal group, raised on basal diet and injected subcutaneously with 200 mg of D-gal dissolved in saline solution per kg body weight (BW) and corn oil orally per day for 42 days ; The D-gal+TQ group, reared on a basal diet and injected subcutaneously with 200 mg of D-gal per kg BW daily along with oral supplementation of TQ (Sigma-Aldrich, St. Louis, MO, USA) dissolved in corn oil by a dose of 20 mg per kg BW daily for 42 days . 
The D-gal+Cur group, reared on basal diet and injected subcutaneously with 200 mg of D-gal per kg BW daily along with oral supplementation of Cur (Sigma-Aldrich) dissolved in corn oil by a dose of 20 mg per kg BW daily for 42 days , and D-gal+TQ+Cur group; received the basal diet and injected subcutaneously with 200 mg of D-gal per kg BW daily along with oral supplementation of TQ (20 mg per kg BW) and Cur (20 mg per kg BW) dissolved in corn oil daily for 42 days ( ). 4.3. Sampling To confirm the correct sampling, rats were anesthetized with an intravenous pentobarbital injection (30 mg/kg) in each group ( n = 5 per group) on day 42. Samples of blood were left to coagulate and then centrifuged for 15 min at 1409× g to obtain serum for biochemical analyses. For sample fixation, parts of each rat’s brain and heart were washed with phosphate buffer saline (PBS, pH 7.4) and fixed in 4% paraformaldehyde dissolved in PBS for histopathology and immunohistochemistry examinations. For relative expression analyses of mRNA, other parts of the brain and heart were frozen at −80 °C. Also, liver, spleen, and kidney samples were taken for histopathology. 4.4. Biochemical, Histopathological, Immunohistochemical, and Reverse Transcription-Polymerase Chain Reaction (RT-PCR) Assessments Serum samples were subjected to determination of glucose, AST, ALT, creatinine, urea, and uric acid levels following the instructions of the manufacturer (Biodiagnostic, Dokki, Giza, Egypt). Histopathological, immunohistochemistry, and RT-PCR assessments of brain and heart samples were done as described in our previous study, El-Far et al. . Antibodies used in the immunohistochemical assay are listed in , and the primer’s sequence of tested genes is listed in . 4.5. Statistical Analyses Data were analyzed with one-way analysis of variance (ANOVA), followed by Tukey’s multiple comparison test using GraphPad Prism 5 (San Diego, CA, USA). All declarations of significance depended on p < 0.05. 
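For readers who want to reproduce the statistical workflow described above outside GraphPad Prism, the following sketch applies a one-way ANOVA followed by Tukey's multiple comparison test in Python. The file name and the long-format layout with 'group' and 'value' columns are assumptions for illustration, not the study's actual data files.

```python
# Generic sketch of the one-way ANOVA + Tukey HSD workflow described above,
# using an invented long-format table with 'group' and 'value' columns.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = pd.read_csv("serum_uric_acid.csv")   # hypothetical file: columns 'group', 'value'

# One-way ANOVA across the experimental groups
groups = [g["value"].values for _, g in data.groupby("group")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's multiple comparison test at alpha = 0.05
tukey = pairwise_tukeyhsd(endog=data["value"], groups=data["group"], alpha=0.05)
print(tukey.summary())
```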
Aging is associated with oxidative stress alterations in different body organs. D-gal induced histopathological changes in the brain, heart, liver, spleen, and kidney tissues, besides significantly enhancing apoptosis. TQ and Cur defeated the oxidative alterations of the brain and heart activated by D-gal. Interestingly, the TQ and Cur combination exhibited more protection for brain and heart tissues than TQ or Cur supplemented alone. These results proved the anti-aging potential of a TQ and Cur supplementary combination.
A New Proxy Measurement Algorithm with Application to the Estimation of Vertical Ground Reaction Forces Using Wearable Sensors
ea9f6f48-8206-403f-9dea-6138e28ca3fd
5677265
Physiology[mh]
The analysis of ground reaction force (GRF) (i.e., the force of interaction between the body, usually the foot, and the ground) is central in many scientific and engineering fields, including biomechanics, medical science, sports science, and robotics [ , , , ]. In human biomechanics and humanoid robotics, for example, postural control is critical for understanding balance and locomotion, where the control strategies for bipedal systems heavily rely on the knowledge of the GRF and its point of application, i.e., the centre of pressure (COP). In healthcare, estimating the GRF and joint moments of patients in daily life activities could have substantial clinical impact by providing assessments of pathological gait, fall detection in the elderly, and biofeedback data for home interventions . In human biomechanics, standard measuring techniques for GRFs are restricted to laboratory settings, where GRFs can be accurately measured using calibrated force platform systems, but this limits the applicability of the relevant results, which are obtained for one step only. Whereas instrumented treadmills with embedded force platforms allow for accurate multi-step GRF measures, they are still limited to the laboratory setting. Furthermore, some clinical gait features are often triggered by free-living environmental challenges and cannot be replicated in a controlled laboratory environment. Continuous monitoring in unsupervised habitual environments is essentially useful for enhancing diagnostics, monitoring disease progression, measuring the efficacy of intervention, and predicting the risk of falls and cognitive decline . Portable and wearable sensor systems have been developed to allow for the measurement/estimation of the GRFs or foot pressure distributions in a real environment outside a laboratory or in daily life [ , , , , , , , , , ]. The output of these systems provides the GRF and its point of application (centre of pressure (COP)) for a variety of applications. However, systems such as force sensitive resistors are still relatively expensive, quite cumbersome to wear and prone to mechanical damage, with the result of a limited applicability outside the specialised research field. Inertial measurement units (IMU) are sensors suitable for long-term monitoring of gait information , which would allow overcoming these limitations. It will be of significant importance if the GRF information can be reconstructed from the IMU data. The problem of the estimation of GRFs without using force plates has been tackled by other authors [ , , , ], some of whom yielded results using IMU recordings [ , , , ]. However, the applications of these approaches have certain constraints, since most of them require modelling of biomechanical systems to a certain extent so that some data of the body segments (such as limbs) of the particular subject is required such as masses, dimensions, and centres of masses. These are therefore heavily subject-dependent and require extensive knowledge for correct modelling. In some cases, such approaches also require data from many IMU sensors; for example, 16 sensors were used for data collection in one such study , which limits the applicability within a real-life context. One inertial sensor has been used to estimate some characteristics of GRF, such as the peak values and mean value of the GRF, rather than a full profile of the GRF in a gait cycle [ , , ]. In these studies, the accelerations were directly used as indicators of the GRF. 
However, the dynamics between the accelerations and the GRF have not been explored. To this end, a novel and generic proxy measurement method is proposed that does not rely on biomechanical modelling of the movement. The NARMAX (nonlinear auto-regressive moving average models with exogenous inputs) method [ , , , ] is adopted to identify the dynamic relationships between the proxy variables, i.e., the accelerations from IMUs, and the measured vertical GRFs (vGRFs). The NARMAX methods provide linear and/or nonlinear dynamic relationships and models between user-defined inputs and outputs, both of which pertain to our problem. The aim of this study is to introduce a new generic algorithm to provide a proxy measure for unobservable variables. In this specific application, wearable IMU sensors are used to measure the accelerations at different body levels. The acceleration signals are used as the proxy variables, and the dynamic relationship between the vGRF and the accelerations is explored. The new algorithm is then used to estimate the vGRF from these accelerations. Based on the new proxy measurements, the predicted vGRFs from the developed dynamic models are compared with and evaluated against vGRF data obtained simultaneously from pressure insoles. The proxy measurement of vGRFs is studied for both outdoor specified (controlled straight) walking and outdoor free walking. Nine healthy volunteers (3 females, 6 males, age 28 ± 3 years old) were recruited for the study. Ethical approval was obtained from the University of Sheffield's Research Ethics Committee, and the research was conducted according to the Declaration of Helsinki. All participants provided written informed consent. Each participant was asked to wear three IMUs (Opal™, APDM; weight 22 g, size 48.5 mm × 36.5 mm × 13.5 mm) containing a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer. One IMU was positioned on the lower trunk at the fifth lumbar vertebra (L5) with its sensing axes X, Y, and Z pointing downward, to the left, and forward, respectively. The other two IMUs were positioned at the seventh cervical vertebra (C7) and forehead (FH), with X, Y, and Z pointing downward, to the right, and backward, respectively. The devices measured accelerations at a sampling frequency of 128 Hz, and the accelerometer range was set at ±6 g. It is worth emphasising that only the 3-axis accelerations were used in the study, although the sensors can also provide angular velocity and orientation information. Hence, the proxy measures are free from the limitations related to gyroscope drifts and magnetic disturbances. Two pressure-sensing insoles (F-Scan 3000E, Tekscan™, South Boston, MA, USA) were used to obtain the vGRF reference. The insoles were cut to fit tightly into each participant's shoe. They were calibrated using a step calibration technique according to the manufacturer's instructions. The sampling frequency was set at 128 Hz. A Fourier analysis of the vGRF time series showed that all main frequency components had a frequency lower than 10 Hz. Therefore, a sampling frequency greater than 64 Hz was deemed high enough to characterise the main frequency spectrum. Subjects completed two walking tasks in the conditions detailed in . The IMU and pressure insole data were collected during each task. For the outdoor free walking task, participants were instructed to walk freely in the city centre without any restrictions regarding route or walking speed, while avoiding stairs.
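As an aside on the sampling-rate check mentioned above, the short Python sketch below illustrates how the spectral content of a vGRF time series can be inspected to confirm that the main components lie below 10 Hz. It is a minimal, hedged example: the function name, the band threshold argument, and the synthetic vGRF-like signal are illustrative assumptions, not part of the original study.

```python
import numpy as np

def dominant_band_fraction(vgrf, fs=128.0, band_hz=10.0):
    """Fraction of the non-DC spectral power of a vGRF series below band_hz.

    A value close to 1.0 supports the claim that the main frequency
    components lie below band_hz, so a sampling rate well above the
    Nyquist limit (2 * band_hz) is sufficient.
    """
    x = np.asarray(vgrf, dtype=float)
    x = x - x.mean()                      # remove the DC offset (body weight)
    spec = np.abs(np.fft.rfft(x)) ** 2    # one-sided power spectrum
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    total = spec[1:].sum()                # skip the DC bin
    in_band = spec[(freqs > 0) & (freqs < band_hz)].sum()
    return in_band / total if total > 0 else 0.0

if __name__ == "__main__":
    # Synthetic vGRF-like signal: ~1.8 Hz step frequency plus one harmonic.
    fs = 128.0
    t = np.arange(0, 30, 1 / fs)
    vgrf = 700 + 200 * np.sin(2 * np.pi * 1.8 * t) + 80 * np.sin(2 * np.pi * 3.6 * t)
    print(f"power below 10 Hz: {dominant_band_fraction(vgrf, fs):.3f}")
```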
For the outdoor controlled walking, participants were asked to walk back and forth along a 50 m walkway at their preferred speed. More details about the protocol are available in . The outdoor free walking conditions had the potential of recording the participant's turns in addition to straight-line walking, both of which were included in the analysis. Data recorded during resting or transitory periods were excluded from the analysis. A vertical jump was used as a synchronising event between the IMUs and the insoles in order to realign the signals coming from the two instruments at the beginning of each trial. The equivalency of the nominal sampling frequency of the two instruments was verified, and the mismatch was corrected for the 15 min outdoor free walking (OFW) data by realigning the signals every 2 min. This procedure was not needed in the outdoor controlled walking (OCW) tasks, which lasted less than 2 min.

2.1. The General Idea of the Proxy Measure

Once the accelerations and vGRF signals have been recorded, the relationship between the accelerometer signals and the insole-measured vGRF can be modelled. The higher-order cross-correlation nonlinear detection method was applied and indicated that a linear model is not sufficient to describe the relationships between these two types of signals. Therefore, a nonlinear dynamic model was developed in this study. The accelerations are defined in the sensor frame, while the vGRFs are defined in a ground frame. The coordinate transformation can be represented by a linear map: (1) $\vec{a}_{\mathrm{global}} = T(\vartheta)\,\vec{a}_{\mathrm{sensor}}$, where the orientation $\vartheta$ of the sensor frame is known. The relationship between the vGRF and the accelerations can be described in the global frame by a function given as (2) $vGRF = f_0(\vec{a}_{\mathrm{global}}) = f_0(T(\vartheta)\,\vec{a}_{\mathrm{sensor}})$. However, the transform matrix $T(\vartheta)$ may change over time, due to changes in the orientation of the sensor. It is then possible to define a relationship based on the orientation and accelerations as (3) $vGRF = f_1(\vartheta, \vec{a}_{\mathrm{sensor}}, g)$, where $g$ is a constant representing gravity. The effect of gravity can be considered as an implicit parameter in the model, as detailed in the discussion. A further assumption is that the entries of the coordinate transformation matrix can be expressed as functions of the time-varying accelerations $\vec{a}_{\mathrm{sensor}}(t)$ and of the associated time delays $\vec{a}_{\mathrm{sensor}}(t-\tau_i)$, $\tau_i > 0$. Equation (3) can then be rewritten as a function of the accelerations: (4) $vGRF(t) = f(\vec{a}_{\mathrm{sensor}}(t), \vec{a}_{\mathrm{sensor}}(t-\tau_1), \dots, \vec{a}_{\mathrm{sensor}}(t-\tau_L))$. For the time series, the discrete-time relationship reads as (5) $vGRF(k) = f(\vec{a}_{\mathrm{sensor}}(k), \vec{a}_{\mathrm{sensor}}(k-1), \dots, \vec{a}_{\mathrm{sensor}}(k-L))$, where $L$ represents the maximum time delay and $k$ denotes the $k$th sample instance. The functional form $f(\vec{a}_{\mathrm{sensor}})$ is usually unknown. However, according to the Stone–Weierstrass-like approximation theorems , the function can often be approximated as the linear superposition of a set of known basis functions $\phi_i(\vec{a}_{\mathrm{sensor}})$ as (6) $vGRF = \sum_{i=1}^{n} \theta_i\,\phi_i(\vec{a}_{\mathrm{sensor}}(k), \vec{a}_{\mathrm{sensor}}(k-1), \dots, \vec{a}_{\mathrm{sensor}}(k-L))$. The model structure and the associated parameters $\theta_i$ can be learned using the orthogonal forward regression algorithm. Based on the model structure, the accelerations measured in the sensor reference frame can be directly used without any frame transformation.
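To make Equations (5) and (6) concrete, the Python sketch below builds lagged, second-order polynomial regressors from a three-axis acceleration record and estimates the weights by ordinary least squares. It is only an illustration of the basis-function expansion under stated assumptions: the study selects a sparse subset of terms with (iterative) orthogonal forward regression rather than fitting the full dictionary, and the function names, array shapes, and the least-squares fit are illustrative stand-ins.

```python
import numpy as np

def lagged_polynomial_features(acc, max_lag, degree=2):
    """Build basis functions in the spirit of Equation (6): current and lagged
    acceleration samples plus their pairwise (second-order) products.

    acc : (N, 3) array of sensor-frame accelerations (a_x, a_y, a_z).
    Returns X of shape (N - max_lag, n_terms) and the row offset max_lag.
    """
    acc = np.asarray(acc, dtype=float)
    n = acc.shape[0]
    # first-order lagged terms, aligned so that row k corresponds to sample k
    cols = [acc[max_lag - lag: n - lag, j]
            for lag in range(max_lag + 1)
            for j in range(acc.shape[1])]
    X = np.column_stack(cols)
    if degree >= 2:
        # second-order products of all linear terms (upper triangle only)
        quads = [X[:, i] * X[:, j]
                 for i in range(X.shape[1])
                 for j in range(i, X.shape[1])]
        X = np.column_stack([X] + quads)
    return X, max_lag

def fit_vgrf_model(acc, vgrf, max_lag=18):
    """Least-squares stand-in for the forward regression step: estimate theta in
    vGRF(k) = sum_i theta_i * phi_i(acc(k), ..., acc(k - max_lag))."""
    X, offset = lagged_polynomial_features(acc, max_lag)
    y = np.asarray(vgrf, dtype=float)[offset:]
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta, X @ theta  # parameters and in-sample prediction
```

In the paper the dictionary is built from the six left/right acceleration components introduced in the next sections; a plain three-axis signal is used here for brevity.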
The underlying nonlinear relationship, in fact, will be learned by the iterative orthogonal forward regression (iOFR) algorithm. Thereafter, the vGRF can be predicted from the obtained model using only the accelerometer outputs. The results in validate the assumptions in our method and show that this kind of model structure is capable of describing the underlying nonlinear dynamic connections between the acceleration and vGRF even in an outdoor free-style walking scenario.

2.2. Decomposition of the Sensor Signals

Since we wanted to use the acceleration readings $a$ from one single wearable sensor as a proxy measurement for the vGRFs of both feet, a key step in this study was to separate the single input variable $a$ into two components: one reflects walking when the left leg has dominant pressure on the ground ($a_{\mathrm{left}}$), and the other reflects walking when the right leg has dominant pressure on the ground ($a_{\mathrm{right}}$). The two components were defined by introducing two membership functions as follows: (7) $w_{\mathrm{left}} = \dfrac{GRF_{\mathrm{left}}}{GRF_{\mathrm{left}} + GRF_{\mathrm{right}}}$, $w_{\mathrm{right}} = \dfrac{GRF_{\mathrm{right}}}{GRF_{\mathrm{left}} + GRF_{\mathrm{right}}}$. The left and right components can then be split by the defined membership functions as (8) $a_{\mathrm{left}} = a \cdot w_{\mathrm{left}}$, $a_{\mathrm{right}} = a \cdot w_{\mathrm{right}}$, where $a$ denotes the acceleration recordings and "$\cdot$" represents the point-wise multiplication operation. An approximation to the membership functions can be estimated using gait events such as the IC (initial foot contact) and FC (final foot contact) instants calculated from the insole pressure sensor information. The membership is set to 1 in the single support phase and 0 in the swing phase. These two values are linearly connected in the double support phases. The left and right membership functions can then be approximately obtained, and the acceleration recordings can be decomposed into the left and right components. shows the calculated and approximated membership functions for the IMU signal decomposition in two gait cycles. The gait events can also be obtained from an extra inertial sensor at the pelvis or shank level , which relaxes this limitation in applications. In this study, the gait events were detected using the ground reaction force with a 10 N threshold to avoid errors possibly introduced by calculating the gait events from the IMU.

2.3. The Proxy Model Development

Once the left and right components of the acceleration signals were obtained, a special type of NARMAX model , based on expansions of the input only, giving essentially a Volterra series expansion or a nonlinear moving average (NMA) model , was used for the derivation of the vGRF model. This expansion provides a general representation of nonlinear dynamics, where only the nonlinearities in the input variables are involved. This simplifies the prediction of the vGRF from only the current and past acceleration recordings. A discrete NMA or Volterra series model can be defined as (9) $y(k) = \sum_{n=0}^{N} y_n(k)$, $y_n(k) = \sum_{m_1, \dots, m_n} h_n(m_1, m_2, \dots, m_n) \prod_{i=1}^{n} u(k - m_i)$, where $u(k)$ and $y(k)$ are the system input and output, respectively, and $m_i$ represents the time delays. The $n$th kernel $h_n(m_1, m_2, \dots, m_n)$ characterises the weight of the $n$th nonlinearity in the system response.
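A minimal Python sketch of the decomposition in Equations (7) and (8) is given below. It assumes that the left and right vertical GRFs (e.g., from the pressure insoles) are available at the same sampling instants as the acceleration channel; the function name and the small epsilon guard for swing/flight samples are illustrative choices, not part of the original paper.

```python
import numpy as np

def split_left_right(acc, grf_left, grf_right, eps=1e-9):
    """Decompose a single acceleration channel into left/right components
    using the membership functions of Equations (7)-(8).

    acc       : (N,) acceleration samples from one sensor axis.
    grf_left  : (N,) vertical GRF under the left foot (e.g., from insoles).
    grf_right : (N,) vertical GRF under the right foot.
    Returns (a_left, a_right).
    """
    grf_left = np.asarray(grf_left, dtype=float)
    grf_right = np.asarray(grf_right, dtype=float)
    total = grf_left + grf_right
    # membership functions; zero where neither foot is loaded
    w_left = np.divide(grf_left, total, out=np.zeros_like(total), where=total > eps)
    w_right = np.divide(grf_right, total, out=np.zeros_like(total), where=total > eps)
    acc = np.asarray(acc, dtype=float)
    return acc * w_left, acc * w_right
```

In the study itself the membership functions are approximated from the IC and FC gait events, with linear ramps across the double support phases, rather than computed from the raw GRF ratio as in this sketch.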
The discrete Volterra series model can be rewritten in the general NARMAX form as follows: (10) $y(k) = F[u(k-1), u(k-2), \dots, u(k-n_u)]$, where $F$ is a multivariate polynomial function and $n_u$ denotes the maximum time delay of the input. In the context of GRF prediction, the left and right components of the three measured perpendicular accelerations $\{a_x, a_y, a_z\}$ defined in the IMU frame are the inputs, and the vGRFs $\{GRF_{\mathrm{left}}, GRF_{\mathrm{right}}\}$ from the left or right foot are the outputs. That is, a total of six inputs $\{a_{x,\mathrm{left}}, a_{x,\mathrm{right}}, a_{y,\mathrm{left}}, a_{y,\mathrm{right}}, a_{z,\mathrm{left}}, a_{z,\mathrm{right}}\}$ was included in the model. Once the maximum time delay is specified, the model structure can be constructed based on Equation (3). However, the model can include a huge number of terms; for example, the number of terms in the model is 5778 when the maximum time lag is 18 samples. This may lead to over-fitting of the data or numerical ill-conditioning in parameter estimation. The OFR algorithm and its variants have been proven able to efficiently determine a sparse model structure and have been used in a wide range of applications . Here, an improved OFR algorithm, an iterative OFR, was used to identify the model structure and explore the relationship between the desired vGRF and the proxy measurements . A more detailed discussion of the iterative OFR algorithms can be found in . Once a reliable model is built, the vGRF can be reconstructed with the chosen wearable sensor information only. The final model structure in this study included only the nonlinear moving average part of the proxy measurements, and no information about the output, i.e., the vGRF, was used. This made the prediction of the GRF much easier, and the prediction error would not accumulate in the predicted GRF. The same procedure was applied to each of the 9 participants and to each of the two walking tasks to build subject-specific proxy models. The subject-specific models produced more accurate estimation of the vGRF than an average model built by pooling all subject data, because subject- and task-specific information was characterised by the models.

2.4. Accuracy Analysis

To assess the performance of the models, the predicted GRFs were compared with the pressure insole recordings. Following the definitions given in , the differences were quantified using the root mean squared error (RMSE): (11) $RMSE = \sqrt{\dfrac{1}{N}\sum_{k=1}^{N}\left(y(k) - \hat{y}(k)\right)^2}$, where $y(k)$ and $\hat{y}(k)$ are the predicted and measured vertical GRFs, respectively, and $N$ is the number of samples used for comparison. The relative RMSE (rRMSE) with respect to the average peak-to-peak amplitude of the two signals was also used to quantify the performance of the prediction: (12) $rRMSE = \dfrac{RMSE}{\left(\max(y(k)) - \min(y(k)) + \max(\hat{y}(k)) - \min(\hat{y}(k))\right)/2}$. The ranges for the maximum and minimum were calculated over the number of samples used for validation. The predicted vGRFs were compared with the insole-measured reference signals for each gait cycle. The mean and standard deviations of the prediction errors over the gait cycles were compared for each individual and each task. A Student's t-test was adopted to analyse the effects of different walking tasks, sensor locations, and inter-subject variability on the accuracy of the proxy measure.
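The two error metrics of Equations (11) and (12) are straightforward to compute; a short Python sketch is given below. The function names are illustrative, and the per-gait-cycle aggregation used in the paper would simply apply these functions cycle by cycle.

```python
import numpy as np

def rmse(y_pred, y_meas):
    """Root mean squared error, Equation (11)."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_meas = np.asarray(y_meas, dtype=float)
    return np.sqrt(np.mean((y_pred - y_meas) ** 2))

def rrmse(y_pred, y_meas):
    """Relative RMSE, Equation (12): RMSE normalised by the mean
    peak-to-peak amplitude of the predicted and measured signals."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_meas = np.asarray(y_meas, dtype=float)
    p2p = (np.ptp(y_pred) + np.ptp(y_meas)) / 2.0
    return rmse(y_pred, y_meas) / p2p
```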
The data from the OCW and OFW tasks were split into a training set and a test set. Half of the data were used to identify the model, and the remaining half were used to validate the model and analyse the prediction errors. The total OCW data included 23,040 samples, corresponding to about 172 gait cycles after removing the resting or transitory periods. The total OFW data included 92,160 samples, about 688 gait cycles. It is worth emphasising that all of the prediction error analysis given below was based on the test set only, excluding the data used for training the models; the results therefore illustrate the predictive ability of the obtained proxy model. A large training set was used to capture gait variability as rich as possible, in order to improve the model's predictive performance across different cases.

3.1. Proxy Measurement of vGRF Based on the Waist Level Sensor Signal

The six split acceleration signals $\{a_{x,\mathrm{left}}, a_{x,\mathrm{right}}, a_{y,\mathrm{left}}, a_{y,\mathrm{right}}, a_{z,\mathrm{left}}, a_{z,\mathrm{right}}\}$ were used to fit the left and right vGRF with the NARMAX Model (10). The iterative OFR algorithm was used to detect the model structure and estimate the associated parameters. Half of the data were used to identify the model, and the other half were used to validate the predictive power of the obtained model.
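The sketch below shows, under stated assumptions, how the half/half data split and a sparse term-selection step could look in Python. The greedy forward selection used here (adding at each step the dictionary column most correlated with the current residual, then refitting by least squares) is only a rough stand-in for the iterative orthogonal forward regression actually used in the study; the function names are hypothetical, and the default of 64 selected terms matches the model size reported in the next paragraph but is otherwise an arbitrary choice here.

```python
import numpy as np

def forward_select(X, y, n_terms=64):
    """Simplified greedy forward selection (a stand-in for iOFR): at each step
    the unused column with the largest squared correlation with the current
    residual is added, then the model is refitted by least squares."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    selected, residual, theta = [], y.copy(), None
    for _ in range(min(n_terms, X.shape[1])):
        scores = np.full(X.shape[1], -np.inf)
        for j in range(X.shape[1]):
            if j in selected:
                continue
            xj = X[:, j]
            denom = np.dot(xj, xj)
            if denom > 0:
                scores[j] = np.dot(xj, residual) ** 2 / denom
        best = int(np.argmax(scores))
        selected.append(best)
        theta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ theta
    return selected, theta

def half_split_fit(X, y, n_terms=64):
    """Identify the model on the first half of the samples and predict on the
    second half, mirroring the train/test split used in the study."""
    half = X.shape[0] // 2
    sel, theta = forward_select(X[:half], y[:half], n_terms)
    y_hat_test = X[half:, sel] @ theta
    return sel, theta, y_hat_test
```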
A cross-correlation analysis between the waist acceleration and the total vGRF indicated that most of the time delays between the acceleration and the total vGRF were less than 18 samples. Hence, a maximum time lag of 18 was used to build the NARMAX model. All of the left and right components of the waist-level accelerations with a time lag of less than 18 were used to construct a term dictionary consisting of all of the combinations of $\{a_{x,\mathrm{left}}(k), \dots, a_{x,\mathrm{left}}(k-18), a_{x,\mathrm{right}}(k), \dots, a_{x,\mathrm{right}}(k-18), \dots, a_{z,\mathrm{right}}(k-18)\}$ up to second-order polynomial terms, that is $a_i^p(k-n_i)\,a_j^q(k-n_j)$, $0 \le p+q \le 2$, $0 \le n_i, n_j \le 18$, where $a_i, a_j \in \{a_{x,\mathrm{left}}, a_{x,\mathrm{right}}, a_{y,\mathrm{left}}, a_{y,\mathrm{right}}, a_{z,\mathrm{left}}, a_{z,\mathrm{right}}\}$. A 64-term NARMAX model was obtained for both the left and right vGRFs. A typical proxy model prediction of the vGRFs for the OCW and OFW tasks is shown in , based on the data from Participant No. 1. The proxy measures are significantly correlated with the insole measures, with cross-correlation coefficients ρ = 0.993 ( p < 0.01) for OCW and ρ = 0.990 ( p < 0.01) for OFW. Similar results were obtained using the data from the other participants. More detailed prediction errors for the OCW and OFW tasks are shown in and . The prediction errors for full gait cycles, single support phases, double support phases, and three critical points (two vertical peaks, VP1 and VP2, and one trough value, TR, in ) are listed. The mean relative prediction errors (in rRMSE %) were less than 5.2% for OCW. Generally, the prediction errors for OFW were less than 7.0%, which is greater ( p = 0.01) than for OCW. This may be because both the walking direction and speed were restricted in the OCW, so the consistency between the training and the test data was better than in the OFW data; therefore, the predictions on the test data in OCW were more accurate than those in the OFW cases. The average prediction errors over all participants were 3.8% and 5.0% for OCW and OFW, respectively. The highest prediction error occurred at VP2 for both the OCW and OFW cases. The average prediction errors at VP2 were 4.6% and 6.2%, respectively, which were larger than the overall prediction error ( p = 0.16 and 0.14, respectively). This means the model prediction at VP2 was less accurate than the overall performance.

3.2. The Effect of Sensor Location

The accelerations measured at the other two locations, the cervical (C7) and forehead (FH) levels, were also used as proxy variables to estimate the vGRFs. The results are shown in , , and . The overall mean of the prediction error for OCW based on the C7 and FH accelerations was 4.0% and 4.2%, respectively, which was greater ( p = 0.67 and p = 0.29, respectively) than the prediction error of 3.8% produced by the L5 sensor. This could be because the movement of the waist in the OCW was more stable. Similar results can be observed in the OFW case: the overall mean of the prediction error for OFW based on C7 and FH was 5.6% and 6.0%, respectively, which was greater ( p = 0.17 and 0.08) than the prediction error produced by the L5 sensor. In sum, the L5 proxy measure performed best among the three. The differences among the performances of the different sensors were not significant ( p > 0.29) in the OCW cases, while there were relatively larger differences in the performances for the OFW tasks.
The comparison of the full gait cycle prediction errors based on the different sensor locations is summarised in .

3.3. Inter-Subject Variability

The inter-subject variances for OCW were small for all three models (0.68, 0.19, and 0.55). In the OFW cases, the inter-subject variances were relatively small (0.73 and 0.65) using the L5 and C7 models, whereas the OFW variance based on the FH model was 1.36, greater than those of the other two sensor positions. Hence, the C7 model maintained a low inter-subject variance for both OCW and OFW, and its performance was more stable than that of the other two proxy models. This can also be inferred from the results shown in . In summary, the L5 sensor-based proxy model showed the minimum model prediction errors and the C7 model the smallest inter-subject variability. There were no significant differences in the performances of the proxy models based on the three different sensor locations.
When analysing an individual's gait, knowledge of the GRFs is very important as an input for the joint mechanics . The gold standard method of measuring GRFs is based on the use of a force plate. Instrumented treadmills can overcome the restrictions on the number of consecutive gait cycles that can be analysed. Predicting GRFs using motion data or kinematic data of the subjects has been another focus of research [ , , , , , , , , , ]. However, most of these methods are restricted to gait laboratory settings. In this paper, we have demonstrated a low-cost proxy measurement method to accurately predict the vertical GRFs using only one inertial sensor. This study aimed at reconstructing the vGRF under each foot using as few sensor recordings as possible, preferably from one wearable sensor. In this way, we could achieve a good prediction of the vGRF at extremely low cost. To this end, the accelerations recorded at three different levels were investigated: the forehead, the base of the neck, and the lumbar spine. It was shown that the L5 model has smaller prediction errors and relatively low inter-subject variability. Another advantage of using the L5 sensor is that the gait events, which were used in splitting the acceleration signals, can be detected from the waist-level IMU information based on the method in , and no extra sensor information is needed.
The quality (prediction accuracy) of the proxy measures of the vGRF is comparable to that of the direct measurement of the vGRF and of measurement based on the inverse dynamics method . In , the ground reaction kinetics were estimated, including three ground reaction forces, two centres of pressure, and the vertical torque; the average normalised prediction error (rRMSE) for the vGRF was less than 3.5% in the intra-day single-task case and less than 4.2% in the multi-task case. In , the relative RMSE prediction error for the vGRF was about 6.0%. Both of the above studies were conducted in an indoor condition and with a fixed walking speed. Our average prediction error for the fixed-walking-speed tasks is about 3.8%, and that for free walking without speed restriction is about 5.0%. Furthermore, our study was conducted in an outdoor condition, which is more challenging. In the proposed method, only the accelerations from the inertial sensor were used to build the model. Other sensor information, for example the angular velocities, was also tested in the study. The results showed that including the angular velocities is of little help in improving prediction accuracy; on the contrary, using less information reduces the complexity of the model and increases its robustness. A preliminary study showed that the angular velocities played an important role in the prediction of the COP. The technique used in this study decomposes the acceleration signals into left and right components for the purpose of predicting both the left and right vGRFs at the same time. This procedure further enhanced the correlations between the model predictions and the vGRFs and is critical for the prediction performance of the proposed approach. The decomposition conducted in this paper was based on the gait event information, e.g., heel-strike and toe-off. We chose to extract this information from the pressure insoles, which are more accurate for this purpose, in order to isolate a possible source of additional error from the final estimate of the model outputs. This can be a limitation of the proposed method because this information may not be readily available. However, this information can be obtained from inertial sensors located on the pelvis or ankles in real applications . For example, the inertial sensor signals at the L5 level can be used both for splitting the data and for building the proxy model. We have shown how the NARMAX modelling approach can be used to identify a simple, but nonlinear, proxy model for predicting the vGRFs of both feet during normal daily outdoor walking. The task investigated in this paper could have been achieved using other machine learning approaches, such as supervised artificial neural networks (ANNs). For instance, ANNs have been used to predict the joint load in motion and the ground reaction forces during gait . However, these approaches tend to be slow in learning, especially when using large input spaces, and, more importantly, generate opaque models that are difficult to visualise and analyse. In contrast, the NARMAX modelling methodology produces transparent mathematical functions that are directly related to the task. The model needs to be validated for the prediction of ground reaction forces during more daily living activities. The application of the developed method to predicting mediolateral and anterior–posterior ground forces is of interest. It is noteworthy that this study involved only young healthy volunteers, whereas upper body movements and stability tend to change with ageing and pathologies .
Therefore, further investigations are needed to translate the results of this study to other populations, for example other age groups or groups with pathological gaits. However, the method is expected to be applicable to other groups because of the subject-specific modelling procedure. A proxy measurement method is used in this study, where the vGRFs are indirectly estimated through measuring the proxy variables, namely the accelerations, which are much easier to obtain in out-of-laboratory settings. The most common use of proxy measurement is that of substituting a measurement of one variable that is inexpensive and/or easily obtainable for a different variable that would be more difficult or costly, if not impossible, to collect. Proxy measurements have been widely used in the social sciences, but rarely in engineering applications. Therefore, the methodology and results in this study could have important implications beyond ground reaction force prediction, with many applications in medicine. A similar proxy measurement strategy can be implemented in other engineering applications that involve unobservable and/or expensively measurable states and variables. In this study, a proxy measurement method has been proposed to estimate the vGRFs in non-laboratory settings. Inertial sensor information has been used as the proxy variable, and the nonlinear dynamic relationship between the vGRF and the accelerations has been revealed using a NARMAX model. The proposed method is easy to implement and provides a low-cost but reliable proxy measure of vGRFs in non-laboratory settings. This makes the long-term monitoring of gait characteristics in a free-living condition possible. Another advantage of the new method is that it provides an explicit model of the dynamic relationships between the accelerations at different body levels and the vGRFs. This can be used for further model-based analyses, for example nonlinear spectral analysis, to explore new gait characteristics which cannot be obtained using simple statistical methods . In future research, the obtained models will be used for predicting ground reaction forces during various activities of daily living. The application of the developed method to predicting mediolateral and anterior–posterior ground forces is of interest. Further studies will involve other age groups or disease-related ground reaction force predictions, such as in Parkinson's disease. While the present study focuses on using the new proxy algorithm in the application of GRFs, the ideas are applicable over a very wide spectrum of problems and can be used for generic proxy measurement reconstruction of other immeasurable signals.
HPV Biomarkers in Oral and Blood‐Derived Body Fluids in Head and Neck Cancer Patients
761ac5d8-3e27-4764-9ef2-28792871d769
11886502
Digestive System[mh]
Introduction Head and neck cancers (HNC) comprise several malignancies arising in the oral cavity, oropharynx, larynx, and hypopharynx. These neoplasms are associated with established risk factors such as tobacco use, alcohol consumption, and human papillomavirus (HPV) infection . High‐risk (HR) HPV genotypes—including HPV16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59—are linked to a broad spectrum of human cancers , whereas low‐risk (LR) genotypes such as HPV6 and HPV11 are primarily responsible for benign genital warts and recurrent respiratory papillomatosis , a rare but debilitating condition usually affecting the larynx. HPV‐associated HNC primarily occurs in the oropharynx, where the causal role of HPV is well‐established . Globally, the incidence of HPV‐associated oropharyngeal cancer (OPC) is rising , with HPV16 detected in approximately 80% or more of these cases . To date, there are no validated screening protocols for the early detection of HPV‐associated HNC, which impacts early diagnosis and clinical outcomes. Current methods for determining HPV status in cancer tissues include HPV DNA and E6/E7 mRNA assays, as well as p16 INK4a immunohistochemistry (IHC) staining as a surrogate marker of HPV infection . The 8th Edition of the TNM Classification of Malignant Tumors recommends p16 INK4a detection alone or in combination with an HPV DNA assay to identify HPV‐associated OPC for improved cancer classification and treatment planning. Beyond traditional HPV diagnostic assays, additional HPV biomarkers traceable in body fluids, such as saliva and blood, are being explored to improve the early detection of HPV‐associated HNC [ , , , ]. Advantages of using body fluids over tumor biopsies include: (i) easier integration into diagnostic procedures, (ii) no requirement for cancer tissue, and (iii) minimally invasive sample collection. Among blood‐based biomarkers, circulating tumor (ct) HPV DNA has shown promise for the early diagnosis and relapse monitoring of HPV‐associated OPC [ , , , ]. Moreover, studies on oral HPV DNA in saliva have found a higher positivity rate in patients with HPV‐associated HNC compared to controls [ , , , , ], with some reports detecting oral HPV DNA up to 3.9 years before an OPC diagnosis. Persistent oral HPV DNA positivity following treatment has been linked to poor prognosis and cancer recurrence . To date, the combined analysis of body fluid biomarkers in HPV‐associated cancers remains mostly underexplored [ , , ]. Only a limited number of studies have examined the combined analysis of viral biomarkers, but initial findings suggest potential for improved early detection of HPV‐associated OPC at both initial diagnosis and recurrence . Therefore, further studies are needed to develop and validate screening methods and diagnostic algorithms based on body fluid biomarkers for the early detection of HPV‐associated HNC. In this proof-of-principle study, we assessed whether HPV DNA positivity in body fluids could predict HPV positivity in OPC tissues and evaluated the effectiveness of each HPV biomarker in body fluids as a minimally invasive tool. These biomarkers could potentially enhance diagnostic algorithms for the early detection of HPV‐associated OPC.
Materials and Methods

2.1 Study Group, Clinical Information, and Biological Samples

The study analyzed formalin‐fixed paraffin‐embedded (FFPE) tissues, plasma and oral (swabs and gargles) samples collected from 142 patients, referred to the Department of Otolaryngology and Head and Neck Surgery of the European Institute of Oncology, IEO, IRCCS, consecutively enrolled from 2019 to 2022. Inclusion criteria: (i) age ≥ 18 years, (ii) suspected HN disease, (iii) patient with clinical, radiological and/or cyto‐histological diagnosis of HNC, and (iv) no previous treatments. Specimens from 142 patients were analyzed for HPV DNA, including FFPE tissues (available for 90 patients), plasma ( n = 141), gargles ( n = 141), and exfoliated oral cells from swabs ( n = 142) (Figure ). Matched samples included 140 sets of oral gargles, swabs, and plasma, and 89 sets of oral gargles, swabs, plasma, and FFPE tissues. All cases were clinically (c) staged and, when surgery was performed as treatment, pathologically (p) staged, according to the 8th TNM edition . The study aimed to assess a new methodology as an early diagnostic tool to be applied before treatment. Therefore, clinical staging (cTNM) was used for analysis independently of the final treatment, rather than the pathological staging (pTNM), as reported in Supporting Information S1: Table . Discrepancies between clinical and pathological staging (e.g., cTumor (T) > pT), and variations in the number of collected biological samples, arose from instances where clinical suspicion of cancer was not confirmed by final histological examination (e.g., non‐HNC cases). Additional discrepancies occurred in patients receiving radio‐chemotherapy without surgery, where pTNM staging was not feasible. Furthermore, the collection of tissue, blood and oral gargle samples from all patients was hindered by incomplete patient compliance and logistical constraints imposed by the COVID‐19 pandemic.

2.2 Plasma and Oral Samples Collection

Ten milliliters of whole blood were collected from each patient into BD Vacutainer K2 EDTA tubes (BD Biosciences, San Jose, CA). For oral gargle specimens, 15 mL of sterile saline solution (0.9%: 9 g NaCl in 1000 mL of sterile water) was swirled in the oral cavity for 15 s and then collected in a 50 mL sterile falcon tube . Exfoliated oral cells were also collected by swabbing the entire buccal surface (e.g., the alveolar ridges, lateral tongue, the tonsillar areas and base of the tongue) using a specific swab (ORAcollect‐DNA kit; DNA‐genotek Inc.). All samples, including blood, gargle and swab specimens, were immediately frozen at −20°C. The collection process was conducted by two clinicians (MT and RDB) (see Supporting Information materials for further details and specifications).

2.3 DNA Extraction From FFPE Tissues

FFPE tissue blocks (90 samples) were sectioned, as previously described , and the DNA was extracted as reported in the Supporting Information material [ , , ].

2.4 Circulating Tumor DNA Extraction From Plasma Samples

Circulating cell-free DNA was extracted from 500 μL of plasma using the QIAamp circulating nucleic acid kit (Qiagen, Hilden, Germany) as already described (see Supporting Information materials for further details and specifications).

2.5 DNA Extraction From Oral Samples

DNA from oral gargles and swabs was extracted using the Qiagen BioRobot EZ1 and the EZ1 DNA tissue kit according to the manufacturer's instructions (Qiagen, Hilden, Germany) (see Supporting Information materials for further details and specifications).
2.6 HPV DNA Detection by Bead‐Based Genotyping Luminex Assay (E7‐MPG)

The E7‐MPG assay was applied to analyze HPV DNA from 90 FFPE tissues, 141 plasma samples, 142 swabs, and 141 gargles as previously reported . This well‐validated molecular assay, which combines multiplex PCR and Luminex bead‐based technology (Luminex Corp., Austin, TX, USA) using type-specific primers, detects HR HPV genotypes [ , , ] (see Supporting Information materials for further details and specifications).

2.7 P16 INK4a Immunohistochemical (IHC) Staining and HPV DNA Genotyping by INNO‐LiPA HPV Genotyping Assay in OPC Samples

Since no universally accepted gold standard exists for assessing HR HPV in FFPE tissues, two commercial tests, widely applied in diagnostic settings of HPV‐associated cancers, were employed to analyze the 23 OPC samples and evaluate their concordance with the E7‐MPG molecular assay in identifying HPV‐associated OPC. Unstained sections from each FFPE block, prepared as previously described , were processed for Hematoxylin and Eosin (H&E) and p16 INK4a staining . Both assays were used as diagnostic tests to stratify HPV‐associated OPC, as described in Table and in Supporting Information S1: Table (see Supporting Information materials for further details and specifications).

2.8 Statistical Analysis

HPV DNA prevalence was estimated as the proportion of FFPE tissue, plasma, and oral samples that tested positive for any HPV DNA genotype by the multiplex PCR E7‐MPG assay, with corresponding binomial 95% confidence intervals (CIs). Demographic data were tabulated as percentages by HPV biomarker and their combinations. Fisher's exact test was used, with a two‐sided p‐value < 0.05 considered statistically significant. The concordance of HPV16 DNA status determined by E7‐MPG in body fluids with the HPV status in FFPE OPC tissues, analysed by the p16 INK4a IHC or INNO‐LiPA HPV genotyping tests, was estimated by means of Cohen's kappa coefficient with the corresponding 95% CI. To assess the level of agreement between HPV16 biomarkers in body fluids and in OPC FFPE tissues, interpretation of the Cohen's kappa statistic was established as follows: (i) < 0: poor, (ii) 0–0.20: slight, (iii) 0.21–0.40: fair, (iv) 0.41–0.60: moderate, (v) 0.61–0.80: substantial, (vi) 0.81–1.0: almost perfect . Since a suggested reference test to identify HPV‐associated HNC outside the oropharynx has yet to be determined [ , , ], the overall concordance between plasma, gargles and oral swabs versus FFPE tissues for HPV16 DNA detection in the other HNC sites (non‐oropharynx) was evaluated by HPV DNA detection (E7‐MPG assay). Statistics were performed with GraphPad Prism (Version 10.1.1) and the GraphPad online version ( https://www.graphpad.com ).
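As a small illustration of the prevalence estimates described above, the Python sketch below computes a proportion with an exact (Clopper–Pearson) binomial 95% CI. The original analysis was run in GraphPad and the exact CI method is not stated, so the interval produced here may differ slightly from the published values; the function name and the example counts are purely illustrative.

```python
from scipy.stats import beta

def prevalence_ci(positives, total, alpha=0.05):
    """Prevalence with an exact (Clopper-Pearson) binomial confidence interval."""
    p = positives / total
    lower = beta.ppf(alpha / 2, positives, total - positives + 1) if positives > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, positives + 1, total - positives) if positives < total else 1.0
    return p, lower, upper

if __name__ == "__main__":
    # Example using the reported plasma counts (17 HPV-positive of 141 samples);
    # the exact CI may differ slightly from the published interval depending on
    # the CI method used by the authors.
    p, lo, hi = prevalence_ci(17, 141)
    print(f"prevalence {100 * p:.1f}% (95% CI {100 * lo:.2f}-{100 * hi:.2f})")
```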
Results

3.1 Characteristics of the Patients' Cohort

A total of 142 patients were enrolled in the study, comprising 39 females (27.5%) and 103 males (72.5%). The mean age at diagnosis was 63.4 years (SD ± 11), with females (mean ± SD, 67.4 ± 13) being older than males (61.9 ± 10). Among the 142 patients, 43 (30%) were non‐smokers, 52 (37%) were ex‐smokers who had quit for at least 12 months before diagnosis and 47 (33%) were active smokers. Among the active smokers, 2 (2.1%) declared less than 10 packs/year (p/y), 3 (6.4%) smoked between 10 and 20 p/y and 43 (91.5%) were heavy smokers with a smoking history of more than 20 p/y. In the group of former smokers, 1 (2%) smoked less than 10 p/y, 12 (23%) between 10 and 20 p/y and 39 (75%) reported more than 20 p/y. Of the 142 patients, 132 were diagnosed with HNC, while 10 were non‐HNC cases, which included squamous intraepithelial neoplasia (SIN) III/carcinoma in situ of the larynx ( n = 1), dysplasia (SIN I–II) from the oral cavity ( n = 1), oropharynx ( n = 1) and larynx ( n = 3), oral cavity lymphoma ( n = 1), and non‐cancer cases ( n = 3). Figure represents the patients' cohort.
Among the 132 HNC cases, 37.9% ( n = 50) were oral cavity cancers, 34% ( n = 45) laryngeal cancers, 17.4% ( n = 23) OPCs, 7.5% ( n = 10) hypopharyngeal cancers, 0.7% ( n = 1) was an oropharyngeal/hypopharyngeal cancer and 2.3% ( n = 3) were occult tumors (Figure ). These HNC cases ( n = 132) were classified into different T stages according to the 8th TNM edition and were staged both clinically and pathologically when possible (Supporting Information S1: Table ). Regarding the 23 oropharyngeal tumors, 7 were located at the base of the tongue and 16 on the lateral wall (tonsil), as shown in Table . Examining smoking habits among the 23 OPC patients, 10 (43.5%) were non-smokers, 9 (39.1%) were ex-smokers who had quit smoking for at least 12 months before diagnosis, and 4 (17.4%) were active smokers (Table ). Among the active smokers, all 4 (100%) patients reported a smoking history of more than 20 p/y (Table ). 3.2 HPV DNA Prevalence in FFPE Tissues by Beads-Based Genotyping Luminex Assay (E7-MPG) A total of 90 available FFPE tumor tissue samples were retrieved and analyzed for HPV DNA. One sample was beta-globin negative and was excluded. HPV positivity was detected in 25.8% (23/89) of the remaining samples (Tables and ). Overall, HR HPVs were found in the majority of positive cases (91.3%, n = 21/23). Among the HR HPVs, HPV16 DNA was identified in 22.4% of cancer specimens (20/89, 95% CI: 14.97–32.25) (Figure ). Specifically, HPV16 DNA was detected in 10 OPCs (5 cT1, 3 cT2, 1 cT3, and 1 cT4a), 5 oral cavity cancers (1 cT2, 3 cT3, and 1 cT4a), 4 laryngeal cancers (cT1, cT2, cT3, and cT4, one each) and 1 hypopharyngeal cancer (cT2) (Figure ). Additionally, three other genotypes, HPV18, 6 and 68, were detected in FFPE tissue specimens from the oral cavity and larynx (Figure ). However, none of these genotypes was found positive by PCR in other biological samples, such as plasma or oral samples. 3.3 ctHPV DNA Prevalence in Plasma by Beads-Based Genotyping Luminex Assay (E7-MPG) A total of 141 plasma samples were analyzed for ctHPV DNA. All samples tested positive for beta-globin, confirming successful DNA extraction and PCR amplification. Among these, 124 out of 141 (88%) plasma samples were negative for HPV DNA, while 12% were HPV-positive (17/141; 95% CI: 7.57–18.55) (e.g., HPV16 and HPV35), as shown in Tables and and Figure . All HPV-positive plasma samples presented single infections, exclusively with HR HPV genotypes. HPV16 was the predominant genotype, detected in 16 out of 17 HPV-positive plasma samples. The majority of HPV16 ctDNA-positive plasma samples ( n = 13/16; 81.2%) were from OPC patients. Among these, six were classified as cT1 ( n = 6/13; 46.1%), five were cT2 (38.4%), one cT3 and one cT4a (7.7% each). The remaining three HPV16 ctDNA-positive plasma samples (18.8%) were collected from non-oropharyngeal sites (one hypopharyngeal, one laryngeal and one unknown primary tumor) (Tables and and Figure ). 3.4 HPV DNA Prevalence in Oral Specimens (Gargles and Swabs) by Beads-Based Genotyping Luminex Assay (E7-MPG) A total of 141 gargle samples and 142 oral swabs were analyzed for the presence of HPV DNA. All samples tested positive for beta-globin amplification, confirming adequate DNA quality. HPV DNA was detected in 29 out of 141 gargle samples (20.6%, 95% CI: 14.67–28.02) (Table ). Among these, 22 were single infections and 7 were multiple HPV infections.
Overall, HR HPV genotypes were predominantly identified (8 out of 14 HPV types) (Figure ), either as single or multiple infections. Details of the detected HPV genotypes are provided in Figure for gargles and Figure for oral swab samples. HPV16 was by far the most prevalent type, found in 19 out of the 29 HPV-positive gargle specimens (65.5%), either alone ( n = 12) or in coinfection with other HPV types ( n = 7) (Figure ). By anatomic site, the majority of HPV16-positive gargle samples were found in OPC ( n = 15) (Figure ). These were distributed across clinical stages cT1 ( n = 7), followed by cT2 ( n = 5), cT3 ( n = 2), and cT4a ( n = 1) (Table ). The remaining HPV16-positive gargle samples ( n = 4) were detected in oral cavity ( n = 2; cT1 and cT3), hypopharyngeal (cT2), and laryngeal (cT4a) cancers. In addition to HPV16, single HPV infections with HPV6, 11, 31, 45, 51, 53, 58, and 68 were each detected in one sample, while HPV56 was found in two samples. Some HPV genotypes (e.g., 51, 58, and 68) were also detected in multiple HPV infections (Figure ). For oral swab samples, 93% tested negative for any HPV type ( n = 132/142), while 7% ( n = 10/142, 95% CI: 3.73–12.62) were HPV DNA-positive (Tables and ). Among the positive samples, HPV16 was identified in 70%, either as single infections ( n = 5) or multiple infections ( n = 2) (Figure ). Multiple infections involved HPV16/HPV70 and HPV16/HPV56. The HPV16/HPV70 coinfection in the oral swab was also identified in the corresponding paired gargle sample. HPV DNA of non-HPV16 types, namely HPV58, 66 and 70, was identified in swab specimens, with HPV58 and 66 also detected in the corresponding paired gargle specimens. The majority of HPV-positive oral swab samples were from OPC patients ( n = 6) (Figure ), staged as cT1 ( n = 2), cT2 ( n = 2), cT3 ( n = 1), cT4a ( n = 1) (Table ). The remaining HPV-positive swab samples ( n = 4) were from one oral cavity cancer (cT3), two hypopharyngeal cancers (cT2 and cT4a) and one laryngeal cancer (cT4a). 3.5 Concordance of HPV16 DNA Between FFPE Tissues and Body Fluids in OPC and non-OPC The agreement between the E7-MPG and the commercial tests (p16 INK4a or HPV DNA) applied to FFPE OPC tissues for HPV16 detection was 100%, k = 1 (14/14, 95% CI: 100–100) (Supporting Information S1: Table ). The overall concordance for HPV16 detection between plasma ctHPV DNA tested with E7-MPG and FFPE OPC tissues, tested with the p16 INK4a or INNO-LiPA HPV assay, was 91.3%, k = 0.81 (21/23, 95% CI: 58.3–100). The sensitivity and specificity of ctHPV DNA in detecting HPV16 in plasma samples were 86.7% and 100%, respectively. The concordance for HPV16 detection between oral gargles and FFPE OPC tissues tested by commercial assays was 95.2%, k = 0.88 (20/21, 95% CI: 67.8–100), while the sensitivity and specificity in detecting HPV16 were 100% and 85.9%, respectively (Supporting Information S1: Table ). However, the concordance between oral swabs and FFPE OPC tissues for HPV16 detection was notably lower at 59.1%, k = 0.28 (13/22, 95% CI: 37–53.9), with sensitivity and specificity in detecting HPV16 of 35.7% and 100%, respectively (Supporting Information S1: Table ). Excluding OPC cases, the reported overall concordance rates for HPV16 DNA detection, matched to FFPE HNC tissues, were 87.3% for plasma, 87.7% for oral gargles, and 88.8% for oral swabs (Supporting Information S1: Table ). Analyzing their performance, the sensitivity and specificity of ctHPV DNA in detecting HPV16 were 10.0% and 100%, respectively (Supporting Information S1: Table ).
Sensitivity and specificity for HPV16 detection between oral gargle and FFPE tissues were 22.2% and 98.2%, respectively (Supporting Information S1: Table ). Finally, the sensitivity and specificity of HPV16 between oral swab and FFPE tissues were 20.0% and 100%, respectively (Supporting Information S1: Table ). The majority of HPV DNA-positive plasma and oral samples, including any HPV genotype, were found in OPC patients: 76.5% ( n = 13/17 total positives), 58.6% ( n = 17/29 total positives), and 60% ( n = 6/10 total positives) considering plasma, gargles and oral swab samples, respectively (Table ). In OPC patients ( n = 23), HPV16 was detected by E7-MPG in 21.7% ( n = 5/23), 56.5% ( n = 13/23), and 65.2% ( n = 15/23) of oral swabs, plasma, and gargle specimens, respectively (Table ). Thirteen out of 23 plasma samples from OPC patients (56.5%) tested positive for ctHPV16 DNA, and were staged as cT1 (6/9; 66.6%), cT2 (5/5; 100%), cT3 (1/6; 16.6%), and cT4 (1/3; 33.3%) (Table ). Of the gargle samples, 73.9% (17/23) showed HPV DNA positivity for any HPV type in samples from OPC patients, of which 88.2% (15/17) were HPV16-positive and stratified as follows: cT1: 77.7% (7/9), cT2: 100% (5/5), cT3: 33.3% (2/6), and cT4a 33.3% (1/3) (Table ). Regarding the oral swab samples, 26% (6/23) tested positive for any HPV genotype. Overall, 83.3% (5/6) were HPV16-positive and from patients with tumor stages cT1 (1/9), cT2 (2/5), cT3 (1/6), and cT4a (1/3) (Table ). Considering only HPV16, gargles alone detected all cT1 HPV16-positive OPC (100%, n = 7/7), compared to plasma alone 71.4% ( n = 5/7) (Table ). The combined analysis of ctHPV16 DNA in plasma and HPV16 DNA in gargles improved sensitivity to 100% ( n = 7/7) for cT1. When including any HPV genotype, the combined approach achieved a 100% detection rate for cT1 cases ( n = 8/8), outperforming gargles alone (87.5%, 7/8) and plasma alone (75%, 6/8) (data calculated from Table ).
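For clarity, the combined-specimen sensitivity quoted above can be reproduced with a few lines of arithmetic; the per-patient calls below are hypothetical placeholders chosen only to mirror the reported counts (plasma 5/7 and gargles 7/7 for HPV16-positive cT1 OPC), a case being counted as detected by the combined approach if either specimen is positive.

```python
# Illustrative sketch, not study data: combining plasma and gargle results.
def sensitivity(n_detected, n_cases):
    return n_detected / n_cases

def combined(plasma_calls, gargle_calls):
    """A case is combined-positive if either specimen is HPV16-positive."""
    return [p or g for p, g in zip(plasma_calls, gargle_calls)]

plasma = [True, True, True, True, True, False, False]   # 5/7 detected
gargle = [True] * 7                                      # 7/7 detected
both = combined(plasma, gargle)
print(sensitivity(sum(plasma), 7),   # 0.714
      sensitivity(sum(gargle), 7),   # 1.0
      sensitivity(sum(both), 7))     # 1.0
```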
Discussion Currently, no validated screening protocols exist for detecting HPV-associated HNC, which could significantly enhance early diagnosis and improve clinical outcomes. Notably, an increasing incidence of HPV-associated OPC has been observed in several high-income countries (HIC), predominantly among white Caucasian males [ , , ]. Geographical and demographic variations in HPV-associated OPC patients are apparent, particularly between Europe (EU) and the United States (US). European patients with HPV-associated OPC tend to be older and more likely to have a history of heavy smoking compared to their US counterparts . Factors such as age and smoking are associated with poorer overall health, increased comorbidities, and more complex prognoses. In contrast, younger US patients with less smoking exposure often experience better outcomes, likely due to fewer underlying health issues and a stronger immune response . Additionally, smoking can reduce the effectiveness of treatments like radiotherapy, a common treatment modality for these patients, highlighting the need for tailored diagnostic and treatment protocols to address regional and lifestyle differences . Despite the growing need for early identification of HPV-associated HNC, a consensus on diagnostic tests and screening protocols remains absent worldwide. In recent years, several non-invasive HPV biomarkers, such as ctHPV DNA, oral HPV DNA and E6 antibodies, have been explored as diagnostic tools for early diagnosis of HPV-associated HNC [ , , , ].
These biomarkers offer significant advantages over tumor biopsies, including ease of collection, reduced invasiveness, and no requirement for tumor tissue. In this study, the potential of combining ctHPV DNA in plasma with oral HPV DNA from gargle and swab samples to detect HPV-associated HNC was evaluated. Using a highly sensitive bead-based multiplex HPV genotyping assay, an HPV16 prevalence of 22.4% in HNC FFPE tissues was identified. Other studies have reported a higher prevalence in OPC, such as 40% in our previous investigation . Regarding ctHPV DNA in plasma, HPV16-positive ctDNA was predominantly detected in OPC cases, representing 81.2% of ctHPV16-positive plasma samples. By contrast, only a minor fraction of HPV16 ctDNA-positive samples was found in occult tumors (6.2%) and hypopharyngeal and laryngeal cancers (12.5%). These findings are consistent with prior studies demonstrating the utility of ctHPV DNA in diagnosing and monitoring HPV-associated OPC recurrence [ , , ]. Considering oral HPV DNA detection, gargle samples revealed an HPV prevalence of 20.6%, with HPV16 being the predominant genotype. Among HPV-positive OPC cases, 88.2% were HPV16 positive. A previous follow-up study indicated that oral HPV16 DNA detection was associated with a 7.1-fold increase in the likelihood of HNC presence and with a 22.4-fold increase specifically for OPC . A separate study reported 79.1% sensitivity of the oral HPV test when p16 INK4a IHC was used as the reference, a finding recently confirmed by Tang et al. Oral swabs showed lower sensitivity compared to gargles, likely due to: (i) operator dependence, (ii) variability in patient compliance and gag reflex, and (iii) difficulty accessing the oropharyngeal tumor site. Gargles, in contrast, offered a standardized and effective procedure, collecting saliva from the entire oral cavity and oropharynx using a 15 s rinse with 15 mL of 0.9% saline solution. Different protocols for gargle collection, including different gargling durations, volumes of solution or solutions for conservation, have been applied across epidemiological studies . Currently, it is not yet clear whether the combination of blood and oral HPV biomarkers could improve early detection of HPV-associated HNC. In OPC patients, around 90% of HPV16 DNA-positive plasma and gargle samples matched the HPV status of the corresponding tumor tissues in this study. Gargle samples alone demonstrated 100% sensitivity in detecting HPV16 DNA-positive OPC, outperforming plasma samples (86.7%). Including infections by genotypes other than HPV16, combining gargle and plasma samples improved sensitivity to 100% for HPV16-positive OPC at early stages, compared to 87.5% for gargles alone and 75% for plasma alone. Ahn et al. reported sensitivities of 52.8% and 67.3% for HPV16 DNA detection in saliva and plasma, respectively. Combining saliva and plasma increased sensitivity to 76% and specificity to 100%. Therefore, a combination of multiple sample types can be a useful tool to identify patients with HPV-associated OPC . The usefulness of combined assays was also recently highlighted by Lewis et al. , who underlined the improved sensitivity obtained by combining HPV serology with ctHPV DNA detection using ddPCR. Of note, in two early-stage (cT1) cases, plasma samples were ctHPV16-negative, while gargle samples were positive.
This discordance may be due to the absence or low release of ctHPV16 DNA into the bloodstream, as previously reported in several HPV-associated tumors , possibly due to the small tumor size at an early stage. These data and other recent studies [ , , ] suggest that non-invasive, or minimally invasive, tools such as oral and plasma-based HPV biomarkers could complement current diagnostic strategies and improve the early detection of HPV-associated HNC . This study has several limitations. The relatively small cohort size and limited number of HPV-associated cases reduce the statistical power and generalizability of the findings. FFPE tissue was only available for a subset of patients, potentially limiting comprehensive comparisons across all samples. Additionally, the small number of HPV-associated cases precluded a detailed analysis of smoking status and specific oropharyngeal subsites in relation to HPV positivity, limiting insights into the roles of these factors. Future studies should address these limitations by including larger, more diverse cohorts and ensuring sufficient representation of HPV-associated cases. Further investigation into the impact of smoking and specific anatomical subsites on HPV positivity is warranted. Innovative biomarkers could enhance detection sensitivity, enabling earlier diagnosis and better stratification of HPV-associated HNC. Conclusion These findings emphasize the limited role of HPV in non-oropharyngeal HNC compared to OPC. Gargles appeared more sensitive than plasma for HPV16 DNA detection, particularly in early-stage OPC. Combining plasma and gargle assays enhanced sensitivity, reaching 100% for HPV16-positive OPC at early stages. However, further validation studies are crucial before these viral biomarkers can be implemented for early diagnosis and disease monitoring of HPV-associated OPC. Future efforts should also aim to develop comprehensive diagnostic algorithms that integrate these biomarkers into routine clinical practice for HNC. Conceptualization and original draft preparation: Susanna Chiocca, Tarik Gheit, Maria Lina Tornesello, Luisa Galati and Marta Tagliabue. Supervision of the study: Tarik Gheit, Mohssen Ansarin, and Susanna Chiocca. Methodology and analysis: Massimo Tommasino, Luisa Galati, Tarik Gheit, Sandrine McKay-Chopin, and Fausto Maffini. Study design: Massimo Tommasino, Susanna Chiocca, Maria Lina Tornesello, Mohssen Ansarin, Luisa Galati, Giuseppe De Palma, Stefania Vecchio, Angelo Virgilio Paradiso, Laura Sichero, Luisa Lina Villa, and Giovanni Blandino. Patient enrollment and follow-up: Marta Tagliabue, Rita De Berardinis, Francesco Chu, Francesco Bandi, Chiara Mossinelli, Jacopo Zocchi, Giacomo Pietrobon, Stefano Filippo Zorzi, Enrica Grosso, Stefano Riccio, Roberto Bruschini, and Gioacchino Giugliano. Clinical specimen (saliva, tissue and blood sample) collection and data curation: Marta Tagliabue and Rita De Berardinis. Draft revision and editing: Marta Tagliabue, Susanna Chiocca, Luisa Galati, Rita De Berardinis, Tarik Gheit and Maria Lina Tornesello. All authors have reviewed and approved the manuscript. Where authors are identified as personnel of the International Agency for Research on Cancer/World Health Organization and of the IRCCS Istituto Tumori “Giovanni Paolo II”, Bari, the authors alone are responsible for the views expressed in this article, and they do not necessarily represent the decisions, policy, or views of the Institute. Ethical approval was obtained from the IEO Ethical Committee (code IEO 1572), Milan, Italy.
All included patients signed informed consent. The authors declare no conflicts of interest. Supporting information.
Immunohistochemistry as a tool to identify
790405d7-0393-40b2-bfcc-d9c0dc88c362
8960608
Anatomy[mh]
Below is the link to the electronic supplementary material. Supplementary file1 (PDF 723 kb)
Classification of paediatric brain tumours by diffusion weighted imaging and machine learning
2a61a8dd-5923-439f-8c76-6b1f7f3d7053
7862387
Pediatrics[mh]
Brain tumours are the most common solid tumours in childhood and the largest cause of death from cancer in this age group. About half of these tumours arise from the posterior fossa, with the most common site being the cerebellum, making them amenable to surgical resection but with a significant risk of subsequent morbidity. The degree of resection required is dependent on the type of tumour, so pre-operative diagnosis is desirable as it can aid in surgical planning. Patients undergo magnetic resonance imaging at presentation as a standard of care, giving the opportunity to achieve this. Discrimination of the three main types of brain tumours in the posterior fossa (Ependymoma, Medulloblastoma and Pilocytic Astrocytoma) using qualitative assessment of MRI is challenging due to overlapping radiological characteristics but can be improved by the inclusion of diffusion weighted imaging (DWI) – . However, there is increasing evidence that identification of tumours is improved by using quantitative image analysis in combination with pattern recognition techniques. This has been applied in a single centre study to good effect . Texture analysis of conventional MRI has been implemented in single and multi-centre studies – and has been applied to DWI in a single centre study . Advanced imaging techniques such as spectroscopy have shown an ability to differentiate between posterior fossa tumours – ; however, the technique is still very challenging to implement in the clinical context due to the reliance on scientific input, inhomogeneous protocols across scanner vendors and a lack of consensus analysis pipelines. Perfusion imaging likewise has outstanding acquisition and analysis issues which make it challenging to implement clinically in routine practice. Texture analysis of T2-weighted MR images has been shown to discriminate well between tumour types in both single , and multi-centre studies . One imaging modality that has made its way into routine imaging protocols is diffusion-weighted imaging. The apparent diffusion coefficient (ADC) maps are generally produced by scanner software and are therefore readily available on clinical PACS systems. ADC is a quantitative measure and so does not require normalisation to healthy-appearing tissue. Importantly, we have previously shown ADC values to be reproducible between different scanners and field strengths using standard clinical protocols (including scanners used in this study), which is essential for effective multicentre studies . ADC values have previously been shown to discriminate between posterior fossa paediatric brain tumours, but for Medulloblastomas and Ependymomas significant overlap in ADC values is observed between the two tumour types. Some studies have found significant differences , , but others have not , . One limitation of previous studies, and of studies of paediatric brain tumours in general, is the very small cohort size due to the rarity of the diseases. To implement large scale studies of paediatric cancer it is essential to conduct these on a multicentre basis, where robust imaging biomarkers will be required . In this study we present a multicentre analysis of ADC maps focussing on paediatric brain tumours of the posterior fossa. The study involves one hundred and seventeen patients from five primary treatment centres across the UK with scans from twelve hospitals using eighteen different scanners.
ADC has been shown to be reproducible across multiple centres and scanners, and this large paediatric cohort is a test of the robustness of this approach in a clinical scenario. This study was approved by the Derby Regional Ethics Committee and informed consent was obtained from all parents/legal guardians. The consent included the upload of clinical and imaging data to the UK Children's Cancer and Leukaemia Group Functional Imaging Database. All methods were performed in accordance with the relevant guidelines and regulations. Whilst each centre was able to set an MRI protocol for patients with new brain tumours, national guidelines exist and are compatible with those adopted in Europe for clinical trials. The protocol includes T1w, T2w and T1w post contrast imaging as well as DWI. Five primary treatment centres provided data for the study: Nottingham, Newcastle, Great Ormond Street Children's Hospital London, Alder Hey Liverpool and Birmingham Children's Hospital. Including local hospitals where the children originally presented, MRI was performed at twelve different hospitals on a total of eighteen different scanners in the study. The specifics of the scanners and parameters used for the various scans are summarised in the supplementary material. Histological diagnoses were acquired as per standard clinical practice. All images were checked prior to analysis for significant artefacts and the image warp sometimes associated with diffusion-weighted imaging. ADC maps were produced using in-house software written in the Python programming language using standard, well documented methods. All maps were produced using b = 0 and b = 1000 s/mm² images. Regions of interest were drawn manually for the whole tumours, excluding areas of large cysts and peri-tumoural oedema, using MRIcro (version 1.40, http://people.cas.sc.edu/rorden/mricro/ ). The ROIs were drawn on the b0 images (essentially a T2-weighted image) whilst viewing the complete set of higher resolution T2/FLAIR images acquired and using them to determine the margins of the tumour. The regions of interest were drawn by an expert scientist with six years of experience in neuroimaging and a special interest in paediatric neuroimaging (JN). For inter-user validation of the ROIs, a sub-section of the data was semi-randomly selected (8 Medulloblastomas, 6 Ependymomas and 6 Pilocytic Astrocytomas) for re-drawing separately by two radiologists (BP, with 4 years of experience, and AO, with 13 years of experience). Each radiologist drew ROIs for ten of the patients (20 ROIs in total) and the ADC histogram values were extracted from the tumours. Values from the radiologists' ROIs were then compared against those produced by JN. ADC values were extracted from the tumour ROIs and placed into 180 bins for histogram analysis using in-house software written in Python 2.7. The bin width was 0.022 × 10⁻³ mm² s⁻¹. For the histogram analysis and illustrative purposes the data were normalised to the area under the histogram. Post-normalisation, mean histograms were produced by summing the normalised frequency value for each bin separately and dividing by the number of cases. The error bars in the average histogram plots represent one standard deviation of the normalised frequency for each bin. A receiver operating characteristic (ROC) curve was produced for the mean ADC values within the Medulloblastoma and Ependymoma regions of interest using SPSS (version 22, IBM).
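The ADC calculation and histogram summary described above follow standard formulae; the sketch below (not the authors' in-house code) shows one way they could be implemented in Python with NumPy and SciPy, assuming the b = 0 and b = 1000 s/mm² volumes are already loaded as co-registered arrays.

```python
# Minimal sketch of the ADC map and histogram-metric steps described above.
import numpy as np
from scipy import stats

def adc_map(s_b0, s_b1000, b=1000.0, eps=1e-6):
    """Voxel-wise apparent diffusion coefficient (mm^2/s) from two b-values."""
    ratio = np.clip(s_b1000 / np.maximum(s_b0, eps), eps, None)
    return -np.log(ratio) / b

def histogram_metrics(roi_adc_values):
    """Summary metrics of ADC values within a tumour region of interest."""
    v = np.asarray(roi_adc_values, dtype=float)
    quantiles = {f"q{q}": float(np.percentile(v, q))
                 for q in (5, 10, 20, 25, 35, 40, 45, 50, 55, 60,
                           70, 75, 80, 85, 90)}
    return {"min": float(v.min()), "max": float(v.max()),
            "mean": float(v.mean()), "median": float(np.median(v)),
            "variance": float(v.var(ddof=1)),
            "skew": float(stats.skew(v)), "kurtosis": float(stats.kurtosis(v)),
            **quantiles}
```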
The selection of the mean in lieu of the median was arbitrary as no difference was observed in the classification results between the two measures. The box and whisker plot was produced using the R statistical package (The R Foundation, version 0.5.0, 2013). The three main types of paediatric posterior fossa brain tumours were classified with two classification methods, Naïve Bayes (NB) and Random Forest (RF), using the Orange Data Mining Tool (Orange, version 2.7.8) . In detail, the three-way classification method was as follows: the raw histogram data were not used for classification; only values extracted from the histograms (Min, Max, Mean, Median, Variance, Skew, Kurtosis and the 5th, 10th, 20th, 25th, 35th, 40th, 45th, 50th, 55th, 60th, 70th, 75th, 80th, 85th and 90th quantiles) were used as the input data. A principal component analysis was conducted on the input data prior to the cross-validation for data reduction, covering 95% of the variance (a maximum of 10 components were used), and the resultant components were classified using NB and RF. The data were validated using 10-fold cross-validation. The study included 55 patients with Medulloblastomas, 36 with Pilocytic Astrocytomas and 26 with Ependymomas, plus 4 Atypical Teratoid Rhabdoid Tumours (ATRTs) and 3 other low grade tumours, all found in the posterior fossa. The rarer tumour types (ATRTs and low grade tumours) were included for illustrative purposes but, due to small numbers, were not included in the classification analysis. All of the various histopathological subtypes within these 3 tumour subgroups were grouped together for purposes of analysis. Patient details are shown in Table including age, the distribution of the field strengths at which the patients were scanned and the mean and the range of the ADC values within the tumour ROIs. The 4 posterior fossa ATRTs were included for visual comparison only but were not included in the classification analysis as the numbers were too small. Likewise, the low grade tumours are presented visually in the supplementary material and were not included in the classification due to the very small numbers (n = 3). The ROIs drawn by the radiologists were compared, using the extracted mean ADC values, with those from JN. A correlation coefficient of R = 0.977 was observed, indicating very good agreement between raters (a Bland–Altman plot for the mean values is included in the supplementary material, with a repeatability coefficient of CR = 1.06 × 10⁻⁴). All of the metrics used for the classification were assessed between the raters and no significant differences were detected between metrics derived from the raters' ROIs (two-tailed t-test, Bonferroni-corrected for multiple comparisons). This indicated that all of the metrics were reliable; they were henceforth used for the machine learning classification. Example MRIs are shown for the most common types of posterior fossa paediatric brain tumours in Fig. . As can be seen in the images, the ADC maps look distinctly different for the Medulloblastomas and the Pilocytic Astrocytomas, with higher ADC observed for the latter. The appearance of the Ependymomas and the Medulloblastomas is much closer in terms of ADC contrast. The mean values for the tumours were as follows: Ependymoma (EP) 1.126 ± 0.155 × 10⁻³ mm² s⁻¹, Medulloblastoma (MB) 0.870 ± 0.015 × 10⁻³ mm² s⁻¹, Pilocytic Astrocytoma (PA) 1.656 ± 0.029 × 10⁻³ mm² s⁻¹. An ANOVA of the group means showed significant differences between the PAs and the EPs and between the MBs and the EPs ( P < 0.001).
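A schematic re-implementation of the classification pipeline just described is sketched below. The study used the Orange data mining tool, so scikit-learn is shown here purely as an equivalent illustration; the feature matrix, labels, random seed and added feature scaling are assumptions rather than study data, and the PCA is fitted within each training fold rather than on the full table as in the original analysis.

```python
# Sketch of the histogram-metric -> PCA -> NB/RF pipeline with 10-fold CV.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(117, 22))                    # placeholder feature table
y = np.repeat(["MB", "PA", "EP"], [55, 36, 26])   # class sizes from the study

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Random Forest", RandomForestClassifier(n_estimators=200,
                                                           random_state=0))]:
    # PCA retaining 95% of the variance; the study additionally capped the
    # number of components at 10.
    pipe = make_pipeline(StandardScaler(),
                         PCA(n_components=0.95, svd_solver="full"), clf)
    scores = cross_val_score(pipe, X, y, cv=cv)
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.3f}")
```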
The box plots in Fig. show the overlap in the spread of the mean ADC values between the PAs and EPs and also between the MBs and EPs. A receiver operating characteristic curve was constructed for MB and EP to determine an optimal cut-off value for the mean values of the two tumour types. The cut-off value was found to be 0.984 × 10⁻³ mm² s⁻¹ with a sensitivity of 80.8% and a specificity of 80.0%, with 21/26 EP and 48/55 MB falling either side of the cut-off boundary. Mean histograms from the three main tumour types are presented in Fig. including the standard deviations. Also included is a plot illustrating the overlap of the mean histograms. Pilocytic Astrocytomas, Medulloblastomas and Ependymomas were classified using the extracted parameters (mean, variance, skew, kurtosis and the 10th, 20th, etc. quantiles). Two classification methods were employed, Naïve Bayes (NB) and Random Forest (RF). NB was chosen as the simpler model and RF the more complex for comparison purposes (Table ). Using the extracted histogram parameters, as shown in Table , the overall classification accuracy was 84.6% using NB and 86.3% using RF. The balanced overall accuracy using the same data was 84.4% for NB and 86.3% for RF. The largest discrepancies were observed for Ependymomas, where NB classified 80.8% of cases correctly and RF classified 73.1% correctly. The opposite trend was observed for Medulloblastomas, where NB classified 83.6% correctly and RF classified 94.5% correctly. A plot showing a comparison between the average histograms of the four ATRTs and the Medulloblastomas used for the classification is shown in Fig. . The overlap between the ATRTs and the Medulloblastomas in terms of the histograms is clear from the plot. Histograms of the rare low grade tumours are presented in the supplementary material. One of the most simplistic image analyses that can be performed on parametric maps is taking the mean value from a region of interest drawn around or within a lesion. This approach is potentially available on most clinical PACS systems and would provide a feasible route for quantitative measures of DWI to be integrated into routine clinical practice. We found that there was a significant difference between the mean tumour values when comparing Pilocytic Astrocytomas, Medulloblastomas and Ependymomas ( P < 0.001). These differences are in line with previous single centre studies that have shown significant differences between Medulloblastomas and Ependymomas , , which are traditionally the two tumour types which overlap in terms of ADC. It is encouraging that a large multicentre study can show similar statistical differences to single centre studies despite the heterogeneity in acquisition protocol and scanner models and manufacturers. We used an ROC analysis which suggested a cut-off value between the Medulloblastomas and Ependymomas of 0.984 × 10⁻³ mm² s⁻¹ with high sensitivity (80.8%) and specificity (80.0%). We believe this may be the best route for the use of ADC clinically in the most common scenario of discriminating between Medulloblastomas and Ependymomas, as it can be achieved on most clinical PACS systems. This will, however, need to be prospectively validated. The average histograms presented in Fig. show that there are differences between the three main tumour types if assessed as groups and they have distinct appearances. The Medulloblastomas in general have the peaks at the lowest ADC values, followed by the Ependymomas and then the Pilocytic Astrocytomas.
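The mean-ADC cut-off between Medulloblastoma and Ependymoma discussed above can be read off an ROC curve; the study used SPSS, but a small sketch of the same idea is shown below with made-up mean ADC values (not patient data), picking the threshold that maximises Youden's J.

```python
# Illustrative only: choosing a mean-ADC cut-off from an ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical mean ADC values (x 10^-3 mm^2/s); ependymoma labelled 1 since
# it tends to show the higher ADC of the two tumour types.
mean_adc = np.array([0.82, 0.88, 0.91, 0.95, 1.02, 1.10, 1.15, 1.21])
is_ep = np.array([0, 0, 0, 0, 1, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(is_ep, mean_adc)
best = int(np.argmax(tpr - fpr))   # Youden's J = sensitivity + specificity - 1
print(f"cut-off = {thresholds[best]:.3f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```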
However, the standard deviation represented by the error bars suggests there is the potential for significant overlap on a case-by-case basis. While these plots may be helpful to clinicians as part of a larger diagnostic workup, they have limited use on a case-by-case basis without additional information. Interestingly, the Medulloblastomas appear to be the most homogeneous tumours with respect to their histograms. The Pilocytic Astrocytoma ADC distribution has increased variance with respect to the shape of the average histogram, which is in keeping with their histological appearance, which is heterogeneous and microcystic. This suggests the measurement of more than one microenvironment. Classification of the tumours using the histograms was performed using metrics extracted from the histogram such as skew, variance and quantiles, as had been used in previous studies , . We also used two different classifiers with both linear (Naïve Bayes) and non-linear (Random Forest) approaches to see if the classification was significantly affected. As differentiation of the Ependymomas from the other two tumour types has been the most challenging, it is appropriate that this classification test takes precedence when selecting the best classifier. A Random Forest classifier used on the extracted histogram parameters resulted in the best overall classification accuracy of 86.3%, but the highest classification rate for Ependymomas of 80.8% was obtained using a Naïve Bayes classifier. Overall the classification rates are not as high as has previously been seen in the literature for ADC analysis, with Rodriguez Gutierrez et al. quoting overall classification rates of 91.4% and Bull et al. 93.75%. However, our study is much larger than the aforementioned studies, with a more heterogeneous data input with regards to hospitals, scanners and acquisition protocols. Historically, analysis of ADC images has been qualitative, with areas of lowest signal being used for identification of cellular density , . This has also propagated into quantitative measurements, where minimum ADC is used as a biomarker for diagnosis and also treatment monitoring. This approach can be challenging from a data analysis perspective and, although it can seem attractive and time efficient, does result in a large amount of data being discarded. When the whole region of interest is preserved and histogram analysis is used, the data are far less sensitive to user error , especially if the regions of interest are larger. If this approach is also combined with feature selection and machine learning algorithms, then large datasets can be used to better effect. As is shown in Fig. , ADC does have its limitations when attempting to distinguish between ATRTs and Medulloblastomas. The average histograms display a large amount of overlap, and when the ATRTs were run through the classifier they were all classified as Medulloblastomas. The rare tumour types presented in the supplementary material were included for illustrative purposes and also illustrate the difficulty that these cases present when attempting to classify them. The results from this study underline the potential opportunity for ADC maps to be used in multicentre studies. We have shown that a cohort of patients scanned on a wide range of different scanners, in different hospitals and using heterogeneous protocols can still produce reliable results. This has potential for the integration of advanced imaging techniques into clinical trials for the assessment of treatment efficacy.
Our results suggest that strict harmonisation of DWI protocols may not be necessary for the production of reliable biomarkers, thereby allowing the utilisation of historically acquired imaging data as demonstrated here. This is important as harmonisation of protocols is challenging across multiple centres, especially for routinely acquired images. This is a retrospective study of previously acquired data with the histological diagnosis already known. The classifiers and cut-off values from this study will therefore need to be tested on prospective data to be truly validated. Likewise, the machine learning classifiers should also be tested prospectively. ADC is unable to discriminate ATRTs from Medulloblastomas as the diffusion parameters were virtually identical. The study has been limited mainly to the three most common types of brain tumours in the posterior fossa and did not address the identification of rarer tumour types such as choroid plexus papillomas. No in-depth analysis has been conducted of the tumour genetic or molecular subtypes, due to limitations of statistical power. The regions of interest were hand drawn and so were susceptible to human error, although this issue was addressed through the use of multiple experts to verify the regions of interest, which were shown to be reproducible across raters. The cut-off values used in the study were produced from ADC maps generated by bespoke software; care must therefore be taken when applying them to ADC maps produced directly by scanner software or third-party vendors. Despite this, it is likely that the results would be applicable to ADC maps produced by scanners, since the calculation of ADC maps involves a linear fit between two data points which is inherently stable, although this should be formally validated. Care would need to be taken when DWI is acquired with multiple b-values to ensure that ADC values are calculated from the b0 and b1000 images in order to make the results compatible with those of this study. The histogram metrics were produced via bespoke software, which creates a barrier to clinical implementation; however, we envisage these metrics becoming increasingly available via scanner manufacturers and clinical picture archiving and communication systems (PACS). We have shown in this study that it is possible to discriminate between the three most common types of paediatric posterior fossa brain tumour using histogram analysis of Apparent Diffusion Coefficient maps in a large cohort of patients acquired on a multi-centre basis. Supplementary Information.
Update to the College of American Pathologists Reporting on Thyroid Carcinomas
b25b3797-fc16-4d98-9d64-7e0fc625d6b8
2807537
Pathology[mh]
Despite advances in the last several decades in the diagnosis of thyroid cancers, there are still many problems and controversies related to the histopathology of thyroid carcinomas. These controversies impact on the prognosis and therapy of thyroid cancer patients, as well as on the development of cutting-edge research aimed at better outcomes and quality of life for these patients. These contentious issues may directly impact on the pathologic reporting of these carcinomas. In the most updated version of the College of American Pathologists (CAP) protocol for the examination of specimens from patients with carcinomas of the thyroid, attempts were made to address some of these controversies. The goal was to produce a practical document providing for more clinically relevant pathology reports. The updated CAP protocol is essentially organized in a similar fashion to the previous edition, with some modifications primarily but not exclusively based on the current CAP recommendations for the reporting of all cancers. The controversial issues in thyroid pathology include (but are not limited to) the histopathologic interpretation of an encapsulated (well-differentiated) thyroid follicular neoplasm, specifically in determining which of these lesions does and which does not represent the follicular variant of papillary thyroid carcinoma . Controversies also relate to the concept of the less commonly occurring poorly-differentiated thyroid carcinoma. It is beyond the scope of the CAP protocol to provide guidelines for the histopathologic interpretation of thyroid cancers. Among the standard data elements required by the CAP, this paper addresses specific issues pathologists will confront relative to criteria of malignancy (e.g., invasiveness) and extrathyroidal extension (ETE). Unfortunately, even among the authors of the updated CAP Thyroid Cancer Protocol there was not uniform agreement on these issues. As such, the document echoes the varying views on invasiveness and ETE. Although listed as recommended rather than required elements, mitotic activity and necrosis are important in potentially determining whether a given cancer represents a poorly-differentiated thyroid carcinoma. Finally, the issue of whether the identification of thyroid papillary microcarcinomas should engender the use of the CAP Thyroid Cancer Protocol will be discussed. The updated CAP thyroid protocol is still a work in progress and a final consensus among the responsible authors has not been reached on all of the reporting data elements, including (but not limited to): (1) whether to report all identifiable foci of thyroid papillary microcarcinoma, including incidentally identified foci, or whether to limit reporting to only those foci that were detectable preoperatively (by clinical examination and/or radiographic evaluation); (2) whether to limit the use of the designation of thyroid papillary microcarcinoma to adults, excluding such a designation for pediatric age groups. Consequently, the recommendations contained herein reflect the views of the author of this manuscript, as well as the senior author of the CAP protocol (Bruce M. Wenig MD ). Modifications to the recommendations in this manuscript based on a final consensus opinion of the entire panel may yet occur. Criteria and Extent of Capsular and Vascular Invasion in Follicular Carcinoma and its Variants Capsular Invasion (CI) The diagnosis of papillary thyroid carcinoma is entirely predicated on the presence of diagnostic nuclear features.
As such, the diagnosis of papillary thyroid carcinoma can be made in the presence of an encapsulated follicular neoplasm even in the absence of invasive growth. Once a given neoplasm invades its capsule and/or shows evidence of angioinvasion (intracapsular or beyond), that follicular epithelial cell lesion is malignant. Excluding papillary thyroid carcinoma, there is universal agreement that the diagnosis of follicular carcinoma requires the presence of capsular or vascular invasion of the tumor capsule or beyond the tumor capsule. Unfortunately, there is still controversy in regard to the interpretation of CI. While most authors require complete capsular transgression by tumor in order to diagnose capsular invasion , other authorities will diagnose a given neoplasm as a carcinoma even in the presence of incomplete capsular transgression . According to Chan , Fig. depicts the various histologic appearances associated with the presence or absence of capsular invasion. According to this author , a given neoplasm should not be diagnosed as carcinoma if complete capsular penetration cannot be proven after extensive sampling, except in one instance. This situation occurs when a satellite tumor nodule morphologically similar to the main tumor is lying just outside the tumor capsule (Fig. e). This appearance results from failure to identify the point of capsular penetration. It is noteworthy that not all authors agree that these satellite nodules represent CI . In equivocal cases of CI, the entire cancer, irrespective of tumor size, should be processed in the attempt to clarify whether CI is present. Deeper sections of the representative paraffin block(s) should be performed in the areas of concern in order to exclude CI . Angioinvasion or Lymph-Vascular Invasion The CAP guidelines for all cancer protocols call for the use of the term lymph-vascular invasion (LVI), which is the terminology used in the Thyroid Cancer Protocol as well as in all the protocols for the reporting of carcinomas of the entire upper aerodigestive tract, including the oral cavity, pharynx (oro-, naso- and hypopharynx), sinonasal tract, larynx and salivary glands. While the criteria for capsular invasion are quite controversial, there is currently relatively good consensus on what constitutes lymph-vascular invasion. The majority of authors agree that lymph-vascular invasion should involve capsular or extra-capsular vessels (Fig. ). These images (Fig. ) depict intracapsular LVI with endothelialized thrombus, tumor thrombus with fibrin, and tumor thrombus attached to the vessel wall. The tumor thrombus should protrude into the lumen and needs to be covered by endothelial cells (Fig. b). However, endothelialization is not a requirement if the tumor is attached to the vessel wall (Fig. c) or admixed with a fibrin thrombus (Fig. d). For some authorities, the point of attachment of the tumor to the vessel wall has to be identified to ensure that free-floating tumor artifactually displaced by the surgeon or the pathologist is not misinterpreted as LVI. Tumor in intra-tumoral or subcapsular vessels does not qualify as LVI and should not be interpreted as such (Fig. a). Extent of Capsular and Lymph-Vascular Invasion Follicular carcinomas of the thyroid gland, including the oncocytic variant (so-called Hürthle cell carcinoma), are subdivided into minimally and widely invasive carcinomas .
There is general agreement that follicular carcinomas with widespread gross invasion of the thyroid and peri-thyroidal soft tissue should be labeled as widely invasive follicular carcinoma . These tumors often lack complete encapsulation and have a poor prognosis . In contrast, the criteria defining “minimally invasive” follicular carcinoma are controversial and still evolving. In some schemes, this designation refers to encapsulated lesions with capsular and/or small caliber LVI even if the LVI is extensive . The invasive foci in these limited invasive cancers are rarely, if ever, grossly identifiable. Defined as such, minimally invasive follicular carcinomas have an overall good prognosis, although some cases may recur and metastasize . Identifying these “minimally invasive” follicular carcinomas may be crucial, as some surgeons take a conservative management approach and treat these cancers by lobectomy alone followed by observation. Such an approach is held for all “minimally invasive” follicular carcinomas, including the oncocytic (Hürthle cell) category. However, the therapeutic approach for the diagnosis of any follicular carcinoma, whether minimally invasive or not, generally includes total thyroidectomy and post-operative radioactive iodine (RAI) therapy. It should be noted that the conservative approach of lobectomy and observation may risk undertreating those few minimally invasive tumors with a poor outcome. In order to detect those “minimally invasive” cases that recur, some authorities limit the use of the minimally invasive term to those carcinomas with capsular invasion only, since their metastatic rate is close to 0 . The designation “grossly encapsulated angioinvasive follicular carcinoma” has been suggested for encapsulated tumors with any foci of vascular invasion in view of their perceived higher risk of recurrence . These authors feel that the presence of LVI, even in one or a few endothelial-lined vessels, negates a diagnosis of “minimal invasion”, as these carcinomas, once gaining access to any vessel, have the capacity to behave in a more aggressive manner than those carcinomas with only capsular invasion without LVI . Other authors base their definition of minimally invasive carcinoma on the number of foci of invasion, especially vascular invasion [ , – ]. In some studies, encapsulated follicular carcinomas, including the oncocytic variant, with four or more foci of vascular invasion have a significant recurrence rate (47% for follicular oncocytic tumors) even if the foci of angioinvasion are microscopic (Fig. ). These tumors are therefore called “grossly encapsulated follicular carcinoma with extensive angioinvasion”. Another study showed that follicular oncocytic (Hürthle cell) carcinomas with a total of two foci of capsular/vascular invasion did not recur after long follow-up and should therefore be labeled as minimally invasive . Irrespective of one's philosophy in regard to the definition of minimally invasive follicular carcinoma, the recommendation is that pathologists should report on the presence as well as the extent (focal, extensive) of capsular and lymph-vascular invasion. This approach has the dual advantage of collating the various terminologies suggested for these carcinomas and, perhaps more importantly, of providing a report that better assists the clinician in assessing recurrence risk and, therefore, in deciding on the extent of surgical intervention (e.g., completion thyroidectomy) and the use of postoperative RAI therapy.
Mitosis and Tumor Necrosis Increased mitotic activity and especially tumor necrosis are powerful indicators of adverse outcome in thyroid carcinomas of follicular origin. Akslen and LiVolsi showed that tumor necrosis and a mitotic rate of >2 mitoses/10 high-power fields indicate worse survival in papillary thyroid carcinoma (P = 0.028 and P < 0.00005, respectively). Mitosis and tumor necrosis are also strongly associated with poorly differentiated thyroid carcinomas (PDTC). The latter type of carcinoma has a prognosis intermediate between the indolent well-differentiated papillary thyroid carcinomas and the almost universally lethal anaplastic carcinoma. Its definition is, however, subject to controversy. PDTC defined on the basis of high mitotic activity (≥5 mitoses/10 high-power fields, 400×) and/or tumor necrosis have a disease-specific survival of 60% at 5 years irrespective of the tumor architecture (Fig. ). PDTC defined mainly on the basis of growth pattern alone (such as the tumors reported in the large Italian study by Volante et al.) also occupy an intermediate position at the prognostic level on the spectrum of thyroid carcinoma progression. However, when Volante et al. developed a numeric scoring system whose most influential parameter was tumor necrosis, those neoplasms with necrosis had a much worse survival than those without. Indeed, the overall survival curve of their most favorable subgroup even overlapped with that of patients with well-differentiated papillary and follicular carcinomas. The overall survival of their most aggressive group (those patients whose neoplasms contained at least tumor necrosis) appears to be closer to that of PDTC defined by necrosis and/or a high mitotic rate. Recently, a group of pathologists gathered in Turin, Italy, in the attempt to provide a consensus view regarding PDTC. Their definition relied on the presence of solid growth but required the presence of at least one of the following: convoluted nuclei, tumor necrosis and/or mitoses ≥3/10 high-power fields, 400×. In this study as well, mitosis and tumor necrosis were very powerful indicators of poor outcome (P = 0.011 and P < 0.001, respectively), while the type of PDTC (papillary vs non-papillary) was not. The value of mitosis and tumor necrosis is also emphasized by the fact that PDTC defined on the basis of mitoses and/or necrosis is the major cause of radioactive iodine (RAI)-refractory, positron emission tomography (PET)-positive incurable thyroid carcinomas. The importance of tumor necrosis in primary tumors is further validated by the fact that it was (along with extra-thyroid extension) the only independent variable associated with decreased overall survival within RAI-refractory thyroid carcinomas. From the above data, one can conclude that, whatever definition is used for PDTC, it is very helpful to mention the presence of mitosis and tumor necrosis in the pathology report. It is, however, important to differentiate tumor necrosis from necrosis due to previous fine needle aspiration (FNA). Tumor necrosis has a “comedo-like” appearance composed of degenerating cytoplasm and punctate, karyorrhectic nuclear debris (Fig. ). In contrast, the presence of a fibroblastic stromal reaction, evidence of hemorrhage, or an identifiable needle tract in the necrotic area is attributable to reaction induced by prior FNA.
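For readers who wish to see the two mitosis/necrosis-based definitions of PDTC summarized above expressed as an explicit decision rule, a short illustrative sketch follows. This is purely didactic: the function and variable names are invented for this example, it is not part of the CAP protocol, and it is not a validated clinical tool; the thresholds are simply those stated in the text (≥5 mitoses/10 high-power fields and/or tumor necrosis for the high-grade definition; solid growth plus at least one of convoluted nuclei, tumor necrosis, or ≥3 mitoses/10 high-power fields for the Turin consensus).

def pdtc_by_mitosis_necrosis(mitoses_per_10_hpf, tumor_necrosis):
    # High-grade definition: >=5 mitoses/10 high-power fields (400x) and/or tumor necrosis
    return mitoses_per_10_hpf >= 5 or tumor_necrosis

def pdtc_by_turin_consensus(solid_growth, convoluted_nuclei, tumor_necrosis, mitoses_per_10_hpf):
    # Turin consensus: solid growth is required, plus at least one of
    # convoluted nuclei, tumor necrosis, or >=3 mitoses/10 high-power fields (400x)
    if not solid_growth:
        return False
    return convoluted_nuclei or tumor_necrosis or mitoses_per_10_hpf >= 3

# Example: a solid-growth tumor with necrosis but only 2 mitoses/10 HPF
# satisfies both definitions, because necrosis alone is sufficient in each.
print(pdtc_by_mitosis_necrosis(2, True))              # True
print(pdtc_by_turin_consensus(True, False, True, 2))  # True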
Since the majority of thyroid cancers are well differentiated, lacking mitotic activity and necrosis, and since PDTC is a rather uncommon diagnosis, the Thyroid Cancer Protocol recommends rather than requires the reporting of these data elements (i.e., mitotic activity and tumor necrosis). Extra-Thyroid Extension Extrathyroidal extension refers to involvement of the perithyroidal soft tissues by a primary thyroid cancer. On gross examination, the capsule may appear complete, but evidence has shown that microscopically the capsule is focally incomplete in a majority of autopsy thyroid glands evaluated. The capsule includes sizable vascular spaces as well as small peripheral nerves and is continuous with the pretracheal fascia. In practice, since the fibrous capsule of the thyroid is often incomplete, the criteria for defining (minimal) extrathyroidal extension may be problematic and subjective. Diagnostic findings for minimal extrathyroidal extension include the presence of carcinoma extending into perithyroidal soft tissues, including infiltration of adipose tissue and skeletal muscle, as well as around (and into) sizable vascular structures and nerves. Extension into adipose tissue can be problematic given the fact that adipose tissue can be found within the thyroid gland proper under normal conditions and also may be a component of a variety of thyroid lesions, including carcinomas. As such, the presence of adipose tissue in association with a thyroid carcinoma should not be mistaken for extrathyroidal extension. Some authorities only accept invasion of skeletal muscle as the identifier for extrathyroidal extension. However, similar to adipose tissue in the thyroid, skeletal muscle may be seen in the thyroid gland under normal conditions, especially in relation to the isthmus portion of the thyroid gland, as well as in a variety of pathologic conditions. If present, a desmoplastic response may be a helpful finding in the determination of extrathyroidal extension (Fig. ). The identification of thick-walled vascular spaces and/or small peripheral nerves in association with adipose tissue may be of greater assistance, as these structures are not located in the thyroid gland proper and their presence would be helpful in determining whether the carcinoma is extrathyroidal in extent (Fig. ). While minimal extra-thyroid extension can be difficult to identify, extensive extra-thyroid extension is always obvious and easily diagnosed by the surgeon during the thyroidectomy. Extensive extrathyroid extension is defined by the presence of carcinoma well beyond the thyroid gland proper with direct invasion (i.e., not metastasis) into one or more of the following structures: subcutaneous soft tissues; adjacent viscera, including the larynx, trachea and/or esophagus; the recurrent laryngeal nerve; the carotid artery or mediastinal blood vessels. Many studies have shown that carcinomas with extensive extra-thyroid extension have a much worse survival than those with minimal extra-thyroid extension. Moreover, some studies have found a similar outcome in patients with minimal versus no extra-thyroid extension. Based on the above data, it is therefore mandatory to report on the extent (minimal versus extensive) of extrathyroid extension. Resection Margins Few published studies have addressed the influence of margin status on patient outcome.
Most surgeons, endocrinologists, and nuclear medicine specialists require knowledge of positive margins, i.e., tumor extending to the surgical resection edge. While this makes intuitive sense, and it is recommended that a positive margin be mentioned in the final pathology report, meticulous studies on the effect of positive margins on outcome in large series of patients with long-term follow-up are lacking. At the present time, there is no need to report the distance of the tumor to the closest resection margin. Indeed, there are no data to date on the prognostic value of close margins as an independent or co-variable. Lymph Node Metastases Although controversy still exists in regard to the prognostic value of nodal metastases in papillary thyroid carcinomas, the reporting of lymph node status is mandatory, since positive nodal metastases most often lead to RAI therapy. The pathologist should also comment on the presence or absence of extranodal extension, since the latter was shown to increase the risk for distant metastases and death. Papillary Thyroid Microcarcinoma This variant of papillary thyroid carcinoma is defined as any focus measuring ≤1 cm. Such papillary thyroid microcarcinomas usually are incidentally identified in thyroid glands removed for other reasons. There is general agreement that no additional therapy is needed for these incidentally identified foci of papillary thyroid microcarcinoma and, in order to avoid overtreatment, it is worthwhile to consider indicating in the pathology report that these foci have an extremely favorable prognosis and should not be used as a reason for additional therapy (e.g., completion thyroidectomy and RAI). Given their rather common identification in all thyroid gland resections and their indolent biologic behavior, it is not the recommendation of the CAP Thyroid Cancer Protocol to issue a protocol for each case in which incidental papillary thyroid microcarcinomas are found. An exception to such practice would be considered in those examples of papillary thyroid carcinomas measuring ≤1 cm but representing the primary reason a lobe/gland was removed. The tumor could have been discovered clinically (palpable, visible nodule) or by imaging. Given the more sophisticated diagnostic (e.g., imaging) modalities currently available, smaller (i.e., <1 cm) lesions are being identified and resected. In such circumstances, where the primary reason for thyroid surgery is to excise a subcentimeter focus of PTC, reporting should follow the CAP Thyroid Protocol. Although usually extremely indolent, papillary thyroid microcarcinomas may exceptionally behave aggressively, with spread to lymph nodes or distant sites. Such aggressive papillary thyroid microcarcinomas usually harbor their metastases at presentation. The presence of two or more foci of papillary thyroid microcarcinoma, and aggressive features related to the primary tumor such as lymph-vascular invasion, extra-thyroid extension and “aggressive” morphology (e.g., tall cell features), may trigger full-blown treatment including total thyroidectomy and RAI therapy. Such management does not appear to be justified at this time, as there are insufficient data in the literature (long-term follow-up) on these papillary thyroid microcarcinomas with “aggressive” features in the primary to justify such a therapeutic approach.
It is our recommendation that the designation of papillary thyroid microcarcinoma should not be applied to children and adolescents under 19 years old, as a significant number of these subcentimeter papillary carcinomas occurring in the pediatric population display extrathyroidal extension and distant metastases.
The updated CAP protocol is a step toward improving the clinical value of the pathologic reporting of thyroid carcinomas. An accurate assessment of the extent of invasion of the tumor capsule, especially lymph-vascular invasion, is an important element in the reporting of thyroid carcinomas. Meticulous microscopic examination of TC is no longer an academic exercise but a necessity in the management of these malignancies. Proliferative assessment of the tumor (i.e., mitosis and necrosis) is of high prognostic value in the determination of a poorly-differentiated thyroid carcinoma. The extremely indolent behavior of papillary thyroid microcarcinoma should be communicated to the clinician in order to avoid overtreatment. There are still unresolved issues in the histopathologic diagnosis of thyroid carcinomas. Large clinico-pathologic studies with long-term follow-up are still needed in order to increase the impact of histopathology on the prognosis and management of TC. With the advent of molecular diagnostics, the anticipation is that many of these controversial issues will be resolved, but until that time, pathologists must rely on morphology in the assessment and reporting of thyroid carcinomas.
Looking back on 51 years of the Carol Nachman Prize in Rheumatology—significance for the field of spondyloarthritis research
daa53863-de4e-4043-9423-a5449ae09fbf
11442482
Internal Medicine[mh]
The casino (Spielbank) of Wiesbaden, capital of the German state of Hessen, has endowed the Carol Nachman Prize to promote research work in the field of rheumatology since 1972. Since 1987, a Carol Nachman Medal has also been awarded. The prize, endowed with 37,500 euro, is the second-highest medical award in Germany and serves to promote clinical, therapeutic, and experimental research work in the international field of rheumatology. The Carol Nachman Prize and Medal bear the name of their donor, the long-time casino concessionaire and honorary citizen of Wiesbaden, Carol Nachman. About 50 years ago, he and the Wiesbaden rheumatologist Prof. Klaus Miehlke launched the prize together with the then Mayor of Wiesbaden, Alfred Herbel. Since 1972, the prize has been awarded to more than 80 internationally recognized scientists. The prize is awarded based on evaluation of their work by an independent international scientific committee. Even after the death of the prize donor, the casino of Wiesbaden has continued to provide financial support. The aim of the annual endowment of the Wiesbaden State Capital Prize for Rheumatology is to honor the work of medical doctors and basic scientists in combating these widespread diseases, work that is valuable for everyone. Over the years, the casino has provided more than 1.6 million euro for this purpose. The city of Wiesbaden sees itself as a “rheumatism city” because internationally renowned medical professionals treat thousands of patients with rheumatic diseases every year in internal rheumatology and orthopedic hospitals, outpatient clinics, and the specialist rheumatology practices of the city. In addition, scientific training congresses bring many rheumatologists here, and not only during the annual internal medicine congress in April. For the casino of Wiesbaden, this is reason enough to support the awarding of the “Prize of the State Capital Wiesbaden for Rheumatology” with great commitment on an ongoing basis. On the afternoon before the award ceremony, the Carol Nachman Symposium “50 years of the Carol Nachman Prize—50 years of milestones in rheumatology” was organized by the chairperson of the board of trustees, Elisabeth Märker-Hermann, who has served in this position since 2010, following the late Prof. Joachim R. Kalden. Internationally renowned speakers commemorated 50 years of milestones in rheumatology (epidemiology and public health research, rheumatoid arthritis, spondyloarthritis, and systemic lupus erythematosus) at the ceremonial hall of Wiesbaden City Hall. The talk by Prof. Jürgen Braun serves as the basis for this overview of the Carol Nachman Prize as it relates to the field of spondyloarthritis over roughly the past 30 years. For the 50th anniversary of the Carol Nachman Prize of the City of Wiesbaden, which was celebrated on June 23, 2022, a number of previous prizewinners were invited to give lectures. Among others, J. Braun was asked to create an overview of the award winners of the previous 30 years in the field of spondyloarthritis (SpA). On the basis of the list of prizewinners provided, a selection was made as to which prizewinners had made a name for themselves either primarily or also in the field of SpA and had published important articles. Then the winners were emailed and asked to name their 5–8 most important publications in this field. In the reference list, the names of Carol Nachman Prize winners are highlighted in bold. The awardees are listed in Table .
A short introduction to their work and their main publications is presented below. Usually, 5–8 publications were given. Prof. Robert Hammer (Department of Biochemistry) and Prof. Joel Taurog (Division of Rheumatic Diseases), University of Texas Southwestern Medical Center, Dallas The main part of their work related to spondyloarthritis (SpA) is based on a really fascinating animal model. Their first paper described this animal model, showing that transgenic rats expressing HLA-B27 and human beta 2 microglobulin develop a disease with similarities to human SpA. In the same model, it was shown that expression of HLA-B27 correlates with the SpA-like symptoms developing in the animals. Furthermore, the authors showed that the SpA-like disease could be transferred by bone marrow engraftment. Another important finding, and certainly among the most interesting findings of that model, was that the SpA-like disease did not develop if the rats were held in a germ-free environment. Furthermore, the authors demonstrated that normal luminal bacteria, especially Bacteroides species, may function as mediators of chronic colitis, gastritis, and arthritis in HLA-B27/human beta 2 microglobulin transgenic rats. Although the question remains as to what this model has finally taught us about the human disease, this was great scientific work that included major genetic and immunologic research questions. Prof. Joachim Sieper and Prof. Jürgen Braun, Charité University Medicine, UKBF Berlin, Germany The work of these authors also included many aspects of SpA, but they concentrated on human material such as synovial fluid and sacroiliac biopsies and on modern imaging technology. Their first important study showed that magnetic resonance imaging (MRI) is useful to detect active sacroiliitis in patients with no definite radiographic changes. In a landmark study, the authors demonstrated for the first time that tumor necrosis factor alpha (TNFα) is heavily expressed in inflamed sacroiliac joints of patients with ankylosing spondylitis (AS), now named axial spondyloarthritis (axSpA). In a follow-up study, the degree of enhancement detected by MRI was shown to correlate with cellularity in early and active sacroiliitis in patients with axSpA.
The next project was a pilot study that showed for the first time that anti-TNF therapy (with infliximab) works very well in patients with AS. In the following first randomized controlled trial on biologics in axSpA, it was clearly demonstrated that inhibitors of TNF (TNFi) are efficacious in AS. This study provided the basis for approval of infliximab for AS by the European Medicines Agency (EMA), because of an unmet need in this disease. These studies provided the main basis for being awarded the Carol Nachman Prize and later, in 2003, the European League Against Rheumatism (EULAR) award. In the over 20 years after receiving the Carol Nachman Prize, these two authors have continued to contribute considerably to different fields of spondyloarthritis such as diagnosis, new treatments, imaging, and outcome parameters. Prof. Joachim Sieper, Charité Universitätsmedizin Berlin, UKBF In an important early study, the authors showed that the T cell cytokine pattern detected in the synovial membrane of patients with rheumatoid arthritis (RA) and reactive arthritis (ReA) differs regarding the expression of cytokines such as interferon gamma (IFNγ) and interleukin (IL)-4, in keeping with the T helper cell (Th)1/Th2 paradigm. More than 10 years later, the Assessment of SpondyloArthritis International Society (ASAS) group developed candidate criteria for the classification of axSpA, including patients without radiographic changes in the sacroiliac joints. Thereafter, the first classification criteria for axSpA, which are still used, were validated. These classification criteria, in conjunction with the 1984 modified New York criteria, allowed for differentiation between radiographic (r-axSpA) and non-radiographic (nr-axSpA) axSpA. The former is largely equivalent to AS. However, nr-axSpA emerged as a new indication for biologic and targeted synthetic disease-modifying anti-rheumatic drugs (b- and tsDMARDs). The first agent to be successfully studied in this indication of nr-axSpA was the TNFi adalimumab. Neglecting the difference between r-axSpA and nr-axSpA, which is only relevant for classification and not for clinical diagnoses, another important study in early, active axSpA compared infliximab plus the non-steroidal antiinflammatory drug (NSAID) naproxen versus naproxen alone in a double-blind, placebo-controlled trial, INFAST. As expected, infliximab worked better, but naproxen also led to remission in about 30% of patients (as compared to about 60% with the TNFi). Shortly thereafter, it became clear that there are other bDMARDs working in AS: of those blocking interleukin 17 (IL-17), the first one approved for AS was secukinumab, followed later by ixekizumab. Both agents are also efficacious in nr-axSpA, as published in early 2020. As of today, five TNFi are approved for axSpA (only infliximab is not approved for nr-axSpA), and three of them (infliximab, etanercept, adalimumab) already have several approved biosimilars. In addition, there are, in the meantime, three IL-17 inhibitors (IL-17i): secukinumab, ixekizumab and, recently, bimekizumab. Furthermore, two Janus kinase (JAK) inhibitors, tofacitinib and upadacitinib, are also approved in the field of SpA. Great achievements were made through large national and international studies in which this awardee has often played a leading role.
Prof. Jürgen Braun, Rheumazentrum Ruhrgebiet 2001–2021, Ruhr-Universität Bochum; since 2024 Rheumatologisches Versorgungszentrum Steglitz After having developed a new MRI-based scoring system to quantify spinal inflammation in patients with AS, the authors were able to demonstrate for the first time, in a multicenter, randomized, double-blind, placebo-controlled trial, that treatment with infliximab led to a major reduction of axial inflammation in the vertebral column. The relationship between inflammation and new bone formation in AS was analyzed in detail to show that inflammatory MRI changes precede new bone formation in AS. Many years later, a large population-based study taking advantage of MRI performed as part of the Study of Health in Pomerania (SHIP) was conducted. In this project, the frequency of MRI changes suggestive of axSpA in the axial skeleton in a large cohort of individuals aged <45 years was assessed and found to be much more frequent than previously thought. Furthermore, the study results supported the hypothesis of mechanical strain contributing to the presence of bone marrow edema (BME) in the general population aged <45 years and the role of HLA-B27 positivity as a severity rather than a susceptibility factor for BME in the sacroiliac joints. After long discussions on how to define severity, the ASAS decided to develop a health index for patients with AS, the ASAS HI. This global initiative, based on the International Classification of Functioning, Disability, and Health (ICF) developed by the World Health Organization (WHO), allows for quantification of global functioning of patients with AS. Some years later, the ASAS developed quality standards to improve the quality of health and care services for patients with axSpA. The treat-to-target strategy (T2T) has been increasingly recognized as the best way to treat inflammatory rheumatic diseases. It is associated with a strong tendency to focus on disease activity markers and the reduction of inflammation. In an important review about the significance of physical function and activity in axSpA, it is stressed that rheumatologists should also consider these outcome parameters as important in their management strategy to treat patients with axSpA, which is very consistent with the quality standards for axSpA.
Prof. A. Robin Poole, McGill University, Montréal, Canada This author has mainly concentrated on autoimmune responses to the connective tissue structure of cartilage. The matrix of cartilage contains glycosaminoglycans, proteoglycans, and collagen fibers. Immunity to proteoglycans can indeed be induced by injection of human cartilage proteoglycan in BALB/c mice, since these animals develop progressive polyarthritis and AS-like features. More specifically, such rheumatic symptoms could also be induced in these mice by the proteoglycan aggrecan G1 domain. In this animal model, a T cell line specific to an epitope on the G1 domain of aggrecan induced arthritic symptoms by adoptive transfer and homing to the intraarticular epitope. Furthermore, the results of proliferation assays with peripheral blood lymphocytes from patients and healthy controls suggested that the cartilage link protein is a potential autoantigen in the development of both RA and AS. When this link protein was repeatedly injected intraperitoneally into BALB/c mice, a persistent, erosive, inflammatory polyarthritis developed. Importantly, a single T cell epitope was recognized by specific T lymphocytes. Immunity to cartilage molecules such as link protein, aggrecan, or the G1 domain of aggrecan has also been observed in patients with AS. In an interesting review, it was also proposed that involvement of other tissues involved in axSpA, such as the eye and the aorta, may be due to cross-reactive immunity. Also here, an important research question related to immunity to human cartilage was tackled by studies mainly using an animal model but also with human peripheral blood. However, as of today, no more evidence has been generated to support that hypothesis.
Profs. Auli and Paavo Toivanen, University of Turku
These authors have concentrated on the role of bacteria in the pathogenesis of ReA, which constitutes a small part of the spectrum of SpA. ReA causes joint pain and swelling, most often asymmetric, in knees, ankles, and feet, triggered by an infection in another part of the body, most often in the gastrointestinal or the urogenital tract and usually caused by bacteria such as Chlamydia, Salmonella , and Yersinia . A major research question has always been whether the presence of microbes or a pathologic reaction of the immune system is more important for the pathogenesis. Using polymerase chain reaction (PCR), chromosomal DNA of Yersinia was not found in the synovial specimens of patients with Yersinia -triggered ReA or controls . However, with immunocytochemical techniques, Yersinia antigens were observed in synovial specimens from patients with Yersinia -triggered ReA . Later work demonstrated for the first time that bacterial antigens may persist for a long time in patients who develop ReA after an infection with Yersinia enterocolitica O:3 . The Yersinia adhesin, YadA, seems to be involved in interactions with extracellular matrix molecules after invasion of the intestinal tissue . In a first study to assess bacteria-specific immune responses in HLA-B27+ compared to HLA-B27− individuals, antibodies to Yersinia, Salmonella , and Klebsiella were more often found in the former, suggesting that differences in such immune responses are related to HLA-B27 . In an important first study using an animal model, it was shown that antibiotic therapy of Yersinia -triggered ReA only works if the treatment is started very early in the course of the disease and if given in sufficient dosage . However, antibiotic treatment had no effect on fully developed arthritis, nor on antibody formation .
As of today, the detection of chromosomal bacterial material in the synovial fluid has a place in clinical practice especially for Chlamydia , but this is often not done because the incidence of ReA has rather declined in recent decades. Similarly, antibiotic therapy will especially be performed if there is evidence of Chlamydia in the urogenital tract.
Prof. Pierre Miossec, University of Lyon, France
The inhibitory effect of soluble TNF receptors on IL‑6 production and collagen degradation in synovium and bone was shown to be increased upon adding soluble IL-17 receptor and soluble IL‑1 receptor II. This supports the concept of combination therapy to further increase the response to therapy . Later, it became clear that Th17 cells produce IL-17, and that IL-17 is inhibited by IFN‑γ, while IL-23 enhances IL-17 production. Furthermore, there was increasing evidence that IL-17 is able to induce IL‑1 and TNF production, while IL‑1 and IL‑6 can increase IL-17 secretion but IL‑4 inhibits IL-17 . The Th17 subset is a new T cell subset described in addition to the Th1 and Th2 T cell subsets, which seems to be controlled through IL-23 . It became more and more likely that these Th17 cells play a key role in chronic inflammatory diseases.
Treatment trials several years later made it clear that IL-17 does not play an important role in RA but rather in psoriasis and in axSpA. Based on recent clinical trial data, IL-23 and IL-17 are at least partly uncoupled in axSpA. Reasons as to why, when, and how this plays a role in the pathogenesis of SpA were discussed with special reference to the microenvironment of the subchondral bone marrow. Especially the different interactions between lymphocytes and stromal cells play an important role in immune responses . Stromal cells have indeed been shown to contribute to inflammation—from induction to chronicity or resolution—through direct cell interactions and through the secretion of pro-inflammatory and anti-inflammatory mediators. Today, anti-IL17 therapy has an established place in the treatment of axSpA and psoriatic arthritis (PsA).
Prof. Désirée van der Heijde, University of Leiden, the Netherlands
This author is the most important and leading figure of ASAS who has initiated many projects to standardize measurements and instruments to assess important features of AS and axSpA. In a very early project , instruments for the core set for DC-ART, SMARD, physical therapy, and clinical record-keeping in AS were selected by the ASAS Working Group to be able to compare results across studies. This core set has recently been updated . The studies leading to establishment of the classification criteria for axSpA which are now used worldwide in many trials have already been cited . Another landmark study led to establishment of the disease activity criteria for axSpA which are now also used worldwide in clinical studies, the ASDAS . Another very important study linked disease activity to radiographic progression in axSpA .
The ASAS-EULAR management recommendations for axSpA, which have guided rheumatologists all over the world on how to treat their patients , have recently been updated . Great achievements by large national and international studies were made in which this awardee has often played a leading role.
Prof. Paul Emery, University of Leeds, England, UK
This author has a great record in many fields of rheumatology, with significant contributions also in the field of SpA. In an early review, we were reminded of the significance of enthesitis in axSpA . Unlike monoclonal antibodies, the TNF receptor fusion protein etanercept works on arthritis but not on colitis in Crohn’s-related SpA . Later, it became clear that the frequency of flares of Crohn’s disease was also higher in axSpA patients treated with etanercept . In a long-term study, it was shown that the degree of inflammation in the SIJ in combination with HLA-B27 was predictive of structural changes in these joints in the further course of patients with axSpA . In a landmark study with very early young HLA-B27+ patients with axSpA treated with the TNFi infliximab, partial remission was reached in more than half of the cases . In another landmark study, it was shown that a T2T approach leads to better outcomes with a few more side effects in PsA . The T2T approach is nowadays more and more established, also in the field of SpA. Enthesitis is an important outcome in clinical studies. Great achievements by large national and international studies were made in which this awardee has often played a leading role.
Prof. Dr. Georg Schett, Universität Erlangen
This author has made important basic science and clinical contributions not only in the field of SpA. Among the first was the Dickkopf (DKK)-related protein 1 that is encoded in humans by the DKK1 gene. DKK‑1 inhibits the Wnt signal transduction pathway. Wnt signaling is a multifaceted pathway that regulates several important cellular pathways which are β‑catenin-dependent or not. TNFα was identified as a key inducer of DKK‑1 in a mouse inflammatory arthritis model and also in human RA. These results suggested that the Wnt pathway is a key regulator of joint remodeling . Psoriasis patients without arthritis show substantial signs of enthesophyte formation, representing new bone formation at mechanically exposed sites of joints. This finding supports the concept of a deep Koebner phenomenon at entheseal sites in patients with psoriasis . The pathophysiology of enthesitis is of special interest in the field of SpA. In an important review, the role of biomechanics, prostaglandin E2-mediated vasodilation, and the activation of innate immune cells in the initial phase of enthesitis was addressed , and also the possible role of entheseal IL-23-responsive proinflammatory cells that produce IL-17, IL-22, and TNFα. The data of another study strongly suggested that a very early disease interception in patients with incipient psoriatic arthritis leads not only to a decline in skin symptoms but also to reduction of pain and subclinical inflammation . Finally, in a highly cited review, the ability to block specific cytokine pathways was interpreted as an important tool to reveal pathophysiological differences among autoimmune diseases, hereby providing a framework for reclassification of rheumatic diseases .
Prof. Maxime Dougados, René Descartes University of Paris, Hospital Cochin, France
This author has a great record in many fields of rheumatology, with significant contributions in the field of SpA. In the first randomized placebo-controlled clinical trial to study the efficacy and safety of a potential disease-modifying anti-rheumatic drug (DMARD) in AS, sulfasalazine, there was no convincing evidence of its efficacy . After the first attempt to enlarge the spectrum of SpA , ASAS improvement criteria, initially for AS, were developed , which are still widely used. Following the German early SpA inception cohort GESPIC , the DESIR (DEvenir des Spondylarthropathies Indifférenciées Récentes) cohort is one of the most important cohort studies in the field of axSpA. An early study showed that HLA-B27 is associated with earlier onset of IBP, less delay in diagnosis, more axial inflammation, and more radiographic changes in the SIJ . The ASAS-COMOSPA study focused on comorbidities and included almost 4000 patients worldwide. The most frequent comorbidities found were osteoporosis and gastroduodenal ulcer, and the most frequent risk factors were hypertension, smoking, and hypercholesterolemia, indicating a significant cardiovascular risk of these patients . In another landmark T2T study, it was shown that this approach led to better outcomes than traditional strategies in patients with axSpA . Great achievements by large national and international studies were made in which this awardee has often played a leading role.
Prof. Dafna Gladman, University of Toronto, Canada
This author has worked intensively all her life on all aspects of PsA. In an early study it was shown that PsA can be a rather severe disease . The role of environmental factors in the development of PsA was highlighted in another study , while the disadvantages of late presentation of patients with PsA were stressed by showing that this is associated with more joint damage . The contribution of certain HLA‑B and HLA‑C alleles to the susceptibility to PsA among patients with psoriasis was analyzed in another study , while the biomarkers metalloproteinase (MMP)-3 and cartilage oligomeric matrix protein (COMP) were found to be predictive of drug responses to anti-TNF therapy in patients with PsA . The prospective follow-up of patients with psoriasis was instrumental to show that the chemokine CXCL10 is a biomarker for the development of PsA . Furthermore, a preclinical phase was described that is characterized by nonspecific musculoskeletal symptoms, including joint pain, fatigue, and stiffness, which was found to precede the diagnosis of PsA in patients with psoriasis .
Prof. Iain McInnes, University of Glasgow, Scotland, UK
As already pointed out, the authors of this review argued that “immune-mediated inflammatory diseases” should be defined less by their organ involvement than by a molecular-based classification . Indeed, a similar response or non-response to targeted anti-cytokine therapies might be more connecting than an organ-based definition . The role of dendritic cells (DC) was highlighted based on the finding of a reduced number of these cells in peripheral blood (PB) of RA and PsA patients, an increase in synovial fluid compared to PB, and an incomplete maturation of these cells in the inflamed synovial compartment . In an important head-to-head (H2H) trial in PsA, a similar proportion of ACR20 responders was found among patients treated with the JAK inhibitor upadacitinib dosed at 15 mg/day and the TNFi adalimumab at 40 mg every 2 weeks, while the response rate for the 30 mg/day dose of upadacitinib was even higher than for adalimumab. However, the higher dose has not been approved for this indication. Both drugs were superior to placebo . In another H2H trial, the IL-17i secukinumab was not superior to adalimumab in patients with active PsA. The higher retention rate of secukinumab could be due to the superior effect of the IL-17i on skin manifestations .
The anti-IL23 antibody guselkumab, which binds selectively to the p19 subunit of this cytokine, was much more efficacious than placebo in a large randomized controlled trial in active PsA . Great achievements by large national and international studies were made in which this awardee has often played a leading role.
Prof. Dirk Elewaut, University of Ghent, Belgium
This author has made important basic science contributions to the field of SpA. In an animal model, the role of mechanical stress was shown to be mandatory for inflammation and new bone formation at entheseal sites and probably also at other sites of the axial skeleton, providing some explanations regarding the sites of disease manifestations in SpA . More evidence that mechanical strain controls the site-specific localization of inflammation and tissue damage in arthritis was provided in a following study . Dysregulated IL-23/IL-17 responses seem to be present in SpA. The retinoic acid receptor-related orphan receptor gamma isoform t (RORγt) is an important Th17 cell transcriptional regulator. The potential of RORγt antagonism to modulate aberrant type-17 responses was highlighted in a study with SpA patients . In work focusing on the microbiome, it was shown that the intestinal microbial composition of patients with SpA who have microscopic gut inflammation is different compared to those without colitis. Moreover, the microbial genus Dialister was abundantly present in the gut of these patients, which correlated with disease activity . In recent decades, the role of gut inflammation has been studied a lot—not only in Ghent. In this important study, an association between gut inflammation and the degree of subchondral bone marrow edema in the sacroiliac joints of SpA patients gave new support to the long-postulated gut–joint link in this disease .
Prof. Maria Antonietta D’Agostino, Università Cattolica del Sacro Cuore, Rome, Italy
This author has mainly focused on imaging in SpA. In an early cross-sectional study, power Doppler ultrasonography was introduced for investigation of SpA-associated peripheral enthesitis . Some evidence was provided that this technique can be helpful to diagnose SpA early in a follow-up study . The frequency of synovitis was determined in an ultrasound study in the healthy population. The findings were found to be relevant for calculation of the specificity of a positive ultrasound finding in the diagnostic approach of SpA . By analyzing data from the DESIR cohort on patients with early axSpA, the authors identified two clinical phenotypes—one with predominantly axial manifestations and one with predominantly peripheral manifestations . In the context of an OMERACT project, a final reliable ultrasound score and a consensus-based definition of enthesitis in SpA were generated, with possible implications for clinical practice . In a therapeutic study, the treatment response of PsA patients to secukinumab was studied by ultrasonography, which showed a rapid reduction of synovitis, in good correlation to improvement of clinical symptoms . In a follow-up study of an earlier investigation, it was shown that the presence of peripheral manifestations at diagnosis of axSpA results in a poorer clinical outcome compared to an axial manifestation alone .
Univ.-Prof. Dr. med. Martin Rudwaleit and Prof. Dr. Denis McGonagle
Univ.-Prof. Dr. med. Martin Rudwaleit, University of Bielefeld, Germany
This author has not only worked on the definition of IBP , but also very much on the diagnosis and classification of axSpA . Regarding anti-TNF therapy, he has made clinically relevant contributions for the prediction of a major response to this treatment . Following the early study on the efficacy of NSAIDs to reduce radiographic progression in AS showing that continuous treatment with celecoxib is superior to on-demand therapy , he and his coworkers studied the performance of diclofenac in this regard but failed to show an effect . However, in contrast, a dose-dependent effect of smoking on radiographic progression in AS was clearly demonstrated . Currently, M. Rudwaleit is leading the ASAS part of the CLASSIC study, an international multicenter trial including a second evaluation of the ASAS classification criteria, aiming for an increase in their specificity.
Prof. Dr. Denis McGonagle, University of Leeds, UK
This author, last but clearly not least, started early by reminding us that enthesitis is an important clinical feature of SpA . The late J. Ball had expressed his view on the relevance of enthesitis in his Heberden Oration lecture shortly before the association of AS with HLA-B27 had been discovered 50 years ago (as recently reviewed ). Nevertheless, this author has taken a deep dive into the synovio-entheseal complex and he has nicely explained the functional interdependence of an enthesis with adjacent synovium and how this has an influence on the phenotypic expression of joint disease—not only in PsA . An important step in the understanding of immune-mediated diseases was his paper on the differentiation of autoimmunity from autoinflammation . The recognition and genetic understanding of autoinflammatory diseases has helped to define mechanisms of self-directed inflammation which act independently of adaptive immunity. Local factors at sites predisposed to disease lead to activation of innate immune cells such as macrophages and neutrophils. For example, disturbed homeostasis of canonical cytokine cascades (as in periodic fever syndromes), aberrant bacterial sensing (as in Crohn’s disease), and tissue microdamage (as in PsA) predispose to site-specific inflammation triggered by innate immune dysregulation at sites of mechanical stress, driving SpA pathology .
Fitting into this concept, the author later proposed together with Turkish colleagues the term “MHC-I-opathy” to explain how and why Behçet’s disease and several clinically distinct forms of SpA, all associated with MHC class I alleles such as HLA-B51, HLA-C*06:02 , and HLA-B27 and epistatic ERAP1 interactions, have a shared immunopathogenetic basis. This also includes a barrier dysfunction in environmentally exposed organs such as the skin, and aberrant innate immune reactions at sites of mechanical stress . Finally, taking advantage of human spinous processes, entheseal soft tissue, and peri-entheseal bone harvested during elective orthopedic procedures, the author showed that the spinal entheseal Vδ1 and Vδ2 subsets are tissue-resident cells with inducible IL-17A production . He also provided evidence that the Vδ1 subset does so independently of IL-23R expression. This is important because a sophisticated animal experiment had shown several years ago that IL-23 is essential in enthesitis by acting on a previously unidentified IL-23 receptor . However, there was some disappointment later on because therapies directed against IL-23 did not work in AS , while anti-IL-17 antibodies did . The situation became even more complicated when it became clear that anti-IL-23 but not anti-IL-17 agents work in IBD, while both were shown to be efficacious in psoriasis and PsA . Very recently, McGonagle proposed an explanation for differences between axSpA and PsA with axial involvement, arguing that in contrast to “HLA-B27-associated disease,” the enthesitis seen in PsA primarily manifests in ligamentous soft tissue as “ligamentitis” . This may also explain differential responses to IL-23 inhibition, since the enthesis bone and soft tissues have radically different immune cell and stromal compositions.
This article was prepared to give an overview of the work of the Carol Nachman Prize winners and their impact on spondyloarthritis research. The list of awardees and their most important publications provides an overall view of their personal achievements during the past decades. About 40% of the awardees of the Carol Nachman Prize in the last 31 years were working at least in part in the field of spondyloarthritis. The following themes were covered:
Axial spondyloarthritis
Psoriatic arthritis
Reactive arthritis
Enthesitis
Inflammatory bowel disease
Mechanical stress as trigger
Autoimmunity
Microbiome
IL-17/IL-23
Osteoimmunology
Comorbidity
Magnetic resonance imaging
Ultrasound
Radiographic progression
The above list is an interesting mixture of different fields in medicine and rheumatology covering clinical science, epidemiology, imaging, therapy, basic research, cytokines, animal models, the microbiome, autoimmunity, and mechanical factors.
For the majority of the award winners cited here, the field of spondyloarthritis was the focus or at least a major part of larger scientific projects in basic immunology and rheumatology. However, several authors have also contributed significant work to the pathogenesis of rheumatoid arthritis and osteoarthritis or to the epidemiology and management of other rheumatic diseases. Looking at the reference list, most authors have worked together in many projects. This may well be one reason for the great success of research in rheumatology and the field of SpA. Of the 18 awardees presented here, 4 were women (22%) and 14 were men (78%). This imbalance should be taken as a stimulus to bring more female researchers into the field of rheumatology. In fact, studies on sex- and gender-related differences regarding diseases and treatment outcomes have substantially increased in the past years. The absence of Carol Nachman Prize winners in the field of spondyloarthritis before 1992 mainly indicates that the focus of international and national rheumatology was rather on rheumatoid arthritis and connective tissue diseases. The discovery of the HLA-B27 association by D. Brewerton , the proposed concept of spondyloarthritis by J. Moll & V. Wright , the first description of inflammatory back pain by A. Calin , the first hint that NSAIDs may inhibit radiographic progression in AS , the detailed radiographic concept of radiographic changes in the sacroiliac joint by W. Dihlmann , the distinguished and still valid calculation of the diagnostic performance of HLA-B27 by M. Khan , and the publication of the modified New York criteria for AS by S. van der Linden are good examples of valuable scientific research in the field of AS. The award of the Carol Nachman Prize and the annual celebration in the city of Wiesbaden have always been a festive event to appreciate prior work or even the lifetime achievement of the awardees and their groups. In addition, it has served as an incentive to generate new knowledge and promote research in rheumatology. Awards such as the Carol Nachman Prize have contributed and continue to contribute a lot to the stimulation of research in rheumatology.
Impact of direct mesenteric perfusion on malperfusion in acute type A aortic dissection repair
d3c044ad-364f-4ed2-97c6-50957679ba3f
11852347
Surgical Procedures, Operative[mh]
Despite recent improvements in surgical outcomes for acute type A aortic dissection (ATAAD), the Japanese Association for Thoracic Surgery reported that the in-hospital mortality rate was 11.2% (670/5995) in 2017 , with patients with mesenteric malperfusion showing mortality rates greater than 60% . The traditional approach for ATAAD with mesenteric malperfusion has involved immediate central repair followed by bowel resection if needed. This is because many surgeons believe that the best strategy for treating malperfusion syndrome is to rapidly improve flow to the true lumen and reduce pressure in the false lumen. However, central repair can be time-consuming and may not adequately address branch-type static obstructions of the superior mesenteric artery (SMA) . In addition, even when bowel resection is performed, the results may be disastrous. Recent reports suggest that treating intestinal ischaemia before central repair may improve the outcomes . However, this is impossible in unstable patients. Furthermore, a certain number of deaths due to aortic rupture have been reported when surgeons have prioritized revascularization of the mesenteric artery through endovascular treatment . As both intestinal ischaemia and aortic dissection often undergo various clinical transitions, the optimal management remains unclarified. We consider it important to quickly resolve these two problems. Since 2011, the Japanese Red Cross Aichi Medical Center Nagoya Daini Hospital has used a specific approach for mesenteric malperfusion in ATAAD based on preoperative CT scans. While temporary SMA perfusion using saphenous vein bypass grafting during ATAAD repair was first reported by Okada et al. in 2007 , our approach involves exploratory laparotomy and immediate surgical reperfusion with SMA plasty concurrent with open repair of ATAAD. The present study aimed to describe our systematic surgical approach for concurrent management of central aortic repair and mesenteric malperfusion, and to report the clinical outcomes of this integrated strategy in patients with and without temporary SMA perfusion.
This retrospective observational study examined consecutive patients who underwent repair of ATAAD between August 2011 and November 2022. ATAAD was defined as an onset within 14 days of admission. Patients were included if they had suspected mesenteric malperfusion based on preoperative CT findings showing SMA dissection and clinical symptoms such as abdominal pain and tenderness. Patients were excluded if they had pre-existing bowel necrosis at laparotomy precluding central repair, or if SMA revascularization was performed separately from central repair. The primary end-point was the 30-day operative mortality. The secondary end-points were successful SMA revascularization (confirmed by postoperative contrast CT), need for bowel resection, laparotomy-related complications (surgical site infection, prolonged ileus, and reoperation), and aortic event-free survival. Aortic events were defined as aortic reoperation, aortic rupture, or aortic-related death. The Institutional Review Board of the Japanese Red Cross Aichi Medical Center Nagoya Daini Hospital approved the study (approval no. 1632) on 6 September 2024 and waived the need for consent.
Diagnosis of mesenteric malperfusion
Malperfusion was defined as a failure of blood flow to end organs caused by dissection-related obstruction of the aorta and its branches.
Based on contrast-enhanced CT findings and clinical assessment, patients with aortic dissection extending into the SMA were diagnosed with mesenteric malperfusion. The diagnostic criterion for mesenteric malperfusion was either true lumen narrowing of the SMA on CT with clinical symptoms (abdominal pain, hematochezia) or complete occlusion of the SMA true lumen on CT, regardless of symptoms. This classification was maintained regardless of subsequent intraoperative findings after cardiopulmonary bypass (CPB) initiation. In all patients meeting these diagnostic criteria, we performed laparotomy and SMA perfusion simultaneously with CPB establishment.
Operative technique
After being diagnosed with ATAAD, patients were expeditiously transferred to the operating theater for emergency repair. The procedure was performed under general anaesthesia, with the patients intubated and in the supine position. All procedures, including SMA access and reconstruction, were performed by cardiac surgeons experienced in aortic surgery. Additional surgical specialists were not required.
Surgical approach
The initial phase involved exposing the undissected right axillary and common femoral arteries for arterial access. A median sternotomy incision was extended to the umbilicus to expose the SMA. This manoeuvre typically alleviated shock due to cardiac tamponade. The right atrium was exposed for venous access.
Anticoagulation and CPB
Heparin (300 U/kg) was administered to maintain an activated clotting time of longer than 400 seconds. CPB was established using right axillary and femoral arterial cannulation for inflow and right atrial drainage for outflow. A left ventricular vent was inserted through the right superior pulmonary vein, followed by systemic cooling to the target temperature. No endotoxin absorption was used during CPB.
SMA management
Concurrent with systemic cooling, an upper median laparotomy was performed. The transverse colon was manually elevated, and the mesentery was incised to expose the main SMA trunk (Fig. A). As demonstrated in Video 1, the SMA was taped distally beyond the middle colic artery branching site. If the intestine showed signs of compromised viability, such as discoloration or reduced peristalsis, and Doppler ultrasound (Model 811-B, Parks Medical) detected reduced SMA flow, SMA perfusion was initiated. The SMA was clamped and transversely incised (Fig. B). When a false lumen thrombus was observed upon incision of the SMA, thrombectomy was performed using a 4-French balloon catheter (Fig. C). The thrombectomy prioritized perfusion over complete removal, focusing only on achieving adequate space for the perfusion tube and intimal fixation. If resistance was felt, to prevent a new tear, catheter insertion was withheld and the balloon expansion was limited to the thickness of the short axis of the false lumen. Subsequently, an 8-French multipurpose tube (Atom Medical Corp.) was inserted into the distal true lumen of the SMA (Fig. D) and connected to the CPB arterial line branch (Fig. A). The cannula to the SMA was connected to the main arterial line of the CPB circuit, with no independent flow regulation to the SMA. The mesenteric perfusion was therefore dependent on the pressure from the main arterial line. The bowel was observed for improvements in colour and peristalsis, with Doppler ultrasound repeated if necessary to confirm the effectiveness of mesenteric perfusion.
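To make the two decision points in this part of the protocol explicit, the short Python sketch below encodes the preoperative diagnostic rule for mesenteric malperfusion and the intraoperative trigger for starting direct SMA perfusion. It is only an illustrative restatement of the criteria described above, not code from the study, and all type, field, and function names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class PreopFindings:
    """Hypothetical summary of the preoperative CT and clinical findings."""
    true_lumen_narrowed: bool   # SMA true lumen narrowing on contrast-enhanced CT
    true_lumen_occluded: bool   # complete occlusion of the SMA true lumen on CT
    abdominal_symptoms: bool    # abdominal pain and/or hematochezia


def has_mesenteric_malperfusion(f: PreopFindings) -> bool:
    # Narrowing counts only together with symptoms; complete occlusion counts regardless of symptoms.
    return (f.true_lumen_narrowed and f.abdominal_symptoms) or f.true_lumen_occluded


def start_sma_perfusion(bowel_viability_compromised: bool, doppler_flow_reduced: bool) -> bool:
    # Intraoperative trigger: compromised bowel viability together with reduced SMA Doppler flow.
    return bowel_viability_compromised and doppler_flow_reduced


# Example: a narrowed true lumen with abdominal pain is treated as mesenteric malperfusion,
# but SMA perfusion is started only if the bowel looks compromised and Doppler flow is reduced.
patient = PreopFindings(true_lumen_narrowed=True, true_lumen_occluded=False, abdominal_symptoms=True)
print(has_mesenteric_malperfusion(patient))                                              # True
print(start_sma_perfusion(bowel_viability_compromised=True, doppler_flow_reduced=False))  # False
```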
Central aortic repair
Following establishment of adequate SMA perfusion, central aortic repair was commenced. Distal aortic anastomosis was performed under hypothermic circulatory arrest with selective cerebral perfusion at a bladder temperature of 25°C. During circulatory arrest, the brachiocephalic artery was clamped and axillary perfusion was initiated to ensure blood flow to the right common carotid artery and SMA (Fig. B). The initial flow through both the right axillary artery and the SMA was set at 5–8 ml/kg/min, and then adjusted up to 15 ml/kg/min according to the perfusion pressure required to maintain adequate perfusion.
Selective cerebral perfusion
The protocol for selective cerebral perfusion aimed to establish flow in all three major vessels. While the right common carotid artery was perfused via axillary cannulation, a separate pump system maintained perfusion of the left common carotid and left subclavian arteries.
Extent of aortic replacement
The extent of aortic replacement was determined by the location of the entry tear. Ascending aortic replacement was performed for tears in the ascending aorta, partial arch replacement was performed for proximal arch tears, and total arch replacement with a frozen elephant trunk technique was performed for tears of the distal arch and beyond.
SMA revascularization
During rewarming following central repair, SMA revascularization (SMA plasty) was conducted as previously described . In cases of persistent inadequate proximal SMA blood flow or suspected static occlusion/stenosis, thrombectomy of both the pseudo and true lumens was performed using a balloon-tip catheter. Distal SMA thrombectomy followed a similar procedure. Upon improvement of blood flow from both sides, the pseudo lumen of the distal SMA was closed with a running 7-0 polypropylene suture connecting the intima and adventitia. The reconstructed distal SMA wall and proximal adventitia were then anastomosed with 7-0 polypropylene. Post-anastomosis Doppler flowmetry was performed to confirm SMA blood flow.
Final assessment
The procedure concluded with a meticulous assessment of the bowel coloration and peristalsis. Upon confirmation of adequate perfusion, the abdomen was closed. Lactate monitoring consisted of a preoperative measurement at admission, intraoperative measurements every 30–45 min, and postoperative measurements in the ICU every 30 min for three readings, followed by every 2 hours after stabilization.
Statistical analysis
Given the small sample size, continuous data are expressed as median [interquartile range], while categorical variables are expressed as n . Due to the limited number of patients, formal statistical comparisons between groups were not performed. Follow-up completeness was assessed using the formal person-time method. Aortic event-free survival was estimated using the Kaplan–Meier method.
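To make the two outcome summaries above concrete, the following is a minimal Python sketch, not the authors' analysis code, using the lifelines package. The DataFrame and its column names are hypothetical, and follow-up completeness is computed in one common person-time formulation (observed divided by potential follow-up time).

```python
# Minimal sketch of the two survival-type summaries described above, assuming a
# pandas DataFrame with hypothetical columns:
#   follow_up_years  - observed follow-up per patient (years)
#   aortic_event     - 1 if an aortic event occurred (reoperation, rupture, aortic-related death), else 0
#   potential_years  - potential follow-up from surgery to a common closing date (years)
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "follow_up_years": [0.1, 1.5, 3.0, 4.2, 5.0],
    "aortic_event":    [1,   0,   0,   1,   0],
    "potential_years": [4.0, 4.5, 3.0, 4.2, 6.0],
})

# Follow-up completeness by the person-time method: observed person-time divided by
# potential person-time, expressed as a percentage.
completeness = 100 * df["follow_up_years"].sum() / df["potential_years"].sum()
print(f"Follow-up completeness: {completeness:.1f}%")

# Aortic event-free survival estimated by the Kaplan-Meier method.
kmf = KaplanMeierFitter()
kmf.fit(durations=df["follow_up_years"], event_observed=df["aortic_event"])
print(kmf.survival_function_)   # estimated event-free probability over time
```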
In all patients meeting these diagnostic criteria, we performed laparotomy and SMA perfusion simultaneously with CPB establishment. After being diagnosed with ATAAD, patients were expeditiously transferred to the operating theater for emergency repair. The procedure was performed under general anaesthesia, with the patients intubated and in the supine position. All procedures, including SMA access and reconstruction, were performed by cardiac surgeons experienced in aortic surgery. Additional surgical specialists were not required. Surgical approach The initial phase involved exposing the undissected right axillary and common femoral arteries for arterial access. A median sternotomy incision was extended to the umbilicus to expose the SMA. This manoeuvre typically alleviated shock due to cardiac tamponade. The right atrium was exposed for venous access. Anticoagulation and CPB Heparin (300 U/kg) was administered to maintain an activated clotting time of longer than 400 seconds. CPB was established using right axillary and femoral arterial cannulation for inflow and right atrial drainage for outflow. A left ventricular vent was inserted through the right superior pulmonary vein, followed by systemic cooling to the target temperature. No endotoxin absorption was used during CPB. SMA management Concurrent with systemic cooling, an upper median laparotomy was performed. The transverse colon was manually elevated, and the mesentery was incised to expose the main SMA trunk (Fig. A). As demonstrated in Video 1, the SMA was taped distally beyond the middle colic artery branching site. If the intestine showed signs of compromised viability, such as discoloration or reduced peristalsis, and Doppler ultrasound (Model 811-B, Parks Medical) detected reduced SMA flow, SMA perfusion was initiated. The SMA was clamped and transversely incised (Fig. B). When a false lumen thrombus was observed upon incision of the SMA, thrombectomy was performed using a 4-French balloon catheter (Fig. C). The thrombectomy prioritized perfusion over complete removal, focusing only on achieving adequate space for the perfusion tube and intimal fixation. If resistance was felt, to prevent a new tear, catheter insertion was withheld and the balloon expansion was limited to the thickness of the short axis of the false lumen. Subsequently, an 8-French multipurpose tube (Atom Medical Corp.) was inserted into the distal true lumen of the SMA (Fig. D) and connected to the CPB arterial line branch (Fig. A). The cannula to the SMA was connected to the main arterial line of the CPB circuit, with no independent flow regulation to the SMA. The mesenteric perfusion was therefore dependent on the pressure from the main arterial line. The bowel was observed for improvements in colour and peristalsis, with Doppler ultrasound repeated if necessary to confirm the effectiveness of mesenteric perfusion. Central aortic repair Following establishment of adequate SMA perfusion, central aortic repair was commenced. Distal aortic anastomosis was performed under hypothermic circulatory arrest with selective cerebral perfusion at a bladder temperature of 25°C. During circulatory arrest, the brachiocephalic artery was clamped and axillary perfusion was initiated to ensure blood flow to the right common carotid artery and SMA (Fig. B). The initial flow through both the right axillary artery and the SMA was set at 5–8 ml/kg/min, and then adjusted up to 15 ml/kg/min according to the perfusion pressure required to maintain adequate perfusion. 
Selective cerebral perfusion The protocol for selective cerebral perfusion aimed to establish flow in all three major vessels. While the right common carotid artery was perfused via axillary cannulation, a separate pump system maintained perfusion of the left common carotid and left subclavian arteries. Extent of aortic replacement The extent of aortic replacement was determined by the location of the entry tear. Ascending aortic replacement was performed for tears in the ascending aorta, partial arch replacement was performed for proximal arch tears, and total arch replacement with a frozen elephant trunk technique was performed for tears of the distal arch and beyond. SMA revascularization During rewarming following central repair, SMA revascularization (SMA plasty) was conducted as previously described . In cases of persistent inadequate proximal SMA blood flow or suspected static occlusion/stenosis, thrombectomy of both the pseudo and true lumens was performed using a balloon-tip catheter. Distal SMA thrombectomy followed a similar procedure. Upon improvement of blood flow from both sides, the pseudo lumen of the distal SMA was closed with a running 7-0 polypropylene suture connecting the intima and adventitia. The reconstructed distal SMA wall and proximal adventitia were then anastomosed with 7-0 polypropylene. Post-anastomosis Doppler flowmetry was performed to confirm SMA blood flow. Final assessment The procedure concluded with a meticulous assessment of the bowel coloration and peristalsis. Upon confirmation of adequate perfusion, the abdomen was closed. Lactate monitoring consisted of a preoperative measurement at admission, intraoperative measurements every 30–45 min, and postoperative measurements in the ICU every 30 min for three readings, followed by every 2 hours after stabilization. The initial phase involved exposing the undissected right axillary and common femoral arteries for arterial access. A median sternotomy incision was extended to the umbilicus to expose the SMA. This manoeuvre typically alleviated shock due to cardiac tamponade. The right atrium was exposed for venous access. Heparin (300 U/kg) was administered to maintain an activated clotting time of longer than 400 seconds. CPB was established using right axillary and femoral arterial cannulation for inflow and right atrial drainage for outflow. A left ventricular vent was inserted through the right superior pulmonary vein, followed by systemic cooling to the target temperature. No endotoxin absorption was used during CPB. Concurrent with systemic cooling, an upper median laparotomy was performed. The transverse colon was manually elevated, and the mesentery was incised to expose the main SMA trunk (Fig. A). As demonstrated in Video 1, the SMA was taped distally beyond the middle colic artery branching site. If the intestine showed signs of compromised viability, such as discoloration or reduced peristalsis, and Doppler ultrasound (Model 811-B, Parks Medical) detected reduced SMA flow, SMA perfusion was initiated. The SMA was clamped and transversely incised (Fig. B). When a false lumen thrombus was observed upon incision of the SMA, thrombectomy was performed using a 4-French balloon catheter (Fig. C). The thrombectomy prioritized perfusion over complete removal, focusing only on achieving adequate space for the perfusion tube and intimal fixation. 
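For orientation, the following minimal Python sketch illustrates the two survival calculations named above: Kaplan–Meier aortic event-free survival and follow-up completeness by the person-time method. It is not study code, and all patient times in it are hypothetical placeholders.

```python
# Illustrative sketch only (not study code): Kaplan-Meier event-free survival
# and follow-up completeness by the person-time method.
# All numbers below are hypothetical placeholders, not data from this series.
import numpy as np
from lifelines import KaplanMeierFitter

# Months from surgery to aortic event or last contact, with event indicator
# (1 = aortic event, 0 = censored at death or last follow-up).
months = np.array([0.03, 38, 60, 72, 84, 96, 104, 118, 130, 150])
aortic_event = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0])

kmf = KaplanMeierFitter()
kmf.fit(durations=months, event_observed=aortic_event)
print(kmf.survival_function_at_times(60))  # estimated 5-year event-free survival

# Person-time follow-up completeness: observed follow-up divided by the
# follow-up that would have accrued had every survivor been followed to a
# common closing date; deaths count as complete follow-up.
observed = months
potential = np.array([0.03, 150, 150, 150, 150, 150, 150, 150, 150, 150])
print(f"Follow-up completeness: {observed.sum() / potential.sum():.1%}")
```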
A total of 217 patients underwent open repair of ATAAD.
The repair procedures comprised ascending aorta replacement (n = 110), aortic arch replacement (n = 98), and aortic root replacement (n = 9). Among these patients, 12 (5.5%) were initially suspected of having mesenteric malperfusion. Two patients were excluded based on our criteria, leaving 10 patients, who underwent the combined approach to repair ATAAD complicated by mesenteric malperfusion, for analysis in this study (Fig. ). Of the 10 patients, 6 had dynamic obstructions confirmed intraoperatively (5 by Doppler ultrasound after CPB initiation, 1 without thrombectomy). The remaining four patients had false lumen thrombosis, although it was not feasible to distinguish between purely static and mixed obstruction. Table summarizes the characteristics and surgical procedures of the study cohort. At the time of presentation, six patients had symptoms of abdominal pain, two presented with bloody stools, and one had vomited. The remaining patient had no complaints or abnormal abdominal findings, but the CT scan clearly showed that the SMA was occluded by dissection. Four patients (40%) were in a state of preoperative shock, defined as systolic blood pressure <80 mmHg. Of these four patients, two presented with cardiac tamponade, while the other two experienced cardiogenic shock due to either coronary artery malperfusion or acute aortic regurgitation. All patients underwent exploratory laparotomy, during which SMA blood flow was assessed with Doppler ultrasound. Five patients (50%) showed reduced SMA flow and required direct perfusion via CPB. The other five patients maintained adequate SMA flow after CPB initiation and did not require any direct intervention. The time to reperfusion of the SMA was 62.0 [42.0–85.5] min from the initiation of the procedure. Our standardized surgical sequence was followed in all patients, including the four patients with preoperative shock. In one patient with right coronary artery malperfusion, immediate CPB establishment normalized the ST changes, enabling SMA assessment during the cooling phase without delaying central repair. The 30-day operative mortality was 20% (2/10 patients), with deaths attributed to stroke (n = 1) and acute myocardial infarction (n = 1). Postoperative contrast CT confirmed improved SMA perfusion in all eight surviving patients. No patients required bowel resection, and no laparotomy-related complications were observed. The aortic event-free survival rate at 5 years postoperatively was 85%. Median follow-up was 8.2 years (range 1 day–12.5 years). The follow-up rate was 92.8% using the formal person-time method. During follow-up, two in-hospital deaths occurred, and one aortic event occurred at 38 months postoperatively. In the subgroup of four patients presenting with preoperative shock, one patient did not survive, yielding an operative mortality rate of 25% for this high-risk subgroup. One patient had an uncommon vascular anatomical variant in which both the coeliac trunk and the SMA arose from the abdominal aorta as a single common trunk (Fig. B). Dissection had progressed into its branches, but fortunately the coeliac artery branch was unaffected and there were no findings of obstruction (Fig. A). On imaging, the SMA demonstrated poor contrast enhancement in both the true and false lumens, consistent with dissection (Fig. C).
Additionally, there was a reduced caliber of the superior mesenteric vein relative to the SMA, known as the smaller superior mesenteric vein sign (Fig. D). Temporary perfusion was therefore only performed in the SMA. After central repair, thrombectomy and SMA plasty were performed, resulting in no bowel resection and a favourable outcome. No additional interventions were required. A contrast-enhanced CT scan performed 10 years postoperatively demonstrated restored patency of the SMA with good blood flow in the previously affected segments (Fig. ). Despite improvements in surgical outcomes for aortic dissection, managing malperfusion, particularly SMA malperfusion, remains a substantial challenge with high mortality rates . The critical dilemma in cases of mesenteric malperfusion secondary to ATAAD lies in determining the necessity and timing of mesenteric revascularization, complicated by the limited ischaemic tolerance of intestinal tissue. Traditionally, central aortic repair has been prioritized, with the European Society of Cardiology (ESC) guidelines recommending immediate aortic surgery for malperfusion (Class I, Level B) . However, this approach has shown suboptimal outcomes , as central repair alone, while potentially sufficient for purely dynamic obstructions, fails to improve blood flow in cases of static or mixed obstruction. Recent studies have shown improved efficacy with a staged approach beginning with mesenteric revascularization before addressing the proximal aortic pathology . However, this strategy is inappropriate in patients with ATAAD requiring urgent central repair due to life-threatening complications (ie, cardiac tamponade, multisystemic malperfusion, acute heart failure). Prioritizing endovascular intervention reportedly leads to mortality rates of 39% in stable patients and 100% in patients in shock . While some institutions use hybrid operating rooms for simultaneous central repair and malperfusion evaluation, the ESC guidelines restrict such interventions to centers with appropriate expertise and facilities . This limitation is particularly relevant in Japan, where such specialized facilities are not universally available. Our direct SMA perfusion technique offers three key advantages: concurrent revascularization during central repair, feasibility in standard operating rooms without specialized expertise, and real-time bowel perfusion assessment through laparotomy. Notably, this approach achieved remarkably improved survival outcomes—with operative mortality of 25% in shock patients and overall mortality of 20%, substantially lower than historical rates exceeding 60% in this high-risk population of ATAAD with mesenteric malperfusion. However, our small sample size without a control group prevents definitive conclusions about survival benefits. Interestingly, 50% of patients initially diagnosed with mesenteric malperfusion did not require intervention after CPB initiation. Recent studies suggest that CPB-induced haemodynamic changes may improve abdominal malperfusion , possibly contributing to the resolution of perfusion obstruction in some cases. Diagnosing mesenteric ischaemia in ATAAD remains challenging due to its subtle symptoms and non-specific markers . Uchida et al. reported that 46% of 13 patients undergoing exploratory laparotomy showed no evidence of intestinal ischaemia , emphasizing the need for more accurate diagnostic methods. While Orihashi et al. 
proposed specific criteria for diagnosing mesenteric ischaemia in aortic dissection using transoesophageal echocardiography (TEE) , the practical application of this approach presents challenges, particularly in emergency settings. In our series, TEE was not used for SMA flow assessment because it requires advanced diagnostic expertise and carries substantial uncertainty. Given the lethal nature of bowel necrosis, we believe that it would be too risky to rely solely on TEE for decision-making. The present study has several important limitations. First, our small sample size ( n = 10) and retrospective design limit the generalizability of our findings and prevented meaningful statistical comparisons between groups with and without SMA perfusion. Second, the actual blood flow to the SMA during temporary perfusion was not quantified, introducing potential variability in the effectiveness of the intervention. Third, the smaller body habitus typical of the Japanese population may have allowed the 8-French tube to provide adequate perfusion, potentially limiting the applicability of these results to populations with different anthropometric characteristics. In conclusion, while this pilot study suggests that combining exploratory laparotomy with direct perfusion may be technically feasible for addressing mesenteric malperfusion in patients with ATAAD, several considerations warrant attention. The approach carries substantial risks, including increased blood loss, potential bowel injury, prolonged operative time, additional surgical trauma, and postoperative ileus. These risks must be carefully balanced against the potential benefits of direct mesenteric reperfusion. Larger prospective controlled studies are essential to validate these preliminary findings and establish the safety profile of this approach. Future studies should evaluate not only technical success but also long-term outcomes, potential complications, and specific patient selection criteria. Until more robust evidence is available, this approach should be considered experimental and used only in carefully selected patients.
Neurological update: neuro-otology 2023
Amongst patients seen by appointment, complaining of what after careful interrogation sounds like recurrent acute vertigo attacks, the differential diagnosis is basically limited to benign positional vertigo (BPV), Meniere’s disease (MD) or vestibular migraine (VM). Patients who start to have posterior circulation transient ischemic attacks with predominant vertigo will usually have a stroke long before their appointment comes around. BPV is the commonest cause of recurrent vertigo; it presents as brief spins lasting seconds, triggered by bending down, looking up or rolling over in bed. The elderly might present with falls after getting out of bed. BPV is caused by otoconia dislodged from one of the otolith membranes moving under the influence of gravity, either within a semicircular canal (SCC) duct itself (“canalithiasis”) or while attached to its cupula (“cupulolithiasis”). As the head moves from one position to another with respect to gravity, the otoconia move and increase or decrease the resting activity of canal afferents, producing vertigo and a nystagmus with its rotation axis orthogonal to the plane of the stimulated canal. Careful clinical observation and analysis of the exact beating direction (i.e. rotation axis) of this position-provoked nystagmus and of the exact provocative position allows deductions to be made about which SCC in which ear is being stimulated in which direction—towards the ampulla, which is excitatory for the lateral SCC but inhibitory for the vertical SCCs, or away from the ampulla, which is inhibitory for the lateral SCC but excitatory for the vertical SCCs. These deductions then guide repositioning manoeuvres. Informative simulations of the presumed movement of the otoconia in the SCCs have been produced [ – ]. A useful collection of BPV videos, uploaded by Dr Dan Gold, can be found on the University of Utah, Neuro-ophthalmology Virtual Education Library (NOVEL) website. Typical posterior semicircular canal (PSC) BPV accounts for almost 90% of all BPV presentations. The Dix-Hallpike test produces almost immediate geotropic-torsional and upbeating vertical nystagmus indicating that the otoconia are moving in the excitatory direction, that is away from the PSC cupula in the lowermost ear (Fig. ). (Note: The term “geotropic” when applied to positional nystagmus means that the quick phases of the nystagmus beat towards the lowermost ear; “apogeotropic” means towards the uppermost ear.) Diagnostic criteria for typical posterior canal BPV require: (1) recurrent attacks of positional vertigo or dizziness provoked by lying down or turning over while supine; (2) attack duration of < 1 min; (3) positional nystagmus elicited after a latency of a few seconds by the Dix-Hallpike or the side-lying manoeuvre; (4) geotropic torsional, vertical upbeating (PSC plane) nystagmus lasting < 1 min and (5) that no other disorder better accounts for these findings. Investigations are indicated only when an underlying cause for BPV is suspected. Typical horizontal semicircular canal (HSC) BPV—also known as lateral semicircular canal BPV—accounts for about 10% of all BPV presentations. There are several variants, all with some type of horizontal positional nystagmus. Three examples follow. (A) Paroxysmal horizontal nystagmus beating towards the lowermost ear (i.e. geotropic nystagmus). This is attributed to canalithiasis of the HSC in the ear that is lowermost when lying on the side with the higher nystagmus slow-phase velocity (Fig. ).
This nystagmus has a shorter onset latency than PSC-BPV, a crescendo-decrescendo pattern and a relatively longer duration, still less than 1 min [ , , ]. (B) Persistent horizontal nystagmus beating towards the uppermost ear (i.e. apogeotropic nystagmus). This is attributed to cupulolithiasis of the HSC in the ear that is uppermost when lying on the side with the higher nystagmus slow-phase velocity. (C) Persistent horizontal geotropic nystagmus that is symmetrical to each side. This has been attributed to a “light cupula”, i.e. a cupula with a lower than normal specific gravity , but not everyone believes this . Both geotropic and apogeotropic horizontal positional nystagmus have also been reported in vestibular migraine [ , , ]. Typical PSC-BPV can usually be treated effectively and immediately with an Epley or a Semont [ – ] manoeuvre by physiotherapists , audiologists or doctors. Some patients learn to treat themselves , often by following one of many self-help BPV online videos. HSC-BPV can be harder to treat than PSC-BPV and many different repositioning manoeuvres are used; many are named after the neuro-otologist who first proposed it . The simplest just has the patient lie only on the unaffected side for “as long as possible, preferably all night” . On the basis of modelling, a universal BPV repositioning manoeuvre has been proposed . Unfortunately, even now only a few of the many patients who present to an Emergency Room [ – ] or to a primary care clinic with vertigo even have a Dix-Hallpike test correctly performed; most just have blood tests and brain CT and are prescribed useless anti-emetic tablets. There can be practical problems with treating even a simple case of unilateral PSC-BPV. For example, if the patient is 120 kg, 80 years old and has Parkinson's disease, it is impossible to do a proper Epley (or Semont) manoeuvre, or even an accurate Dix-Hallpike test, especially on a narrow examination couch jammed in the office corner up against a wall. Two solutions to this problem are: (1) a home visit: testing and treating the patients in their own home, on their own double bed, with their family helping; with video goggles it is possible to check the nystagmus and to show sceptical family members that there really is something wrong with the patient. (2) Treating the patient in a mechanical repositioning device such as the Epley Omniax rotator (unfortunately no longer made) or the TRV chair, both motor-driven. These devices are suitable and effective for diagnosing and treating patients with BPV that involves multiple canals or those with physical limitations (stroke, spine injuries, morbid obesity) that preclude effective bedside manoeuvres. A transportable manually operated device is also available . There are many other patterns of positional nystagmus in patients who really do have peripheral positional vertigo (i.e. BPV) rather than central positional vertigo . For example, in one type of atypical PSC-BPV the patient has apogeotropic torsional, downbeating nystagmus in the Dix-Hallpike position rather than geotropic, upbeating nystagmus. This could be taken to indicate anterior SCC BPV, but soon the patient develops nystagmus of typical PSC canalithiasis from the opposite side . These patients are thought to have otoconia in the distal part of the non-ampullary arm of the PSC, close to the common crus. Dix-Hallpike testing moves this mass towards the ampulla, thus inhibiting posterior canal afferents and producing an inhibitory torsional downbeating nystagmus. 
This positional nystagmus can be provoked in either right or left Dix-Hallpike positions, the head-hanging position and sometimes, even in a side lying position; there is a crescendo-decrescendo time-course but no latency and the nystagmus is not completely exhaustible. Rising to the upright position does not reverse nystagmus direction, and it does not fatigue on repeated positioning. Two treatments have been proposed: the second half of the Semont manoeuvre which the patient begins by sitting upright with legs hanging over the edge of the bed, the head rotated towards the healthy ear; then while maintaining this head position, lies onto the unaffected side, thus allowing the otoconia to fall into the common crus and finally the vestibule. The second treatment, termed the “45° forced prolonged position”, requires subjects to lie on the unaffected side with the head turned 45° downwards to bring the non-ampullary arm of the affected posterior canal into a draining position and to maintain this for eight hours . Atypical BPV can be difficult to distinguish from central positional vertigo (see below), and in our view the diagnosis should be made by a neuro-otologist. When BPV accompanies or follows an acute vestibular syndrome, the cause of the acute vestibular syndrome should be confirmed with video head impulse testing (vHIT), vestibular evoked myogenic potentials (VEMPs) and audiometry. With BPV secondary to vestibular neuritis there can be impaired ocular VEMPs and horizontal plus anterior canal vHITs but normal cervical VEMPs . In contrast, with BPV after labyrinthitis or labyrinthine infarct, there is also sudden hearing loss , and there can be prolonged geotropic or apogeotropic positional nystagmus refractory to treatment (as in cupulolithiasis) and abnormal posterior canal vHIT . If the story sounds like BPV, but there is neither positional vertigo, nor positional nystagmus with a correctly done Dix-Hallpike test, it is best to see the patient again rather than order tests. While an unequivocal diagnosis of BPV requires paroxysmal positional nystagmus, some patients who keep having positional vertigo but have no nystagmus during the Dix-Hallpike test can do just as well as those who do have nystagmus after a repositioning manoeuvre . Others only have vertigo after coming up from the Dix-Hallpike test but do have retropulsion and measurable oscillation of the trunk at the same time, possibly due to otoconia on the utricular side of the PSC. These patients can be treated effectively with repeated sit-ups from the Dix-Hallpike position, aimed at liberating otoconia from the short arm of the PSC . With removal of visual fixation, an asymptomatic low-velocity (2–5°/s) positional nystagmus, horizontal or vertical of almost every conceivable kind, occurs in many (maybe even most) normal subjects—even in those without a history of BPV or migraine . This needs to be considered when a patient who seems to have had BPV, but is now in remission, has some positional nystagmus in the dark. Positional vertigo and positional nystagmus (paroxysmal, persistent or both) can be the presenting feature of some focal lesions and diffuse diseases affecting the cerebellum or the brainstem [ – ]. Downbeating nystagmus on straight head-hanging, upbeating nystagmus on returning to the upright position from supine and apogeotropic nystagmus during the supine head-roll test all occur in central paroxysmal positional nystagmus . 
The direction of central paroxysmal positional nystagmus aligns with the vector sum of the rotational axes of the semicircular canals that were being inhibited in each position: for example, a straight head-hanging position would inhibit both anterior canals, and so the nystagmus is directly upbeat with no latency to onset and a rapid crescendo phase which decreases exponentially. Time constants for the nystagmus, 3–8 s, correspond to those of the vertical vestibulo-ocular reflex (VOR). The possibility of a central positional vertigo/nystagmus is particularly important to consider in a patient presenting without any other neurological symptoms or signs, just with atypical BPV . Could the cause of the positional vertigo be just migraine [ , , ] or perhaps something more sinister such as a structural lesion? Unfortunately, not even a high-quality, contrast-enhanced MRI with thin, overlapping slices is totally reassuring, as the problem might be an MRI negative, antibody mediated, autoimmune process . So, if it is not BPV, then is it MD or is it VM, or maybe both ? There is a close relationship between the two , and some actually consider MD to be a vestibulo-cochlear subtype of migraine . The diagnosis of MD is easy if there is unilateral tinnitus and aural fullness with a fluctuating, low-frequency, cochlear-type sensorineural hearing loss which might not be obvious during, or even after, the first few vertigo attacks, but will be eventually . Moreover, the patient might be too dizzy during attacks to notice the hearing problem and could not in any case cooperate with an audiogram. There are smartphone apps offering pure-tone air-conducted audiograms with which it is possible to check if there is a temporary hearing loss with the vertigo attacks . This way any reasonably tech-savvy patient should be able to do their own audiogram on a regular basis in between and just after vertigo attacks. There is no other cause of a low-frequency hearing loss that comes and goes (Fig. ). Accurate audiological evaluation and interpretation by an audiologist, in cooperation with an otologist, is essential to make the diagnosis of MD. Diagnostic difficulties could arise if the patient has a pre-existing, unrelated hearing loss such as low-frequency conductive (otosclerosis), mid-frequency sensorineural (congenital) or high-frequency sensorineural (age/noise induced), or if the patient has bilateral MD. Drop attacks—in which the patient just drops to the ground—occur in MD as well as in some non-MD aural diseases, but not in migraine . Unfortunately, while most neurologists will order an EEG and ECG in such patients, they will rarely order an audiogram . Repeated attacks of Room Tilt Illusion—suddenly the whole visual world is tilted or even inverted for seconds or minutes—might be a related phenomenon: they can occur in both MD and migraine and perhaps also with TIAs or seizures . Syncope, as a result of the strong vestibular sensation, is rare but potentially dangerous and easy to mistake for a drop attack in a patient with MD . In between MD attacks, there will often be unilateral vestibular impairment of air-conducted ocular and cervical VEMPs and of caloric responses but not of the head impulse test [ – ]. Settings of the subjective visual horizontal (or vertical) might deviate, usually in the same direction as the slow phases of any spontaneous nystagmus . 
During MD attacks, there is, almost invariably, horizontal nystagmus (sometimes with a vertical component) that can have a horizontal slow phase velocity over 160°/s. The nystagmus first beats towards the affected side (excitatory nystagmus), then towards the normal side (paretic nystagmus) and then again towards the affected side (recovery nystagmus) . Without knowing from hearing loss which is the affected ear, the spontaneous nystagmus direction will not accurately lateralise the MD. This type of nystagmus is enhanced by head-shaking and skull vibration (apparently possible in certain stoical patients) . Rarely, the video head impulse test is temporarily abnormal during an MD attack with either reduced or enhanced responses from the lateral SCC.  It is of interest that the VOR response to pulsed galvanic stimulation can also be enhanced in MD . The vertigo attacks in MD can usually be stopped. Therapeutic total unilateral vestibular deafferentation of the affected ear with vestibular nerve section or labyrinthectomy or partial deafferentation with intratympanic gentamicin can do this, but at the risk of producing imbalance needing long-term vestibular rehabilitation [ – ], especially in the elderly . Intratympanic dexamethasone might be just as good as gentamicin and will not produce imbalance . A low-sodium diet is traditional , endolymphatic sac surgery controversial , drugs such as betahistine or cinnarizine plus dimenhydrinate still hopeful. Many patients with migraine headaches also have balance problems, including vertigo attacks [ – ], and many patients with vertigo attacks or other balance problems also have migraine headaches [ , , ]. There are now official criteria for the diagnosis of VM , even though many migraineurs have other, unofficial, balance problems such as chronic subjective dizziness , motion sensitivity , motion sickness , constant rocking sensations ( mal-de-debarquement ) , room-tilt illusion or a generalised imbalance which can respond to vestibular rehabilitation . There are some characteristic differences between patients with VM and migraineurs without vestibular symptoms: a longstanding history of migraine with severe headache attacks, aural fullness/tinnitus accompanying attacks, presence of menopause and a history of motion sickness . There might be minor audiologic changes in VM but not a fluctuating, unilateral low-frequency hearing loss as in MD. Children have VM . (They also have BPV but only rarely have MD .) Perhaps as a consequence of the vertigo attacks, some VM patients (and also some MD patients) develop psychological problems such as depression , anxiety [ – ], panic attacks , phobias and of even more concern, possible cognitive impairment [ – ] which might however respond to therapy . Between attacks VM patients can have some low-velocity spontaneous or positional nystagmus in darkness, usually horizontal and around 10°/s or less, but their vestibular function tests (vHIT, caloric and VEMP) are normal . During a VM attack most have a direction-changing or direction-fixed spontaneous nystagmus , usually horizontal and less than 15°/s slow phase velocity (but sometimes up to 57°/s), or a persistent positional nystagmus up to 100°/s slow phase velocity in 26% (Fig. ). Such ictal nystagmus in VM might need to be distinguished from the ictal nystagmus that can occur in MD , central vestibulopathy or BPV . When patients have both MD and migraine then things get even more complicated [ , – ]. 
Also, patients can have headache with their BPV, and those who have migraine are more likely to have BPV than those who do not. Although there is no solid evidence of measurable benefit from treating or preventing VM [ – ], patients are of course treated, usually with drugs that are used for the treatment and prevention of migraine headaches [such as betablockers, pizotifen, tricyclics, anticonvulsants (topiramate, lamotrigine, valproate), cinnarizine, flunarizine or triptans] [ , , ]. In patients with recurrent acute spontaneous vertigo attacks that have been happening for more than say 3 months, MD and VM are the only two realistic diagnoses. If there is also unilateral tinnitus and aural fullness with a low frequency unilateral/asymmetrical sensorineural hearing loss, then it has to be MD. If there are no aural symptoms and no hearing loss, the differential diagnosis will hinge on the vestibular function tests. These should all be normal in VM, but in MD there may be: (1) a canal paresis > 25% on the caloric test with normal lateral SCC vHIT and (2) reduced air-conducted VEMPs, ocular and cervical, on the side with the caloric paresis. The spontaneous nystagmus seen during a vertigo attack is also useful for differentiating MD from VM and is discussed below in our section on vestibular event monitoring. Short, fast head accelerations (head impulses) test SCC afferents in much the same way as patellar tendon taps test Ia afferents. Head impulses test the vestibulo-ocular reflex in response to rapid (2000–3000°/s²) head accelerations. The VOR response to these fast stimuli is hard-wired into the neurophysiology of the SCCs and the brainstem; it depends on the resting rate and on–off asymmetry of primary SCC afferents and their robust direct disynaptic or trisynaptic excitatory and cross commissural inhibitory projections via the vestibular nuclei in the pons and medulla to the ocular motor nuclei in the pons and midbrain. The head impulse test, specifically the vHIT, can detect moderate to severe impairment of any single SCC. It is sometimes (but not always) possible to detect this in the clinical HIT by noting the characteristic compensatory “catch-up” saccades [ – ]. The clinical head impulse test depends, as do other aspects of the neurological examination, on both the clinician’s skill and the patient’s co-operation. If the catch-up saccades have a short latency and so occur while the head is still moving rather than just after it has stopped moving, they will be “covert,” that is, invisible to the clinician but detectable on vHIT. There are now three commercially available vHIT systems: two with goggle-based pupil-tracking cameras and one with a tripod-mounted camera; each system has its strengths and weaknesses and potential pitfalls. With training and practice, neurologists, otolaryngologists, audiologists and physiotherapists can all now measure the VOR from each of the six SCCs in almost any reasonably co-operative adult or child in about 20 min. Since 2016 when we wrote the previous version of this review, the yearly number of publications in PubMed dealing specifically with vHIT has increased from 62 to 186. Here, we consider four common clinical situations in which the vHIT could help with diagnosis. The patient is seen, usually in an Emergency Room, during her first-ever attack of acute, spontaneous, isolated vertigo.
Assuming there is no simultaneous acute unilateral hearing loss (neurologists rarely ask about and almost never test for hearing loss), the two main diagnoses are vestibular neuritis and posterior circulation stroke involving the cerebellum and perhaps the brainstem vestibular nuclei. A competent, focused clinical examination which includes a head impulse test, such as HINTS or STANDING can usually distinguish between the two. Videonystagmography plus vHIT [ – ] will double the rate of correct diagnosis . In acute vestibular neuritis there is sudden unilateral loss of vestibular function . All three SCCs might be involved or only the lateral and anterior which suggests involvement of only the superior vestibular nerve. A patient with left superior vestibular neuritis will have a horizontal/ torsional nystagmus beating to the right, more vigorously in right than in left gaze, suppressed by visual fixation and almost always a clinically obvious impairment of the left horizontal SCC VOR on the bedside HIT . Here vHIT can provide objective, quantitative measures of the VOR from all six SCCs , documenting that there really is unilateral impairment of left lateral and anterior SCC function. Vestibular testing can be completed by finding a leftward offset of the subjective visual horizontal (or vertical), loss of left ocular VEMPs indicating impaired utricular function with intact cervical VEMPs indicating preserved saccular function [ , , ] (Fig. ). Selective inferior vestibular neuritis , affecting just the PSC, can only be confidently diagnosed with vHIT and corroborated by finding an absent cervical VEMP (from the saccule) and a preserved ocular VEMP (from the utricle) . In contrast to acute vestibular neuritis, an acute cerebellar/brainstem infarct might not impair the VOR, so the patient will have a normal clinical HIT, a normal or near-normal vHIT [ – ], sometimes not even nystagmus , and may just complain of imbalance . Here the logic is counter-intuitive: it is a normal test result, in this case the normal head impulse test (and no nystagmus), that indicates a potentially serious condition. Acute cerebellar infarction is not a diagnosis to miss , as there is chance of foramen magnum herniation needing immediate posterior fossa decompression to prevent death or permanent disability , whereas an unequivocally abnormal test, the vHIT, indicates a potentially safe-to-discharge condition—vestibular neuritis. Two other conditions that can produce acute, isolated, spontaneous vertigo, MD and VM, also do not show impairment of the VOR on vHIT; they can be hard to differentiate from cerebellar infarction in the acute phase. However, it is unusual in MD for there not to be or have been unilateral tinnitus, fullness, and low-frequency hearing loss, even during the first attack—see above. On the other hand, patients with an MD vertigo attack are usually too busy being dizzy to complain about or even to notice a minor hearing problem, especially in the masking din of an Emergency Room and if nobody asks about it and if nobody can test for it. A severe, first-ever VM attack might be even more difficult to distinguish from cerebellar infarction—even by an experienced neuro-otologist. A combination of one, maybe even two, negative diffusion-weighted MRI scans and a detailed headache history once the patient has recovered is probably the only way. The editor of Practical Neurology gives a clear and concise personal account of what it is like to have, and to have had, acute vestibular neuritis . 
The patient is seen days, weeks—whenever she can get an appointment—after such an attack. She might be asymptomatic and simply wants to know what happened and whether it could happen again. Or she might be complaining of persisting imbalance, because she really did have acute vestibular neuritis and while her brainstem has compensated , her peripheral vestibular function has not fully recovered and she now has chronic vestibular insufficiency , experiencing head movement oscillopsia and a feeling of imbalance with a positive foam Romberg test [ – ]. Or because she actually had a cerebellar infarct. Alternatively, she could be complaining of further, but less severe, vertigo attacks: if the attacks are spontaneous, she might actually have MD; if the attacks are positional, she might have PSC-BPV as a result of the vestibular neuritis . When a patient still has unilateral impairment of peripheral SCC function according to vHIT, caloric or rotational testing [ , – ] some weeks after the acute vestibular syndrome, the diagnosis of vestibular neuritis can be safely made in retrospect. If, however, peripheral vestibular function has largely recovered (with or without corticosteroids ), the distinction between recovered peripheral (as opposed to centrally compensated) vestibular function and cerebellar infarction cannot be made clinically and will need MRI. If that too is normal, there is a diagnostic problem. Was this actually an MRI negative cerebellar infarct or a cerebellar TIA rather than a recovered vestibular neuritis or even post-stroke BPV ? Could the patient have had a cerebellar embolus from paroxysmal atrial fibrillation ? There are more questions than answers. The patient is seen when well, but complains of recurrent vertigo attacks, either spontaneous or positional. If the attacks really are vertigo, then VM, MD, and BPV are just about the only plausible diagnoses. Rarely, recurrent vertigo is cardiogenic . Unfortunately, most patients who have started to have isolated vertigo attacks from vertebrobasilar TIAs will have a stroke long before their appointment comes around . Recurrent vertigo attacks are the most common vestibular complaint in office practice, but vHIT rarely helps as it is usually normal inter-ictally, even in Meniere’s disease [ , , ]. Nonetheless, it is still worth doing: occasionally BPV is secondary to some inner ear disease and so in that case the vHIT could be abnormal. Posterior SCC vHIT might also be transiently abnormal due to canalithiasis itself . There are many possible causes for a complaint of chronic imbalance: some neurological, such as sensory neuropathies, extrapyramidal disorders, orthostatic tremor, or normal pressure hydrocephalus, and others not, such as musculoskeletal disorders or mental health issues. What concerns us here is chronic vestibular insufficiency which can either be due to severe unilateral vestibular impairment [ , , , ] or moderate to severe, symmetrical or asymmetrical, bilateral vestibular impairment [ – ]. The patient with chronic vestibular insufficiency might have no obvious symptoms while sitting or lying but feels imbalance as soon as she stands or walks. Despite this there might be little clinically obvious impairment of gait or of stance even with eyes closed and feet together—a negative Romberg test. But if the patient now tries to do a Romberg test on a soft surface, say a foam mat , then she will sway and could fall if not caught. 
This is a positive foam Romberg test which is almost diagnostic of vestibular impairment. Patients with proprioceptive impairment such as those with a hereditary neuropathy such as Charcot-Marie Tooth disease or chronic inflammatory demyelinating polyneuropathy or a ganglionopathy such as CANVAS (cerebellar ataxia neuropathy vestibular areflexia syndrome) already have a positive Romberg test on the firm surface such as the floor but will be worse when standing on foam. Patients with bilateral vestibular impairment might also have difficulties with movement strategies, control of dynamics, orientation in space, and cognitive processing . Such patients will also notice vertical oscillopsia during rapid, passive vertical head-shaking due to impairment of the vertical VOR. They might even volunteer, or at least admit, that they have to stop walking in order to see clearly. Having the examiner shake their head up-and-down will drop their vision by at least three lines on a Snellen chart. Bilateral vestibular impairment needs to be severe to be detectable on caloric or rotational tests, as these tests have large normal ranges. vHIT is the most reliable test to detect bilateral vestibular impairment [ , , , ] as it has a tight age-adjusted normal range and is even suitable for detecting age-related vestibular impairment, that is “presbyvestibulopathy” [ – ], also called “presbystasis” . Although mild impairment of just one lateral SCC can be detected by caloric testing, it will not produce imbalance if it is only mild. vHIT is the best test for measuring whether vestibular function is by itself impaired sufficiently to produce imbalance. A possible cause of an isolated severe unilateral vestibular loss presenting with chronic vestibular impairment is an unrecognised previous attack of acute vestibular neuritis ; the patient might not have had or might not have noticed vertigo. If there is definitely no history of a previous vertigo attack, then a chronic progressive cause of unilateral vestibular loss such as a vestibular schwannoma (hearing should also be impaired) needs to be excluded [ – ]. The cause of non-syndromic bilateral vestibular impairment without hearing impairment usually remains undiagnosed unless it is bilateral sequential vestibular neuritis , gentamicin toxicity , Wernicke’s encephalopathy or maybe hereditary spastic paraplegia . If accompanied by hearing impairment then other diagnoses need to be considered: hereditary disorders such as Usher syndrome and also acquired diseases such as superficial siderosis and leptomeningeal carcinomatosis . If there is also cerebellar impairment, as shown by an impaired visually enhanced VOR, then CANVAS needs to be considered. If there is paradoxical enhancement of the VOR on vHIT, as well as of the visually enhanced VOR, then autosomal recessive cerebellar ataxia type 3 (ARCA3) which is due to a mutation in the ANO10 gene needs to be considered . Although vHIT can be quick and easy to do, it requires training, practice and attention to detail [ , , , ]. For example, it is important to interact with the patient throughout testing, continually exhorting her to pay attention to the fixation target (as in visual field testing), not to blink, and not to resist or try to help with the passive head turning. It is important to give head impulse stimuli over the entire magnitude range up to 300°/s peak head velocity. Testing the vertical SCCs requires special attention to eccentric horizontal eye position . 
The reason it is possible to test the three-dimensional vestibular sensory system with a two-dimensional method (the vHIT) is that when the eyes deviate horizontally so that they align with vertical impulses being delivered directly in a vertical SCC plane, then the VOR is entirely vertical; torsional eye movements, which cannot be detected by the video method, are eliminated. vHIT testing using a head-fixed rather than space-fixed visual target—the suppression Head Impulse (SHIMP) paradigm —can give clearer results in patients with many covert saccades, especially those with only a little residual HSC function. The caloric has been the mainstay of vestibular testing for over a hundred years , and it still has a place in some cases with a normal lateral canal vHIT. It is now proposed that vHIT should be the first test done in a patient with a suspected vestibular problem . If the vHIT is abnormal, then there is no point in doing calorics—they will not give any more diagnostic information. If, however, the vHIT data are clean and truly normal over the entire stimulus magnitude range, then it might be worth asking for calorics . For example, in MD the calorics might be impaired even when the vHIT is normal [ , , , ]. One explanation for this discrepancy is that since MD preferentially causes impairment of type II vestibular hair cells , it will preferentially impair tonic HSC discharges (responsible for caloric responses) rather than phasic discharges (responsible for impulsive responses). Our alternative explanation, that the caloric impairment is a hydrodynamic effect from the swelling of the endolymphatic compartment abolishing the possibility of thermal convection —the main proposed mechanism of caloric stimulation—is not supported by otopathologic studies . Also, in patients with recovered vestibular neuritis, recovery might be less obvious on caloric testing than on vHIT, which means that a patient seen some time after an acute vestibular syndrome who now has a normal vHIT should have a caloric test—as it might still show a canal paresis , indicating that it really was vestibular neuritis rather than a brainstem/cerebellar stroke. VEMPs can give a semi-quantitative measurement of the function of each of the four otoliths—two utricles and two saccules . VEMPs combined with vHIT make it possible to test each of the 10 vestibular organs individually . VEMPs are about as easy or difficult to do as any other evoked potential test in clinical neurophysiology. There are, however, some important specific technical details to follow in order to record meaningful VEMPs : (a) correct calibration of the air-conducted sound stimulus which needs to be loud enough to be effective but still safe ; (b) an effective stimulator for bone-conducted VEMPs, such as a triggered tendon hammer or, for more accuracy, an electro-mechanical vibrator such as a Bruel & Kjaer minishaker ; and (c) measurement of background rectified sternomastoid muscle EMG activation with cervical VEMP to make the left/right asymmetry ratio more accurate. What then are some clinical situations in which VEMP testing might be useful ? Consider the patient who is seen weeks after recovering from an acute vestibular syndrome who has no impairment of vHIT, but has a canal paresis on a caloric test . Here, an absent ocular VEMP from ipsilateral utricle would confirm that the patient has had superior vestibular neuritis . 
Similarly, if the patient has only an impaired posterior canal vHIT, then an absent cervical VEMP from the ipsilateral saccule could support the diagnosis of a previous inferior vestibular neuritis . VEMPs are particularly useful to help decide if a superior canal dehiscence shown on CT is symptomatic : if the VEMP has a low threshold and a large amplitude, then it probably is [ – ]. One of the difficulties when diagnosing patients with recurrent vertigo is that they are often asymptomatic when seen in the clinic. Patients with BPV, MD or VM have very mild or no nystagmus between attacks, but will often have marked spontaneous and/or positional nystagmus when symptomatic . This acute nystagmus has diagnostic value. For example, when differentiating MD and VM, spontaneous horizontal nystagmus with slow phase velocity > 12.05°/s during an attack is 82.1% specific for MD, whereas spontaneous vertical nystagmus is 93.0% sensitive for VM . Devices have been developed which allow this nystagmus to be captured during a vertigo episode at home, either by patients self-recording using portable video goggles (Fig. ) or a wearable electro-oculography device (CAVA) that provides continuous monitoring . Although these devices are not currently widely available—the DizzyDoctor System was marketed but support has recently been discontinued —they will play an important role as a diagnostic aid in the near future. A similar problem applies to patients who are very vertiginous in the Emergency Room, but who are much less symptomatic when reviewed on the ward the next day or in the clinic several weeks later. Using video goggles to record acute nystagmus in the ER helps differentiate between stroke and vestibular neuritis, MD and VM, and BPV and central positional nystagmus . In recent years, research into the applications of artificial intelligence and machine learning in healthcare has increased exponentially . The field of neuro-otology has been no exception, and most of the efforts thus far explore the potential for machine learning tools to act as diagnostic decision aids . In general terms, machine learning models take clinical data from patients with conditions of interest and apply various algorithms to ‘learn’ how to distinguish between the diagnoses, without needing to be explicitly programmed with rules to follow. Machine learning methods allow analysis of large, complex datasets including images, and can identify associations that are invisible to the human eye or traditional techniques. Unlike a clinician, their diagnostic performance is never affected by fatigue or carelessness. The differential diagnosis of vertigo is a problem that is well suited to machine learning as there are often only a few plausible differentials, particularly within a specific vertigo syndrome. Machine learning methods have been applied to clinical data from history , examination findings including video eye recordings [ – ], vestibular function tests such as vHIT and VEMPs , or a combination of the above [ – ]. Many studies report excellent model performance, including models achieving accuracies and/or area-under-the-curve scores of 0.95 or higher for distinguishing stroke from vestibular neuritis , VM or MD from other causes of dizziness , or between various subtypes of BPV . However, such promising results come with caveats. Many models use data collected from a single site and have not yet been validated in populations with, for instance, diverse demographics or different laboratory setups. 
Furthermore, models are often trained using data carefully selected as being typical or free of artefact , with exclusion of rarer conditions (such as cupulolithiasis or anterior canal BPV ) or patients with unclear diagnoses . Performance in real-world settings can be poorer than expected . The immediate future of machine learning in neuro-otology lies in models which can be used in real-time by clinicians to assist diagnosis. Given the legal and regulatory complexities, it is likely to be some time before clinicians will be replaced by devices which can provide diagnoses autonomously. Emergency physicians, generalists and primary care physicians would be able to access tools that simulate the diagnostic expertise of the expert neuro-otologist and apply this to a much larger population of vertiginous patients. Ideally, models would use only a minimal number of input variables, so as to optimise clinical workflow and reduce computational power requirements, and would not rely heavily on technical expertise or specialised equipment. Promisingly, machine learning algorithms may be able to identify nystagmus even from low resolution images , as well as differentiate between common causes of vertigo using only information from a patient questionnaire . Clinicians must familiarise themselves with the limitations of machine learning models and the risks associated with their use , as the implementation of these technologies is inevitable.
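To make the preceding description of machine-learning diagnostic aids concrete, the toy Python sketch below shows how such a model might be trained and evaluated with cross-validation. Every feature, label and value in it is synthetic and chosen purely for illustration; it carries no clinical validity and is not drawn from any of the studies cited above.

```python
# Toy illustration only: structure of a machine-learning diagnostic aid of the
# kind discussed above. Features, labels and data are entirely synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical inputs: lateral-canal vHIT gain, spontaneous nystagmus
# slow-phase velocity (deg/s), and a skew-deviation flag.
X = np.column_stack([
    rng.normal(0.85, 0.20, n),   # vHIT gain
    rng.normal(8.0, 6.0, n),     # nystagmus slow-phase velocity
    rng.integers(0, 2, n),       # skew deviation present?
])
y = rng.integers(0, 2, n)        # 0 = vestibular neuritis, 1 = stroke (synthetic labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# With random synthetic labels the AUC will hover around 0.5; real studies
# train on labelled clinical datasets, which is where the reported 0.95+
# performance figures come from.
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```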
High-Throughput Empirical and Virtual Screening To Discover Novel Inhibitors of Polyploid Giant Cancer Cells in Breast Cancer
Polyploid giant cancer cells (PGCCs) are cancer cells with additional copies of chromosomes, often resulting in significantly larger cell size and increased genomic content. These cells are found across various cancer types, including breast, prostate, lung, ovarian, and colorectal cancers. The presence of PGCCs has been correlated with advanced disease stages, increased tumor aggressiveness, and poor clinical outcomes. The formation of PGCCs can be attributed to several mechanisms, including aberrant cell cycle regulation, mitotic failure, and response to cellular stress, such as chemotherapy and radiation. These mechanisms result in the cells bypassing normal mitotic checkpoints, leading to endoreduplication or cell fusion events that contribute to polyploidy. PGCCs contribute significantly to tumor heterogeneity. By reshuffling the genomic content of multiple copies of the genome, they generate diverse progeny through asymmetric division and budding, allowing for the rapid adaptation of tumor cells to changing microenvironments and therapeutic pressures. This adaptability promotes tumor evolution and metastasis, complicating treatment strategies. PGCCs have emerged as a key target in cancer research due to their critical role in therapy resistance. These cells exhibit resistance to conventional chemotherapies and radiation therapy, often surviving initial treatments and giving rise to recurrent tumors. This resistance is mediated through multiple mechanisms, including enhanced DNA repair capabilities, activation of survival pathways, avoidance of apoptosis, and the ability to enter a dormant state. In addition, PGCCs are reported to exhibit stem cell-like properties, reflected in their enhanced tumor-initiating capability and upregulation of relevant biomarkers. Their presence often correlates with more aggressive disease phenotypes and poorer patient outcomes. Targeting PGCCs represents a promising therapeutic strategy. Approaches under investigation include disrupting the specific cell cycle and survival pathways active in PGCCs, as well as exploiting their unique metabolic dependencies. Therapies aimed at eliminating PGCCs or preventing their formation could enhance treatment efficacy and reduce relapse rates. Although there has been some progress in this direction, to date there are no effective therapies targeting PGCCs. The development of anti-PGCC treatments has been hindered by the absence of a high-throughput method to rapidly quantify these cells. Traditional drug screening assays, such as MTT, XTT, or ATP assays, quickly measure the overall inhibition of cancer cell populations but fail to provide specific information on the elimination of a small PGCC subpopulation, which is crucial for addressing treatment resistance and relapse. PGCCs can be characterized by excessive DNA content and large cell and nuclear size. Currently, the gold standard for identifying and isolating PGCCs involves fluorescence-activated cell sorting (FACS) combined with visual confirmation. While flow cytometry can quantify the number and percentage of PGCCs, it is impractical for screening thousands of compounds or for monitoring the dynamic processes of PGCC induction and death. The limitations of existing approaches underscore the need for a high-throughput and precise analytical method specifically tailored to PGCC research.
Leveraging advancements in image-based cell segmentation and detection, − we recently developed a dedicated single-cell morphological analysis pipeline to accelerate anti-PGCC therapy discovery. Using this pipeline, we developed complementary discovery strategies to identify novel PGCC inhibitors in this study: high-throughput screening of Phase 1-approved compounds for rapid translational impact, mechanistic studies to identify novel compound classes, and machine-learning-powered virtual screening to broaden the solution space. While our pipeline allows for high-throughput testing of thousands of compounds, exhaustive empirical testing of all existing compounds remains impractical, highlighting the critical role of computational methods in predicting anti-PGCC drug responses and prioritizing candidates for validation. Experimental screening generates essential data sets for building and evaluating various machine learning models, fostering a synergistic relationship between these approaches to streamline drug discovery. Machine learning models have emerged as powerful tools, offering a promising solution by leveraging multiomics data and biochemical features of compounds, such as chemical structures, to predict drug sensitivity across cancer cell lines. − However, to the best of our knowledge, no machine learning models currently exist for predicting anti-PGCC compounds, largely due to the lack of large training data sets. Establishing such methods is essential for advancing the development of targeted therapies for these challenging cancer cells. In this study, using our high-throughput morphological assay data, we developed an ensemble machine learning model integrating biochemical and pharmacological features to predict anti-PGCC activity ( a). Virtual screening of 6575 compounds identified top candidates, five of which were experimentally validated across four cell models. This study highlights the power of AI-driven and empirical screening to accelerate PGCC inhibitor discovery and combat therapy resistance. Single-Cell Morphological Analysis to Identify Inhibitors of PGCCs In our screening experiments, we utilized a compound library of 2726 compounds, each having successfully completed Phase I drug safety confirmation (APExBIO, L1052, DiscoveryProbe Clinical & FDA-Approved Drug Library). These compounds were prepared at a concentration of 10 mM in DMSO or PBS and diluted to a final concentration of 10 μM for screening. Cells were harvested from culture dishes using 0.05% Trypsin/EDTA (Gibco, 25,200), centrifuged at 1000 rpm for 4 min, resuspended in appropriate media, and seeded into 96-well plates. For direct treatment, 1000 cells were seeded in 100 μL of media per well. Cells were cultured for 24 h before treatment with compounds for 48 h. Post-treatment, cells were stained with 0.3 μM Calcein AM (Biotium, 80011–2), 0.6 μM ethidium homodimer-1 (Invitrogen, L3224), and 8 μM Hoechst 33342 (Thermo Scientific 62249), followed by a 30 min incubation. For preinducing PGCC experiments, 4000 cells per well were seeded. After 24 h, cells were treated with a PGCC-inducing agent (Docetaxel 1 μM) for 48 h. Postinduction, the reagents were aspirated, and the test compounds were added to treat the mixed populations for an additional 48 h. The same staining and imaging protocol was used to quantify PGCCs and non-PGCCs after treatments. 
Loading cells, drugs, or staining reagents into a 96-well plate requires less than 10 s with our pipetting robot, accommodating 88 test conditions and 8 control wells for normalization. To quantify PGCCs and non-PGCCs in collected images, we developed a custom MATLAB (2022b) program to achieve this in three steps: (1) identify cell nuclei with Hoechst staining, (2) determine cell viability, and (3) recognize PGCCs based on nuclear size based on our previous work. , − Among the 2726 compounds, 29 compounds were excluded due to their fluorescent colors, which interfere with image processing. Representation of Drug Features Using Structures and Descriptions For machine learning modeling, each drug was represented by either a vector of molecular fingerprints to capture its biochemical and structural features or a vector of text embeddings to encode descriptions of its pharmacological, biochemical, and molecular biological properties. Drug structures were represented by the Simplified Molecular Input Line Entry System (SMILES) line notation. Canonical SMILES codes were obtained from PubChem using the Python PubChemPy package and then converted into molecular fingerprints based on the Molecular ACCess System (MACCS), PubChem, and Extended-Connectivity Fingerprint (ECFP6) systems using the R rcdk package. The molecular fingerprints are binary vectors that encode the structural properties of a drug, with lengths of 166, 881, and 1024 bits, respectively, where each bit denotes the presence (1) or absence (0) of a predefined structural property. Text descriptions of drugs were obtained from PubChem using the PUG REST interface, which provides programmatic access to PubChem data. , We then converted the descriptions into text embeddings using the latest embedding methods developed by OpenAI, including text-embedding-3-small (1536 dimensions) and text-embedding-3-large (3072 dimensions), which generate vectors composed of continuous values to represent the semantic information on drug descriptions. Machine Learning Models to Predict Anti-PGCC Efficacy We trained machine learning models to predict drug responses in PGCCs of MDA-MB-231 based on drug structures and descriptions. The normalized count of PGCCs, compared between treated and untreated cells, was increased by 10 –3 and then log 2-transformed and used as the prediction target. We employed 10-fold cross-validation to train and test each model. In each round of 10-fold cross-validation, the drugs were randomly partitioned into 10 sets, where 9 sets were used for model training, and the remaining set was used for testing, where a Pearson correlation coefficient was calculated between the actual and predicted values. Once all 10 sets were tested by the corresponding trained models, we summarized the performance by averaging the 10 correlation coefficients. This entire process, including random partitioning and a 10-fold cross-validation, was repeated for 10 rounds. The results from these 10 rounds were presented in box plots, with performance summarized by the median correlation value. We evaluated a total of seven linear and nonlinear regression-based machine learning models, including linear regression with L2 regularization (Ridge), support vector machine (SVM), random forest (RF), histogram-based gradient boosting (HGB), decision tree (DT), stochastic gradient descent linear regression (SGD), and multilayer perceptron (MLP). These models were implemented by using the respective functions of the Python scikit-learn library. 
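To make the evaluation loop concrete, the sketch below wires the seven scikit-learn regressors named above into repeated 10-fold cross-validation scored by Pearson correlation, using the log2(count + 10^-3) target transform described here. The feature matrix, response values, and hyperparameters (library defaults) are placeholders rather than the exact settings used in the study.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge, SGDRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor, HistGradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

def cv_pearson(model, X, y, n_splits=10, seed=0):
    """One round of 10-fold CV; returns the mean Pearson r over the 10 test folds."""
    rs = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model.fit(X[train_idx], y[train_idx])
        rs.append(pearsonr(y[test_idx], model.predict(X[test_idx]))[0])
    return float(np.mean(rs))

# Placeholder inputs: a binary fingerprint matrix (drugs x bits) and normalized
# PGCC counts relative to untreated controls.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, 166)).astype(float)
pgcc_ratio = rng.lognormal(size=300)
y = np.log2(pgcc_ratio + 1e-3)  # target: log2(normalized PGCC count + 10^-3)

models = {
    "Ridge": Ridge(), "SVM": SVR(), "RF": RandomForestRegressor(),
    "HGB": HistGradientBoostingRegressor(), "DT": DecisionTreeRegressor(),
    "SGD": SGDRegressor(), "MLP": MLPRegressor(max_iter=2000),
}
for name, model in models.items():
    rounds = [cv_pearson(model, X, y, seed=r) for r in range(10)]  # 10 repeated rounds
    print(f"{name}: median r = {np.median(rounds):.2f}")
```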
For ensemble learning, the predicted drug responses from two individual models, trained on either drug structures or descriptions, were used as inputs for training a linear regression model to predict the drug response. We ensured that all random partitions were applied consistently across individual and ensemble models to allow for a rigorous comparison of the results.
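The ensemble step can be sketched as a simple stacking regression over the outputs of the two base models. In the snippet below the input vectors are random placeholders standing in for cross-validated predictions from, for example, a fingerprint-based model and a description-based model.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
y = rng.normal(size=300)                                  # log2-transformed PGCC response
pred_structure = y + rng.normal(scale=1.0, size=300)      # e.g., a model trained on fingerprints
pred_description = y + rng.normal(scale=1.5, size=300)    # e.g., a model trained on embeddings

# Linear regression over the two base-model outputs gives the ensemble prediction.
stack_X = np.column_stack([pred_structure, pred_description])
ensemble = LinearRegression().fit(stack_X, y)
ensemble_pred = ensemble.predict(stack_X)

print("base-model weights:", ensemble.coef_)
print("ensemble Pearson r:", round(pearsonr(y, ensemble_pred)[0], 2))
```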
Comprehensive Compound Efficacy Analysis by Quantifying PGCCs and Non-PGCCs We developed a high-throughput single-cell morphological analysis pipeline to quantify PGCCs and non-PGCCs by segmenting nuclei with Hoechst staining, excluding dead cells via Live/Dead staining, and classifying cells based on nuclear size ( a). Validated across multiple breast cancer cell lines, our approach aligns with flow cytometry and manual inspection. As a demonstration, Paclitaxel treatment of MDA-MB-231 cells significantly reduced the total cell count while enriching PGCCs ( b–d). Our image-processing pipeline converts raw images into pseudocolored representations, revealing a clear shift toward larger nuclei (red) in treated cells, confirming PGCC induction. Leveraging this pipeline, we screened a library of 2726 Phase I-approved compounds for their impact on PGCCs and non-PGCCs. Among 2726 compounds, 29 fluorescent-interfering compounds were excluded, and 461 inhibited the total cell number by at least half. However, among those 461 compounds, 236 compounds (51.2%) enriched PGCCs by at least 2-fold. Notably, standard chemotherapies, including Taxanes, Gemcitabine, Carboplatin, and Vinorelbine, depleted non-PGCCs but expanded PGCC populations, explaining tumor resistance and relapse post-treatment.
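As a rough illustration of how per-well counts from the imaging pipeline can be reduced to the fold-change calls used above (at least 2-fold inhibition of total cells, at least 2-fold PGCC enrichment or depletion), the snippet below works through a toy table; the column names and control-normalization scheme are hypothetical rather than the output format of the MATLAB program.

```python
import pandas as pd

# Hypothetical per-well summary exported from the imaging step:
# live PGCC and non-PGCC counts for control (DMSO) and compound wells.
wells = pd.DataFrame({
    "compound": ["DMSO", "DMSO", "drug_A", "drug_B", "drug_C"],
    "pgcc":     [11,      9,      25,       4,        10],
    "non_pgcc": [980,     1020,   300,      940,      450],
})

ctrl = wells[wells["compound"] == "DMSO"][["pgcc", "non_pgcc"]].mean()
trt = wells[wells["compound"] != "DMSO"].set_index("compound")

total_fc = (trt["pgcc"] + trt["non_pgcc"]) / (ctrl["pgcc"] + ctrl["non_pgcc"])
pgcc_fc = trt["pgcc"] / ctrl["pgcc"]

calls = pd.DataFrame({
    "halves_total_cells": total_fc <= 0.5,  # inhibits total cell number by at least half
    "enriches_pgcc_2x":   pgcc_fc >= 2.0,   # PGCC population expands despite overall killing
    "depletes_pgcc_2x":   pgcc_fc <= 0.5,
})
print(calls)
```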
In contrast, Cyclophosphamide, Capecitabine, and Fluorouracil did not induce PGCCs but showed limited efficacy in cancer cell elimination. These findings underscore the limitations of current triple-negative breast cancer (TNBC) therapies and highlight the necessity of PGCC-targeting strategies, for which our screening pipeline provides a powerful discovery platform. Discovering PGCC Inhibitors with Screening Experiments Since most TNBC cell lines naturally contain fewer than 1% PGCCs, evaluating compound efficacy against PGCCs is challenging. To enrich PGCCs, we pretreated cells with Docetaxel for 2 days before introducing test compounds for an additional 2 days, followed by staining and imaging ( a). As shown in b, drug-resistant PGCCs remained resistant to most chemotherapeutics. Among 2697 screened compounds, 169 reduced PGCCs by at least 2-fold, 45 by 10-fold, and 63 inhibited both PGCCs and non-PGCCs by at least 2-fold ( b). Notably, proteasome inhibitors (Bortezomib, Oprozomib, Carfilzomib, Celastrol), CHK inhibitors (AZD7762, PF-477736), and the FOXM1 inhibitor Thiostrepton emerged as potent PGCC-targeting agents. FOXM1, a key cell cycle regulator, is dysregulated in PGCCs, making them particularly vulnerable to its inhibition. , , Proteasome inhibitors induce cell death through multiple mechanisms, including pro-apoptotic protein accumulation, cell cycle arrest, and heightened sensitivity to other therapies. , CHK inhibitors, by targeting CHK1/CHK2, impair DNA damage repair and cell cycle control, enhancing therapy-induced cancer cell death. , While these compounds are well studied, they are not yet clinically used for breast cancer treatment resistance. Their selective activity against PGCCs underscores their potential as targeted therapies to overcome treatment resistance. In addition, our large-scale screening identified novel PGCC-targeting compounds beyond the well-characterized drug classes ( b). Notably, macrocyclic lactones, including Doramectin, Ivermectin, and Moxidectin, known for their antiparasitic activity, , disrupt neurotransmission by modulating glutamate-gated chloride channels, selectively affecting parasites while sparing host cells. While Doramectin has been shown to inhibit glioblastoma cell survival via autophagy modulation, its role in breast cancer remains unexplored. Additionally, Pyronaridine, an antimalarial drug, , emerged as a potent PGCC inhibitor. It disrupts hemozoin formation, intercalates DNA, and induces oxidative stress, leading to parasite death. Pyronaridine also exhibits antiviral activity against COVID-19 and Ebola. , Although its potential impact on breast cancer has been noted, , there has been no prior investigation into its potential in targeting cancer resistance and PGCCs. While the precise mechanisms underlying PGCC inhibition remain unclear, these compounds offer promising avenues for future research. To confirm these findings, we further validated Pyronaridine at multiple concentrations and in multiple cell line models ( c). Pyronaridine selectively eliminated PGCCs in both models, highlighting our ability to identify new compounds with PGCC-specific activity. Identification and Validation of AXL as a Key Mediator for the Anti-PGCC Effects of Pyronaridine To elucidate the mechanisms underlying Pyronaridine’s inhibition of PGCCs in MDA-MB-231 cells, we performed RNA-seq on Pyronaridine-treated PGCCs and compared their gene expression profiles to untreated controls.
Gene set enrichment analysis (GSEA) identified 283 significantly depleted gene sets, that is, sets enriched for genes downregulated by Pyronaridine. Network analysis revealed a strong association with cell cycle regulation and cancer proliferation ( a,b). Among these gene sets, the KOBAYASHI_EGFR_SIGNALING_24HR_DN gene set, linked to EGFR inhibition, was significantly depleted (NES = −1.74, q = 0.007) ( a–c). This set overlapped with others related to cell cycle states, RB1 targets, and breast cancer grades, suggesting that Pyronaridine disrupts EGFR signaling to inhibit PGCC proliferation in TNBC. These findings align with prior reports of Pyronaridine’s effects in non-small cell lung cancer. We further explored key EGFR signaling-mediated genes for their potential as therapeutic targets of PGCCs in TNBC. The top five leading-edge genes from GSEA (TUBB, AXL, NOLC1, CCND1, and TPX2) were all significantly downregulated by Pyronaridine ( c). Among them, AXL emerged as a particularly promising target. AXL, a receptor tyrosine kinase, regulates cell survival, proliferation, migration, and invasion. − In PGCCs, AXL may drive DNA damage response and cytokinesis failure, , thereby supporting the growth and adaptation of polyploid cancer cells under stressed conditions. Given our RNA-Seq data and AXL’s potential role in therapy resistance, we tested TP-0903, a novel ATP-competitive AXL inhibitor currently in clinical trials for advanced solid tumors. , TP-0903 effectively eliminated PGCCs in both the MDA-MB-231 and SUM159 cells ( d). This preliminary study aligns with the RNA-Seq analysis and suggests that Pyronaridine’s mechanism in targeting PGCCs may involve the AXL pathway. Machine Learning-Based Prediction of Anti-PGCC Effects Although our assay enables high-throughput compound screening, empirically evaluating all existing compounds is neither practical nor efficient. To overcome this limitation, we developed predictive machine learning models trained on our experimental data. To the best of our knowledge, this is the first study to apply machine learning to predicting the anti-PGCC efficacy of compounds. We systematically evaluated seven state-of-the-art regression models to predict the PGCC-targeting effects in MDA-MB-231 cells. These regression models were trained to predict changes in PGCC counts based on quantitative representations of either chemical structures (fingerprints) or compound descriptions (text converted to embeddings) ( a). To maximize predictive power, we generated fingerprints using three complementary, widely used descriptor systems (MACCS, PubChem, and ECFP6), capturing key structural and connectivity-based features. For text-based embeddings, we utilized drug descriptions from PubChem, integrating data from multiple well-established databases, including DrugBank, ChEBI, NCIt, MeSH, and Open Targets. This comprehensive approach mitigates biases from any single database and enhances the robustness of our predictive models by incorporating chemical, pharmacological, and clinical insights. A total of 2430 compounds in the screening library with both features available were used in the model. We adopted 10 rounds of 10-fold cross-validations to train and test each model. In each iteration of cross-validation, a model was trained using 90% of the 2430 compounds and tested on the remaining 10%, which were not seen by the model during training. Overall, 31 out of 63 (49.2%) models achieved a median Pearson correlation coefficient ρ above 0.2 across 10 rounds of cross-validations ( b).
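For readers who want a concrete view of the feature generation behind these models, the snippet below sketches one way to derive both feature types in Python. Note that the study obtained SMILES via PubChemPy but computed fingerprints with the R rcdk package, so the RDKit calls shown here are a substitution (RDKit's MACCS vector has 167 bits rather than rcdk's 166), and the OpenAI embedding call assumes an API key is available; the example compound and description are illustrative only.

```python
import numpy as np
import pubchempy as pcp
from rdkit import Chem
from rdkit.Chem import AllChem, MACCSkeys
from openai import OpenAI

def structure_features(compound_name):
    """Canonical SMILES from PubChem -> MACCS + ECFP6 (radius 3, 1024-bit) vectors."""
    smiles = pcp.get_compounds(compound_name, "name")[0].canonical_smiles
    mol = Chem.MolFromSmiles(smiles)
    maccs = np.array(list(MACCSkeys.GenMACCSKeys(mol)), dtype=int)
    ecfp6 = np.array(list(AllChem.GetMorganFingerprintAsBitVect(mol, 3, nBits=1024)), dtype=int)
    return np.concatenate([maccs, ecfp6])

def description_features(text, model="text-embedding-3-large"):
    """Embed a drug description with an OpenAI text-embedding-3 model (3072 dims)."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    return np.array(client.embeddings.create(model=model, input=text).data[0].embedding)

fp = structure_features("pyronaridine")
emb = description_features("Pyronaridine is an antimalarial agent ...")
print(fp.shape, emb.shape)
```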
For molecular fingerprints, HGB with a combination of MACCS and PubChem was the best model (median ρ, 0.29; b). Models that used combinations of multiple molecular fingerprints as features tended to achieve better performance compared with those using single molecular fingerprints. For example, HGB with MACCS and PubChem, RF with MACCS and ECFP6, and SVM with all three molecular fingerprints outperformed their single-fingerprint counterparts ( b). For description-based embeddings, models with longer embeddings (3072 dimensions) generally outperformed those with 1536 dimensions ( b), suggesting that longer embeddings capture additional pharmacological information. Notably, SVM with 3072-dimensional embeddings was the best-performing model (median ρ = 0.24; b). Overall, the performance of these models was comparable to the best results from a community challenge for predicting drug sensitivities and recent studies predicting genetic dependencies in pan-cancer cell lines, − demonstrating the capability of our screening library to support accurate predictive modeling. Enhancing Predictive Performance by Integrating Compound Structures and Descriptions Using an Ensemble Learning Approach Since compound structures and descriptions provide distinct yet potentially complementary information, combining these features may improve the performance of predictive models. To explore this, we developed an ensemble learning method by integrating the best-performing models for drug structures and descriptions ( i . e ., HGB on MACCS and PubChem and SVM on the longer embedding). The ensemble model utilized linear regression to generate the final prediction based on the outputs of these two models. Notably, this approach significantly improved performance (median ρ = 0.31) compared to the individual models (one-tailed paired t -test, both P < 1 × 10 –6 ) ( c). Across all 2,430 drugs, the ensemble model achieved a ρ of 0.33 between actual and predicted drug responses ( P = 1.53 × 10 –61 ) ( d). In the ensemble model, the regression coefficients for the HGB and SVM models were 1.2 and 0.6, respectively, both statistically significant ( P < 1 × 10 –3 ). These results suggest that both models contributed meaningful and independent information to the ensemble model. The HGB model had a greater impact on the final prediction, while the SVM model predictions provided a complementary effect. Taken together, our findings demonstrate that integrating these two distinct features allows the model to capture meaningful and complementary patterns related to anti-PGCC effects, leading to enhanced predictive performance. Expanded Virtual Screening by the Ensemble Prediction Model and Experimental Validation We expanded our virtual screening to a broader range of compounds to identify potential anti-PGCC agents in breast cancer. As a proof of concept, we compiled a large library of compounds based on the Profiling Relative Inhibition Simultaneously in Mixtures (PRISM) project, which is one of the largest drug sensitivity screens, covering 6575 oncology or non-oncology drugs (as of 24Q2). Of these 6575 drugs, 3093 drugs were not included in our original screening library but had both drug structure and description information. We applied our ensemble model to predict anti-PGCC effects for these 3093 drugs in MDA-MB-231 cells. The predicted drugs are ranked based on their inhibition effects in PGCCs ( e). 
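A minimal sketch of this ranking step is given below; it assumes the two base models and the ensemble combiner have already been fit on the screening data and that the library compounds have been featurized as described in the Methods, and all object names are placeholders.

```python
import numpy as np
import pandas as pd

def rank_library(names, fingerprints, embeddings, structure_model, description_model, ensemble):
    """Score unseen compounds with the two base models, combine with the ensemble
    regression, and rank by predicted log2 change in PGCC count (most negative first)."""
    base = np.column_stack([
        structure_model.predict(fingerprints),
        description_model.predict(embeddings),
    ])
    scores = ensemble.predict(base)
    return (pd.DataFrame({"compound": names, "predicted_log2_pgcc_change": scores})
              .sort_values("predicted_log2_pgcc_change")
              .reset_index(drop=True))

# Example call (placeholder inputs):
# top_hits = rank_library(prism_names, prism_fps, prism_embs, hgb_model, svm_model, ensemble).head(20)
```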
Among the top-ranked candidates, we prioritized five compounds, Selamectin, AV-412, Azeliragon, Lestaurtinib, and UCN-01, based on novelty, pharmacological strength, and translational potential for experimental validation. All five compounds effectively inhibited PGCCs in MDA-MB-231 cells, confirming the model’s predictive power ( f and Table S1 ). To enhance clinical relevance, we further validated these compounds in an additional TNBC cell line (SUM159) and two patient-derived breast cancer models, Vari068 (TNBC) and PDXO-073 (ER+). − Lestaurtinib consistently suppressed PGCCs across all models, while UCN-01 and Azeliragon exhibited efficacy in three models. In contrast, Selamectin and AV-412 showed limited activity beyond MDA-MB-231, suggesting intercellular variability ( f and Table S1 ). These results highlight the influence of cell line-specific differences. Future work will incorporate additional training data from diverse cell models and integrate molecular features ( e . g ., key mutations) to refine predictive accuracy and enhance clinical translation. While machine learning models cannot provide mechanistic explanations, a literature review underscores the therapeutic potential of the identified compounds. Lestaurtinib, a targeted FLT3 inhibitor, disrupts stress signaling pathways like JAK2 and has shown promise in relapsed FLT3 mutant acute myeloid leukemia, , with potential applications in solid tumors warranting further exploration. UCN-01 (7-hydroxystaurosporine), a Chk1 inhibitor, disrupts critical cell cycle checkpoints, targeting PGCCs’ ability to manage DNA damage and genomic instability. , It has exhibited encouraging results in early-phase cancer trials with ongoing efforts to optimize its pharmacokinetics and reduce plasma protein binding. Modifications to enhance its drug-like properties could unlock its potential for broader clinical application. Azeliragon, a RAGE inhibitor, has shown versatility with ongoing trials exploring its therapeutic potential beyond Alzheimer’s disease. , Current investigations, including a Phase II trial in glioblastoma, underscore its potential to address critical challenges in cancer treatment, including therapy-related toxicity and tumor microenvironment modulation. Selamectin, FDA-approved for veterinary use, presents a promising starting point for cancer therapy due to its established safety and dosing in animals. With focused preclinical studies addressing pharmacokinetics, tissue-specific toxicity, and human ion channel interactions, Selamectin could potentially be repurposed for cancer treatment, offering a novel therapeutic avenue. AV-412, an oral tyrosine kinase inhibitor targeting EGFR and HER2, has demonstrated preclinical efficacy, including activity against resistant tumor models. , Its successful completion of a Phase I trial highlights its potential for further development, paving the way for additional studies to establish its clinical utility in cancer therapy. Overall, our investigation demonstrates the significant potential of machine learning-based virtual screening to accelerate the discovery of novel anticancer therapies, particularly for targeting therapy-resistant PGCCs.
Therapy resistance in breast cancer is increasingly linked to polyploid giant cancer cells (PGCCs), which arise through whole genome doubling and exhibit heightened resistance to conventional treatments. To accelerate the discovery of PGCC-targeting compounds, we developed a high-throughput single-cell morphological analysis workflow that rapidly differentiates inhibitors of non-PGCCs, PGCCs, or both. Unlike flow cytometry, which struggles with cell dissociation, cluster removal, and dynamic tracking, our imaging-based approach is faster and more scalable and leverages computational advancements for superior screening efficiency. By screening 2726 FDA Phase 1-approved drugs, we identified promising anti-PGCC candidates, including inhibitors of the proteasome, FOXM1, CHK, and macrocyclic lactones. RNA-Seq analysis further implicated AXL inhibition as a potential PGCC-targeting strategy. To scale discovery, we developed an ensemble learning model integrating chemical fingerprints and compound descriptors to predict anti-PGCC efficacy. This model successfully predicted effective compounds from the PRISM library, which includes over 6000 drugs, with five top-ranked predictions experimentally validated. These findings highlight the power of AI-driven virtual screening in overcoming therapy resistance. With future data accumulation, our computational framework will continue to improve to enhance predictive accuracy and broader applicability in drug discovery.
Natural Language Processing Technologies for Public Health in Africa: Scoping Review
20b9042e-da23-496e-999d-d33ae0417383
11923465
Medicine[mh]
Public Health Needs in Africa Most African countries face major challenges in meeting the sustainable development goal (SDG) 3 targets for good health and well-being [ - ]. Key public health challenges include high rates of infectious diseases, maternal and child health inequities, and a growing burden of noncommunicable diseases, alongside the critical need for resilient emergency response systems . Some of these challenges stem from acute shortages in the health workforce and weak public health surveillance systems, among other weaknesses in public health systems . For instance, Africa has only 1400 epidemiologists, despite an estimated need for 6000 . These issues are further amplified by structural weaknesses in health systems and insufficient multisectoral coordination for health , which are particularly exposed by public health emergencies , such as the COVID-19 pandemic and the mpox outbreaks. During the COVID-19 pandemic, while several African countries were able to rapidly leverage their past experiences with outbreaks to respond to COVID-19, they also faced challenges, such as inadequate adherence to infection control, insufficient personal protective equipment, poor contact tracing, supply chain shortages, and a lack of training for key personnel . To systematically strengthen public health capacities, the World Health Organization (WHO) has outlined 12 essential public health functions (EPHFs) . These functions include a broad range of activities, from disease surveillance and health promotion to emergency preparedness and equitable access to health care services. However, many countries across the region, especially those with lower income levels (note that in 2024, Africa comprises upper-middle–income, lower-middle–income, and low-income countries ), face substantial challenges in fully implementing these functions, mainly because of limited financial, infrastructural, and health care workforce resources . In resource-constrained settings, innovative technologies, such as artificial intelligence (AI) technologies, could play a crucial role in supporting the implementation of EPHFs , thereby improving public health outcomes and advancing progress toward achieving health-related SDGs. Natural Language Processing Technologies for Public Health Natural language processing (NLP) is a vibrant interdisciplinary field within AI research, known by various terms in different disciplines, such as NLP in computer science, computational linguistics in linguistics, speech recognition in engineering, computational psycholinguistics in psychology, and language technologies in public discourse . Despite the diversity in terminologies and research focuses within these disciplines, they share the common goal of enabling computers to interpret, understand, and generate human language . NLP allows computers to perform a wide range of language-based tasks, including facilitating human-machine communication; improving human-to-human interactions; and processing text and speech data for practical NLP applications across different sectors, including public health. NLP holds significant potential for advancing public health in Africa by addressing the ongoing challenges faced by many countries. By appropriately leveraging NLP, countries can improve health communication, enhance disease surveillance, support workforce training, and optimize limited resources [ , , ], all of which are crucial for achieving SDG 3 targets. 
NLP technologies can be used to process and analyze large volumes of health data from diverse sources, including social media, medical records, and public health reports, to identify emerging health threats and track disease patterns in real time. This capability is especially valuable in regions with limited health workforce and surveillance infrastructure, as it enables faster, data-driven responses to public health emergencies. In practice, NLP-driven tools have already shown promise in Africa. For instance, during the COVID-19 pandemic, WhatsApp chatbots in South Africa, Rwanda, and Senegal were used to disseminate reliable information and facilitate rapid COVID-19 testing, while a Telegram-based chatbot in Ghana was developed to combat misinformation and provide accurate data to the public . Such tools can bridge communication gaps by delivering health information in local languages, empowering communities to recognize symptoms, prevent disease transmission, and respond more effectively. These innovations could play a transformative role in strengthening health systems across the continent, making them more resilient and responsive to both everyday health needs and unexpected crises. Africa has made progress toward achieving some of its innovation and technology targets . Specifically, the continent is making strides in mobile network coverage, with approximately 89% of the total African population now having access to mobile networks. Countries like Mali, Namibia, and Guinea-Bissau have achieved 100% 2G mobile network coverage . This expanding network coverage creates new opportunities for cloud-based NLP applications in public health. Cloud computing, which uses remote servers to store, manage, and process data, allows African countries to access computing power that was previously unattainable . This scalability is crucial for deploying NLP-based health solutions in resource-constrained settings where local infrastructure may be insufficient or absent. The synergy of cloud technology and increasing network accessibility opens the door to the expansion of NLP technologies in Africa, providing promising opportunities to improve public health outcomes across the continent. However, a primary obstacle to the development of NLP applications is a lack of essential digital datasets for the >2000 languages spoken on the continent . The development of modern NLP-based health applications for African language communities requires large-scale datasets to fully unlock the capabilities of deep learning models; however, there is a scarcity of digitized, in-language (ie, datasets collected in the specific languages spoken by the target user of the NLP technology), and in-domain (ie, datasets tailored to a specific use case or application, such as health education or disease surveillance, rather than general-purpose language) data. This scarcity is particularly profound in the health sector, where data tailored to specific African languages and contexts are often completely absent. This conjunction of linguistic diversity and data scarcity creates significant obstacles to developing effective NLP technologies tailored to Africa’s specific public health needs. Moreover, even when NLP technologies are developed, their successful deployment, validation, and integration into existing health systems are critical for achieving a meaningful positive impact. 
In resource-constrained environments, lessons learned from previous experience suggest that NLP technologies should be integrated into existing systems and institutions, rather than aiming to replace them . This requires overcoming various obstacles, including the development of a nuanced understanding of local public health needs, the creation of sustainable and scalable solutions, and ensuring equitable access for all users [ , - ]. Research Gaps Most previous reviews related to NLP technologies in public health have focused on global health [ , , ] or low- and middle-income countries as a whole, examined AI applications without focusing specifically on NLP [ , , , ], or focused on a single type of NLP application, such as chatbots . Unlike other AI technologies, NLP applications are heavily influenced by the languages and cultures they are designed to serve . Given Africa’s vast linguistic diversity and the complex spectrum of public health challenges faced by countries in the region, an Africa-focused review is critical for a more nuanced understanding of how NLP can be tailored to meet the diverse health needs across the continent. This approach aligns with pan-African initiatives, such as the African Union’s Agenda 2063 , which seeks to address health challenges and promote collaboration across borders. At the same time, modern NLP technologies often share similar development paradigms, meaning that advancements in one type of application can provide valuable insights and sometimes resources to others. These benefits extend beyond the experience gained during application development and include the shared use of digital resources across applications, often improving performance through NLP’s transfer learning techniques . Therefore, a broader review of NLP technologies, compared to one focused on a specific application, provides researchers and developers with a more comprehensive set of evidence to guide future development. To the best of our knowledge, this is the first scoping review to comprehensively examine the application of NLP technologies to public health in Africa. By mapping the current evidence, this review aims to provide insights into the key barriers and opportunities for the development and deployment of these technologies. Specifically, the review aims to answer five main research questions: (1) Needs and availability: What public health needs are being addressed by NLP technologies in Africa, and what unmet needs remain? (2) Prevalence and distribution: What factors influence the availability of public health NLP technologies across African countries and languages? (3) Deployment and integration: What stages of deployment have these technologies reached, and to what extent have they been integrated into health systems? (4) Public health impact: What measurable impact have these technologies had on public health outcomes, where such data are available? (5) Outlook: What recommendations have been proposed to enhance the quality, cost, and accessibility of health-related NLP technologies in Africa? By answering these questions, the review aims to provide actionable recommendations for future research and development.
Key public health challenges include high rates of infectious diseases, maternal and child health inequities, and a growing burden of noncommunicable diseases, alongside the critical need for resilient emergency response systems . Some of these challenges stem from acute shortages in the health workforce and weak public health surveillance systems, among other weaknesses in public health systems . For instance, Africa has only 1400 epidemiologists, despite an estimated need for 6000 . These issues are further amplified by structural weaknesses in health systems and insufficient multisectoral coordination for health , which are particularly exposed by public health emergencies , such as the COVID-19 pandemic and the mpox outbreaks. During the COVID-19 pandemic, while several African countries were able to rapidly leverage their past experiences with outbreaks to respond to COVID-19, they also faced challenges, such as inadequate adherence to infection control, insufficient personal protective equipment, poor contact tracing, supply chain shortages, and a lack of training for key personnel . To systematically strengthen public health capacities, the World Health Organization (WHO) has outlined 12 essential public health functions (EPHFs) . These functions include a broad range of activities, from disease surveillance and health promotion to emergency preparedness and equitable access to health care services. However, many countries across the region, especially those with lower income levels (note that in 2024, Africa comprises upper-middle–income, lower-middle–income, and low-income countries ), face substantial challenges in fully implementing these functions, mainly because of limited financial, infrastructural, and health care workforce resources . In resource-constrained settings, innovative technologies, such as artificial intelligence (AI) technologies, could play a crucial role in supporting the implementation of EPHFs , thereby improving public health outcomes and advancing progress toward achieving health-related SDGs. Natural language processing (NLP) is a vibrant interdisciplinary field within AI research, known by various terms in different disciplines, such as NLP in computer science, computational linguistics in linguistics, speech recognition in engineering, computational psycholinguistics in psychology, and language technologies in public discourse . Despite the diversity in terminologies and research focuses within these disciplines, they share the common goal of enabling computers to interpret, understand, and generate human language . NLP allows computers to perform a wide range of language-based tasks, including facilitating human-machine communication; improving human-to-human interactions; and processing text and speech data for practical NLP applications across different sectors, including public health. NLP holds significant potential for advancing public health in Africa by addressing the ongoing challenges faced by many countries. By appropriately leveraging NLP, countries can improve health communication, enhance disease surveillance, support workforce training, and optimize limited resources [ , , ], all of which are crucial for achieving SDG 3 targets. NLP technologies can be used to process and analyze large volumes of health data from diverse sources, including social media, medical records, and public health reports, to identify emerging health threats and track disease patterns in real time. 
Overview The paradigms of developing NLP technologies have evolved significantly since the origin of NLP in the 1940s. The early rule-based systems, such as ELIZA, were followed by a shift toward machine learning–based methods in the 1990s. This new approach leveraged large datasets, reducing reliance on manually crafted rules. In 2013, the introduction of Word2Vec marked a major milestone for NLP by representing words as vectors. This approach formed the foundation for neural language models.
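To make the word-vector idea concrete, the following is a minimal, illustrative sketch of training word vectors on a toy health-related corpus; it assumes the open-source gensim library, and the tiny corpus and vocabulary are invented for demonstration, not drawn from any reviewed study. In practice, such models are trained on far larger in-language corpora.

```python
# Minimal Word2Vec sketch on a toy, pre-tokenized health-text corpus (illustrative only).
from gensim.models import Word2Vec

corpus = [
    ["malaria", "causes", "fever", "and", "chills"],
    ["vaccines", "prevent", "measles", "and", "polio"],
    ["fever", "and", "headache", "are", "common", "symptoms"],
    ["clinics", "offer", "vaccines", "for", "children"],
]

# vector_size sets the dimensionality of each word vector;
# min_count=1 keeps every word in this tiny corpus.
model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=200, seed=1)

# Each word is now a dense vector; words used in similar contexts get similar vectors.
print(model.wv["fever"][:5])                    # first 5 dimensions of the "fever" vector
print(model.wv.most_similar("fever", topn=3))   # nearest neighbours in vector space
```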
Subsequently, pretrained language models, such as BERT (Bidirectional Encoder Representations from Transformers) and GPT-2 (Generative Pre-trained Transformer 2) , have become the backbone for NLP development, allowing systems to be fine-tuned and developed using datasets of thousands of examples. Recent advances in large language models have further simplified NLP development, allowing systems to achieve optimal performance after learning on just a handful of task-specific examples. In the context of public health in Africa, the development of NLP technologies will likely use a mixture of paradigms, depending on the availability of task-specific datasets and computational resources. For the purposes of this scoping review, we define NLP technologies broadly to include any computational systems that process natural language, either as input or output . This inclusive definition ensures that the review captures a wide range of applications in public health across Africa. In [ , - ], we provide examples of technologies that fall within or outside the scope of this review. The scoping review maps the current evidence on NLP technologies within the framework of EPHFs. The review follows the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) reporting guidelines (see the PRISMA-ScR checklist in ), and its review protocol is available on medRxiv . Search Strategy A systematic search of academic literature was conducted on May 13, 2024, and updated on October 3, 2024, using the following five electronic bibliographic databases: MEDLINE via PubMed: Medical and public health literature ACL Anthology: NLP and language science literature Scopus: Broad interdisciplinary scope, including medical research IEEE Xplore: Engineering literature, particularly in NLP and health informatics ACM Digital Library: Computing literature, including NLP and health informatics The search included studies published from January 1, 2013, to October 3, 2024, with the aim being to capture recent developments in NLP, particularly after the introduction of neural language models in 2013 . No language restrictions were applied, although the search terms were in English. Search terms were developed around three key areas: Africa: The names of all 55 African Union member countries and African languages with >1 million native speakers Public health: On the basis of the 12 EPHFs outlined by the WHO NLP: As suggested by a team of experts in the field These terms were combined with general phrases and Medical Subject Headings. The search strategy for each database was tailored with database-specific features to enhance the retrieval of relevant studies. The complete search strategy for MEDLINE (PubMed) and the full list of search terms are detailed in . Reference chaining of relevant articles was also conducted. Discussion of NLP technologies for public health in Africa occurs beyond the academic literature, spanning a diverse array of contributors, formats, and outlets. As such, sparse academic literature on this topic does not necessarily indicate a lack of progress . Many promising technologies addressing public health challenges in Africa are introduced through media outlets, as well as by the individuals, companies, governments, and nongovernmental organizations (NGOs) that develop and use them. These contributions are often presented on the web or shared at events, such as conferences. Therefore, this scoping review also mapped evidence from a broad gray literature. 
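As an illustration of how terms from the three areas can be combined into a single boolean query, the sketch below builds a simplified query and submits it to PubMed through the NCBI Entrez API, assuming the Biopython library. The term lists and date filter shown are illustrative placeholders only; they are not the review’s actual search strategy, which is documented in the appendix referenced above.

```python
# Illustrative sketch: combining Africa, public health, and NLP terms into one PubMed query.
# The term lists below are simplified placeholders, not the review's full search strategy.
from Bio import Entrez  # Biopython

Entrez.email = "researcher@example.org"  # NCBI requires a contact email

africa_terms = ["Africa", "Kenya", "Nigeria", "South Africa", "Swahili"]
health_terms = ["public health", "disease surveillance", "health promotion"]
nlp_terms = ["natural language processing", "text mining", "chatbot", "speech recognition"]

def or_block(terms):
    """Join terms into a parenthesised OR block, quoting multi-word phrases."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

# One OR block per key area, joined with AND, plus an illustrative publication-date filter.
query = " AND ".join(or_block(t) for t in (africa_terms, health_terms, nlp_terms))
query += ' AND ("2013/01/01"[dp] : "2024/10/03"[dp])'

handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
record = Entrez.read(handle)
handle.close()
print(record["Count"])    # number of matching citations
print(record["IdList"])   # first 20 PubMed IDs
```

A query of this shape would then be adapted per database, since each bibliographic database has its own field tags and syntax.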
In addition to structured academic databases, the following gray literature sources were included: (1) preprints, non–peer-reviewed studies, and reports; (2) media articles and blog posts; (3) commercial products from startups and established companies; (4) initiatives led by NGOs; and (5) proceedings and presentations from events and conferences. The complete search strategy for gray literature is available in . Screening and NLP Technology Selection Criteria This review includes NLP technologies designed to support public health in Africa. In addition, we consider digital and computational resources essential for the development and deployment of these technologies, such as digital datasets, hardware, and software toolkits. The selection of sources for this scoping review followed a systematic 2-step screening process. Initially, titles and abstracts were reviewed by one reviewer (AO) to exclude studies meeting any of the predefined exclusion criteria ( ), such as those not involving NLP technologies, unrelated to public health, or lacking a focus on Africa. Studies that passed this initial screening were then subjected to full-text screening, where studies meeting all the inclusion criteria ( ) were included. The full-text screening was conducted by the same reviewer (AO), and the reasons for exclusion at this stage were documented. Before the formal screening process, pilot screenings were conducted to refine our screening guidelines ( ) and to ensure consistency and accuracy in study selection. In these pilots, 10% (179/1791 for title and abstract screening and 36/361 for full-text screening) of candidate papers were randomly selected and independently reviewed by 2 trained reviewers (AO and CC) following the predefined screening guidelines. Interreviewer agreement was assessed using exact match rate and Cohen κ to ensure reliability. The screening process and guidelines were iteratively refined by a review coordinator (SH), and the pilot screening was repeated until both an exact match rate of 0.9 and a Cohen κ score of 0.8, indicating almost perfect agreement, were achieved. Following the pilot, formal screening was conducted by a single reviewer (AO), with any concerns resolved by a review coordinator (SH), a subject matter expert in NLP. Inclusion and exclusion criteria. 
Inclusion criteria All types of scientific publications aimed at an academic audience (eg, peer-reviewed articles, conference proceedings, and book chapters); for gray literature search, other web-based publications (eg, blog posts and media outlets) Studies focusing on the development, evaluation, or adaptation of natural language processing (NLP) technologies specifically promoting public health Studies demonstrating direct or indirect relevance to the population in the continent of Africa Studies published between January 1, 2013, and October 3, 2024 Studies published in any language Exclusion criteria Articles without full-text availability; for articles not available through Cambridge University libraries, full text was requested by emailing the authors Studies unrelated to NLP technologies or their application to public health; for example, non-NLP applications, where no language technologies were involved, and the technology was used to perform tasks, such as predicting outcomes solely from structured datasets or images Studies focused on non-African contexts, except where such studies offer comparative insights relevant to African NLP technologies Studies published before January 1, 2013, or after October 3, 2024 No language requirement specified Data Extraction and Synthesis Data were extracted for each included study based on a predefined data extraction template ( ), capturing key information on study descriptions, the characterization of NLP technologies ( ), and their contributions to EPHFs, SDGs, and SDG 3 targets specifically. In addition, where such data were available, any public health outcomes measured and recommendations for future development were documented. One reviewer (AO) completed the data extraction for all included studies, with any concerns resolved through team discussions. Due to the heterogeneity of study goals, methodologies, evaluation methods, and outcomes, a formal meta-analysis was not attempted. Instead, a narrative synthesis of the results was conducted, with introduced NLP technologies characterized according to the categories outlined in the data extraction template. The extracted data were analyzed to identify trends, gaps, and areas for future research. In addition, the authors’ recommendations for future development were documented and summarized to provide guidance for advancing NLP technologies in public health. A similar pipelined approach of screening, data extraction, and synthesis was applied to the gray literature. Given that our gray literature search covered sources beyond academic publications, we omitted undisclosed data extraction items, as commercial products often lack full disclosure of their design and implementation. A detailed description of our approach to identifying and synthesizing the gray literature is available in . Selected characterization of natural language processing technologies in public health. 
Natural language processing (NLP) applications: The NLP application each system performs, such as conversational assistant, language translation, or automated diagnosis Modality: Type of data processed by the NLP application (eg, text, audio, and image) Supported languages: The set of languages supported by the NLP technology; languages are documented using ISO (International Organization for Standardization) 639-2 codes Target countries: Countries or regions where the introduced NLP technology is applied or intended to be used; countries are documented using the Alpha-3 code from the ISO 3166 standard Evaluation method (Adapted from Laranjo et al ) Technical performance: Intrinsic evaluation measures such as accuracy, precision, recall, and F1 score User experience: Results on usability testing, user satisfaction surveys, and qualitative feedback from health care providers Health-related measure: Extrinsic evaluation measures such as patient engagement rates, reduction in diagnostic errors, or improvements in treatment outcomes Domain coverage General domain: Data concerning general language processing outside specialized contexts Research domain: Research articles and professional materials for expert audiences Clinical domain: Clinical notes, patient interactions, and other health care–specific communications Target users Health care providers: Direct care providers including physicians, nurses, practitioners, community health workers, and other health care professionals Public health officials and policy makers: Individuals involved in public health policy, administration, and epidemiology Researchers and data scientists: Academics and professionals focused on public health research and data analysis Specific equity-seeking groups: Populations grouped by protected demographic characteristics, such as people with disabilities, children, LGBTQ+ (lesbian, gay, bisexual, trans, queer) individuals, and older adults, who advocate for health equity within and beyond their group General public: The broader community, especially those at higher risk or in need of specific health interventions Others: Any target users that do not fit into the above categories Deployment stage Conceptualization: This initial stage is when the need for an NLP application is identified, and its feasibility is considered Design and prototyping: Development of initial prototypes; these prototypes are usually evaluated based on their technical performance Validation: Rigorous testing of the system with public health outcomes to validate its effectiveness and efficiency in real-world settings Deployment and operational: Deployment of the NLP technology in actual public health settings, where it is actively used Not applicable: The study does not introduce or use any new NLP technologies Level of accessibility Open-source: Publicly accessible datasets and tools that are open-source for future research and analysis Publicly available: Datasets and NLP applications that are accessible to the general public via web or mobile but not necessarily open-source Limited access: Datasets and NLP applications available only to certain users or under specific conditions Closed access: Datasets and applications that are not openly accessible outside the group of authors but may be available upon request or through collaboration Available platform Mobile apps: Technologies accessible via mobile apps Web-based applications: Technologies accessible via web applications or web-based platforms Web service: Technologies accessible via 
web-based application programming interfaces without user interfaces Dataset: Specific datasets published in the study NLP tool and library: Specific NLP tools and libraries; these tools usually require installations on each deployed computer, which require expertise in computer science
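To illustrate how a single extracted study might be represented using the characterization above, here is a minimal sketch with illustrative field names and a hypothetical example entry (the values shown are not taken from any reviewed study). It assumes the open-source pycountry library for resolving language codes; in the review itself, extraction was performed with the predefined template rather than with code.

```python
# Illustrative data extraction record mirroring the characterization categories above.
# Field names and example values are hypothetical, for demonstration only.
from dataclasses import dataclass
from typing import List
import pycountry

@dataclass
class ExtractionRecord:
    nlp_application: str            # eg, "conversational assistant"
    modality: List[str]             # eg, ["text"], ["audio"]
    supported_languages: List[str]  # ISO 639-2 codes, eg, ["swa", "eng"]
    target_countries: List[str]     # ISO 3166 alpha-3 codes, eg, ["KEN", "TZA"]
    evaluation_methods: List[str]   # "technical performance", "user experience", ...
    deployment_stage: str           # "design and prototyping", "validation", ...
    accessibility: str              # "open-source", "publicly available", ...

    def describe_languages(self):
        # Resolve language codes to readable names for reporting; fall back to the code itself.
        names = []
        for code in self.supported_languages:
            lang = pycountry.languages.get(alpha_3=code)
            names.append(lang.name if lang else code)
        return names

record = ExtractionRecord(
    nlp_application="conversational assistant",
    modality=["text"],
    supported_languages=["swa", "eng"],
    target_countries=["KEN", "TZA"],
    evaluation_methods=["technical performance", "user experience"],
    deployment_stage="design and prototyping",
    accessibility="publicly available",
)
print(record.describe_languages())  # human-readable language names
```

Records of this kind can then be tallied to produce the per-language, per-country, and per-EPHF counts reported in the narrative synthesis.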
Overview The initial database search on May 13, 2024, retrieved 1791 citations, and the final updated search on October 3, 2024, retrieved an additional 404 citations. The updated search retrieved 289 additional articles from PubMed, 70 from ACL Anthology, 2 from Scopus, 36 from IEEE Xplore, and 7 from ACM Digital Library, resulting in 6 additional papers being identified for full-text eligibility assessment. After removing 9 duplicate citations, 2186 unique records were screened. During the title and abstract screening, 1825 articles were excluded. Full-text reviews were conducted for the remaining 361 articles, which included 6 articles identified through the final updated search. Following the full-text screening, 311 articles were excluded, resulting in the inclusion of 2.29% (50/2186) studies. An additional 4 studies were identified through reference chaining of the included studies. Before the formal screening process, 3 rounds of pilot screenings, covering 10% (179/1791 for title and abstract screening and 36/361 for full-text screening) of the citations, were conducted to ensure consistency and reliability. The final round achieved interreviewer agreement scores of 0.97 for accuracy and 0.89 for Cohen κ in title and abstract screening, and perfect agreement (ie, 1.0 for both measures) in full-text screening. In this section, we provide an overview of the academic literature on NLP technologies for public health in Africa and present our findings in response to the 5 aforementioned research questions. In addition, we separately summarize relevant gray literature, which provides complementary perspectives to the academic literature. By combining these 2 sources of evidence, we aim to provide a comprehensive and up-to-date analysis of the landscape, while adhering to the rigorous methodological standards of this scoping review. Description of Academic Literature Over the past decade, there has been a rapid increase in the number of publications on NLP for public health in Africa, with a notable spike in 2022, where 6 (43%) out of the 14 papers published that year were in response to the COVID-19 pandemic. The number of academic papers from authors affiliated with African and non-African institutions has been similar. Of the 54 included citations, 38 (70%) papers were contributed by authors affiliated with African institutions, while 35 (65%) papers were authored by researchers affiliated with institutions outside Africa. For readability, we do not provide a full list of in-text citations for all our categorizations throughout this review, instead highlighting specific papers where necessary. A complete table with all the categorizations and their corresponding references is available in . Notably, 19 (35%) papers stem from collaborations between African and non-African institutions, highlighting the prevalence and importance of cross-border and cross-continental collaborations at the intersection of NLP and public health research.
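As a quick, self-contained aside, the sketch below recomputes the screening flow counts reported in the overview above and illustrates how the two interreviewer agreement measures are calculated. Only the flow counts come from the text; the reviewer decisions are invented toy labels, and scikit-learn is assumed for Cohen κ.

```python
# Recompute the screening flow reported above and illustrate the two agreement measures.
from sklearn.metrics import cohen_kappa_score

# Screening flow counts taken from the text.
initial, updated, duplicates = 1791, 404, 9
screened = initial + updated - duplicates          # 2186 unique records
full_text = screened - 1825                        # 361 full-text reviews
included = full_text - 311                         # 50 included studies
total_included = included + 4                      # 54 after reference chaining
print(screened, full_text, included, total_included)
print(f"inclusion rate: {included / screened:.2%}")  # about 2.29%

# Toy reviewer decisions (1 = include, 0 = exclude), purely to illustrate the measures;
# these are NOT the review's actual screening data.
reviewer_a = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
reviewer_b = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]

exact_match = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)
kappa = cohen_kappa_score(reviewer_a, reviewer_b)   # agreement corrected for chance
print(f"exact match rate: {exact_match:.2f}, Cohen kappa: {kappa:.2f}")
```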
Among these 19 papers, 16 (84%) involved coauthorship between researchers from North America and Africa, 6 (32%) involved coauthorship between Asia and Africa, 4 (21%) involved coauthorship between Europe and Africa, and 1 (5%) involved coauthorship between Oceania and Africa. Researchers affiliated with institutions in the United States and South Africa (18/54, 33% papers each) emerged as the leading contributors to NLP research for public health in Africa. Number of papers by publication years, author institutional affiliations by country, languages supported, and African countries and regions supported in the 54 included studies. Publication years 2015: n=1; 2016: n=2; 2017: n=2; 2018; n=1; 2019: n=5; 2020: n=9; 2021: n=8; 2022: n=14; 2023: n=9; 2024: n=3 Note that data gathering ended on October 3, 2024, providing only 9 months of data for 2024, which means the 2024 data are not directly comparable to the other years. Author institutional affiliations by country South Africa: n=18; the United States: n=18; Kenya: n=8; Canada: n=6; India: n=5; Germany: n=3; Iran: n=3; Rwanda: n=3; Saudi Arabia: n=3; the United Kingdom: n=3; others: n=23 Countries with fewer than 2 papers are grouped under others, including Belgium, Brazil, Cameroon, Egypt, Eswatini, Ethiopia, France, Hungary, Indonesia, Italy, Lebanon, Lesotho, Morocco, Netherlands, New Zealand, Nigeria, Qatar, Senegal, Sierra Leone, Spain, Switzerland, Tanzania, and Uganda. Note that a single paper can have authors from multiple countries, so the total number of country affiliations exceeds the total number of papers reviewed. Language supported English: n=40; Arabic: n=8; Kiswahili: n=7; French: n=4; Zulu: n=4; Amharic: n=3; Hausa: n=3; Hindi: n=3; Northern Sotho: n=3; Xhosa: n=3; others: n=64 Languages supported by <3 technologies are grouped under others, which includes Afrikaans (n=2), Bengali (n=2), Chinese (n=2), Gujarati (n=2), Igbo (n=2), Indonesian (n=2), Japanese (n=2), Korean (n=2), Marathi (n=2), North Ndebele (n=2), Portuguese (n=2), Sinhala (n=2), Sotho (n=2), Spanish (n=2), Urdu (n=2), Assamese (n=1), Central Atlas Tamazight (n=1), Czech (n=1), Dutch (n=1), German (n=1), Iloko (n=1), Italian (n=1), Kannada (n=1), Kikuyu (n=1), Kinyarwanda (n=1), Luo (n=1), Malay (n=1), Malayalam (n=1), Nepali (n=1), Nyankole (n=1), Panjabi (n=1), Persian (n=1), Polish (n=1), Pushto (n=1), Russian (n=1), Shona (n=1), Somali (n=1), Swati (n=1), Tagalog (n=1), Tamil (n=1), Telugu (n=1), Thai (n=1), Tigrinya (n=1), Tsonga (n=1), Tswana (n=1), Turkish (n=1), Uighur (n=1), Venda (n=1), and Yoruba (n=1). Languages are identified using the ISO 639-2 code, and different dialectal variants of a language (eg, Arabic) are not distinguished. A single technology may support multiple languages. 
African countries and regions supported South Africa: n=25; Kenya: n=14; Nigeria: n=9; Rwanda: n=7; Egypt: n=5; Ethiopia: n=5; Uganda: n=5; Zimbabwe: n=4; Cameroon: n=3; Eritrea: n=3; Morocco: n=3; Somalia: n=3; Tunisia: n=3; others: n=55 Countries and regions supported by <3 technologies are grouped under others, which includes Algeria (n=2), Botswana (n=2), Democratic Republic of the Congo (n=2), Eswatini (n=2), Lesotho (n=2), Malawi (n=2), Mozambique (n=2), Namibia (n=2), Niger (n=2), Senegal (n=2), South Sudan (n=2), Sudan (n=2), Tanzania (n=2), Angola (n=1), Benin (n=1), Burkina Faso (n=1), Burundi (n=1), Cabo Verde (n=1), Central African Republic (n=1), Chad (n=1), Comoros (n=1), Congo (n=1), Côte d’Ivoire (n=1), Djibouti (n=1), Equatorial Guinea (n=1), Gabon (n=1), Gambia (n=1), Ghana (n=1), Guinea (n=1), Guinea-Bissau (n=1), Liberia (n=1), Libya (n=1), Madagascar (n=1), Mali (n=1), Mauritania (n=1), Mauritius (n=1), Sao Tome and Principe (n=1), Seychelles (n=1), Sierra Leone (n=1), Togo (n=1), Western Sahara (n=1), and Zambia (n=1). Countries and regions are identified using the ISO 3166 code. Approximately 48% (26/54) of the papers included in this review did not disclose their source of funding. Of the 28 (52%) papers that did report funding, some of them received funding from more than one source, with the vast majority of papers (26/28, 93%) supported by public entities. This included government grants, NGOs, and research councils. Only 2 (4%) papers were funded by industry actors . Geographically, with international funding sources determined by their headquarters’ location, most of the papers that disclosed funding were financially supported by institutions in North America (14/54, 26%) and Europe (9/54, 17%). Reflecting the global nature of these research contributions, funding was also sourced from other continents, including Africa and Asia, demonstrating the wide range of financial support for these studies. Notably, no funding was recorded from Oceania and South America. The data used to develop NLP technologies for public health in Africa is generally up-to-date. Among the 36 (67%) papers that reported the year of data collection, most studies (30/54, 56%) used data collected either in the same year or within 1 year before publication. Specifically, 13 (24%) papers used data collected in the same year, while 17 (31%) used data with a 1-year delay. In this review, for papers introducing a dataset, unless otherwise specified, assume the year of data collection is the same as the year of publication. For papers that introduce NLP applications or perform an analysis using a data set from other sources, the year of data collection is the year when the data was originally published. Most of the data (36/54, 67% papers) fall within the general domain, primarily produced and consumed by the general public, such as social media data. In addition, 15 (28%) studies covered clinical domains, including clinical notes, patient interactions, and other health care–specific communications, while 8 (15%) papers focused on the research domain, covering research articles and professional materials aimed at expert audiences. It should be noted that one study can cover multiple domains simultaneously, as seen in 5 (9%) studies. Regarding data modality, most of the data (53/54, 98% studies) used to develop these technologies were text-based, with minimal use of other modalities. 
Only 4 (7%) papers used audio or image data, highlighting a limited exploration of non–text-based data in NLP applications for public health in Africa. Needs and Availability Most of the reviewed papers focused on conversational assistants (17/54, 31%) and sentiment analysis (15/54, 28%). Additional applications included machine translation (3/54, 6%), thematic analysis (3/54, 6%), information extraction (3/54, 6%), and outbreak detection (2/54, 4%). Fewer papers addressed tasks such as infection detection, misinformation detection, disease prediction, optical character recognition, question-answering, hate speech detection, medical report generation, and speech recognition, with each of these applications covered by only 1 study. A smaller subset of studies focused on fundamental NLP challenges in the context of public health, such as syntax parsing (1/54, 2%), word embedding (1/54, 2%), and lexical processing (1/54, 2%), rather than user-facing applications. Most available NLP technologies for public health in Africa were designed to serve expert users, such as researchers (45/54, 83%), policy makers (38/54, 70%), and health care providers (30/54, 56%). Fewer than half of the systems were public-facing (25/54, 46%) and targeted toward equity-seeking groups (8/54, 15%). This focus on expert-driven systems suggests an opportunity to develop more public-facing NLP technologies that engage and empower communities to proactively manage their health. Mapping the currently available NLP technologies onto the WHO’s EPHF framework shows that 9 (75%) out of 12 EPHFs were addressed by existing NLP technologies in Africa, with EPHF 3 (ie, public health stewardship), EPHF 8 (ie, community engagement and social participation), and EPHF 12 (ie, access to and utilization of health products, supplies, equipment, and technologies) remaining unaddressed. Most studies predominantly focused on 4 EPHFs: EPHF 7 (ie, health promotion; 31/54, 57%), EPHF 11 (ie, public health research, evaluation, and knowledge; 25/54, 46%), EPHF 10 (ie, health service quality and equity; 24/54, 44%), and EPHF 1 (ie, public health surveillance and monitoring; 23/54, 43%). Work on other EPHFs remains relatively sparse, with only a handful of papers addressing them. When each paper was assigned one primary EPHF, only 6 EPHFs were the main focus of these studies, leaving 6 EPHFs unaddressed, including EPHF 2 (ie, public health emergency management), EPHF 3, EPHF 4 (ie, multisectoral planning, financing, and management for public health), EPHF 5 (ie, health protection), EPHF 8, and EPHF 12. When mapping the included NLP technologies to the United Nations’ SDGs, all the reviewed NLP technologies contributed to SDG 3 (good health and well-being). The interconnected nature of the SDGs means that contributing to one SDG often supports progress in others. For example, 15/54 (28%) studies contributed to SDG 10 (reduced inequality), 10/54 (19%) studies to SDG 9 (industry, innovation, and infrastructure), 6/54 (11%) studies to SDG 8 (decent work and economic growth), 5/54 (9%) studies to SDG 4 (quality education), 4/54 (7%) studies to SDG 5 (gender equality), 1 (2%) study to SDG 15 (life on land), and 1 (2%) study to SDG 16 (peace, justice, and strong institutions). Thus, even with a primary focus on health, these projects may have a far-reaching impact on other SDGs. When zooming in on the targets of SDG 3, currently available NLP technologies in Africa only cover 6 of the 13 specific targets.
Of the 4 means of implementation listed for SDG 3, available technologies engage 3 (ie, tobacco control, access to vaccines and medicines, and health financing). Prevalence and Distribution The availability of NLP technologies for public health in Africa is strongly influenced by the languages these technologies support. Most NLP technologies predominantly serve widely spoken high-resource languages ( ), such as English (40/54, 74%), Arabic (8/54, 15%), and French (4/54, 7%), reflecting their status as official languages in academic, governmental, and professional sectors across the continent. In contrast, support for indigenous African languages is significantly limited. While some widely spoken African languages, such as Kiswahili (7/54, 13%) and Zulu (4/54, 7%), are represented, many other indigenous languages remain underrepresented or entirely absent from these technologies. Overall, 59 languages were supported by the 54 studies included in this review, a number that falls far short of covering Africa’s linguistic diversity. The availability of NLP technologies for public health in Africa varies significantly across countries and regions. As shown in , South Africa is the primary target country for these technologies, with 25 (46%) out of 54 studies targeting this country, followed by Kenya (14/54, 26%) and Nigeria (9/54, 17%). In contrast, 29 African countries and regions, including Angola, Benin, and Burkina Faso, are supported by only 1 technology each, highlighting uneven distributions of NLP technologies across the continent. Our review reveals a geographic concentration of available NLP technologies in certain countries, especially South Africa, Kenya, and Nigeria, suggesting a need for future efforts to expand NLP technology development to underserved regions ( ). The results further highlight a major gap in linguistic inclusivity within the existing NLP technologies across the continent, where languages spoken in these better-supported regions receive more attention compared to those in other areas ( ). Deployment and Integration Among the 54 included studies, 4 (7%) were reviews of existing NLP technologies related to health chatbots , HIV prevention in Africa , HIV prevention in Malawi specifically , and chatbots for HIV prevention . These reviews do not introduce new NLP technologies but rather summarize findings from other research that has introduced new technologies and reported primary results. As such, the concept of deployment is less relevant to these review studies. Therefore, this subsection focuses on the 50 studies that directly introduce new NLP applications. Most NLP applications for public health in Africa are still in the early stages of development, with only 1 (2%) out of the 50 studies fully deployed and operational. This deployed system is a Facebook messenger chatbot designed to address vaccine hesitancy in Kenya and Nigeria, collecting real-time data on vaccine hesitancy trends from user interactions . Most studies (44/50, 88%) are in the design and prototyping phase, where they are evaluated only based on their technical performance in controlled, lab-based environments. Meanwhile, 5 (10%) studies have reached the validation stage, where their effectiveness has been tested in real-world settings through methods, such as expert reviews and user testing [ , , ]. Specifically, 2 (4%) studies involved health care professionals who reviewed system performance in practical scenarios. 
The remaining 3 (6%) studies conducted evaluations with small samples of target users to test the developed NLP technologies. However, these systems were accessible only to a limited number of users and have not yet achieved full deployment. Regarding accessibility (as defined in ), >half (29/50, 58%) of the NLP applications are publicly accessible, allowing general use without significant restrictions. Of these, 12 (24%) are open-source, enabling researchers and developers to build new NLP applications based on their published systems. By contrast, a significant number of systems are categorized as having limited access (18/50, 36%) or are closed access (3), likely due to the sensitive nature of health-related data, raising concerns around privacy and data security. In terms of platform support, most NLP technologies for public health in Africa are offered as tools and libraries (29/50, 58%), datasets (5/50, 10%), or web services (4/50, 8%), all of which require a certain level of technical expertise in computer science to exploit effectively. A smaller proportion of technologies are provided as mobile apps (11/50, 22%) and web-based applications (9/50, 18%), offering more user-friendly interfaces that can be accessed by a broader range of users, including public health practitioners and the general public. This distribution suggests an opportunity to develop NLP technologies with more accessible interfaces to promote wider adoption and usability of these technologies for public health across Africa. Of the 50 reviewed NLP technologies, 40 (80%) indicated an intent to integrate their solutions into existing public health systems, and 41 (82%) were designed to be interoperable with various health infrastructures. This reflects a clear recognition among researchers and developers of the importance of ensuring that these technologies function seamlessly within current health frameworks. However, despite this intent, only 1 (2%) study has reached the stage of deployment, highlighting the need to move these technologies from development into operational use. Scope and Public Health Impact Out of the 54 studies reviewed, 30 (56%) aimed to develop NLP applications or inform public policies to improve public health outcomes in Africa. Of these, 22 (41%) specifically focused on addressing public health challenges within African countries, directly targeting the health issues faced by local communities. The remaining 8 (15%) studies adopted a broader global health perspective. While they do not exclusively tailor their approaches to Africa, these studies aim to promote public health on a global scale, with intended outcomes that also benefit Africa. The other 24 (44%) studies were divided into 18 (33%) studies that advance NLP technologies for public health using African data (eg, social media, health records, or public health data) without directly targeting specific health challenges, and 6 (11%) studies that contribute to global health discussions, with Africa serving as a case study or example. In terms of evaluation, nearly all studies (51/54, 94%) reported technical performance using a variety of automatic evaluation metrics. Classification metrics were the most commonly used, such as accuracy (used in 22/54, 41% studies), precision (13/54, 24%), F1-score (15/54, 28%), and recall (11/54, 20%), making these the 4 most frequently applied automatic metrics.
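To make these four metrics concrete, the following is a minimal, self-contained sketch that computes them for a toy binary classification task (eg, flagging whether a post is health related), assuming scikit-learn; the labels and predictions are invented for illustration and are unrelated to any reviewed study.

```python
# Toy illustration of the four most commonly reported evaluation metrics.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# 1 = health-related post, 0 = not health-related (invented labels for illustration).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

print(f"accuracy:  {accuracy_score(y_true, y_pred):.2f}")   # share of correct predictions
print(f"precision: {precision_score(y_true, y_pred):.2f}")  # correct positives / predicted positives
print(f"recall:    {recall_score(y_true, y_pred):.2f}")     # correct positives / actual positives
print(f"f1-score:  {f1_score(y_true, y_pred):.2f}")         # harmonic mean of precision and recall
```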
However, because of the wide variation in evaluation approaches, direct comparisons between the studies were impractical and not attempted. In contrast, only 11 (20%) studies reported evaluation results based on user experiences, such as usability testing, user satisfaction surveys, or qualitative feedback from health care providers. Furthermore, only 8 (15%) studies attempted to evaluate these technologies using health-related measures. Among these, 2 (4%) studies confirmed a positive impact on public health outcomes, with NLP-based interventions shown to improve participants’ mood and increase vaccine intentions and willingness . Outlook and Ethical Consideration Among the 54 included papers, 20 (37%) provided recommendations for the future development of NLP technologies for public health in Africa. A thematic analysis of these recommendations identified 6 key areas for future research: addressing specific public health challenges with NLP (11/54, 20%), expanding data coverage for underrepresented languages (8/54, 15% studies), contextualizing solutions to local health needs (6/54, 11% studies), enhancing trust and ethical standards (5/54, 9% studies), integrating NLP technologies with existing health systems (5/54, 9% studies), and incorporating participatory design with domain expert input (3/54, 6% studies). In terms of ethical considerations, 46 (85%) out of 54 studies attempted to engage stakeholders during the study design and implementation, while 38 (70%) studies explicitly addressed data privacy compliance. Approximately half of the studies (26/54, 48%) involved the local community in their research. However, only 16 (30%) studies reported receiving explicit ethics approval from an independent review board, and 10 (19%) studies mentioned obtaining informed consent from human participants. It is important to note that not all types of studies involved human participants or required explicit ethics approval in advance. In addition to these considerations, 45 (83%) papers highlighted other ethical concerns, including bias and fairness (11/54, 20%), cultural relevance and appropriateness (11/54, 20%), avoiding miscommunication by NLP technologies (9/54, 17%), preventing misuse of NLP technologies (9/54, 17%), data sharing and accessibility (7/54, 13%), adherence to regulatory standards (2/54, 4%), data representativeness (1/54, 2%), and fair compensation for participants (1/54, 2%). Description of Gray Literature Our gray literature review covered two types of sources: (1) academic literature, including unpublished preprints and peer-reviewed articles not indexed in the 5 structured databases, and (2) nonacademic sources, such as online articles, blog posts, products from startups and established companies, initiatives from NGOs, and proceedings from events and conferences. Full results of the gray literature search are detailed in , with key findings highlighted below. Within the academic gray literature, we identified 11 relevant articles from the first 100 Google Scholar results, with 9 (9%) peer-reviewed articles and 2 (2%) preprints. These studies generally aligned with the patterns observed in the aforementioned structured database search. Each study involved researchers affiliated with at least one African institution, with contributions from South Africa (6/11, 55% studies) [ - ], Nigeria (4/11, 36%) [ - ], Guinea (1/11, 9%) , and Rwanda (1/11, 9%) . 
In addition, 4 (36%) studies involved collaborations with international researchers from institutions based in the United States (3/11, 27% studies), Canada (3/11, 27%), Germany (1/11, 9%), and Mexico (1/11, 9%). Funding was disclosed in 6 (55%) of the 11 studies, all supported by public entities. The primary NLP applications developed in these studies were conversational assistants (4/11, 36% studies) and sentiment analysis tools (3/11, 27% studies). These studies primarily supported EPHF 7 (ie, health promotion; 8/11, 73% studies). Regarding language coverage, nearly all studies (11/11, 100%) reported support for English. A smaller number addressed African languages, including Ndebele (2/11, 18% studies), Sotho (2/11, 18%), Kiswahili (2/11, 18%), Swati (2/11, 18%), Venda (2/11, 18%), Xhosa (2/11, 18%), and Zulu (2/11, 18%), with one study each for Afrikaans, Hausa, Kinyarwanda, Northern Sotho, Shona, Tsonga, and Tswana. For target countries, Nigeria and South Africa were the primary focus, each covered in 5 (45%) studies. Notably, none of the studies provided performance evaluations based on health-related measures or reported reaching the stage of actual deployment. Outside academia, commercial products and NGO-led initiatives have focused on creating practical NLP solutions aimed at real-world public health impact. On the basis of our search results, 4 NLP technologies were developed as commercial products by companies, and another 4 were created as part of initiatives led by NGOs. These projects were often in partnership with charitable organizations like the Bill and Melinda Gates Foundation, international bodies, such as the WHO, and industry partners like Google or Meta, frequently collaborating with telecom providers to reach populations with lower literacy levels and limited access to public health resources. The primary focus of these NLP technologies was on disseminating public health information through conversational assistants, with applications supporting EPHF 7 (ie, health promotion) and SDG 3 (good health and well-being). Most tools were designed in English with the limited inclusion of widely spoken African languages like Hausa, Kiswahili, and Zulu. In contrast to academic literature, NLP technologies from these nonacademic sources typically disclosed only limited details about their design and implementation. Furthermore, our review of events and conferences did not introduce additional evidence of NLP technologies tailored to African public health challenges. A lack of standardized protocols for reporting NLP technologies, such as established reporting standards or controlled vocabularies for indexing, may explain why no relevant NLP technologies were retrieved during our search, likely due to limited keyword overlap.
Principal Findings Research into NLP technologies for public health in Africa is an emerging field, with significant growth since 2019. Current studies primarily focus on 2 applications: conversational agents for public health information dissemination and sentiment analysis tools that track public health attitudes on social media. Most studies target high-resource languages like English, Arabic, and French, with limited support for widely spoken African languages, such as Kiswahili and Zulu, and no support for most of Africa’s >2000 languages. Most NLP applications remain in the prototype stage, with evaluations often limited to technical performance metrics in controlled settings. Only a handful of studies have validated their systems in real-world contexts, and just 1 has reached full deployment. Until now, most systems have been developed as technical NLP tools rather than targeted health interventions, with limited evaluation of their impact on public health outcomes through rigorous study designs and implementation research approaches. While current research highlights the potential of NLP to address public health needs in Africa, this potential remains largely unrealized in terms of measurable public health outcomes. The following discussion explores pathways for public health and NLP researchers to contribute to the development and deployment of NLP technologies toward achieving positive health impacts in Africa. In addition, we reviewed the strengths and limitations of our review approach, providing context for readers to critically evaluate the subsequent discussion. Bridging Technical NLP Performance With Health-Related Outcomes The review of 54 studies highlights the growing effort to leverage NLP technology for health improvement in Africa. However, it identifies a significant gap in evaluating real-world health outcomes or the behavioral antecedents of these outcomes. Most studies (51/54, 94%) emphasized technical performance, using metrics, such as accuracy, precision, F1-score, and recall.
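To make these four metrics concrete, the short sketch below computes them by hand for a hypothetical binary classifier, for example, a model that flags social media posts as vaccine-hesitant or not. The labels, predictions, and resulting scores are invented purely for illustration and are not taken from any study in this review.

```python
# Illustrative toy example: gold labels and predictions for a hypothetical
# classifier that flags posts as vaccine-hesitant (1) or not (0).
# These values are not drawn from any reviewed study.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # gold annotations
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]  # model predictions

# Confusion-matrix counts
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives

accuracy = (tp + tn) / len(y_true)                   # share of all posts classified correctly
precision = tp / (tp + fp)                           # of posts flagged hesitant, share truly hesitant
recall = tp / (tp + fn)                              # of truly hesitant posts, share that were flagged
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of precision and recall

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```

Scores of this kind describe classification behavior on a held-out test set; on their own, they say nothing about usability, uptake, or health impact.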
In comparison, only 11 (20%) studies incorporated user-centered evaluations, such as usability testing or health care provider feedback. While some studies [ , , ] assessed user outcomes like the accuracy of health communications and improvements in health care interactions, only 2 (4%) studies measured explicit health-related impacts. Specifically, 1 (2%) paper demonstrated improvements in participants’ mood through an automated intervention targeting maternal mental health in Kenya, while another paper showed increased vaccine willingness via a chatbot addressing individual concerns. These examples illustrate the potential of NLP interventions to influence public health, while their rarity highlights the need for more research focused on evaluating health impacts. An overreliance on technical NLP metrics limits our understanding of whether these technologies effectively address real-world health challenges. To ensure NLP solutions meet their intended public health goals, future research should incorporate tools to evaluate health-related measures and behavioral outcomes of NLP solutions alongside technical performance. Tools and frameworks already exist to guide the evaluation of health interventions, such as the WHO’s “Monitoring and Evaluating Digital Health Interventions” framework , which provides standardized guidelines for assessing the impact of digital health technologies on health outcomes and behaviors. Despite the availability of such resources, they remain underutilized in the evaluation of NLP technologies. To fully realize the potential of NLP for public health, it is essential that future studies adopt these established frameworks to rigorously measure both health outcomes and behavioral changes. Integrating these tools will strengthen evidence on the real-world effectiveness of NLP interventions and support more impactful, data-driven public health strategies. Deployment, Integration, and Cross-Sectoral Development For NLP technologies to be deployed in an impactful way, they must be integrated into African health systems and broader public health infrastructure, ensuring accessibility to diverse groups of users. The results of our review of the academic literature have shown the nascent nature of NLP deployment in Africa, with only 1 technology, a Facebook messenger chatbot collecting data on vaccine hesitancy , having reached full deployment. Other described technologies (40/50, 80%) were designed with the potential for integration into public health systems, and most apps under development are available without significant restrictions (ie, either open-source or publicly available). However, many of these apps require substantial expertise in computer science for installation and use, limiting their accessibility. For effective integration, these technologies need to be accessible to their intended users, such as health care workers, patients, and nonspecialists. Approximately 20 apps in this review were designed to be delivered via mobile- or web-based interfaces, increasing their potential usability. In contrast, industry-led commercial products and NGO-driven initiatives have generally progressed further, often yielding immediate, tangible impacts for African communities. These initiatives commonly partner with organizations like the Bill and Melinda Gates Foundation , the WHO, and companies , such as Google or Meta, and frequently collaborate with telecom providers to enhance accessibility for populations with limited resources and lower literacy levels. 
Unlike academic studies, which typically prioritize proof-of-concept and feasibility testing, these projects aim for direct public health impact, real-world validation, and, at times, profitability. However, as highlighted in this review, nonacademic projects tend to focus on narrower applications, primarily conversational assistants, offer limited language support, serve smaller populations, and address a more focused range of public health challenges compared to the diverse objectives often seen in academic research. Moving forward, bridging the gap between NLP research and accessible, real-world applications will be essential for delivering positive public health impacts. The narrower focus of nonacademic projects highlights a need for extended collaboration between academic and nonacademic researchers, combining priorities, expertise, and resources to enhance NLP’s potential in addressing Africa’s public health needs. Cross-sectoral partnerships offer a promising model for advancing academic NLP technologies from proof-of-concept to impactful public health solutions across the continent. Toward Needs-Based Approaches As we move toward the SDGs’ 2030 deadline, it is sobering to note that “current progress falls far short of what is required to meet the SDGs” . Within this, the world is off track to achieve SDG 3 . The SDG dashboard map offers a country-by-country breakdown of each of the SDG 3 indicators . Progress toward SDG 3 in all but one mainland African country (ie, Tunisia) is described as “major challenges remain” (ie, the most concerning category), while Tunisia and the island nations of Cabo Verde, Mauritius, and the Seychelles are in the less severe category of “significant challenges remain.” In terms of progress, no African countries are currently considered to be “on track” or “decreasing” their progress; instead, they are all described as having major challenges in their progress toward the SDG 3 (ie, good health and well-being) targets . In response to this somewhat bleak outlook, the United Nations prescribes that “changing course requires prioritizing the achievement of universal health coverage, strengthening health systems, investing in disease prevention and treatment, and addressing disparities in access to care and services, especially for vulnerable populations” . Furthermore, it should be recognized that poverty and inequality constrain the possibilities for health gains , highlighting the need for a paradigm focused not only on treatment but on prevention, equity, and intersectional, multisectoral approaches to health promotion. There is also a need to address technological and infrastructural limitations which still exist. Globally, a third of people remain offline—that is 30% of men and 35% of women . In 2015, 15.6% of people in sub-Saharan Africa had internet access, rising to 37% by 2023 ( ibid ). Furthermore, a study of 15 countries (of which 7 were African countries) demonstrated how access to this technology often varies, with lower phone ownership in rural compared to urban areas, and varied ownership levels between poorer and wealthier income groups . A review of the successes and limitations of telemedicine deployment in Africa during the COVID-19 pandemic demonstrates what this means in practice. 
The study found the following technologies were used “videos, telephones, smart wearable digital devices, messaging mobile apps, virtual programs, online health education modules, SMSs, live audio–visual communication, and other digital platforms.” Among these, phones were the most widely used. Some of the difficulties faced included an array of digital challenges ranging from low connectivity and high data costs to the inaccessibility of smartphones, nondelivery of messages, and insufficient digital skills. This was in a broader context characterized by a lack of telemedicine frameworks and policies to support a roll out; some patients and health care personnel preferred not to use these technologies, and there was an underlying shortage of health care personnel . For NLP technologies to address real-world health challenges, they should be viewed not just as technical solutions but as tools shaped by and responsive to the local context. Developing effective NLP applications will require a community-centered approach , grounded in local needs, ethical principles, infrastructures, and capacities to ensure these tools are truly accessible and impactful. Engaging people in research, including coresearchers, can facilitate a closer understanding of local needs and suitable ways to address these . The Need for Culturally and Linguistically Inclusive NLP Applications in Africa Africa has exceptional linguistic diversity, with >2000 languages spoken across the continent . This includes widely used official languages, such as Arabic, English, and French, alongside popular indigenous languages, such as Zulu, as well as a large majority of underrepresented languages spoken by smaller communities. Kiswahili spans several African countries, uniting East Africa as the shared language of politics, trade, music, literary tradition, and religion (both Islam and Christianity) . Nigeria is the most linguistically diverse, with >500 indigenous languages . While official languages tend to have relatively sufficient digital data to support NLP development, most indigenous African languages fall into the category of being low-resource, extremely low-resource, or even no-resource, often lacking any digital data essential for NLP technologies. The scarcity of digital language resources forms significant performance disparities in NLP systems . These disparities, including higher error rates for underrepresented languages (ie, error rate disparities ), contribute to broader inequities, limiting access to advancements in NLP technology and impeding speakers of underrepresented languages from fully benefiting from progress in NLP technology. To develop inclusive NLP applications that equitably serve African populations, strategically expanding digital datasets for underserved languages is essential. This is particularly the case for languages with limited online representation . Concurrently, advancements in multilingual NLP and cross-lingual transfer learning provide promising opportunities [ - ]. These approaches allow neural language models, the backbones of most modern NLP applications, to leverage knowledge from high-resource languages to perform well in low-resource contexts, even with minimal in-language data. By combining efforts in data collection with advancements in NLP research, these technologies can better support Africa’s linguistic diversity, contributing to public health solutions that promote, rather than hinder, health equity. 
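As a rough illustration of the cross-lingual transfer idea described above, the sketch below applies an off-the-shelf multilingual model, through the Hugging Face transformers zero-shot classification pipeline, to a Swahili sentence using English candidate labels. The model checkpoint, example sentence, and label set are placeholders chosen for illustration; this does not describe how any of the reviewed systems works, and a deployed tool would still require in-language evaluation and locally relevant labels.

```python
# Illustrative sketch only: zero-shot labeling of a Swahili health-related sentence
# with a multilingual NLI model. The checkpoint name, sentence, and labels are
# placeholders, not systems or data from this review.
from transformers import pipeline  # pip install transformers

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",  # assumed publicly available multilingual checkpoint
)

text = "Chanjo ya polio inapatikana kliniki wiki hii."  # "The polio vaccine is available at the clinic this week."
labels = ["vaccination", "malaria", "maternal health"]

result = classifier(text, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 2))  # top label and its score
```

In practice, transfer of this kind degrades for languages and dialects that are poorly represented in pretraining data, which is precisely the gap that targeted data collection for underserved African languages is meant to close.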
In addition to linguistic inclusivity, cultural relevance is essential for NLP technologies to meet the diverse needs of African communities effectively. Distinct cultural practices, health beliefs, and communication styles influence how groups perceive and interact with technology. Embedding cultural practices into the design and implementation of NLP technologies is essential for ensuring their relevance and successful integration into Africa’s health systems and institutions. For example, a health education dialogue system advising users to visit a general practitioner , a term commonly associated with primary care doctors in the United Kingdom’s National Health Service, may be irrelevant or confusing to many users in Africa. Furthermore, Loveys et al highlighted how expressions of depression in text, such as the ratio of positive to negative emotions, vary across cultures. Similarly, the extensive use of traditional medicine in many African countries highlights the need for NLP technologies to incorporate culturally recognized terms and references to these practices. Another good example is the recent work by Olatunji et al , which introduces a geo-culturally diverse dataset of clinically diverse questions and answers annotated by health experts. This dataset enables the development of question-answering systems serving African patients. Respecting and integrating these cultural nuances into NLP design and implementation can enhance trust, ensure alignment with the expectations and needs of each community, and ultimately promote equitable public health outcomes. Integrative Development for NLP in Public Health Our analysis shows that NLP technologies focusing on health in Africa intersect most frequently with SDG 10 (ie, reduced inequality) and SDG 9 (ie, industry, innovation, and infrastructure). This study did not identify any intersections with other SDGs: even those that are particularly relevant to public health, such as SDG 1 (ie, end poverty), SDG 2 (ie, zero hunger), SDG 6 (ie, clean water and sanitation), SDG 7 (ie, affordable and clean energy), SDG 11 (ie, sustainable cities and communities), SDG 12 (ie, responsible consumption and production), SDG 13 (ie, climate action), and SDG 17 (ie, partnership for the goals), are absent. This highlights an opportunity for more cross-cutting intersections between NLP applications for health and broader sustainable economic development efforts, as well as the importance of a more integrated approach to achieving the SDGs .
These underaddressed EPHFs and SDG 3 targets play a fundamental role in building resilient health systems, especially in low-resource settings. Therefore, there is a need for more balanced research efforts to ensure all aspects of public health are adequately supported. Developing NLP technologies to address these underresearched EPHFs and SDG 3 targets requires a deep understanding of local contexts and the integration of NLP technologies within them. For instance, addressing EPHF 12 (ie, access to health products) requires integrating NLP technologies into existing logistics and supply chain systems, which often vary significantly between countries. Such highly specialized and localized contexts introduce additional challenges and requirements for system development. Future studies should aim to bridge these gaps by aligning NLP development with more integrative, cross-disciplinary collaborative approaches that promote system-wide impact. By embedding NLP technologies within broader health system goals and leveraging collaborative input across fields, these solutions can become more effective, resilient, and responsive to public health challenges. Such an approach will help guide interventions and inform policy development, ensuring that public health improvements are enduring and responsive to Africa’s evolving health needs. Strengths and Limitations of the Review This review has several strengths. We developed and followed a protocol guided by the PRISMA-P (Preferred Reporting Items for Systematic Review and Meta-Analyses—Protocols) guidelines , ensuring a transparent and systematic approach throughout the study. Our literature search was comprehensive and multidisciplinary, integrating both academic and gray literature sources to capture a broad range of perspectives and to provide a current reflection of the scope of research. The inclusion of gray literature, such as reports from industry and NGOs, enabled a wider view of NLP applications in public health that extends beyond the academic literature. Screening and data extraction were conducted following predefined guidelines created by a team of domain experts, enhancing both the consistency and relevance of the data collected. The extracted data is publicly available in a machine-readable format to facilitate future research ( ). However, there are some limitations to consider. Our review relied primarily on English search terms, which may have inadvertently excluded studies published in other languages without English annotations. Although we included 5 representative databases spanning public health, NLP, computing, and engineering, some relevant studies may have been missed because of the highly interdisciplinary nature of the field. In addition, NLP research is frequently disseminated as preprints, blog posts (eg, OpenAI’s introduction of ChatGPT), and in peer-reviewed machine learning conference proceedings (such as International Conference on Learning Representations and Conference on Neural Information Processing Systems) not indexed by our selected databases. Consequently, certain emerging studies may not have been captured, especially given the rapid development of NLP applications in public health. However, the results of our gray literature search indicate that most academic papers, including preprints and conference proceedings, have been covered in this review. 
Although this review included a broad range of gray literature, the search for nonacademic sources was far from exhaustive, and some relevant NLP technologies may have been overlooked. The primary focus of this review was on peer-reviewed academic literature, with gray literature providing supplementary insights. This work could be further expanded through a series of expert interviews, such as with representatives from the WHO or the SDG by 2030 committees, to interrogate the factors shaping the development, or lack thereof, of NLP applications for public health in Africa . In addition, the heterogeneity of NLP methodologies, applications, and evaluation metrics hindered our ability to conduct a formal meta-analysis, resulting in a reliance on narrative synthesis. Despite efforts to maintain consistency during data extraction, inconsistencies in reporting approaches for included studies presented challenges, particularly in evaluating public health focus and outcomes, which impacts comparability across different technologies. These challenges reveal a broader misalignment between the rigorous methodological standards of systematic reviews and the agile, fast-evolving, and highly interdisciplinary nature of NLP research. Future research should explore methodological adaptations that better align with the field of NLP, especially for applications in health contexts. Conclusions The application of NLP technologies to public health in Africa is a promising and rapidly evolving field, with the potential to enhance health care accessibility, equity, and efficiency across the continent. However, significant gaps persist in real-world deployment, language inclusivity, and the rigorous evaluation of health outcomes. The identification of, and responses to, such gaps would be greatly enabled by the establishment of reporting standards for NLP technologies. Future research should adopt a needs-based and cross-sectoral approach, engaging expertise from diverse fields, including the expertise of local communities on their own needs and possible solutions, and using existing frameworks for evaluating their public health impacts and outcomes. This approach will help build a deeper understanding of needs and support the tailored design of NLP technologies to effectively address public health challenges where this technology can be useful and is wanted. Furthermore, qualitative research, such as expert interviews, can contribute to better understand the dynamics and demand for progress in this area. By bridging existing gaps in meaningful local engagement, NLP research can better support resilient, culturally relevant, and equitable public health systems in Africa.
Until now, most systems have been developed as technical NLP tools rather than targeted health interventions, with limited evaluation of their impact on public health outcomes through rigorous study designs and implementation research approaches. While current research highlights the potential of NLP to address public health needs in Africa, this potential remains largely unrealized in terms of measurable public health outcomes. The following discussion explores pathways for public health and NLP researchers to contribute to the development and deployment of NLP technologies toward achieving positive health impacts in Africa. In addition, we reviewed the strengths and limitations of our review approach, providing context for readers to critically evaluate the subsequent discussion. The review of 54 studies highlights the growing effort to leverage NLP technology for health improvement in Africa. However, it identifies a significant gap in evaluating real-world health outcomes or the behavioral antecedents of these outcomes. Most studies (51/54, 94%) emphasized technical performance, using metrics, such as accuracy, precision, F 1 -score, and recall. In comparison, only 11 (20%) studies incorporated user-centered evaluations, such as usability testing or health care provider feedback. While some studies [ , , ] assessed user outcomes like the accuracy of health communications and improvements in health care interactions, only 2 (4%) studies measured explicit health-related impacts. Specifically, 1 (2%) paper demonstrated improvements in participants’ mood through an automated intervention targeting maternal mental health in Kenya, while another paper showed increased vaccine willingness via a chatbot addressing individual concerns. These examples illustrate the potential of NLP interventions to influence public health, while their rarity highlights the need for more research focused on evaluating health impacts. An overreliance on technical NLP metrics limits our understanding of whether these technologies effectively address real-world health challenges. To ensure NLP solutions meet their intended public health goals, future research should incorporate tools to evaluate health-related measures and behavioral outcomes of NLP solutions alongside technical performance. Tools and frameworks already exist to guide the evaluation of health interventions, such as the WHO’s “Monitoring and Evaluating Digital Health Interventions” framework , which provides standardized guidelines for assessing the impact of digital health technologies on health outcomes and behaviors. Despite the availability of such resources, they remain underutilized in the evaluation of NLP technologies. To fully realize the potential of NLP for public health, it is essential that future studies adopt these established frameworks to rigorously measure both health outcomes and behavioral changes. Integrating these tools will strengthen evidence on the real-world effectiveness of NLP interventions and support more impactful, data-driven public health strategies. For NLP technologies to be deployed in an impactful way, they must be integrated into African health systems and broader public health infrastructure, ensuring accessibility to diverse groups of users. The results of our review of the academic literature have shown the nascent nature of NLP deployment in Africa, with only 1 technology, a Facebook messenger chatbot collecting data on vaccine hesitancy , having reached full deployment. 
Other described technologies (40/50, 80%) were designed with the potential for integration into public health systems, and most apps under development are available without significant restrictions (ie, either open-source or publicly available). However, many of these apps require substantial expertise in computer science for installation and use, limiting their accessibility. For effective integration, these technologies need to be accessible to their intended users, such as health care workers, patients, and nonspecialists. Approximately 20 apps in this review were designed to be delivered via mobile- or web-based interfaces, increasing their potential usability. In contrast, industry-led commercial products and NGO-driven initiatives have generally progressed further, often yielding immediate, tangible impacts for African communities. These initiatives commonly partner with organizations like the Bill and Melinda Gates Foundation , the WHO, and companies , such as Google or Meta, and frequently collaborate with telecom providers to enhance accessibility for populations with limited resources and lower literacy levels. Unlike academic studies, which typically prioritize proof-of-concept and feasibility testing, these projects aim for direct public health impact, real-world validation, and, at times, profitability. However, as highlighted in this review, nonacademic projects tend to focus on narrower applications, primarily conversational assistants, offer limited language support, serve smaller populations, and address a more focused range of public health challenges compared to the diverse objectives often seen in academic research. Moving forward, bridging the gap between NLP research and accessible, real-world applications will be essential for delivering positive public health impacts. The narrower focus of nonacademic projects highlights a need for extended collaboration between academic and nonacademic researchers, combining priorities, expertise, and resources to enhance NLP’s potential in addressing Africa’s public health needs. Cross-sectoral partnerships offer a promising model for advancing academic NLP technologies from proof-of-concept to impactful public health solutions across the continent. As we move toward the SDGs’ 2030 deadline, it is sobering to note that “current progress falls far short of what is required to meet the SDGs” . Within this, the world is off track to achieve SDG 3 . The SDG dashboard map offers a country-by-country breakdown of each of the SDG 3 indicators . Progress toward SDG 3 in all but one mainland African country (ie, Tunisia) is described as “major challenges remain” (ie, the most concerning category), while Tunisia and the island nations of Cabo Verde, Mauritius, and the Seychelles are in the less severe category of “significant challenges remain.” In terms of progress, no African countries are currently considered to be “on track” or “decreasing” their progress; instead, they are all described as having major challenges in their progress toward the SDG 3 (ie, good health and well-being) targets . In response to this somewhat bleak outlook, the United Nations prescribes that “changing course requires prioritizing the achievement of universal health coverage, strengthening health systems, investing in disease prevention and treatment, and addressing disparities in access to care and services, especially for vulnerable populations” . 
Furthermore, it should be recognized that poverty and inequality constrain the possibilities for health gains , highlighting the need for a paradigm focused not only on treatment but on prevention, equity, and intersectional, multisectoral approaches to health promotion. There is also a need to address technological and infrastructural limitations which still exist. Globally, a third of people remain offline—that is 30% of men and 35% of women . In 2015, 15.6% of people in sub-Saharan Africa had internet access, rising to 37% by 2023 ( ibid ). Furthermore, a study of 15 countries (of which 7 were African countries) demonstrated how access to this technology often varies, with lower phone ownership in rural compared to urban areas, and varied ownership levels between poorer and wealthier income groups . A review of the successes and limitations of telemedicine deployment in Africa during the COVID-19 pandemic demonstrates what this means in practice. The study found the following technologies were used “videos, telephones, smart wearable digital devices, messaging mobile apps, virtual programs, online health education modules, SMSs, live audio–visual communication, and other digital platforms.” Among these, phones were the most widely used. Some of the difficulties faced included an array of digital challenges ranging from low connectivity and high data costs to the inaccessibility of smartphones, nondelivery of messages, and insufficient digital skills. This was in a broader context characterized by a lack of telemedicine frameworks and policies to support a roll out; some patients and health care personnel preferred not to use these technologies, and there was an underlying shortage of health care personnel . For NLP technologies to address real-world health challenges, they should be viewed not just as technical solutions but as tools shaped by and responsive to the local context. Developing effective NLP applications will require a community-centered approach , grounded in local needs, ethical principles, infrastructures, and capacities to ensure these tools are truly accessible and impactful. Engaging people in research, including coresearchers, can facilitate a closer understanding of local needs and suitable ways to address these . Africa has exceptional linguistic diversity, with >2000 languages spoken across the continent . This includes widely used official languages, such as Arabic, English, and French, alongside popular indigenous languages, such as Zulu, as well as a large majority of underrepresented languages spoken by smaller communities. Kiswahili spans several African countries, uniting East Africa as the shared language of politics, trade, music, literary tradition, and religion (both Islam and Christianity) . Nigeria is the most linguistically diverse, with >500 indigenous languages . While official languages tend to have relatively sufficient digital data to support NLP development, most indigenous African languages fall into the category of being low-resource, extremely low-resource, or even no-resource, often lacking any digital data essential for NLP technologies. The scarcity of digital language resources forms significant performance disparities in NLP systems . These disparities, including higher error rates for underrepresented languages (ie, error rate disparities ), contribute to broader inequities, limiting access to advancements in NLP technology and impeding speakers of underrepresented languages from fully benefiting from progress in NLP technology. 
To develop inclusive NLP applications that equitably serve African populations, strategically expanding digital datasets for underserved languages is essential. This is particularly the case for languages with limited online representation. Concurrently, advancements in multilingual NLP and cross-lingual transfer learning provide promising opportunities [ - ]. These approaches allow neural language models, the backbones of most modern NLP applications, to leverage knowledge from high-resource languages to perform well in low-resource contexts, even with minimal in-language data. By combining efforts in data collection with advancements in NLP research, these technologies can better support Africa’s linguistic diversity, contributing to public health solutions that promote, rather than hinder, health equity. In addition to linguistic inclusivity, cultural relevance is essential for NLP technologies to meet the diverse needs of African communities effectively. Distinct cultural practices, health beliefs, and communication styles influence how groups perceive and interact with technology. Embedding cultural practices into the design and implementation of NLP technologies is essential for ensuring their relevance and successful integration into Africa’s health systems and institutions. For example, a health education dialogue system advising users to visit a general practitioner, a term commonly associated with primary care doctors in the United Kingdom’s National Health Service, may be irrelevant or confusing to many users in Africa. Furthermore, Loveys et al highlighted how expressions of depression in text, such as the ratio of positive to negative emotions, vary across cultures. Similarly, the extensive use of traditional medicine in many African countries highlights the need for NLP technologies to incorporate culturally recognized terms and references to these practices. Another good example is the recent work by Olatunji et al, which introduces a geo-culturally diverse dataset of clinically diverse questions and answers annotated by health experts. This dataset enables the development of question-answering systems serving African patients. Respecting and integrating these cultural nuances into NLP design and implementation can enhance trust, ensure alignment with the expectations and needs of each community, and ultimately promote equitable public health outcomes. Our analysis shows that NLP technologies focusing on health in Africa intersect most frequently with SDG 10 (ie, reduced inequality) and SDG 9 (ie, industry, innovation, and infrastructure). This study did not identify any intersections with other SDGs; even goals that are particularly relevant to public health, such as SDG 1 (ie, end poverty), SDG 2 (ie, zero hunger), SDG 6 (ie, clean water and sanitation), SDG 7 (ie, affordable and clean energy), SDG 11 (ie, sustainable cities and communities), SDG 12 (ie, responsible consumption and production), SDG 13 (ie, climate action), and SDG 17 (ie, partnership for the goals), are absent. This highlights an opportunity for more cross-cutting intersections between NLP applications for health and broader sustainable economic development efforts, as well as the importance of a more integrated approach to achieving the SDGs.
The current coverage of the SDG 3 targets and means of implementation by existing NLP technologies ( ) highlights the need for more investment in the WHO framework convention on tobacco control (SDG 3.a), substance abuse (SDG 3.5), road traffic (SDG 3.6), and environmental health (SDG 3.9) for sustained and long-term health impacts. While the goals of ending epidemics (SDG 3.3), reducing maternal mortality (SDG 3.1), and achieving universal health coverage (SDG 3.8) are strongly represented in the literature, there remains significant potential to invest more in cross-cutting activities that have long-term impacts on health systems. The limited attention given to 6 key EPHFs (ie, highlighted by the orange line in ) also highlights a significant gap in the research landscape, with critical public health functions, such as emergency management, stewardship, and multisectoral planning being overlooked. These underaddressed EPHFs and SDG 3 targets play a fundamental role in building resilient health systems, especially in low-resource settings. Therefore, there is a need for more balanced research efforts to ensure all aspects of public health are adequately supported. Developing NLP technologies to address these underresearched EPHFs and SDG 3 targets requires a deep understanding of local contexts and the integration of NLP technologies within them. For instance, addressing EPHF 12 (ie, access to health products) requires integrating NLP technologies into existing logistics and supply chain systems, which often vary significantly between countries. Such highly specialized and localized contexts introduce additional challenges and requirements for system development. Future studies should aim to bridge these gaps by aligning NLP development with more integrative, cross-disciplinary collaborative approaches that promote system-wide impact. By embedding NLP technologies within broader health system goals and leveraging collaborative input across fields, these solutions can become more effective, resilient, and responsive to public health challenges. Such an approach will help guide interventions and inform policy development, ensuring that public health improvements are enduring and responsive to Africa’s evolving health needs. This review has several strengths. We developed and followed a protocol guided by the PRISMA-P (Preferred Reporting Items for Systematic Review and Meta-Analyses—Protocols) guidelines , ensuring a transparent and systematic approach throughout the study. Our literature search was comprehensive and multidisciplinary, integrating both academic and gray literature sources to capture a broad range of perspectives and to provide a current reflection of the scope of research. The inclusion of gray literature, such as reports from industry and NGOs, enabled a wider view of NLP applications in public health that extends beyond the academic literature. Screening and data extraction were conducted following predefined guidelines created by a team of domain experts, enhancing both the consistency and relevance of the data collected. The extracted data is publicly available in a machine-readable format to facilitate future research ( ). However, there are some limitations to consider. Our review relied primarily on English search terms, which may have inadvertently excluded studies published in other languages without English annotations. 
Although we included 5 representative databases spanning public health, NLP, computing, and engineering, some relevant studies may have been missed because of the highly interdisciplinary nature of the field. In addition, NLP research is frequently disseminated as preprints, blog posts (eg, OpenAI’s introduction of ChatGPT), and in peer-reviewed machine learning conference proceedings (such as International Conference on Learning Representations and Conference on Neural Information Processing Systems) not indexed by our selected databases. Consequently, certain emerging studies may not have been captured, especially given the rapid development of NLP applications in public health. However, the results of our gray literature search indicate that most academic papers, including preprints and conference proceedings, have been covered in this review. Although this review included a broad range of gray literature, the search for nonacademic sources was far from exhaustive, and some relevant NLP technologies may have been overlooked. The primary focus of this review was on peer-reviewed academic literature, with gray literature providing supplementary insights. This work could be further expanded through a series of expert interviews, such as with representatives from the WHO or the SDG by 2030 committees, to interrogate the factors shaping the development, or lack thereof, of NLP applications for public health in Africa . In addition, the heterogeneity of NLP methodologies, applications, and evaluation metrics hindered our ability to conduct a formal meta-analysis, resulting in a reliance on narrative synthesis. Despite efforts to maintain consistency during data extraction, inconsistencies in reporting approaches for included studies presented challenges, particularly in evaluating public health focus and outcomes, which impacts comparability across different technologies. These challenges reveal a broader misalignment between the rigorous methodological standards of systematic reviews and the agile, fast-evolving, and highly interdisciplinary nature of NLP research. Future research should explore methodological adaptations that better align with the field of NLP, especially for applications in health contexts. The application of NLP technologies to public health in Africa is a promising and rapidly evolving field, with the potential to enhance health care accessibility, equity, and efficiency across the continent. However, significant gaps persist in real-world deployment, language inclusivity, and the rigorous evaluation of health outcomes. The identification of, and responses to, such gaps would be greatly enabled by the establishment of reporting standards for NLP technologies. Future research should adopt a needs-based and cross-sectoral approach, engaging expertise from diverse fields, including the expertise of local communities on their own needs and possible solutions, and using existing frameworks for evaluating their public health impacts and outcomes. This approach will help build a deeper understanding of needs and support the tailored design of NLP technologies to effectively address public health challenges where this technology can be useful and is wanted. Furthermore, qualitative research, such as expert interviews, can contribute to better understand the dynamics and demand for progress in this area. By bridging existing gaps in meaningful local engagement, NLP research can better support resilient, culturally relevant, and equitable public health systems in Africa.
Individual differences in human frequency-following response predict pitch labeling ability
c1022f5b-40b6-461d-9b34-b5d3d86ad77c
8275664
Physiology[mh]
Research on auditory object perception typically focuses on the cortical networks that organize the recognition process. Whether conceived of as a dual pathway or focused on pattern classification , the theoretic framing is based on an ascending auditory recognition system in which frequency specific encoding in primary auditory cortex from the eighth nerve is increasingly refined in temporal cortex for abstract sound category classification and recognition. Much of the research on cortical auditory processing suggests that the site of auditory long-term memory and thus the factors that might influence representation and recognition reside in a cortical network . This suggests that while subcortical mechanisms may be important in the ascending auditory pathway, given that these mechanisms operate below cortical memory formation and storage, they are involved in neurally-encoded auditory signal refinement and transmission but not specifically conditioned by experience. However, research by Kraus and colleagues has suggested a very different view of the functional role of the subcortical ascending auditory system in perception. For example, their research has shown that musical expertise modifies the auditory coding of pitch in a way that benefits learning tone language patterns . In this research, group differences in musical experience are related to the frequency-following response (FFR) for speech stimuli as well as music and thus have generalized beyond the specific context of experience. Moreover, they argue that the group difference in the auditory brainstem response (ABR) due to musical training predicts how the groups learn. While it is unclear if there is descending cortical control over the brainstem response that sharpens it, or whether there is experiential tuning of the FFR from the bottom-up, it is important that by some mechanism, the ascending auditory pathway is not just a passive signal transmission line, but it is changed in processing by experience. Indeed, there is now substantial research showing that experience can alter encoding in the FFR substantially – , even after a relatively short period of training . However, it is still not clear whether the observed experience-based changes in the FFR are reflected in behavior. Certainly, if auditory encoding increases the fidelity of the neural representation of frequency, frequency-based auditory performance should improve. Musacchia et al. observed that neural responses attributed to the brainstem, including the FFR, correlated with scores in certain musical skill tasks (e.g. timbre discrimination). Moreover, Marmel et al . found that aspects of the FFR predict the ability to discriminate between pitches in a forced-choice task. Coffey et al. found that individual differences in the FFR relate to pitch perception for tones with a missing fundamental frequency. Carcagno and Plack found FFR changes following training in a pitch discrimination task, but the observed changes in FFR strength were not specific to stimuli that shared relevant characteristics with the trained stimuli, and correlations between FFR strength and performance metrics were nonsignificant. While these studies support the notion that FFR features seem to relate to individual differences in perceptual acuity, the extent to which plasticity in early auditory structures supports cognitive abilities that are critical to behavior, such as categorization, remains an open question. 
Absolute pitch (AP) or “perfect pitch” is the relatively rare ability to label a musical note without the aid of a reference note and can provide a model system for investigating individual differences in the relationship between auditory encoding and human performance. Given that the spectral structure of the FFR suggests that pitch information is successfully transferred from the cochlea to the central nervous system in all listeners, it may be surprising that most humans are unable to easily utilize that information for the categorization of isolated notes. In contrast, relative pitch perception (categorizing notes in relation to other notes) is the norm among musicians. Absolute pitch possessors’ tuning standards can even be shifted after listening to “detuned” music that maintains relative pitch cues. The presumed rarity of AP should be striking, as it is comparable to only being able to classify colors by their relationship to other colors and not with consistent labels such as “blue.” Absolute pitch has often been used as a model system for understanding the interplay between genetic and experiential factors in the development of stable cognitive-perceptual skills—this is a largely unexplored parallel to the way in which the scalp-recorded FFR has been used to investigate the role of experience in shaping auditory encoding, something previously thought to be non-plastic. It could be the case that features of spectral encoding in the FFR may vary between listeners who perceive the pitch of notes absolutely rather than in reference to other notes, supporting the different priorities of categorical processes downstream. Given that AP represents a distinct cognitive skill, the ability to categorize notes, it provides an excellent window into the interplay between low-level encoding, reflected by the FFR, and high-level perceptual categorization. While AP has traditionally been construed as a dichotomous ability, in which subjects either have or do not have AP, recent evidence has suggested that AP ability exists along a spectrum, where AP ability is best described as a continuously distributed variable. While there is sizable variance in pitch labelling ability in the general population, variables that predict continuous variation in absolute pitch perception ability are largely unknown and generally viewed as a consequence of cognitive factors rather than auditory ability. The aim of the present study, then, is to investigate the extent to which individual differences in the FFR, reflecting low-level neural auditory encoding of sounds, predict variation in pitch labelling ability, a higher-level cognitive process. Behavioral results There was a reasonable spread of pitch labelling performance for sine tones for both self-reported AP possessors (M = 0.554, SD = 0.163) and other musicians (M = 0.212, SD = 0.0960), as well as for piano tones (self-reported AP possessors: M = 0.984, SD = 0.0165; other musicians: M = 0.294, SD = 0.199). See Fig. A for a visualization of how the scores relate to one another. The distribution of average pitch labeling ability was approximately M = 0.769, SD = 0.0814 for self-reported AP possessors and M = 0.253, SD = 0.134 for other musicians.
Performance on the pitch adjustment task (which measures auditory working memory precision by requiring participants to hold a target note in mind for some period of time before manually adjusting a final tone to match it) was M = 2.978, SD = 2.507 for self-reported AP possessors and M = 3.311, SD = 0.822 for other musicians (see Fig. B). Finally, performance on the just-noticeable difference (JND) task (which assesses one’s ability to behaviorally discriminate between two tones of varying frequency) was M = 0.849, SD = 0.0715 for self-reported AP possessors and M = 0.782, SD = 0.0918 for other musicians (see Fig. C). While previous research has found a positive relationship between tonal language experience and AP ability, we did not find such a relationship here for either the AP piano tone conservative measure (t(11.1) = 0.55, p = 0.59) or the AP sine tone conservative measure (t(10.5) = 0.74, p = 0.48). We also found no significant difference between subjects who identified their primary instrument as fixed-pitch and those who did not, on either the AP piano tone conservative measure (t(9.7) = −0.50, p = 0.63) or the AP sine tone conservative measure (t(9.3) = −0.66, p = 0.53). In other words, effects reported in past research—such as that lessons on piano or other fixed-pitch instruments enhance AP abilities or that personal musical histories are reflected in individual performance on absolute pitch recognition tasks—are not significantly present in our sample. Electrophysiology results and predictive modeling The FFR to the piano tone (r = 0.26, t(999) = 31.49, p = 9.18e-152) and the FFR to the unfamiliar complex tone (r = 0.27, t(999) = 31.91, p = 1.31e-154) both predict pitch-labelling performance better than chance, but not significantly differently from one another (t(1994.81) = −1.19, p = 0.234). Both the piano tone FFR (t(1875.59) = 38.81, p = 2.42e-242) and the complex tone FFR (t(1840.56) = 39.16) perform significantly better than the speech-evoked FFR (r = −0.15), which performs significantly worse than chance (t(999) = −22.71, p = 2.29e-92). The Lasso regression yielded the following sparse models, reported with regression coefficients in normalized units for easy comparison across models. Note, in Eq. 3, that the Lasso regression selected harmonics near the formant frequencies of the spoken /da/ to include in the model; while this is encouraging with respect to the Lasso technique picking out relevant predictors, the speech model does not perform above chance, so we caution against attempting to interpret the presence or absence of particular parameters in the model.

(1) Complex tone: $\hat{y}_{\text{logit}} = 6.7 \times 10^{-18} - 0.33\,F_0 + 0.017\,H_5$

(2) Piano tone: $\hat{y}_{\text{logit}} = -5.1 \times 10^{-18} - 0.063\,F_0 - 0.45\,H_1 + 0.28\,H_4$

(3) Speech: $\hat{y}_{\text{logit}} = 1.9 \times 10^{-17} + 0.15\,F_0 - 0.021\,H_6 + 0.022\,H_{12}$

The piano tone FFR predicts AP classification performance for both piano tones (r = 0.29, t(999) = 31.11, p = 4.11e-149) and sine tones (r = 0.08, t(999) = 12.26, p = 2.69e-32). However, the model does predict significantly better on piano tone performance (t(1729.47) = 19.22, p = 8.70e-75), suggesting a more specific effect of auditory encoding on pitch classification ability.
(4) Piano tones: $\hat{y}_{\text{logit}} = -3.3 \times 10^{-17} - 0.013\,F_0 - 0.46\,H_1 - 0.0044\,H_3 + 0.25\,H_4$

(5) Sine tones: $\hat{y}_{\text{logit}} = 4.4 \times 10^{-17} - 0.089\,H_1 + 0.0012\,H_4$

The frequency-following response to the piano tone predicts AP performance better than the behavioral measures (age of music onset, tonal language experience, pitch adjustment, and just-noticeable-difference scores) do (t(1980.05) = −16.22, p = 1.16e-55), with the latter performing only slightly, albeit significantly, above chance (r = 0.09, t(999) = 11.69, p = 1.06e-29). Notably, combining the behavioral and electrophysiological predictors (r = 0.21) yields a model that is worse than the one based on only electrophysiological predictors (t(1982.98) = −4.52, p = 6.55e-06), but better than the one based on the behavioral data alone (t(1997.86) = −12.23, p = 3.08e-33). This suggests that the behavioral measures contain little information about pitch labelling ability that is not already captured by the FFR. Interestingly, the behavioral-only model (see Eq. 6) removed all predictors except for the just-noticeable-difference score, a measure of perceptual discrimination ability, indicating that the other behavioral measures do not provide additional information about pitch labelling ability.

(6) Behavioral: $\hat{y}_{\text{logit}} = 8.7 \times 10^{-18} + 0.023\,JND$

(7) Combined: $\hat{y}_{\text{logit}} = -5.2 \times 10^{-17} - 0.39\,H_1 + 0.18\,H_4 + 0.20\,JND - 0.0038\,age\_onset$

(8) FFR: $\hat{y}_{\text{logit}} = -5.1 \times 10^{-18} - 0.063\,F_0 - 0.45\,H_1 + 0.28\,H_4$
Though previous work has shown that individual changes in the FFR can arise as a result of past experience, such as musical training, the exact relationship between the FFR and behavior has remained ambiguous. Individual differences in the FFR have been related to performance on certain perceptual discrimination tasks and such differences have been shown to emerge following training in such a task, but these individual differences were not specific to task-relevant spectral features, and studies that relate auditory encoding to performance rarely compare the magnitude of FFR differences across stimuli from different domains. This omission is particularly problematic, as many known FFR effects persist across auditory domains; for example, musical training seems to impact the FFR encoding of speech sounds, leading some researchers to argue that experience-dependent changes in the FFR are generally domain-nonspecific. The present study provides compelling evidence for the domain specificity of individual differences in FFR spectral features. While our data replicate previous findings that FFRs to domain-nonspecific stimuli can predict scores in an auditory task, as the predictive performance of our model deviates from chance for all stimuli, we find robust differences between the predictive power of FFRs to different stimuli. We find that the FFRs to tones predict performance substantially better than the FFR to speech stimuli, seemingly corresponding to the subjects’ experience attending to the pitch of notes regardless of the familiarity of their timbres. In contrast, the FFR to the piano tone, a familiar timbre, does not seem to predict pitch-labelling ability for piano tone stimuli any better than the FFR to the complex tone, so instrument-specific advantages in brainstem encoding do not seem to account for well-documented own-instrument advantage effects in the AP literature. Our subjects do, however, generally perform better on the piano tones than on the sine tones, consistent with past literature, so the observed timbre-familiarity advantage may originate in later auditory processing or during subsequent categorization. Importantly, we find that the FFR to the piano tone predicts subjects’ ability to label the pitch of piano tones significantly better than it does the pitch of sine tones.
This finding points toward a view of FFR plasticity as a mechanism that can support domain-specific auditory skills above and beyond the domain-general effects previous researchers have observed. Notably, individual differences in early sensory encoding, as reflected by the FFR, are able to predict continuous variance in AP ability. Because variation in pitch labelling ability has largely gone unexplained, even as researchers have argued that AP should be considered a graded (rather than dichotomous) ability, this finding is novel. It has long remained an open scientific question why humans can place some types of stimulus characteristics into stable, barely changing categories (such as color) but less so others (such as pitch); understanding the relationship between individual differences in low-level sensory coding and in the higher-level cognitive ability to consistently categorize perceptual stimuli promises to shed light on broader theories of semantic memory, concepts, and categories. It is tempting to conclude that the mechanism for our observed effect is a difference in stimulus encoding in subcortical structures that covaries with AP ability; indeed, this is how the FFR literature has historically interpreted such results. Of course, our ability to draw definitive conclusions from our results is limited by the nature of a between-subject design in noninvasive electrophysiology studies using correlation. A predictive relationship between the scalp-recorded FFR and AP ability need not be caused by a true change in auditory encoding in the FFR’s source structures; since part of the FFR is thought to originate subcortically, any anatomical difference between those far-field sources and the recording electrode that covaries with AP ability could mediate the observed effect by altering volume conduction through the brain. However, such an anatomical difference would affect the scalp-recorded FFR similarly for different stimuli, and we observe robust differences in predictive power between stimuli. Individual differences in brain anatomy could conceivably have a compounding influence on some true effect if, for example, changes in white matter density or microstructure, which may affect volume conduction, support higher-fidelity phase locking to the acoustic stimulus. While this situation would suggest some true effect exists, it makes estimating the effect size from a scalp recording tenuous, since the true effect could be correlated with a confounding factor. Lastly, since the FFR is now thought to originate from a distributed network of cortical and subcortical sources rather than solely from the auditory brainstem as previously thought, a differential contribution of cortical sources, close to the recording electrode, and subcortical sources could account for any attenuation or amplification of power in the FFR. It seems difficult to tease apart this alternative from the traditional explanation with the minimalist recording montage used in most FFR experiments, but this distinction may be addressable in future research using high-density electrode montages. Nonetheless, a shift in the relative contribution of different source regions, rather than an overall change in phase-locking to the stimulus, would still speak to the overall hypothesis that differences in early auditory encoding support higher-level cognitive abilities in a domain-specific manner.
The fields of FFR research and AP research share a common interest in how long- and short-term experience interact with less malleable aspects of nervous system development, such as genetics, to alter the encoding of sound. While the mechanisms of AP have traditionally been construed as cognitive, the present study suggests that real variance in pitch labelling ability may be attributable to low-level sensory encoding differences, as reflected in the FFR . Conversely, individual differences in the FFR appear to be much more dependent upon the development of specialized skills and the particular domain of auditory experience than previously thought. As many fields in the behavioral sciences are now discovering, it may not be possible to fully understand cognition or perception without considering their dynamic interaction. Participants Thirty-five individuals participated in the experiment, four subjects were removed (one for non-compliance on tasks, one for hardware issues at the time of experimentation, one for failure to meet hearing criteria, and one for a pre-existing neurological condition). Absolute pitch possessors (N = 16) and musically matched subjects (N = 15) were recruited from the Chicagoland area. By including subjects that are expected to show a range of pitch perception ability, we hope that our sample is representative of the population distribution of absolute pitch ability described by Van Hedger et al. . Of the 31 remaining subjects, which included both males and females (16 females) with varying amounts of musical training, the average age was M = 21.6, SD = 3.01. The self-reported absolute pitch possessors reported to play an instrument for M = 15.88, SD = 3.77, years, while the other musicians reported to play an instrument for M = 14.73, SD = 4.48, years (t(27) = 0.765, p = 0.451). Three self-reported absolute pitch possessors and seven musically matched subjects were tonal language speakers. 13 self-reported absolute pitch possessors and 10 musically matched subjects identified their primary (synonymous here with first) instrument as being a fixed-pitch instrument (piano). The study procedure was approved by the Social and Behavioral Sciences Institutional Review Board at the University of Chicago, and all research was performed in accordance with such guidelines. Informed consent was received from each subject. FFR acquisition and preprocessing protocol All recordings were conducted in a soundproof semi-electrically shielded booth. Brainstem electroencephalography recordings were collected while participants were presented with auditory stimuli that were presented binaurally via fitted earbuds attached to Etymotic Research ER-3a insert tube phones at 65–75 dB. Alternating polarity presentation was used to reduce the presence of the cochlear microphonic (CM) in the recorded signal. Each stimulus type was presented 3000 times, 1500 times for each polarity. During recording participants were allowed to watch a silent film, as is common for ABR studies . Stimuli were presented using Psychtoolbox (Matlab Psychtoolbox-3; psychtoolbox.org). Horizontal montaging was used using Ag–AgCl electrodes. Electrode placement included a ground electrode on the center of the forehead, an active electrode placed at Cz, and linked reference electrodes placed on both the left and right mastoid. Impedances from Cz, each mastoid individually, and the mastoids together were taken prior to experimentation, with a maximum of 5 k Ohms allowed. 
BrainVision PyCorder software (BrainProducts) was used to record brainstem responses with an online filter of 0.1 to 3000 Hz. Preprocessing in BrainVision Analyzer 2.2.0 proceeded as follows. Filtering parameters were dictated by the properties of the stimuli. The EEG recordings in response to the piano and complex stimuli were bandpass filtered (Butterworth 12 dB octave roll-off) from 100 to 2000 Hz, whereas /da/ stimuli were bandpass filtered from 70 to 2000 Hz. All stimuli had an additional notch filter of 60 Hz applied. We then applied an absolute threshold detection (± 700 mV) on the recorded audio channel via a Boolean expression that selectively finds the negative and positive peak of the start of a stimulus, and marks whichever occurs first. It is vital to use an absolute threshold rather than solely a positive or negative threshold in order to not correct for phase differences between inverted and non-inverted stimuli. By preserving such phase differences, we are able to shift our analysis to mainly examine the ABR portion of the recorded signal rather than the cochlear microphonic (CM), as the ABR is insensitive to phase differences while the CM is not. Segmentation procedures were dependent on the length of the stimulus. Piano and complex tones were 200 ms in length, and the /da/ stimulus was 80 ms in length. As a result, piano and complex segments were defined to start 50 ms prior to stimulus onset and last 250 ms post stimulus onset, /da/ segments were defined to start -10 ms prior to the stimulus onset and last 120 ms post stimulus onset. Trials that had been contaminated by unwanted artifacts (those that exceeded a strict amplitude threshold of 35 µV) were removed from the dataset. A baseline correction transformation was performed on the 10 ms preceding the /da/ stimulus, and 50 ms preceding the piano/complex stimuli. Stimuli The piano stimulus was sampled from an acoustic piano and produced with Reason software (Propellerhead, Stockholm). The complex tone was generated in Adobe Audition, and the /da/ stimulus was generated by the implementation of a Klatt synthesizer. The fundamental of the complex tone was 207.65 Hz (G# 3 ). The fundamental of the piano tone was 261.63 Hz (C 3 ). The F0 of the /da/ was 100 Hz. The complex tone stimulus had a fundamental frequency of 207.65 Hz, and consisted of the 3rd, 7th, 8th, and 10th harmonics. An F0 of 100 Hz for our speech stimulus was based on prior auditory brainstem work , and we chose fundamental frequencies for our piano and complex tone stimuli that were in a comfortable middle octave for music listening and is conveniently within the register of most commonly played instruments. Prescreening Participants were administered a sixty second hearing screening using a Welch-Allyn Otoscope equipped with an audiometer. Participants had to detect the occurrence of four tones (500, 1000, 2000, and 4000 Hz), which were presented at random intervals to prevent guessing. Participants were also checked via otoscope to make sure their ear canals were free from debris and that their eardrums were intact. Experimental design and statistical analyses For each subject, we began the experimental session with several questionnaires, where we assessed their musical experience (Absolute Pitch Questionnaire and Musical Experience Questionnaire) and tonal language experience (Language Experience Questionnaire). Afterwards, participants were screened for normal hearing. 
(Air conduction thresholds < 40 dB, see subsection) We then recorded EEG responses to a piano tone, a complex tone with an unfamiliar timbre, and a spoken /da/. (See and FFR Acquisition Protocol subsections, above, for more details and Fig. for stimuli power spectra.). Then, each subject completed an explicit pitch labelling (AP) assessment. The AP assessment consisted of two different paper-pen AP tests. Both tests presented tones across a range of different octaves. The average score of these two tests is what we refer to here as the AP test score, or pitch labelling ability (see Fig. C–E for full distribution of AP test scores, and Fig. C,D for the performance distribution broken down by piano and sine AP scores). Presentation of the stimuli was controlled by E-prime software. Subjects subsequently completed a just-noticeable-difference (JND) assessment, which was used to examine how well participants could behaviorally discriminate between two tones. Tones were presented in four blocks of 20 trials each. A standard 1000 Hz tone was used, and in the first block, one of the notes deviated by 56 cents from the 1000 Hz tone. In the second block, the notes deviated by 28 cents, in the third block the notes deviated by 14 cents, and in the fourth block the notes deviated by seven cents. On half of the trials the two tones presented were the same 1000 Hz tone. For a given trial, participants needed to determine whether the two tones were the same 1000 Hz tone or if they were two different tones. This assessment was also graded on a 100% scale. Individual differences in JND task performance should reflect differences in fine grained pitch processing. This task was administered using E-prime software. Subjects then performed a pitch adjustment assessment (administered using MATLAB), which was based on a task reported by Heald et al. . In this task, participants were required to adjust the frequency of a probe sine tone to match a previously presented target sine tone. The target tone was briefly presented (200 ms) and then immediately masked by noise (1000 ms). Following the noise, a secondary tone (200 ms) was presented. The participants were then asked to try to adjust the secondary tone to match the target tone by adjusting the pitch either up or down. Ten target tones were tested from 471.58 Hz (end point − 80 cents B4) to 547.99 Hz (end point + 80 cents C5), across the B4 and C5 categories. Participants either started above or below these categories (i.e., the location of the secondary tone). Participants were able to adjust the probe tone by adjusting the pitch drawn from a stimulus series. They could adjust the probe either by 10 or 20 cent steps. Given the masking of the target tone, matching performance on this task is designed to measure auditory working memory precision, as it is necessary for participants to hold in mind the target note despite the white noise and intermediary adjustment tones. This interpretation of this task is similarly held by Kumar et al. and Van Hedger et al. . The FFR was computed from the EEG responses as follows. Preprocessing was done using BrainVision Analyzer 2.2.0. (See FFR Acquisition and Preprocessing Protocol subsection above.). This preprocessed data was then exported from BrainVision Analyzer 2.2.0 to .mat files. (All analyses after this point were scripted in MATLAB and in R; all code, from preprocessing to the generation of figures, can be found at https://github.com/apex-lab/ap-ffr .) 
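The filtering, segmentation, artifact-rejection, and baseline-correction steps above were carried out in BrainVision Analyzer, a point-and-click tool. Purely as an illustration of the same parameters in scriptable form, a rough equivalent could look like the following MNE-Python sketch; this is not the authors' pipeline (their released code is in the repository linked above), and the file name, event codes, and filter implementation here are hypothetical placeholders.

import mne

# Epoch windows and filter bands as described in the acquisition protocol;
# the stimulus-specific values come from the text, everything else is assumed.
PARAMS = {
    "piano":   dict(band=(100.0, 2000.0), tmin=-0.050, tmax=0.250),
    "complex": dict(band=(100.0, 2000.0), tmin=-0.050, tmax=0.250),
    "da":      dict(band=(70.0,  2000.0), tmin=-0.010, tmax=0.120),
}

def preprocess(vhdr_path, stim, event_id):
    """Filter, epoch, baseline-correct, and artifact-reject one stimulus condition."""
    p = PARAMS[stim]
    raw = mne.io.read_raw_brainvision(vhdr_path, preload=True)
    raw.notch_filter(freqs=60.0)                          # 60 Hz line-noise notch
    # Band-pass per stimulus (the authors used a Butterworth filter in
    # BrainVision Analyzer; MNE's default FIR filter is used here for simplicity).
    raw.filter(l_freq=p["band"][0], h_freq=p["band"][1])
    events, _ = mne.events_from_annotations(raw)
    epochs = mne.Epochs(
        raw, events, event_id=event_id,                   # e.g. {"piano/inv": 1, "piano/non": 2}
        tmin=p["tmin"], tmax=p["tmax"],
        baseline=(p["tmin"], 0.0),                        # pre-stimulus baseline correction
        reject=dict(eeg=35e-6),                           # drop epochs exceeding ~35 uV
        preload=True,                                     # (MNE applies a peak-to-peak criterion,
    )                                                     #  close to, not identical to, the
    return epochs                                         #  absolute threshold described above)

# Hypothetical usage:
# piano_epochs = preprocess("subject01.vhdr", "piano", {"piano/inv": 1, "piano/non": 2})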
In order to maintain an equal number of trials for inverted and noninverted stimuli, we randomly subsampled trials from whichever stimulus polarity (inverted or noninverted) had more trials so that, for each subject, we were left with an equal number of trials of each polarity. Then, all remaining trials (of both polarities) were averaged for each subject and stimulus type (piano, complex tone, speech) to obtain the FFR. This is frequently recommended in the FFR literature for the purpose of averaging out any stimulus artifact and attenuating the contribution of the cochlear microphonic (see FFR Acquisition and Preprocessing Protocol subsection). Next, we applied a Hanning taper to the window corresponding to the duration of each stimulus and computed the power spectrum of each FFR over that window. We then exported the power of each subject’s FFR at each harmonic of its eliciting stimulus (up to 1500 Hz, see Fig. ) for analysis in R. (These files are available for researchers who wish to reproduce our analyses.) We then assessed whether the FFRs elicited by stimuli from a variety of auditory domains (piano, speech, and a novel complex periodic signal) were predictive of pitch labelling performance on the score (accuracy) of both AP tests. The reason for focusing on predictive performance, rather than relying on null hypothesis significance testing for inference, is that in principle all the harmonics of a stimulus (and thus the FFR) contain information about pitch. In order to avoid making any assumptions about which harmonics to include but not allow our analysis to suffer from problems inherent to high-dimensional regression (the “curse of dimensionality,” Friedman, 1997) , we employed the Lasso regression technique to fit sparse generalizable models to our data. We describe the Lasso regression technique in some detail below in the Model Fitting subsection below. First, we fit separate models for each FFR eliciting stimulus, predicting the pitch labelling ability across both AP tests (sine and piano tones). Pitch labelling ability is operationalized by awarding 1 point for correctly labelling a note and 0.75 points if only a semitone off, then dividing total points awarded by the number of trials. This is considered a relatively conservative measure, specifically with regard to identifying intermediate AP possessors, and has been used by a number of influential studies , , . However, alternative measures of AP ability, such as mean absolute deviation (MAD) in semitones and raw accuracy, are provided for interested researchers in our open dataset. (Though we found the reported results were robust to the operationalization of AP.) Since this measure is [0, 1] bounded, we logit transform it before fitting the model. For each model, we compute the correlation between model predictions and true pitch labelling ability on a test set for each of 1000 cross-validation runs. We then apply the Fisher z- transformation to these r values (since they would otherwise be [0, 1] bounded and therefore non-normal) and compare each model’s performance to chance ( r = 0) with a one-sample t- test. We also compare the three models to one another to test whether the auditory domain of the FFR eliciting stimulus matters when predicting pitch labelling performance. Full distributions of raw and transformed r values are reported (Fig. ), and regression coefficients (fit on the full dataset) are reported in normalized units for easy comparison between models. 
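The polarity-equalized trial averaging, the Hanning-tapered power spectrum evaluated at the stimulus harmonics, and the conservative pitch-labelling score with its logit transform can be summarized in a short numpy sketch. The authors' own analysis was scripted in MATLAB and R (see the repository above), so the function names, array shapes, and the layout of the exported harmonic predictors (F0 and its integer multiples up to 1500 Hz) below are illustrative assumptions rather than their implementation.

import numpy as np

def ffr_harmonic_power(inverted, noninverted, fs, f0, stim_dur,
                       onset, max_freq=1500.0, seed=0):
    """inverted / noninverted: (n_trials, n_samples) epoch arrays for one subject
    and stimulus; onset: sample index of stimulus onset within the epoch.
    Returns the power of the averaged FFR at F0 and its integer multiples."""
    rng = np.random.default_rng(seed)
    n = min(len(inverted), len(noninverted))              # equalize polarity counts
    inv = inverted[rng.choice(len(inverted), n, replace=False)]
    non = noninverted[rng.choice(len(noninverted), n, replace=False)]
    ffr = np.concatenate([inv, non]).mean(axis=0)         # average over both polarities

    n_win = int(round(stim_dur * fs))                     # window = stimulus duration
    seg = ffr[onset:onset + n_win] * np.hanning(n_win)    # Hanning taper
    power = np.abs(np.fft.rfft(seg)) ** 2                 # power spectrum
    freqs = np.fft.rfftfreq(n_win, d=1.0 / fs)

    harmonics = np.arange(f0, max_freq + 1e-9, f0)        # F0 and multiples up to 1500 Hz
    return np.array([power[np.argmin(np.abs(freqs - h))] for h in harmonics])

def ap_score(errors_in_semitones):
    """Conservative pitch-labelling score: 1 point for an exact label,
    0.75 points if off by exactly one semitone, 0 otherwise, averaged over trials."""
    err = np.abs(np.asarray(errors_in_semitones))
    points = np.where(err == 0, 1.0, np.where(err == 1, 0.75, 0.0))
    return float(points.mean())

def logit(p, eps=1e-6):
    """Logit transform of the [0, 1]-bounded score used as the regression target."""
    p = np.clip(p, eps, 1.0 - eps)
    return np.log(p / (1.0 - p))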
In order to assess the evidence of a specific effect of low-level auditory encoding on task performance, we then separately fit models predicting pitch labelling performance on sine tones and pitch labelling performance on piano tones from the piano-elicited FFRs. We compared these models to chance and to each other using t tests on the z-transformed r values from 1000 cross-validation runs. The full distribution of r values is reported in Fig. . In total, we report 12 statistical tests. In order to control for multiple comparisons, we apply a Bonferroni correction, resulting in a new significance threshold of α = 0.00417 against which the reported p-values should be compared. Model fitting While ordinary least squares regression finds regression coefficients β to minimize the loss function $SSE(\beta) = \sum_i (\hat{y}_i - y_i)^2$, where $\hat{y}$ is what the model predicts, Lasso regression minimizes $L(\beta) = SSE(\beta) + \lambda \sum_j |\beta_j|$. The addition of a penalty term for the size of β means that the fit model will only include nonzero values of β (regression coefficients) if the increase in the penalty term is offset by enough of a decrease in the sum of squares error (SSE). In order to ensure that results are generalizable, we pick λ (which determines how much the model will “care” about the penalty term) to maximize model performance on data that the model never saw during training (a hold-out set). This ensures that the model only includes predictor variables that robustly help it predict new data (the predictors that we can expect to generalize outside of our particular sample to the target population), setting the coefficients for all other predictors to zero. In exchange for performing near-optimal variable selection for us, Lasso regression does not provide a p-value for each remaining regression coefficient, but we can derive a p-value for the full model by comparing model performance on a test set (more data points the model did not see during training) to chance. This p-value, arguably, is more meaningful than those traditionally reported since it is derived from a measure of how well a model generalizes to new data, while p-values for ordinary linear regression are more prone to reach significance just because of noise within the sample. For more detail on the theory and practical implementation of the Lasso, see James et al. . Each time we fit a model we are actually fitting many models. First, we divide the data randomly into a training set (2/3 of the data) and a test set (the remaining 1/3 of the data). Next, we train models using many different values of λ (from 0.01 to $10^{10}$) and select the model that minimizes the leave-one-out cross-validation score over the training set. We then compute the performance of this model on the test set (picking the metric of our choosing as a “cross-validation score,” in our case $r = \mathrm{corr}(\hat{y}, y)$) as a measure of how well the model predicts new data. If using the cross-validation score for inference, one has to be concerned about whether performance on the test set may have been good (or bad) by mere chance, and as it happens, the random choice of test set can result in dramatically variable cross-validation scores (see Figs. , , ).
To account for this variability, we repeat this whole cross-validation procedure 1000 times for each model, each with a new, random training-test split, and report the full distribution of r values generated.
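A rough re-implementation of this repeated cross-validation scheme, assuming scikit-learn and scipy in Python rather than the authors' MATLAB/R code, might look like the following. The λ grid, the 2/3 train and 1/3 test split, the leave-one-out tuning, the Fisher z-transformation, and the t tests mirror the description above, while everything else (function names, solver settings) is an illustrative assumption.

import numpy as np
from scipy import stats
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, LeaveOneOut, train_test_split

LAMBDAS = np.logspace(np.log10(0.01), 10, 50)    # candidate penalties, 0.01 ... 1e10

def one_run(X, y, seed):
    """One random 2/3 train, 1/3 test split: tune lambda by leave-one-out CV on
    the training set, then score r = corr(y_hat, y) on the held-out test set."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=seed)
    search = GridSearchCV(Lasso(max_iter=10_000), {"alpha": LAMBDAS},
                          cv=LeaveOneOut(), scoring="neg_mean_squared_error")
    search.fit(X_tr, y_tr)
    return np.corrcoef(search.best_estimator_.predict(X_te), y_te)[0, 1]

def evaluate(X, y, n_runs=1000):
    """Repeat the split/tune/test cycle, Fisher z-transform the r values, and
    test whether mean generalization performance differs from chance (r = 0)."""
    rs = np.array([one_run(X, y, seed) for seed in range(n_runs)])
    zs = np.arctanh(rs)                          # Fisher z-transform
    t, p = stats.ttest_1samp(zs, 0.0)
    return rs, t, p

# Two stimuli (e.g. piano- vs speech-elicited FFR features) can then be compared
# with Welch's t test on their z-transformed r distributions:
# stats.ttest_ind(np.arctanh(r_piano), np.arctanh(r_speech), equal_var=False)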
C,D for the performance distribution broken down by piano and sine AP scores). Presentation of the stimuli was controlled by E-prime software. Subjects subsequently completed a just-noticeable-difference (JND) assessment, which was used to examine how well participants could behaviorally discriminate between two tones. Tones were presented in four blocks of 20 trials each. A standard 1000 Hz tone was used, and in the first block, one of the notes deviated by 56 cents from the 1000 Hz tone. In the second block the deviation was 28 cents, in the third block 14 cents, and in the fourth block seven cents. On half of the trials the two tones presented were the same 1000 Hz tone. For a given trial, participants needed to determine whether the two tones were the same 1000 Hz tone or two different tones. This assessment was also graded on a 100% scale. Individual differences in JND task performance should reflect differences in fine-grained pitch processing. This task was administered using E-prime software. Subjects then performed a pitch adjustment assessment (administered using MATLAB), which was based on a task reported by Heald et al. In this task, participants were required to adjust the frequency of a probe sine tone to match a previously presented target sine tone. The target tone was briefly presented (200 ms) and then immediately masked by noise (1000 ms). Following the noise, a secondary tone (200 ms) was presented. The participants were then asked to try to adjust the secondary tone to match the target tone by adjusting the pitch either up or down. Ten target tones were tested, ranging from 471.58 Hz (the lower end point, 80 cents below B4) to 547.99 Hz (the upper end point, 80 cents above C5), spanning the B4 and C5 categories. The secondary tone started either above or below these categories. Participants adjusted the probe tone by stepping through pitches drawn from a stimulus series, in steps of either 10 or 20 cents. Given the masking of the target tone, matching performance on this task is designed to measure auditory working memory precision, as it is necessary for participants to hold the target note in mind despite the white noise and intermediary adjustment tones. This interpretation of the task is shared by Kumar et al. and Van Hedger et al. The FFR was computed from the EEG responses as follows. Preprocessing was done using BrainVision Analyzer 2.2.0 (see the FFR Acquisition and Preprocessing Protocol subsection above). The preprocessed data were then exported from BrainVision Analyzer 2.2.0 to .mat files. (All analyses after this point were scripted in MATLAB and in R; all code, from preprocessing to the generation of figures, can be found at https://github.com/apex-lab/ap-ffr .) In order to maintain an equal number of trials for inverted and noninverted stimuli, we randomly subsampled trials from whichever stimulus polarity (inverted or noninverted) had more trials so that, for each subject, we were left with an equal number of trials of each polarity. Then, all remaining trials (of both polarities) were averaged for each subject and stimulus type (piano, complex tone, speech) to obtain the FFR. This is frequently recommended in the FFR literature for the purpose of averaging out any stimulus artifact and attenuating the contribution of the cochlear microphonic (see FFR Acquisition and Preprocessing Protocol subsection).
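The polarity-balanced subsampling and averaging step can be written in a few lines. The sketch below assumes the epoched trials for each polarity have already been exported and loaded as NumPy arrays; the function name and array layout are hypothetical and not taken from the published MATLAB/R analysis code.

```python
import numpy as np

def average_ffr(pos_trials, neg_trials, seed=None):
    """Average an equal number of non-inverted and inverted trials.

    pos_trials, neg_trials -- arrays of shape (n_trials, n_samples), one per stimulus polarity
    """
    rng = np.random.default_rng(seed)
    n = min(len(pos_trials), len(neg_trials))
    # Randomly subsample the polarity with more trials so both contribute equally;
    # averaging across polarities cancels the stimulus artifact and attenuates the
    # cochlear microphonic while preserving the phase-insensitive neural response.
    pos = pos_trials[rng.choice(len(pos_trials), n, replace=False)]
    neg = neg_trials[rng.choice(len(neg_trials), n, replace=False)]
    return np.vstack([pos, neg]).mean(axis=0)
```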
Next, we applied a Hanning taper to the window corresponding to the duration of each stimulus and computed the power spectrum of each FFR over that window. We then exported the power of each subject’s FFR at each harmonic of its eliciting stimulus (up to 1500 Hz; see Fig. ) for analysis in R. (These files are available for researchers who wish to reproduce our analyses.) We then assessed whether the FFRs elicited by stimuli from a variety of auditory domains (piano, speech, and a novel complex periodic signal) were predictive of pitch labelling performance, that is, the score (accuracy) across both AP tests. The reason for focusing on predictive performance, rather than relying on null hypothesis significance testing for inference, is that in principle all the harmonics of a stimulus (and thus the FFR) contain information about pitch. In order to avoid making assumptions about which harmonics to include, while not allowing our analysis to suffer from problems inherent to high-dimensional regression (the “curse of dimensionality,” Friedman, 1997), we employed the Lasso regression technique to fit sparse, generalizable models to our data. We describe the Lasso regression technique in some detail in the Model Fitting subsection below. First, we fit separate models for each FFR-eliciting stimulus, predicting pitch labelling ability across both AP tests (sine and piano tones). Pitch labelling ability is operationalized by awarding 1 point for correctly labelling a note and 0.75 points if only a semitone off, then dividing total points awarded by the number of trials. This is considered a relatively conservative measure, specifically with regard to identifying intermediate AP possessors, and has been used by a number of influential studies. However, alternative measures of AP ability, such as mean absolute deviation (MAD) in semitones and raw accuracy, are provided for interested researchers in our open dataset. (Though we found the reported results were robust to the operationalization of AP.) Since this measure is [0, 1] bounded, we logit transform it before fitting the model. For each model, we compute the correlation between model predictions and true pitch labelling ability on a test set for each of 1000 cross-validation runs. We then apply the Fisher z-transformation to these r values (since they would otherwise be [−1, 1] bounded and therefore non-normal) and compare each model’s performance to chance (r = 0) with a one-sample t-test. We also compare the three models to one another to test whether the auditory domain of the FFR-eliciting stimulus matters when predicting pitch labelling performance. Full distributions of raw and transformed r values are reported (Fig. ), and regression coefficients (fit on the full dataset) are reported in normalized units for easy comparison between models. In order to assess the evidence for a specific effect of low-level auditory encoding on task performance, we then separately fit models predicting pitch labelling performance on sine tones and pitch labelling performance on piano tones from the piano-elicited FFRs. We compared these models to chance and to each other using t-tests on the z-transformed r values from 1000 cross-validation runs. The full distribution of r values is reported in Fig. . In total, we report 12 statistical tests. In order to control for multiple comparisons, we apply a Bonferroni correction, resulting in a new significance threshold of α = 0.00417, against which the reported p-values should be compared.
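As a rough illustration of the spectral feature extraction described above (not the published code, which lives in the linked repository), the sketch below applies a Hanning taper over the stimulus window of an averaged FFR and reads off the power at each harmonic of the eliciting stimulus up to 1500 Hz. The function name and argument conventions are assumptions made for the example.

```python
import numpy as np

def harmonic_power(ffr, fs, f0, stim_dur, onset_idx, max_freq=1500.0):
    """Power of an averaged FFR at each harmonic of its eliciting stimulus.

    ffr       -- averaged FFR waveform (1-D array)
    fs        -- sampling rate in Hz
    f0        -- fundamental frequency of the stimulus in Hz
    stim_dur  -- stimulus duration in seconds (the window the taper covers)
    onset_idx -- sample index of stimulus onset within the epoch
    """
    n = int(round(stim_dur * fs))
    windowed = ffr[onset_idx:onset_idx + n] * np.hanning(n)   # Hanning taper over the stimulus window
    power = np.abs(np.fft.rfft(windowed)) ** 2                # power spectrum of the tapered FFR
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    harmonics = np.arange(f0, max_freq + f0 / 2, f0)          # f0, 2*f0, ... up to 1500 Hz
    idx = [int(np.argmin(np.abs(freqs - h))) for h in harmonics]
    return harmonics, power[idx]
```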
While ordinary least squares regression finds regression coefficients β to minimize the loss function $SSE(\beta) = \sum_i (\hat{y}_i - y_i)^2$, where $\hat{y}$ is what the model predicts, Lasso regression minimizes $L(\beta) = SSE(\beta) + \lambda \sum_j |\beta_j|$. The addition of a penalty term for the size of β means that the fit model will only include nonzero values of β (regression coefficients) if the increase in the penalty term is offset by enough of a decrease in the sum of squares error (SSE). In order to ensure that results are generalizable, we pick λ (which determines how much the model will “care” about the penalty term) to maximize model performance on data that the model never saw during training (a hold-out set). This ensures that the model only includes predictor variables that robustly help it predict new data (the predictors that we can expect to generalize outside of our particular sample to the target population), setting the coefficients for all other predictors to zero. In exchange for performing near-optimal variable selection for us, Lasso regression does not provide a p-value for each remaining regression coefficient, but we can derive a p-value for the full model by comparing model performance on a test set (more data points the model did not see during training) to chance. This p-value, arguably, is more meaningful than those traditionally reported since it is derived from a measure of how well a model generalizes to new data, while p-values for ordinary linear regression are more prone to reach significance just because of noise within the sample. For more detail on the theory and practical implementation of the Lasso, see James et al. Each time we fit a model we are actually fitting many models. First, we divide the data randomly into a training set (2/3 of the data) and a test set (the remaining 1/3 of the data). Next, we train models using many different values of λ (from 0.01 to $10^{10}$) and select the model that minimizes the leave-one-out cross-validation score over the training set. We then compute the performance of this model on the test set (picking the metric of our choosing as a “cross-validation score,” in our case $r = \mathrm{corr}(\hat{y}, y)$) as a measure of how well the model predicts new data. If using the cross-validation score for inference, one has to be concerned about whether performance on the test set may have been good (or bad) by mere chance, and as it happens, the random choice of test set can result in dramatically variable cross-validation scores (see Figs. , , ). To account for this variability, we repeat this whole cross-validation procedure 1000 times for each model, each with a new, random training-test split, and report the full distribution of r values generated.
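For readers who want a concrete sketch of the procedure just described, the steps map fairly directly onto scikit-learn, although the published analysis was scripted in MATLAB and R (see the linked repository). Note that scikit-learn parameterizes the penalty slightly differently from the λ above (its objective divides the squared error by the number of samples), and the function below, including its name and the clipping used before the logit transform, is an illustrative approximation rather than the authors' implementation.

```python
import numpy as np
from scipy import stats
from scipy.special import logit
from sklearn.linear_model import LassoCV
from sklearn.model_selection import LeaveOneOut, train_test_split
from sklearn.preprocessing import StandardScaler

def repeated_lasso_cv(X, score, n_repeats=1000, seed=0):
    """Repeated train/test evaluation of a Lasso model, as described above.

    X     -- harmonic-power features, shape (n_subjects, n_harmonics)
    score -- pitch labelling ability in [0, 1]; logit-transformed before fitting
    """
    y = logit(np.clip(score, 1e-3, 1 - 1e-3))       # avoid +/- infinity at exactly 0 or 1
    lambdas = np.logspace(-2, 10, 60)               # candidate penalties, 0.01 ... 1e10
    rng = np.random.default_rng(seed)
    r_values = np.empty(n_repeats)
    for i in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=1 / 3, random_state=int(rng.integers(2**32)))
        scaler = StandardScaler().fit(X_tr)         # standardized predictors (normalized units)
        model = LassoCV(alphas=lambdas, cv=LeaveOneOut(), max_iter=200_000)
        model.fit(scaler.transform(X_tr), y_tr)     # penalty chosen by LOO CV on the training set
        y_hat = model.predict(scaler.transform(X_te))
        r_values[i] = np.corrcoef(y_hat, y_te)[0, 1]  # test-set performance for this split
    z = np.arctanh(r_values)                        # Fisher z-transform of the r values
    t, p = stats.ttest_1samp(z, 0.0)                # one-sample t-test against chance (r = 0)
    return r_values, t, p
```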
Challenges and opportunities for risk‐ and systems‐based control of
abd703ab-ffee-48fb-94aa-64f199367113
11605164
Microbiology[mh]
INTRODUCTION Listeria monocytogenes is a bacterial foodborne pathogen that causes an estimated 1591 (Scallan et al., ) illnesses per year in the United States and an estimated 23,510 (Maertens de Noordhout et al., ) illnesses per year worldwide. While the total number of illnesses caused by L. monocytogenes is small compared to other foodborne pathogens (e.g., nontyphoidal Salmonella spp., which causes an estimated 1,027,561 foodborne illnesses per year in the United States), L. monocytogenes has a high case fatality rate of ∼16% (Scallan et al., ), making it a serious public health concern. The risk of illness and death caused by L. monocytogenes is especially high for pregnant people, elderly individuals, and other individuals with compromised immune systems. Listeria monocytogenes can cause an invasive infection with symptoms including sepsis and meningitis in immunocompromised individuals (as well as rarely in healthy individuals) and spontaneous abortions in pregnant people (Schlech, ). Listeria monocytogenes can also cause gastrointestinal illness in healthy individuals. Additionally, contamination of ready‐to‐eat (RTE) food products with L. monocytogenes is a common cause of food recalls, which can damage company reputations and cause substantial financial losses. In fact, there were 90 recalls of foods and beverages in the United States due to L. monocytogenes contamination reported between 2022 and 2023 (US Department of Agriculture Food Safety Inspection Service, ; US Food and Drug Administration, ). As such, public health officials and the food industry have substantial stakes in reducing contamination of RTE food products with L. monocytogenes , as well as in the implementation of other strategies that reduce human listeriosis cases (e.g., reducing the growth of L. monocytogenes in foods, education campaigns targeting susceptible consumers). The need to effectively control L. monocytogenes has been further heightened with the broad use of molecular subtyping tools and specifically whole genome sequencing (WGS), which has led to improved detection of listeriosis outbreaks, including small outbreaks (e.g., two to three cases) and/or outbreaks that occur over prolonged time periods (e.g., years) (Moura et al., ). More specifically, the US Centers for Disease Control and Prevention (CDC) has been performing WGS of all human L. monocytogenes isolates since 2013 (Jackson et al., ). Similarly, routine WGS of human clinical L. monocytogenes isolates has also been performed by the European Centre for Disease Prevention and Control (ECDC) and the Public Health Agency of Canada (PHAC) since 2014 and 2017, respectively (European Centre for Disease Prevention and Control, ; Public Health Agency of Canada, ), with other public health agencies around the world also increasingly switching to routine use of WGS for characterization of L. monocytogenes . Even with WGS, detailed epidemiological data are still needed to reliably and definitively identify the specific food source responsible for a given outbreak. Importantly, subtyping—specifically WGS data—is also increasingly used by regulatory agencies, including the US Food and Drug Administration (FDA) and the US Department of Agriculture Food Safety and Inspection Service (USDA FSIS), to characterize Listeria spp. and L. monocytogenes isolates obtained from foods and food‐associated built environments. In some cases, these WGS data may provide evidence for L. 
monocytogenes persistence in food processing facilities, which in some countries and jurisdictions may be used by regulatory agencies to help identify unhygienic conditions in a given facility and can lead to facility shutdowns and recalls, even in the absence of detected finished product contamination. Key factors that affect the risk of human foodborne listeriosis cases linked to a specific food include (i) initial contamination of the food, (ii) the ability of the food to support L. monocytogenes growth, and (iii) the susceptibility of the consumers of a specific food product. As (ii) and (iii) have been detailed in a number of publications and reviews (Buchanan et al., ; Farber et al., ; Hoelzer et al., ; ILSI Research Foundation/Risk Science Institute, Expert Panel on Listeria monocytogenes in Foods, ; Pouillot et al., ), the review presented here specifically focuses on the risks associated with initial contamination of foods, as well as the challenges the food industry faces in its efforts to reduce the contamination and proliferation of L. monocytogenes on foods. Efforts to reduce L. monocytogenes contamination of food products are complicated by the fact that this organism (i) is frequently found in a variety of different environments, making introduction into raw materials and processing facilities probable, (ii) is able to survive and grow under adverse environmental conditions, and (iii) has a propensity to establish persistent populations in food‐associated built environments (e.g., processing facilities) and equipment (Ferreira et al., ; McClure et al., ). Consequently, contamination of food products can occur through a variety of different routes, including natural environments (e.g., for raw materials harvested from nature, such as wild‐caught seafood), primary production environments (e.g., livestock or produce farms), raw materials (e.g., raw milk), and the food‐associated built environments (e.g., processing facilities, retail establishments) and equipment themselves, among others (Ferreira et al., ). The remainder of this article will (i) detail sources of Listeria , (ii) discuss public health and business risks associated with L. monocytogenes and how to develop and implement risk‐based systems to address these food safety issues, and (iii) outline key control strategies and associated challenges. SOURCES OF Listeria While L. monocytogenes is the only human pathogen in the genus Listeria , testing for the presence of Listeria spp. is often used by industry to monitor processing facility environments for the presence of conditions that would facilitate the presence, survival, and/or growth of L. monocytogenes (Chapin et al., ). Listeria monocytogenes , as well as other Listeria spp., has been isolated from a wide variety of environments, including from soil, water, feces, and vegetation in the primary production environment (Golden et al., ; Nightingale et al., ; Strawn, Fortes, et al., ; Strawn, Gröhn, et al., ; Vilar et al., ; Weller et al., , ); in pristine environments such as national parks (Sauders et al., ); in urban environments from sidewalks and automated teller machines (ATMs), among others (Sauders et al., ); in processing environments (Ferreira et al., ); and in retail and food service environments (Hoelzer et al., ). Consequently, many different sources can be responsible for the introduction of L. monocytogenes and other Listeria spp. 
into finished product and food‐associated environments, including primary production environments and raw materials, natural environments, and food‐associated built environments such as packinghouses, processing facilities, and retail establishments (Figure ). Importantly, while employees may act as fomites (El‐Shenawy, ; Kerr et al., ), there is essentially no evidence that human fecal carriers play a role as sources of L. monocytogenes in foods and food‐associated environments (Sauders et al., ). 2.1 Primary production environments and raw materials Primary production environments (e.g., farms, fields) and raw materials can play two distinct roles as sources of Listeria , including (i) introduction into raw materials that do not undergo a kill step and where L. monocytogenes can be carried over into the finished RTE product (e.g., fresh‐cut produce, cold‐smoked seafood, raw milk dairy products) and (ii) introduction into food‐associated environments (e.g., processing facilities) with the potential of subsequent environmental transmission into finished RTE products. Control of Listeria in raw materials is hence particularly important for the production of RTE foods that do not involve an effective kill step. Contamination of raw materials can occur from a variety of sources and at a variety of points in the supply chain prior to materials reaching a processing facility (or a retail establishment), including (i) primary production (e.g., produce or livestock farms), including at harvest (e.g., milking equipment, produce harvest equipment), (ii) at an upstream facility (e.g., a storage facility, packinghouse, or slaughterhouse), and (iii) during transportation. One key supply chain where raw material contamination is of particular concern is whole and fresh‐cut produce. In produce primary production environments, Listeria has been isolated from agricultural water sources, soil, vegetation, and wildlife feces (Chapin et al., ; Strawn, Fortes, et al., ; Strawn, Gröhn, et al., ; Weller et al., , ). With a variety of possible sources, there are a number of transmission pathways that can lead to preharvest produce contamination with Listeria . For instance, contaminated irrigation water could directly deposit Listeria onto a product or could deposit it into the soil with the possibility of subsequent transmission onto produce (Park et al., ; Weller et al., ). Soil could also harbor Listeria populations that can be transferred to produce. Finally, wildlife could directly deposit Listeria ‐contaminated feces onto produce or into the soil. Livestock could present a source of Listeria in produce at the preharvest level. The possible role of livestock as a source of L. monocytogenes is supported by a listeriosis outbreak in the Maritime Provinces, Canada that was linked to coleslaw likely contaminated from sheep feces (Schlech et al., ), as well as the frequent high prevalence of L. monocytogenes and other Listeria spp. in livestock. In fact, Golden et al. found a Listeria spp. (including L. monocytogenes ) prevalence of 15.9% (245/1537) and a L. monocytogenes prevalence of 1.8% (28/1537) in soil and fecal samples collected from 11 poultry farms in the southeastern United States, further supporting the importance of livestock‐associated sources. A number of factors can influence how effectively Listeria is transferred from soil or other environmental sources to produce. 
During rain or irrigation events, splashing of soil or wildlife feces can facilitate transfer; in addition, runoff and flooding from adjacent lands can also facilitate contamination of the produce (Pang et al., ). Increased wind speed has also been associated with an increased prevalence of Listeria , possibly due to increased transfer of Listeria from surrounding environments (e.g., farms) (Pang et al., ). Unlike Salmonella and pathogenic Escherichia coli , to date, very few listeriosis outbreaks have been definitively linked to contamination of produce that occurred at the preharvest level. However, given the frequent presence of L. monocytogenes in natural and farm environments, it is highly likely that a number of finished product contamination events are linked to the preharvest environment, particularly for products that do not undergo extensive antimicrobial wash treatments or heat treatments. The limited number of outbreaks directly linked to preharvest sources may reflect the fact that many preharvest contamination events (e.g., from wildlife feces or soil splashes) impact small quantities of produce and are thus more likely to lead to individual listeriosis cases rather than outbreaks, although additional information is required to confirm that low levels of Listeria are typically present in the preharvest environment. There is, however, growing concern about Listeria contamination of produce during harvesting (e.g., via contaminated harvesting equipment), which could lead to more widespread contamination events and thus an increased risk of outbreaks. For example, a recent listeriosis outbreak associated with packaged salads, which resulted in 18 illnesses, 16 hospitalizations, and three deaths across 13 states, was found to be traced back to harvesting equipment contaminated with the outbreak strain (Centers for Disease Control and Prevention, ). Additionally, while less formally characterized compared to traditional preharvest growing environments (e.g., open fields), Listeria contamination of produce grown in controlled environment agriculture (CEA) settings can also occur, as evidenced by recent listeriosis outbreaks and L. monocytogenes recalls linked to produce such as enoki mushrooms (Centers for Disease Control and Prevention, ; US Food and Drug Administration, ) and spinach (US Food and Drug Administration, ) grown in CEA settings. In CEA operations, contamination may occur from seeds, water, substrates/other growing media used for CEA production, or the built environment/production equipment, although currently there are limited data surrounding how Listeria is spread from these sources to edible or inedible portions of produce (Hamilton et al., ). In addition to concerns about Listeria contamination, temperature and humidity conditions used to grow produce in CEA settings may also support effective Listeria proliferation. In particular, this concern has been raised for enoki mushrooms, where certain growing stages require long periods of exposure to cooler temperatures and high‐humidity conditions that can support Listeria growth (Grocholl et al., ; Pereira et al., ). Importantly, Pereira et al. noted that enoki mushroom samples collected during a multinational listeriosis outbreak frequently yielded levels of L. monocytogenes (i.e., >10 3 CFU/g) that were higher than those observed in food samples from previous listeriosis outbreaks linked to contamination from packing/processing environments (Chen, Burall, Luo, et al., ; Chen, Burall, Macarisin, et al., ). 
These findings may support the likelihood that contamination and proliferation of L. monocytogenes on enoki mushrooms occurred during CEA cultivation, as opposed to occurring at later stages of the supply chain. However, given that L. monocytogenes has only recently emerged as a food safety concern in CEA systems, more research is needed to fully elucidate the risks of L. monocytogenes (and other Listeria spp.) contamination and proliferation on produce in CEA primary production environments. Another supply chain where preharvest contamination is relevant is cold‐smoked seafood. While a number of studies have provided convincing evidence that contamination of finished RTE cold‐smoked products (in particular, cold‐smoked salmon) can be traced back to raw materials (Jahncke et al., ), it is rare to identify the specific contamination sources of the raw material (e.g., fish farms, fish slaughter facilities, transport equipment, etc.). Importantly, however, cold‐smoked salmon represents a model for a supply chain that has started to implement innovative approaches to reduce contamination of incoming raw materials, ranging from stringent supplier qualification supported by intensive raw material testing to nonthermal treatments to reduce Listeria loads on incoming raw material (Jahncke et al., ). Finally, L. monocytogenes contamination of raw materials is also important for raw milk (which is legal for sale in some locations) and other dairy products prepared using raw milk (e.g., raw milk cheese), particularly as L. monocytogenes can sometimes be highly prevalent on livestock farms. For instance, Nightingale et al. found a L. monocytogenes prevalence of 20.1% (414/2056) in a survey of 52 ruminant farms where samples were collected of feces, soil, feedstuff, and water. Furthermore, although relatively rare, dairy ruminants can also asymptomatically carry and shed L. monocytogenes into their raw milk (Bolten, Ralyea, et al., ; Hunt et al., ; Papić et al., ), in some cases over periods of up to several months (Ricchi et al., ). Thus, it is unsurprising that contamination of raw milk and raw milk dairy products continues to represent a major public health concern, with six L. monocytogenes ‐associated outbreaks and recalls linked to raw milk products being reported in the United States between 2022 and 2023 (Center for Dairy Research, ). In addition to raw materials being a direct source of Listeria in finished products and an indirect source (via introduction from farm environments into processing plants), Listeria can also establish itself within vehicles and crates used to transport products from primary production environments to packinghouses, processing facilities, or directly to retail. Cross‐contamination during transport is possible whenever the product is exposed to the open environment. To prevent cross‐contamination at this stage, regular cleaning and sanitation of trailers that transport crates and raw materials is thus necessary. 2.2 Employees Employees are sometimes brought up as potential sources or vectors that contribute to the introduction of Listeria into the processing environment. However, unlike other pathogens, such as Salmonella , it appears unlikely for humans to be fecal carriers of Listeria (Sauders et al., ). Employees, however, can act as fomites, particularly since Listeria can often be found at high frequencies in urban, rural, and natural environments. For example, in a survey of Listeria in four urban environments in New York State, Sauders et al. 
found the overall Listeria spp. (including L. monocytogenes ) prevalence to be 23.4% ( N = 907). Once introduced into a facility, Listeria can survive over time, particularly if effective food safety and sanitation practices are lacking. In order to prevent the introduction of Listeria into the processing environment or onto the product via employees, effective good manufacturing practices (GMPs) must be followed. These GMPs include hand washing and sanitation, wearing gloves, wearing clean coats designated for use only inside of the processing areas, boot washes (or footbaths) at the entrance to the processing areas, and captive footwear policies (e.g., each employee has a set of work boots kept at the facility that may only be used inside the processing areas). 2.3 Processing facility and packinghouse environment The majority of L. monocytogenes outbreaks have been linked to RTE foods where contamination originated from processing facilities or packinghouse environments. Listeria presence in food processing‐associated environments can conceptually be broken down into two components: (i) introduction into the facility and (ii) the subsequent fate of Listeria in a facility, which may include rapid elimination (e.g., through cleaning and sanitation) or survival subsequent to introduction (also often referred to as “persistence”) (Belias et al., ). As indicated above, Listeria introduction can originate from a number of sources outside a processing facility; preventing introduction through fomites (e.g., people, equipment, etc.) is thus a key part of Listeria control programs. Without the proper zoning of equipment and employees, once introduced, Listeria can then be moved throughout the facility and can be introduced into a “niche” where it can evade cleaning and sanitation. A lack of sanitary design and proper cleaning and sanitation programs can prevent the elimination of L. monocytogenes from the facility (thus facilitating “persistence”). Hence, persistence can typically be traced back to failures associated with prerequisite programs or other nonprocess preventive controls under the Preventive Controls for Human Food Rule such as cleaning and sanitation (US Food and Drug Administration, ). The importance of L. monocytogenes persistence in processing facilities has been defined through numerous studies. For instance, Beno et al. isolated persistent L. monocytogenes pulsed‐field gel electrophoresis (PFGE) types (i.e., PFGE types isolated on more than one sampling date) from the processing environments of four out of nine small cheese processing facilities during a survey where environmental samples were collected monthly; 31 of the 57 L. monocytogenes strains subjected to PFGE in this study were persistent in a given facility. Furthermore, in a 2011 L. monocytogenes outbreak linked to cantaloupes in the United States (McCollum et al., ), samples of the brushes used to wash the cantaloupes in the packinghouse yielded L. monocytogenes isolates of the same PFGE types as the strains causing illnesses, implicating the brushes as a likely persistent source of contamination in this outbreak. These are two of the many examples that highlight the importance of strong food safety programs at the processing environment level that are designed to control the presence and persistence of Listeria ; additional examples and more detailed coverage of Listeria persistence can be found in a number of reviews on this subject (Belias et al., ; Chowdhury & Anand, ; Ferreira et al., ; Tuytschaever et al., ).
Importantly, if suitable conditions exist, L. monocytogenes persistence in food packing/processing environments can extend over periods of up to several years. For example, Orsi et al. found L. monocytogenes that persisted in a food processing facility for at least 12 years. Similarly, in a recent listeriosis outbreak associated with RTE dairy products (US Food and Drug Administration, ), L. monocytogenes isolates obtained from environmental swabs taken from the implicated dairy processing facility in 2024 matched clinical isolates from the outbreak that were obtained in 2014, indicating that L. monocytogenes was likely persistent in this facility for nearly 10 years. Persistent L. monocytogenes can be transferred to food contact surfaces and contaminate food products, or a harborage point may develop (i.e., a niche where Listeria is present and can continually contaminate product or other areas of the processing environment) within a food contact surface. Some common harborage points include product coolers, forklifts, forklift stops, hollow equipment legs, dead‐end pipes, drains, floor–wall junctures, junctures between equipment legs and the floor, and floor cracks, among other sites that are difficult to clean and sanitize (Simmons & Wiedmann, ). 2.4 Retail and food services Listeria monocytogenes recalls and outbreaks have also been linked to the contamination originating from retail and food service establishment environments. Similar to processing facilities, the presence of Listeria in retail and food service environments is dependent upon (i) its introduction into the environment and (ii) its ability to persist within the environment after it has been introduced. However, compared with a processing environment, retail and food service environments are more open to outside environments, given that there is limited control over what customers bring into the retail space. Therefore, in addition to Listeria being introduced on raw materials or with employees, it can also be transported into the retail environment via customers. Hoelzer et al. conducted a survey of 120 retail delis that were classified as small (<10 employees, N = 60) or having failed an inspection ( N = 60) where they collected samples of food and nonfood contact surfaces, including slicers, utensils, the deli case, floors, drains, and sinks, among others. The L. monocytogenes prevalence in these delis ranged from <6% (0/18 positive samples) to 92% (11/12 positive samples); common sites positive for L. monocytogenes included milk crates, floors of walk‐in coolers, and drains (Hoelzer et al., ). Once introduced into a retail setting, Listeria persistence can develop, and cross‐contamination of RTE products can occur (similar to what is observed in processing facilities) if Listeria is not eliminated through cleaning and sanitation. As such, frequent and stringent cleaning and sanitation is required to eliminate Listeria if introduced into the retail environments; additional deep cleaning and sanitation, which often involves disassembling equipment (e.g., slicers, display cases) as far as possible prior to cleaning and sanitation, is also essential to help effectively eliminate Listeria from niches in the equipment (Forauer et al., ). In addition, easy‐to‐clean equipment should be used when possible (e.g., using stainless‐steel utensils in place of wooden utensils). 
Food safety personnel should be included in decision‐making when purchasing or re‐designing equipment to ensure proper hygienic design principles are followed. 
RISK‐BASED APPROACHES TO L. monocytogenes CONTROL Listeria monocytogenes contamination of food products and food processing environments can represent both a public health risk and a business (or enterprise) risk, and both need to be managed and minimized. Importantly, currently used approaches to control L.
monocytogenes can use either risk‐ or hazard‐based approaches; however, risk‐based approaches tend to be more impactful in reducing L. monocytogenes illnesses (Barlow et al., ). 3.1 Public health risks Listeria monocytogenes contamination of RTE foods has been identified as the cause of a number of listeriosis outbreaks worldwide. The risk of a L. monocytogenes illness or an outbreak linked to a specific food product is affected by a combination of (i) the likelihood a given food will become contaminated with L. monocytogenes , (ii) the food's ability to support the growth of L. monocytogenes , (iii) possible inactivation steps before consumption (e.g., cooking), (iv) susceptibility of the products’ consumers, (v) the dose of L. monocytogenes ingested from consumption of the contaminated food, and (vi) the virulence potential of the L. monocytogenes strain(s) present in the food. The mean r (i.e., the probability of a person becoming ill from one cell of L. monocytogenes ) is estimated to range from 7.9 × 10 −12 to 9.6 × 10 −9 depending on the underlying conditions of the person (e.g., age, pregnancy, and other co‐morbidities) (Pouillot et al., ). Due to the low probability of illness from a single cell, it is not likely for a product to be contaminated with L. monocytogenes at a level high enough to cause illness without L. monocytogenes growth in the product. Therefore, the ability of L. monocytogenes to grow in a given product plays an important role in its ability to cause an infection. Some RTE foods inherently carry a lower risk of causing L. monocytogenes infections given that they either (i) possess intrinsic characteristics (i.e., pH <4.4, water activity <0.92) or (ii) undergo processing, handling, and/or storage conditions that can restrict L. monocytogenes growth (US Food and Drug Administration, ). However, it is important to note that, while a given food can inherently pose a lower risk due to the inability of L. monocytogenes to replicate in it, it is still possible for it to cause illnesses. In particular, the risk of illness increases when the state of the food product is changed in a way that allows it to support the growth of L. monocytogenes prior to consumption. This can be illustrated in a multistate listeriosis outbreak associated with caramel apples (Angelo et al., ). While both apples and caramel represent foods with limited ability to support L. monocytogenes growth, due to the apples’ typical low pH (e.g., <4.0) and caramel's low water activity (<0.80) (Ward et al., ), it was hypothesized that the process of piercing the stems of apples with a stick and covering them with caramel could create a microenvironment at the caramel layer–apple surface interface, where access to moisture and nutrients (e.g., from excreted apple juices) would be sufficient to support L. monocytogenes growth. This hypothesis has been supported by empirical research studies that observed the growth of L. monocytogenes in caramel apples (Glass et al., ; Salazar et al., ). Therefore, while apples are not an inherently high‐risk product (also due to the containment of nutrients within their waxy skin) as supported by several studies that have not observed growth of L. monocytogenes on whole intact apples (Kroft et al., ; Salazar et al., ; Sheng et al., ), downstream processing of the apples in this specific instance increased their risk of causing illness. In addition, the importance of Listeria control in frozen vegetables and other frozen products is becoming more apparent. 
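To put the dose‐response figures quoted earlier in this section into perspective, a short calculation is helpful. The sketch below assumes the exponential dose‐response model commonly used in quantitative L. monocytogenes risk assessments, P(illness) = 1 − exp(−r × dose); the model choice and the per‐serving doses are illustrative assumptions rather than values given in this review, while the two r values are those quoted above.

```python
import math

def p_illness(dose_cfu, r):
    """Exponential dose-response sketch (an assumed model): probability of illness from a
    single serving containing `dose_cfu` cells, where `r` is the per-cell probability."""
    return 1.0 - math.exp(-r * dose_cfu)

# r values quoted above (least vs. most susceptible subpopulations), with
# hypothetical per-serving doses before and after growth in the product.
for r in (7.9e-12, 9.6e-9):
    for dose in (1e2, 1e4, 1e7):
        print(f"r = {r:.1e}, dose = {dose:.0e} CFU -> P(illness) ~ {p_illness(dose, r):.1e}")
```

Even at the upper‐end r of 9.6 × 10⁻⁹, a serving carrying 100 CFU corresponds to a risk on the order of one in a million under this assumed model, whereas a serving carrying 10⁷ CFU after growth corresponds to a risk approaching one in ten, which is why growth in the product, rather than the initial contamination level alone, dominates the public health risk.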
Frozen vegetables are not traditionally considered an RTE product, as consumers are generally instructed to cook these products prior to consumption. However, fruit and vegetable smoothies have become increasingly popular and are often prepared using frozen fruits and vegetables (e.g., berries, spinach, kale), which are generally not cooked prior to blending (Zoellner et al., ). While L. monocytogenes is unable to grow at freezing temperatures (<0°C), it can survive (Azizoglu et al., ). Thus, if these smoothies are not consumed immediately following preparation and are instead left at temperatures that permit L. monocytogenes growth (e.g., room temperature, refrigeration temperatures) for sufficient time, L. monocytogenes can replicate to a level that increases the likelihood of causing an illness. Similarly, preparation of shakes or smoothies from ice cream with subsequent storage at temperatures that allow L. monocytogenes growth can convert a product that would be considered low risk (due to its inability to support L. monocytogenes growth at freezing temperatures) into a high‐risk product; this scenario is suspected to have contributed to a 2010–2015 listeriosis outbreak linked to ice cream (Conrad et al., ). As such, it is important to consider all potential uses of a product when designing a food safety program, which could include effective cooking labels and instructions as one component of a food safety plan, in order to decrease both the public health and the business risks associated with that product. In addition, when assessing the public health risk associated with different RTE products, one must consider the target consumers, as some individuals are at a higher risk of infection, with elderly, pregnant, and immunocompromised individuals being particularly susceptible to systemic listeriosis infections. For instance, in a 2010 hospital‐acquired listeriosis outbreak linked to contaminated diced celery, eight of the 10 outbreak case patients were over the age of 65, and all (10/10) case patients were reported to have at least one underlying condition that rendered them immunocompromised or had recently received immunosuppressive treatments that could have increased their susceptibility to listeriosis (Gaul et al., ). Moreover, certain products may be consumed at an increased frequency by individuals with an increased risk of acquiring listeriosis (some products may even be specifically targeted toward one of these groups). For example, in the 2010–2015 listeriosis outbreak linked to ice cream, four of the 10 cases were reported to have consumed ice cream products implicated in the outbreak in milkshakes while hospitalized for nonrelated ailments (i.e., they were immunocompromised) (Conrad et al., ). As ice cream‐based milkshakes both (i) represent a common calorie‐dense source of nutrition for hospitalized patients, especially those restricted to soft or liquid diets (Okkels et al., ) and (ii) may be at higher risk of temperature abuse and L. monocytogenes growth compared to whole intact ice cream products, this example illustrates how combinations of different factors can increase the public health risk of a product that may otherwise be considered low risk. Hence, an assessment of the listeriosis‐associated public health risk of an RTE product should also consider how products are handled/manipulated prior to consumption (with a focus on those that increase the potential for L. 
monocytogenes growth) as well as the target consumers of said products and their risk of acquiring foodborne listeriosis. The outcomes of these assessments may indicate the need for additional control strategies, such as specific labeling and detailed preparation instructions. 3.2 Business risks While the public health risk (i.e., the risk of human listeriosis) associated with a product would typically be the driver of L. monocytogenes ‐focused food safety efforts, individual firms may also want to assess the business and enterprise risks associated with Listeria . The predominant business risks associated with Listeria typically relate to (i) human disease cases and outbreaks linked to a product and (ii) recalls due to detection of L. monocytogenes contamination in product or repeat L. monocytogenes (or possibly even repeat Listeria ) detection in the processing environment. In this context, it is important to note that in some countries (e.g., the United States), any RTE product that tests positive for L. monocytogenes would be considered adulterated and hence would have to be recalled, even if the product represents an extremely low public health risk (e.g., sunflower seeds) and even if there are no associated human disease cases. Therefore, L. monocytogenes may pose a reasonably high enterprise risk for some products that represent a limited public health risk, which could lead to situations where it may be prudent for firms to make considerable investments into L. monocytogenes control, even for products where this organism represents a limited public health risk. Enterprise risks associated with L. monocytogenes detection differ considerably based on the regulatory environments. For example, in the United States, there is a so‐called “zero‐tolerance” policy for L. monocytogenes in RTE foods. This means no detectable L. monocytogenes may be present in two 25‐g samples of FDA‐regulated products and one 25‐g sample of USDA‐regulated products (Archer, ). If L. monocytogenes is found in any RTE product in the United States, the product must be recalled (if a product is in commerce), regardless of whether any known illnesses have been traced back to the product. If the product is still under the company's control (i.e., not in commerce), the food must be reprocessed with a validated listericidal treatment, repurposed such that it will not be consumed by humans or animals, or destroyed. In addition, it must be determined if other product lots are also potentially contaminated, regardless of whether the products have entered commerce (US Food and Drug Administration, ). In the European Union (EU), on the other hand, there are different criteria for RTE foods depending on the potential for L. monocytogenes growth. For example, RTE foods that are not able to support the growth of L. monocytogenes may have up to 100 CFU/g of L. monocytogenes in a given product for the entirety of the product's shelf life (European Commission, ). However, for RTE foods that support the growth of L. monocytogenes , or RTE foods without data to prove the product's ability to limit L. monocytogenes growth to 100 CFU/g at the end of the product's shelf life, L. monocytogenes must be absent in five 25‐g samples of the product at the time the product leaves the production facility (European Commission, ). Similar to the approach in the EU, in Canada, food products are grouped into two categories (Health Canada, ). Category 1 products are those known to support the growth of L. 
monocytogenes and are commonly implicated in outbreaks (e.g., deli‐meats, soft cheeses); for category 1 products, L. monocytogenes must be absent in five 25‐g samples (analyzed either separately or composited) of the product. Category 2 products are those that support limited (e.g., fresh‐cut fruits and vegetables) to no growth (e.g., ice cream, hard cheeses) of L. monocytogenes ; for category 2 products, L. monocytogenes levels must be less than 100 CFU/g in five distinct 10‐g samples of the product (Health Canada, ). Quantification of the business risk associated with L. monocytogenes should take into account a number of different costs, including (i) costs of illnesses (which a company may be liable for); (ii) costs of product destruction or reprocessing (if permitted); (iii) legal fees; (iv) loss of sales of destroyed product; and (v) loss of future sales due to reputational impacts or (temporary) facility shutdowns. The magnitude of the business risk associated with L. monocytogenes can be illustrated by the number of recalls due to L. monocytogenes contamination, or suspected contamination. For example, in the United States alone, there were 47 L. monocytogenes ‐related recalls in 2023, including 18 recalls associated with fresh produce, 12 recalls associated with dairy products, five recalls associated with deli meats or sandwiches/salads containing deli meats, and one recall associated with smoked seafood (US Department of Agriculture Food Safety Inspection Service, ; US Food and Drug Administration, ).
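To make the dose–response reasoning in Section 3.1 concrete, the minimal sketch below applies an exponential dose–response model, P(illness) = 1 − (1 − r)^N, where N is the number of L. monocytogenes cells ingested; this is the type of model from which per-cell r values such as those cited above are derived. The serving size, contamination levels, and growth scenario used here are illustrative assumptions only, not values from any specific product or risk assessment.

```python
# Illustrative only: exponential dose-response model for L. monocytogenes,
# P(illness) = 1 - (1 - r)**N, with r = per-cell probability of illness and
# N = number of cells ingested in a serving.

def p_illness(r: float, cells_ingested: float) -> float:
    """Probability of illness for a per-cell risk r and an ingested dose."""
    return 1.0 - (1.0 - r) ** cells_ingested

serving_g = 100.0          # assumed serving size (g); illustrative
r_low = 7.9e-12            # lower end of the r range cited in the text
r_high = 9.6e-9            # upper end of the r range (more susceptible groups)

dose_no_growth = 1 * serving_g       # 1 CFU/g at consumption -> 1e2 cells
dose_after_growth = 1e6 * serving_g  # growth to 1e6 CFU/g -> 1e8 cells

for label, r in [("lower-risk consumer", r_low), ("higher-risk consumer", r_high)]:
    print(f"{label}:")
    print(f"  without growth: P(illness) ~ {p_illness(r, dose_no_growth):.1e}")
    print(f"  after growth:   P(illness) ~ {p_illness(r, dose_after_growth):.1e}")
```

Even under these rough assumptions, the calculation shows that illness probabilities remain extremely low when contamination stays at low levels, but rise by several orders of magnitude once L. monocytogenes is allowed to multiply in the product, consistent with the point made above that growth, rather than mere presence, is the main driver of listeriosis risk.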
CHALLENGES Major challenges associated with L. monocytogenes control include (i) development and consistent implementation of programs that prevent L.
monocytogenes introduction and persistence in food‐associated environments (e.g., processing facilities); (ii) management of raw material contamination in products that do not have an effective kill step (e.g., fresh produce, cold‐smoked seafood, raw milk, and raw milk dairy products); (iii) implementation of appropriate root cause analysis (RCA) procedures that allow for identification of sources of environmental and product contamination; and (iv) appropriate use of subtyping methods, including WGS, and appropriate interpretation of the resulting data. 4.1 Development and consistent implementation of programs that prevent L. monocytogenes introduction and persistence in food‐associated environments A continued main challenge the industry faces is the development and consistent implementation of programs that (i) prevent L. monocytogenes introduction and (ii) prevent L. monocytogenes persistence in food‐associated environments (with persistence being defined as survival of a specific Listeria subtype in a processing facility over time; see below for details). A related challenge is for regulatory agencies to develop, implement, and enforce regulations that encourage industry to develop and implement stringent programs that minimize the likelihood of L. monocytogenes introduction and persistence. For example, regulatory agencies may want to eliminate negative consequences associated with Listeria detection in a processing environment to encourage processors to find Listeria if it is present. However, some regulations or practices may, unintentionally, provide incentives for companies to not implement stringent environmental and finished product testing strategies. For instance, this may be the case if consequences of positive test results (which are expected due to the frequent presence of L. monocytogenes in the environment) are not commensurate to public health risk (e.g., recalls or other severe regulatory consequences due to finding a low level of L. monocytogenes in a single finished product sample that does not support its growth or finding L. monocytogenes in nonfood contact surfaces in a processing facility). In addition, an allowable level for L. monocytogenes in finished products (e.g., those that do not support Listeria growth) may incentivize companies to more aggressively test, improving their ability to identify contamination in the processing facility or in raw materials that could lead to finished product contamination with a frequency or at levels likely to cause illness (Farber et al., ). More specifically, regulatory agencies could not only set limits of 100 CFU/g for foods that do not support growth (which is consistent with CODEX Alimentarius guidelines [Luber, ]) but could also set lower limits using approaches such as three‐class sampling plans, as detailed by Farber et al. . Minimizing L. monocytogenes introduction from outside environments is particularly challenging due to the high prevalence of L. monocytogenes in many different environments, as detailed above. Achieving “zero” introduction of L. monocytogenes into processing facilities is essentially impossible, particularly if raw materials, which in most cases would have to be expected to at least occasionally be contaminated, are introduced in a facility. Key strategies to minimize the introduction of L. 
monocytogenes include GMPs (e.g., employees should wear clean coats and boots designated for use only within the processing area), regular cleaning and sanitation of trailers used to transport raw materials, regular cleaning and sanitation of forklifts, and regular cleaning and sanitation of crates and bins (including trash bins) that carry materials inside and outside of the facility. However, it is important to note that interventions that introduce additional moisture into the facility (e.g., door foamers and foot baths) can facilitate L. monocytogenes growth and survival if not properly maintained (e.g., if appropriate sanitizer concentrations are not consistently maintained). Verifying sanitizer concentrations as well as testing the areas around foot baths and foamers for Listeria presence can be useful in identifying a lack of Listeria control. Prevention or management of L. monocytogenes persistence in food facilities is a well‐documented issue for the industry. Persistent Listeria refers to the Listeria that remains in the processing environment for an extended time and is able to survive cleaning and sanitation. Once introduced into the processing environment, Listeria can enter niches within the equipment or building infrastructure where cleaners and sanitizers are not able to reach and eliminate its presence, allowing it to become persistent. “Transient Listeria ,” on the other hand, refers to Listeria that is introduced into the processing environment but is subsequently removed during regular cleaning and sanitation activities (Belias et al., ). Since Listeria is prevalent in a variety of environments, it is expected for Listeria to enter the processing environment on occasion. As long as L. monocytogenes is quickly (e.g., by the end of a 1‐day shift) removed by cleaning and sanitation and not allowed to survive in a niche within the processing environment, it is unlikely to pose a substantial public health or business risk, thus making transient L. monocytogenes a lesser concern compared to persistent L. monocytogenes . An effective Listeria sampling program is key to identifying the presence and persistence of Listeria . For any RTE foods that are at risk of exposure to the processing facility environment, appropriate testing programs need to include a robust environmental monitoring program and may also include finished product testing, although often at substantially lower frequencies compared to environmental monitoring testing. Finished product testing, particularly if conducted in the absence of a strong environmental monitoring program, is of limited value, as L. monocytogenes is often present sporadically and at low levels on food samples, which can make it difficult to identify contaminated products via final product testing. On the other hand, environmental monitoring programs often allow for early detection of potential sources and routes of contamination. A key challenge with environmental monitoring programs, however, is that many lack clear and defined goals, such as validation and verification of Listeria control strategies.
in a finished product would require further speciation to clearly define the risk associated with the contamination. Meanwhile, when performing environmental monitoring, testing for Listeria spp. generally represents the preferred testing strategy, as several nonpathogenic species of Listeria often inhabit similar environments as L. monocytogenes , and thus, Listeria spp. can represent an index organism for L. monocytogenes (Chapin et al., ). Overall, while strong food safety programs (including environmental monitoring programs) can be costly, they can provide a significant return on investment if they facilitate the identification and elimination of Listeria within the processing environment before detection by regulatory agencies or before a public health issue emerges. In order to identify persistent Listeria (and differentiate them from transient Listeria ), subtyping (e.g., PFGE or WGS) can be used to determine which subtype or strain of Listeria is present. If the same or related subtypes are found over time, it is often an indication of persistent Listeria or continuous reintroduction of the same subtype. While identifying if a given subtype is persistent or continuously being reintroduced into the environment can be notoriously challenging, certain environmental sampling strategies (e.g., performing preoperational environmental sampling) can help differentiate between these two scenarios. For example, subtype characterization of isolates obtained preoperation (i.e., after cleaning and sanitation, but prior to the next production cycle) can provide strong evidence for persistence (if isolates obtained over time share the same or closely related subtypes) (Bolten, Lott, et al., ; Bolten, Ralyea, et al., ). On the other hand, identification of isolates that are obtained mid‐operation (e.g., at least 3–4 h into a given production cycle) and share the same or closely related subtypes could also be due to reintroduction. Importantly, effective food safety programs should be put in place to protect against persistent Listeria . These programs must emphasize proper sanitary design of equipment (i.e., elimination of areas within the equipment or facility infrastructure that are difficult to clean and sanitize); a one‐directional flow of employees, equipment, and food products through the processing area; and proper cleaning and sanitation programs, including disassembly of equipment to a level that allows for effective elimination of Listeria from niches through cleaning and sanitation activities. In many cases, effective programs may include regular more in‐depth cleaning and sanitation of both equipment (known as “Periodic Equipment Cleaning” [PEC]) and infrastructure (known as “Periodic Infrastructure Cleaning” [PIC]), using validated frequency as well as documentation as part of a Master Sanitation schedule. In addition to persistent Listeria and transient Listeria , there is a scenario that can be labeled “persistent transient Listeria ,” which refers to the continual reintroduction of Listeria , representing a single or multiple different subtypes, at a given site or area in the processing environment (Belias et al., ). While this scenario may not be as much of a concern as persistent Listeria , the continuous reintroduction of Listeria to a given area also indicates a lack of proper Listeria control. Persistent transient Listeria is likely to be introduced into the processing environment with raw materials, crates, and employees, among other routes. 
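As a simplified illustration of how subtyping results from preoperational environmental samples can be screened for candidate persistence, as discussed above, the sketch below flags site–subtype combinations that recur over an extended period. The site names, subtype labels, and the two-detection/60-day thresholds are hypothetical assumptions chosen for illustration; in practice, persistence calls require expert review of the sampling history and subtyping data.

```python
# Illustrative screening of preoperational environmental monitoring data for
# candidate persistent Listeria: the same subtype recovered repeatedly at the
# same site over an extended period is flagged for follow-up (e.g., RCA).
from collections import defaultdict
from datetime import date

# Hypothetical isolate records: (sampling site, subtype label, sampling date).
# All samples assumed to be collected preoperation (after cleaning/sanitation).
isolates = [
    ("drain_3", "ST121-A", date(2023, 1, 10)),
    ("drain_3", "ST121-A", date(2023, 3, 7)),
    ("drain_3", "ST121-A", date(2023, 6, 1)),
    ("slicer_frame", "ST5-B", date(2023, 2, 14)),
    ("floor_zone2", "ST9-C", date(2023, 4, 3)),
]

MIN_DETECTIONS = 2   # illustrative threshold, not an established criterion
MIN_SPAN_DAYS = 60   # illustrative minimum time span between detections

by_site_subtype = defaultdict(list)
for site, subtype, sampled_on in isolates:
    by_site_subtype[(site, subtype)].append(sampled_on)

for (site, subtype), dates in sorted(by_site_subtype.items()):
    span = (max(dates) - min(dates)).days
    if len(dates) >= MIN_DETECTIONS and span >= MIN_SPAN_DAYS:
        print(f"Candidate persistent subtype {subtype} at {site}: "
              f"{len(dates)} preoperational detections over {span} days")
```

In this toy data set, only the repeated recovery of the same subtype from the same drain over several months would be flagged, mirroring the logic described above for distinguishing candidate persistence from sporadic, transient findings.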
In order to reduce the prevalence of persistent transient Listeria , more frequent cleaning and sanitation, improved supplier verification, and additional controls to prevent employees from tracking Listeria into the processing environment (e.g., captive footwear programs) should be considered. 4.2 Management of raw material contamination in products that do not have an effective kill step While Listeria can be easily inactivated by heat, there are a number of products and raw materials that do not receive an effective kill step during processing. Fresh produce, cold‐smoked seafood products, and raw milk dairy products represent examples of commonly consumed RTE products that do not undergo a kill step as part of their processing. For these products in particular, robust supplier verification programs for raw materials are essential to reduce the likelihood that raw materials lead to contamination of the final product. This should include verifying that a supplier has implemented (i) an effective environmental monitoring program if appropriate (e.g., the supplier runs the product through a packinghouse or processing facility) and (ii) thorough cleaning and sanitation programs. In addition, nonthermal and thermal treatments that reduce L. monocytogenes (although not at the level of a “kill step,” which is typically defined as a 5‐log reduction) can be used. For example, cold‐smoked seafood producers may use antimicrobial washes of incoming raw materials or the addition of antimicrobial treatments (e.g., nisin) to final products to reduce Listeria levels and growth (Jahncke et al., ). Produce packinghouses and fresh‐cut facilities may also use antimicrobials in wash water to reduce cross‐contamination between produce items (Gil et al., ). 4.3 Implementation of appropriate RCA procedures that allow for the identification of sources of environmental and product contamination Since L. monocytogenes contamination can originate from a variety of sources and can be facilitated by a number of practices (or lack of practices), identifying the sources of L. monocytogenes found in finished products or the environment, as well as contributing factors (e.g., improper execution of preharvest risk assessments, deficiencies in sanitation standard operating procedures [SSOPs]), can be difficult. While a formal, well‐defined RCA approach provides one of the most effective ways to define the root cause of a Listeria “issue,” implementing good RCA procedures remains a challenge for many companies. RCA is a strategy that aims to identify the true or initial cause of a final event, such that without this initial cause, the final event could not occur. Therefore, using RCA pivots corrective actions from being responsive in nature to being preventive (i.e., with proper RCA, similar problems will be prevented from happening in the future). In order to perform an RCA, a multidisciplinary team (e.g., someone from quality and food safety, maintenance, and operations) should be formed.
Due to the complexity of many food safety problems, getting input from a diverse set of thinkers can help to identify novel causes, as well as innovative corrective actions. Once assembled, the team should clearly define the problem and discuss what information is needed to help solve the problem; the required additional information or data should then be gathered. There are a variety of techniques that can then be used to identify root causes, including fishbone diagrams, the “five whys” technique, change analysis, and fault tree analysis, among others (The PEW Charitable Trust, ); each technique can be used on its own or in combination, and each technique may be most appropriate for different situations. For example, with respect to conducting an RCA related to a Listeria contamination event, one might opt to first use fishbone diagrams to visualize all possible components of a given problem, followed by the “five whys” technique (i.e., continually asking why some event or practice occurred, or is the way it is, until reaching the root cause) to further identify the root cause associated with each bone of the diagram that has been deemed important. Some of these RCA techniques have been successfully utilized in a handful of instances (Belias et al., , ; US Food and Drug Administration, ) toward identifying root causes of Listeria contamination in produce packinghouses (Table ) and may be similarly employed in other food industry sectors to improve management of Listeria , as well as other food safety‐related business and enterprise risks. An RCA for identifying the source of contamination or persistent Listeria is typically part of a “for‐cause” investigation that also includes efforts to gather sufficient data to facilitate the RCA. This type of “for‐cause” investigation typically requires an intensified sampling of the implicated parts of the processing environment to identify contaminated sites and contamination sources; this type of sampling often involves the collection of hundreds to thousands of samples. While collection of environmental samples is typically key for a Listeria RCA, raw material and finished product testing can also be useful and needed. Importantly, sampling as part of RCAs often represents an iterative process where the RCA identifies possible root causes that require sampling for confirmation (or exclusion), often followed by additional sampling to guide further discussions on the root cause and potential corrective actions to eliminate and prevent similar contamination problems in the future. In addition to identifying actions needed to correct and prevent the occurrence of future Listeria contamination issues at the establishment level, lessons learned from RCAs can sometimes guide or inform further industry‐wide improvements. For example, key findings from the FDA's RCA of a growing/packing operation that was implicated in a 2011 listeriosis outbreak linked to cantaloupe (US Food and Drug Administration, ) were used to inform both (i) regulatory requirements for sanitation of equipment and infrastructure used for fresh produce packing and (ii) industry guidance for managing food safety risks during cantaloupe production (The PEW Charitable Trust, ; US Food and Drug Administration, ; Western Growers Association, ) (Table ). 
Similarly, in response to findings from the FDA's investigation of a 2010–2015 listeriosis outbreak linked to ice cream, and internal RCAs performed by the company implicated in this outbreak (Blue Bell Creameries, Inc., , ; Conrad et al., ), the FDA initiated more frequent inspections and heightened surveillance of L. monocytogenes in US ice cream production environments (US Food and Drug Administration, ). 4.4 Appropriate use of subtyping methods, including whole genome sequencing, and appropriate interpretation of the resulting data In many parts of the world, subtyping (sometimes referred to as “DNA fingerprinting”) methods, particularly WGS, are increasingly used as part of efforts to manage L. monocytogenes (Alegbeleye & Sant'Ana, ; Jackson et al., ). These methods may be used by either (i) an individual company or (ii) regulatory and public health agencies. For example, individual companies may perform subtyping of all L. monocytogenes or all Listeria spp. isolates that are obtained as part of their routine environmental monitoring programs. Routine subtyping of all isolates helps companies to identify persistent contamination, particularly if positive test results are only sporadically obtained and subtyping is needed to determine whether two positive samples represent contamination with the same Listeria or independent events. In addition, some companies do not perform routine subtyping but may perform subtyping only as part of investigations and RCA efforts. Many companies experience challenges with the use of molecular subtyping methods, including (i) the decisions of whether and when to perform subtyping, (ii) the decision of which subtyping method to use, and (iii) performing and interpreting the data outputs (particularly for WGS). The decisions of whether and when to perform subtyping are complex and involve a number of considerations (e.g., regulatory climate, food safety budget, history of Listeria issues, etc.), but companies with a strong food safety culture increasingly use these tools. Companies that require advanced information to successfully identify the root cause of a Listeria “issue” also typically use subtyping. As for the selection of subtyping methods, commonly used methods include ribotyping, PFGE, and WGS. While the industry often still uses methods such as PFGE and ribotyping, due to typically lower costs, shorter turnaround times, and fewer legal and liability concerns, WGS is being increasingly used by the industry (Jagadeesan et al., ). Public health and regulatory agencies increasingly use WGS to characterize human and food‐associated L. monocytogenes isolates as part of either routine inspections or for‐cause investigations. For example, in the United States, WGS is performed on all human L. monocytogenes isolates, as well as on any isolates from foods and food processing environments obtained by either the FDA or USDA FSIS. These WGS data are uploaded into the National Center for Biotechnology Information (NCBI) pathogen detection database ( https://www.ncbi.nlm.nih.gov/pathogens/ ) and hence are publicly available, even though the metadata provided do not typically allow for identification of the facility an isolate was obtained from. As part of this process, isolates from foods and food processing environments are also clustered with closely related human isolates and other isolates from foods and food processing facilities. 
This clustering, and subsequent follow‐up genome comparisons, can be used to identify (i) possible human cases that may be linked to a product or facility (providing a hypothesis for subsequent epidemiological investigations) or (ii) possible instances where a specific strain may persist in an environment. However, these analyses do not identify definitive linkages, and thus WGS data need to be interpreted in conjunction with epidemiological data (Alegbeleye & Sant'Ana, ). Industry often struggles with these data analyses, particularly since they often need to be performed and interpreted rapidly to make correct decisions on recalls, recall scopes, and other matters with considerable public health and business impact. While some guidance documents and reviews on interpretation of WGS have been published (Jagadeesan et al., ; Pightling et al., ), appropriate interpretation of WGS data and associated decision‐making are not trivial and should typically be conducted in consultation with experts to avoid costly errors and misinterpretations.
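As a simplified illustration of the clustering step described above, the sketch below groups isolates whose pairwise SNP distances fall at or below a chosen threshold into connected components (single-linkage clusters). The isolate names, distance values, and the 20-SNP threshold are hypothetical assumptions for illustration only; actual analyses, such as those underlying the NCBI Pathogen Detection clusters, involve additional quality control and, as noted above, must be interpreted alongside epidemiological data.

```python
# Illustrative single-linkage grouping of isolates by pairwise SNP distance:
# isolates within THRESHOLD SNPs of each other (directly or transitively)
# end up in the same cluster.
from collections import defaultdict

THRESHOLD = 20  # illustrative SNP threshold; real investigations vary

# Hypothetical pairwise SNP distances between isolates (symmetric).
snp_dist = {
    ("env_A", "env_B"): 3,
    ("env_A", "clinical_1"): 12,
    ("env_B", "clinical_1"): 11,
    ("env_A", "env_C"): 250,
    ("env_B", "env_C"): 247,
    ("clinical_1", "env_C"): 255,
}
isolate_names = {name for pair in snp_dist for name in pair}

# Build an adjacency list linking isolates at or below the threshold.
adjacent = defaultdict(set)
for (a, b), dist in snp_dist.items():
    if dist <= THRESHOLD:
        adjacent[a].add(b)
        adjacent[b].add(a)

# Report connected components (single-linkage clusters).
unvisited = set(isolate_names)
while unvisited:
    seed = unvisited.pop()
    cluster, frontier = {seed}, [seed]
    while frontier:
        current = frontier.pop()
        for neighbor in adjacent[current] & unvisited:
            unvisited.discard(neighbor)
            cluster.add(neighbor)
            frontier.append(neighbor)
    print("Cluster:", sorted(cluster))
```

In this toy example, the two environmental isolates and the clinical isolate fall into one cluster, while the more distant environmental isolate forms its own cluster; such a result would generate a hypothesis for further epidemiological and RCA follow-up rather than a definitive link.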
These WGS data are uploaded into the National Center for Biotechnology Information (NCBI) pathogen detection database ( https://www.ncbi.nlm.nih.gov/pathogens/ ) and hence are publicly available, even though the metadata provided do not typically allow for identification of the facility an isolate was obtained from. As part of this process, isolates from foods and food processing environments are also clustered with closely related human isolates and other isolates from foods and food processing facilities. This clustering, and subsequent follow‐up genome comparisons, can be used to identify (i) possible human cases that may be linked to a product or facility (providing a hypothesis for subsequent epidemiological investigations) or (ii) possible instances where a specific strain may persist in an environment. However, these analyses do not identify definitive linkages, and thus WGS data need to be interpreted in conjunction with epidemiological data (Alegbeleye & Sant'Ana, ). Industry often struggles with these data analyses, particularly since they often need to be performed and interpreted rapidly to make correct decisions on recalls, recall scopes, and other matters with considerable public health and business impact. While some guidance documents and reviews on interpretation of WGS have been published (Jagadeesan et al., ; Pightling et al., ), appropriate interpretation of WGS data and associated decision‐making are not trivial and should typically be conducted in consultation with experts to avoid costly errors and misinterpretations. CONCLUSION Listeria monocytogenes and other Listeria spp. are prevalent in a variety of environments, including natural and urban environments as well as primary production, processing, and retail environments, among others. As such, there are a variety of points along the supply chain where RTE food products can become contaminated with L. monocytogenes . While L. monocytogenes poses substantial public health risks, it also poses business risks, including when it is found in RTE products that represent a low risk of human disease (e.g., products that do not support L. monocytogenes growth [Farber et al., ]). As such, risk‐based stringent Listeria control programs should be implemented, which include emphasis on GMPs; regular cleaning and sanitation programs; the use of equipment with sanitary designs (i.e., equipment without niches); and appropriate hygienic zoning (e.g., one‐directional flow of employees, equipment, and food products). In addition, food products and their processing environments should be monitored for Listeria presence and persistence; these environmental monitoring programs need to be linked to specific goals (e.g., validation and verification of certain food safety programs, such as sanitation programs). Furthermore, robust supplier verification programs and nonthermal antimicrobial treatments are especially important for RTE products produced without a kill step. The large number of potential sources of Listeria throughout the food supply chain makes it difficult to identify the true source of contamination when detected in the environment or products. Therefore, the use of formal and well‐executed RCA for “for‐cause” investigations and subtyping tools is thus essential in investigations of Listeria positives in order to not only address the specific issue(s) at hand but also create and implement control measures to prevent similar events from occurring in the future. 
Alexandra Belias : Conceptualization; investigation; writing—original draft; writing—review and editing; project administration. Samantha Bolten : Investigation; writing—review and editing; visualization. Martin Wiedmann : Conceptualization; funding acquisition; writing—review and editing; supervision; resources. Martin Wiedmann serves as a paid consultant for Neogen Corporation on environmental monitoring for Listeria . The other authors declare no conflicts of interest.
Harnessing Pharmacogenomics in Clinical Research on Psychedelic‐Assisted Therapy
Pharmacodynamics The psychedelic literature provides very limited evidence on pharmacodynamics. In two studies, the impact of mutations in the 5‐HT 2A receptor on the response to psychedelics was investigated in vitro . Receptor–ligand interaction experiments showed that the Ala230Thr and His452Tyr mutations in 5‐HT 2A receptor gene ( HTR2A ) led to a sevenfold decrease in psilocin signaling potency compared with wild‐type. In contrast, the Ala447Val variant demonstrated a threefold increase in 5‐MeO‐DMT potency and also enhanced the potency of mescaline. Also, Thr25Asn and Asp48Asn mutations increased potency of mescaline, while the Ser12Asn substitution demonstrated an even greater, ninefold increase in potency. However, apart from the His452Tyr variant that has a frequency of 7.9% in the human population, the other variants are rare with less than 1% frequency. While genetic variants with very low population frequency may be impractical to test for, the His452Tyr (rs6314) mutation could be valuable to investigate if it results in significant differences in the response to psilocybin treatment. From psychiatric literature, the His452Tyr polymorphism has been reported to affect clozapine‐induced signaling networks and is associated with a poorer response to treatment with clozapine, an antipsychotic which has a high affinity for the 5‐HT 2A receptor. However, given the lack of data on psychedelics in humans, it is important to first determine whether this mutation impacts the efficacy of psilocybin (psilocin) treatment. No data were found on genetic mutations affecting the binding of psychedelics to receptors other than 5‐HT 2A , though we acknowledge that mutations in genes encoding other receptors where psychedelics bind, such as other serotonin and dopamine receptors, could hypothetically influence the response to these substances. Pharmacokinetics Significantly more evidence exists regarding pharmacokinetic effects, primarily involving CYP enzymes, where changes in their activity or function can impact the effects of psychedelics. For each psychedelic compound, a short overview of its metabolism including the role of enzymes in its breakdown and bioavailability, is provided separately. Refer to Figure for a visual summary of the metabolism of the discussed drugs, with the participating enzymes indicated. Lysergic acid diethylamide Lysergic acid diethylamide (LSD) is a synthetic psychoactive compound that is primarily metabolized in the liver through N‐dealkylation and oxidation. In humans, 2‐oxo‐3‐hydroxy‐LSD (O‐H‐LSD) is considered as the major metabolite. Among the other metabolites, a notable one is N‐desmethyl‐LSD (nor‐LSD), which has a half‐life longer than LSD and shows a similar binding affinity to 5‐HT 1A and 5‐HT 2A receptors, suggesting that the compound may also possess hallucinogenic properties, in contrast to the inactive O‐H‐LSD metabolite. Enzyme inhibition experiments on pooled human liver microsomes (HLMs) indicated that CYPs 1A2 and 3A4 have a major role in metabolism, with both, along with 2C9 and 2C19, involved in the initial metabolic steps. The significance of CYP3A4 in the metabolism of LSD in HLMs was also confirmed in another study, which demonstrated the 1A2, 2C9, 2E1, and 3A4 participation in the formation of O‐H‐LSD and 2D6, 2E1, and 3A4 involvement in the metabolism of LSD into nor‐LSD. 
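The human data that follow are stratified by CYP2D6 metabolizer phenotype (poor, intermediate, normal, and ultrarapid metabolizers: PMs, IMs, NMs, and UMs). These labels are not measured directly but are usually inferred from the genotyped diplotype through an activity-score system, broadly as in the CPIC/PharmVar framework. The minimal sketch below illustrates that translation step only; the allele values, cut-offs, and the helper name cyp2d6_phenotype are simplified assumptions for demonstration, and authoritative assignments should come from current CPIC tables.

# Illustrative CYP2D6 activity-score values for a few common alleles.
# Values and cut-offs are simplified for demonstration only; consult the
# current CPIC/PharmVar tables for authoritative assignments.
ALLELE_ACTIVITY = {
    "*1": 1.0,   # normal function
    "*2": 1.0,   # normal function
    "*41": 0.5,  # decreased function
    "*10": 0.25, # decreased function
    "*4": 0.0,   # no function
    "*5": 0.0,   # whole-gene deletion, no function
}

def cyp2d6_phenotype(allele_a: str, allele_b: str, copies_b: int = 1) -> str:
    """Translate a CYP2D6 diplotype into a metabolizer phenotype by summing
    per-allele activity scores (copies_b crudely models a gene duplication)."""
    score = ALLELE_ACTIVITY[allele_a] + ALLELE_ACTIVITY[allele_b] * copies_b
    if score == 0:
        return f"poor metabolizer (activity score {score})"
    if score <= 1:
        return f"intermediate metabolizer (activity score {score})"
    if score <= 2.25:
        return f"normal metabolizer (activity score {score})"
    return f"ultrarapid metabolizer (activity score {score})"

print(cyp2d6_phenotype("*4", "*4"))              # no functional copies -> PM
print(cyp2d6_phenotype("*41", "*41"))            # two decreased-function copies -> IM
print(cyp2d6_phenotype("*1", "*1", copies_b=2))  # duplicated normal allele -> UM

In practice, laboratories report both the diplotype and the derived phenotype, and phenoconversion by co-medication (discussed later in this review) can override the genotype-predicted category.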
A human study showed that individuals with non‐functional CYP2D6 ( n = 7) had higher plasma LSD levels and slower metabolism of the drug compared with those with functional 2D6 enzymes ( n = 74), after being treated with ~ 100 μg LSD. Increased levels of O‐H‐LSD in PMs were also observed, suggesting that this conversion can occur independently of 2D6, but no associations of other CYPs (1A2, 3A4, C19, C9, B6) in this study were detected; however, this could also be due to the limitations of the study. Compared with those with functional enzymes, PMs experienced a significantly longer duration of subjective effects and a more intense altered state of consciousness (e.g., higher ratings in impaired control and cognition, anxious ego dissolution and anxiety), which may have led to a more challenging experience with increased anxiety and potentially reduced therapeutic effects. The authors of the study concluded that a ~ 50% lower dose may be appropriate to use for PMs. Given the reported differences in LSD response between CYP2D6 phenotypic groups in humans, it would be valuable to assess the impact of CYP2D6 genotype on LSD response in clinical trials. Prospective trials that systematically and uniformly document clinical phenotype (adverse drug reactions and efficacy), pharmacokinetics, and pharmacogenomics are necessary to determine the influence of genotype and guide genotype‐informed prescribing guidelines. Clinical trials that incorporate PGx to adjust drug dosing could potentially achieve better and more consistent drug response and lead to better treatment outcomes, provided that phenotypic differences are well‐defined. These studies would also offer valuable data for future dosing guidelines if LSD‐assisted therapy becomes approved. Additionally, it may be worthwhile to retrospectively assess the impact of the CYP2D6 genotype on LSD response in clinical trials if samples are available for analysis. CYP2D6 phenotypic differences could explain some of the observed inter‐individual variations and should be considered in data analysis. Finally, considering that two in vitro studies have shown the importance of CYP3A4, it would be useful to determine the role of the enzyme on LSD effects. In particular, the infrequent CYP3A4 *22 allele, most common in Europeans (5%), has been reported to decrease the enzyme's activity and significantly influence the pharmacokinetics of several drugs. Psilocybin Psilocybin is a naturally occurring substance found in several species of mushrooms. It is a prodrug that is dephosphorylated into the pharmacologically active psilocin by alkaline phosphatases. , It is suggested that psilocin undergoes phase I metabolism by monoamine oxidase A (MAO‐A) to form an intermediate metabolite 4‐hydroxyindole‐3‐acetaldehyde (4‐HIA) which is then oxidized by aldehyde dehydrogenase (ALDH) to produce 4‐hydroxyindole‐3‐acetic acid (4‐HIAA). The 4‐HIA can be reduced to 4‐hydroxytryptophol (4‐HTP) by alcohol dehydrogenase (ADH), but this has been detected only in vitro and not in humans, likely due to rapid metabolism or lack of production. In contrast to psilocin, these metabolites were shown to have no relevant affinity or activation at the 5‐HT 1A , 5‐HT 2A , and 5‐HT 2B receptors. The suggested main pathway for psilocin is phase II metabolism where it is glucuronidated into psilocin‐O‐glucuronide, which forms ~ 80% of psilocin metabolites. 
In an analysis of 19 recombinant human UDP‐glucuronosyltransferases (UGTs) from the 1A, 2A, and 2B subfamilies, UGT1A10 was found to have the highest activity for psilocin glucuronidation. UGTs 1A8 and 1A9 also showed considerable activity in metabolizing psilocin, whereas UGTs 1A6 and 1A7 exhibited very low activity. No activity was detected in UGTs 1A1, 1A3, 1A4, 1A5, 2A1, 2A2, 2A3, 2B4, 2B7, 2B10, 2B11, 2B15, 2B17, and 2B28. Despite UGT1A10 being the most active enzyme for psilocin glucuronidation, psilocin shows low affinity for this enzyme. The glucuronidation process of psilocin by UGT1A9 is characterized by complex, biphasic kinetics, suggesting the presence of multiple substrate affinity sites. It is suggested that psilocin glucuronidation mainly occurs in the small intestine (where UGT1A10 is highly expressed, over 100 times more than in the liver) and in the liver (which has a high expression of UGT1A9). Thus, while UGT1A10 is the most active enzyme in the glucuronidation of psilocin, the significant presence of UGT1A9 in the liver suggests it may be a major contributor to the metabolic processing of psilocin in humans. However, no studies have investigated genetic variants in the UGT1A9 and UGT1A10 genes that could influence psilocin's effects. Currently, there are UGT1A9 gene variants associated with the metabolism of other drugs or their metabolites that are substrates for these enzymes (for details, see PharmGKB database at https://www.pharmgkb.org ), which potentially may be relevant for psilocin's metabolism as well. In vitro assessment of recombinant CYP enzymes has shown that while 1A2, 2B6, 2C8, 2C9, 2C19, and 2E1 did not exhibit any relevant activity, 2D6 and 3A4 were involved in the metabolism, with 2D6 rapidly metabolizing 100% of psilocin compared with 40% by 3A4. However, none of the main metabolites tested were detectable when psilocin was metabolized by CYP3A4. Analysis of human samples showed no differences in plasma psilocybin concentrations across CYP2D6 phenotypes (PM = 3, IM = 25, NM = 58, UM = 2), indicating that CYP2D6 likely plays a minor role in psilocin's effects and suggesting it is a minor pathway. However, the significance of this pathway may increase with MAO inhibition. In fact, MAO inhibitors (MAOIs) are also consumed with psilocybin to intensify its effects and hypothetically, CYP2D6 activity could influence the effects of this combination. DMT and pharmahuasca N,N ‐dimethyltryptamine (DMT) is a naturally occurring psychoactive compound found in several species of plants and is also an active component of ayahuasca. Ayahuasca is a brew that contains, in addition to the DMT, also β‐carbolines (primarily harmine, tetrahydroharmine, and harmaline) that act as MAOIs, preventing the extensive metabolism of DMT and making it orally active. In this section, “pharmahuasca” is discussed which is a pharmaceutical version of ayahuasca that involves two main components of traditional ayahuasca in controlled concentration: DMT and MAOI (such as harmine or harmaline). Harmine and harmaline are both potent inhibitors of MAO, including MAO‐A, while their metabolites harmol and harmalol as well as tetrahydroharmine (THH) have been shown to be much less potent inhibitors. , Despite THH being the second most abundant harmala alkaloid in the ayahuasca brew, , it can be suggested that THH plays rather a small role in MAO inhibition due to its weaker inhibitory potency and the high concentration of harmine. 
The major pathway of DMT metabolism is deamination by MAO‐A, forming indole‐3‐acetic acid (IAA), followed by N‐oxidation producing DMT‐N‐oxide (DMT‐NO). The latter becomes the main pathway when orally consuming either ayahuasca or pharmahuasca as the MAO pathway is inhibited by β‐carbolines. To a lesser extent, N‐methyltryptamine (NMT) is also produced. An in vitro study indicated a role of CYP2D6 and a minor role of CYP2C19 in DMT metabolism, with 2D6 also noted to produce novel metabolites, likely through hydroxylation on the indole core. Other enzymes, including 2A6, 2E1, 2C19 and 1A2, 2B6, 2C8, 2C9, 3A4, 3A5, , were not found to be significantly involved in DMT's metabolism. However, it is suggested that CYP2D6 has rather minor importance when MAO‐A enzymes are present, indicating that this metabolic pathway may not significantly affect the standalone use of DMT, although further research is needed to confirm this in humans. More importantly, consuming DMT with MAOIs can alter the dynamics of metabolism and may increase the relevance of CYP2D6, which could have implications for such combinations. However, determining the phenotypic differences of enzymes in the effects of pharmahuasca is complicated because the common MAOIs used (harmine and harmaline) are also being metabolized, involving the same enzymes. For example, CYPs 1A1, 1A2 and 2D6 are metabolizing harmaline, and 1A1, 1A2, 2C9, 2C19, 2D6 harmine via O‐demethylation. Decreased CYP2D6 functionality can lead to increased and prolonged exposure to these compounds, as shown by reduced harmaline metabolism, slower depletion and longer half‐life in hepatocytes and wild‐type mice with 2D6 deficiency compared with those with functional 2D6. Interestingly, harmaline, harmine and its metabolite harmol have been reported to inhibit CYP2D6, with harmine and harmol also inhibiting 3A4. In such cases, the phenotypic differences of CYP2D6 may become less important due to phenoconversion – a phenomenon where the actual phenotype differs from the genetically inferred one. Considering phenoconversion and assuming vast 2D6 inhibition, one may hypothesize that the drug‐metabolizing enzyme's phenotype has rather a short‐term impact when consuming DMT with harmine and/or harmaline (depending on their concentration and timing of administration). However, when using non‐CYP2D6 inhibiting MAOIs, the influence of the phenotype on the DMT response could be more significant. Due to limited clinical research and the complexity of drug–drug‐gene interactions, the extent to which the activity of the enzymes discussed influences the psychedelic effects of pharmahuasca remains unclear and likely depends on factors such as DMT and MAOI dosing and timing. Finally, DMT has been identified as a substrate of two solute carrier (SLC) superfamily proteins: the proton/organic cation (H + /OC) antiporter and the organic cation transporter 2 (OCT2; encoded by the SLC22A2 gene). The H + /OC antiporter is suggested to play an important role in transporting cationic drugs across biological membranes, such as the blood–brain barrier, while OCT2 is primarily expressed in the kidneys and is involved in the renal elimination of hydrophilic substances. The gene(s) responsible for the H + /OC antiporter have not yet been identified and it is believed that the H + /OC antiport is mediated by more than one protein, making it currently unclear how genetics might affect its function. 
With regard to OCT2, an in vitro study found that the Ala270Ser variant of SLC22A2 moderately reduced the transport of DMT, potentially leading to decreased elimination of the drug in individuals with this variant. However, due to DMT's extensive metabolism, it is also suggested that SLC22A2 polymorphism is unlikely to have a significant impact on DMT pharmacokinetics overall. 5‐MeO‐DMT 5‐Methoxy‐ N,N ‐dimethyltryptamine (5‐MeO‐DMT) is a psychedelic found in several plants, fungi, the gland secretions of the toad Incilius alvarius and mammals. It is commonly administered parenterally, such as by smoking or vaporizing, as similar to DMT, it is orally inactive due to rapid metabolism by MAO in the gut and liver. However, it can be made active by concurrent use of MAOIs. The main 5‐MeO‐DMT metabolic pathway involves deamination to 5‐methoxyindoleacetic acid (5‐MIAA) by MAO‐A and a small portion is O‐demethylated by CYP2D6 to produce an active metabolite bufotenine (5‐hydroxy‐ N,N ‐dimethyltryptamine; also a hallucinogenic compound), followed by its deamination to form 5‐hydroxyindoleacetic acid (5‐HIAA). The involvement of CYP2D6 in the O‐demethylation of 5‐MeO‐DMT to bufotenin has been demonstrated in two studies using HLMs and hepatic microsomes from CYP2D6‐humanized mice, while no activity was determined for other CYPs investigated, such as 1A2, 2A6, 2B6, 2C8, 2C9, 2C19, 2A1, 2E1, 3A4, 3A5, , 1A1, 1B1, 2C18, 3A7, 4A11, and 4A. In HLMs, it was shown that CYP2D6 with decreased functionality produced less bufotenin than the fully functional enzymes, with isoforms *2 and *10 exhibiting 2.6‐ and 40‐fold lower catalytic efficiency than the wild‐type, respectively. While 5‐MeO‐DMT metabolism was consistent between CYP2D6 PMs and NMs in human hepatocytes due to the MAO‐A‐mediated major metabolic pathway, the concurrent use of MAOIs shifted the pathway to O‐demethylation, leading to increased bufotenine formation in NMs. The MAOI harmaline reduced the depletion of 5‐MeO‐DMT in both metabolizer groups, but bufotenine was detected only in hepatocytes from NMs, not PMs, showing the differences in CYP2D6 activity in the capacity to convert 5‐MeO‐DMT to bufotenine. Moreover, bufotenine is also metabolized by MAO‐A and the use of MAOIs will not only increase its production but also reduce its clearance. Finally, as previously discussed, harmaline and harmine are substrates of CYP2D6, therefore the enzyme's activity can impact their metabolism, as evidenced by the slower depletion of harmaline in PMs. Considering that O‐demethylation is a minor pathway, phenotypic differences in individuals are unlikely to significantly impact the effects of 5‐MeO‐DMT. However, confirming this in humans would be valuable. Hypothetically, given that 5‐MeO‐DMT and bufotenine have different receptor affinities (for example, bufotenine has several times higher affinity for 5‐HT 2A than 5‐MeO‐DMT), using 5‐MeO‐DMT with an MAOI may result in altered effects mediated by different receptors, thereby influenced by the individual's genetics. This is probably more likely to occur with MAOIs that are not CYP2D6 inhibitors (unlike harmine and harmaline). With non‐CYP2D6 inhibitors, PMs may exhibit no or minimal production of bufotenine, while individuals with functional enzymes metabolize these substrates more rapidly and can convert 5‐MeO‐DMT to bufotenine. 
On the contrary, when using CYP2D6‐inhibiting MAOIs, the difference between enzyme phenotypes can be hypothesized to be short‐term, as the enzyme could undergo phenoconversion, leading to lower activity (potentially depending on the MAOI dose). While it would be interesting to determine phenotypic differences in humans, we are not aware of any clinical trials co‐administering these compounds, and using 5‐MeO‐DMT with MAOIs may pose a risk of serotonin toxicity due to their agonistic effects on serotonergic systems, thereby this potential combination needs to be carefully considered. Ibogaine Ibogaine is a naturally occurring psychedelic in the roots of the rainforest plant Tabernanthe iboga . It has been suggested that ibogaine is metabolized to its main metabolite, noribogaine, primarily by CYP2D6 with minor contributions from 2C9 and 3A4. , In humans, NMs of CYP2D6 were shown to have lower ibogaine exposure but higher noribogaine levels due to faster metabolism, while PMs showed higher ibogaine exposure and significantly lower noribogaine levels with slower metabolism. The role of CYP2D6 in ibogaine metabolism was confirmed in another human study using paroxetine, a strong CYP2D6 inhibitor. Paroxetine‐treated individuals ( n = 11) had significantly higher peak concentrations and longer ibogaine half‐lives compared with placebo‐treated subjects ( n = 9; 10.2 h vs. 2.5 h). Reduced CYP2D6 activity led to higher ibogaine exposure and similar noribogaine levels, effectively doubling overall exposure to active compounds. CYP3A and CYP2C19 were not assessed in this study. Apart from CYP enzymes, the oral availability of ibogaine has been found to be significantly influenced by the two ATP‐binding cassette transporters ABCB1 (P‐glycoprotein) and ABCG2, where the former was shown to restrict ibogaine brain penetration. However, while genetic variations of ABCB1 and ABCG2 genes can affect ibogaine exposure in patients, the extent of this impact was suggested to be relatively limited. Similarly to LSD, given the significant role of CYP2D6 in its metabolism, pharmacogenomic testing can be suggested for clinical trials involving individuals undergoing ibogaine treatment. Additionally, close monitoring during clinical trials, or even dose adjustments, would be important given the existing concerns about adverse events associated with the drug. This is relevant to PMs, who may experience significantly stronger and longer effects of the drug and it has already been recommended to consider halving the dose for this phenotypic group. In contrast, while not yet assessed, it can be hypothesized that UMs may clear the drug more rapidly and lead to a lower response of the drug. While this requires further research, there is a possibility that UMs may not be suitable candidates for ibogaine treatment, or may require a higher dose. Therefore, determining the CYP2D6 phenotype could enhance both the efficacy and safety of the treatment. The roles of CYP3A and CYP2C19 in its metabolism, especially in CYP2D6 PMs, require further investigation. MDMA While not a classical psychedelic, 3,4‐Methylenedioxymethamphetamine (MDMA) is an entactogen which is being investigated for therapy to treat PTSD. The metabolism of active MDMA occurs through two main pathways. In the first pathway, MDMA is primarily O‐demethylenated by CYP2D6 to form 3,4‐dihydroxymethamphetamine (HHMA), which is then O‐methylated by catechol‐O‐methyltransferase (COMT) to form 3‐methoxy‐4‐hydroxymethamphetamine (HMMA). 
The second pathway involves N‐demethylation by CYP1A2 and CYP2B6 (and to a lesser extent CYP2C19 and CYP3A4), producing 3,4‐methylenedioxyamphetamine (MDA). MDA undergoes similar metabolic reactions as MDMA, forming 3,4‐dihydroxyamphetamine (HHA), followed by O‐methylation by COMT to produce 4‐hydroxy‐3‐methoxyamphetamine (HMA). CYP2D6 is believed to contribute 30% of MDMA metabolism. In humans, it was demonstrated that CYP2D6 activity altered plasma MDMA levels, which were higher in PMs compared with NMs and lasted up to 3 h after drug administration, however, the difference was small (1.15× higher in PMs compared with NMs). Plasma HHMA levels were significantly higher in NMs, indicating CYP2D6's role in forming this metabolite. CYP2D6 activity was shown to alter systolic blood pressure, as well as “any drug rating” and “drug liking,” which were higher in PMs compared with IMs and NMs, both at 0.6 h, and for “drug liking,” also at 1 h. However, considering that the difference in MDMA plasma levels between PMs and IMs/NMs were small, and variations in the psychotropic effects of MDMA were notable only within the first hour after administration, it suggests that the differences in CYP2D6 function have a minor and short‐lived impact on its effects. This could be due to MDMA's ability to inhibit CYP2D6, , leading to autoinhibition of its own metabolism. Therefore, the impact of variations in the CYP2D6 genotype is confined and primarily observable only in the initial hour following MDMA administration. Besides CYP2D6, in vitro studies have suggested the involvement of other CYP enzymes, such as 2C19, 2B6, and 1A2 that are involved in the N‐demethylation of MDMA. , In humans, it was confirmed that 2C19, 2B6, and 1A2 are involved in the N‐demethylation of MDMA to MDA. However, polymorphisms in those enzymes did not significantly alter the subjective effects of MDMA. Additionally, CYP1A2 (specifically in individuals carrying the inducible rs762551 AA genotype) showed increased activity in N‐demethylation in light smokers compared with non‐smokers and very light smokers. In the COMT gene, which encodes an enzyme involved in the breakdown of HHMA and HHA, the Val158Met polymorphism (rs4680) has been previously reported to impact enzyme's activity, where the substitution of Val with Met is associated with decreased activity (that is, highest in Val/Val genotype and lowest in Met/Met genotype). However, the results regarding its influence on antidepressant response are mixed. Interestingly, the same mutation is associated with MDMA use disorder and MDMA‐induced psychotic symptoms, where carriers of at least one Met allele were associated with a lower risk of developing MDMA use disorder and the Val/Val genotype with a lower risk of developing psychotic symptoms among those with the disorder. Taken together, considering the short‐lasting differences in subjective effects in PMs of CYP2D6 due to autoinhibition and the fact that 2C19, 1A2, and 2B6 did not show any differences in subjective effects, pharmacogenomic testing may have limited clinical relevance in MDMA‐assisted therapy. However, while impairments in individual enzymes may have only minor effects, the combined effect of multiple enzyme deficiencies could have a more significant impact on MDMA metabolism and potentially increase the risk of toxicity. For example, a case report documented a death after consuming MDMA while on ritonavir treatment (a strong CYP2D6 and CYP3A4 inhibitor). 
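A rough, static clearance argument helps put these numbers in context; it is an illustrative approximation that assumes linear kinetics and, for the moment, ignores autoinhibition. Taking the ~30% figure above as the fraction f of total MDMA clearance handled by CYP2D6 in normal metabolizers, complete loss of that pathway can increase exposure by at most a factor of 1/(1 − f):

AUC_PM / AUC_NM ≈ CL_NM / CL_PM = 1 / (1 − f) = 1 / (1 − 0.3) ≈ 1.4

The observed difference in plasma levels (~1.15-fold) is below even this modest ceiling, consistent with MDMA's own inhibition of CYP2D6 blunting the phenotypic difference in normal metabolizers as well. Combined deficiencies or inhibition of several pathways at once, as in the ritonavir case above, fall outside this simple single-pathway picture.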
Further research is needed to better understand the effects of rare combinations of enzyme functionalities.
Other psychedelic compounds
For other psychedelics, the literature is sparse. For mescaline, while the primary metabolic pathway involves amine oxidases and it does not significantly interact with CYP2D6, a study has shown it is a substrate for organic cation transporter 1 (OCT1), encoded by the polymorphic SLC22A1 gene. Genetic variants of OCT1 can significantly alter transporter expression and function, potentially causing inter‐individual variations in mescaline pharmacokinetics. This may lead to decreased elimination and an increased risk of intoxication and adverse effects in individuals with reduced or absent OCT1 activity. However, the clinical implications of SLC22A1 polymorphisms on mescaline's pharmacokinetics require further in vivo studies to evaluate their potential impact on its effects. Lastly, in vitro evidence suggests that Salvinorin A is a substrate for CYP2D6, CYP1A1, CYP2E1, CYP2C18, UGT2B7, and possibly P‐glycoprotein, with glucuronidation by UGT2B7 likely being the major metabolic pathway. However, the impact of CYP enzyme activity on the effects of Salvinorin A remains unknown and requires further investigation. Ketamine, which is a dissociative anesthetic but also has antidepressant properties and can produce psychedelic experiences at sub‐anesthetic doses, was not included in this review as it has already been systematically reviewed from a pharmacogenomics perspective.
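To connect the compound-specific observations above, the short sketch below illustrates, under a deliberately simplified model, how strongly CYP2D6 phenotype is expected to shift drug exposure depending on how much of total clearance runs through the enzyme. It assumes linear, first-order elimination and an unchanged volume of distribution, so relative AUC and relative half-life both scale as the inverse of relative clearance. The fractional contributions used are illustrative assumptions, not measured values: ~30% echoes the figure cited above for MDMA, while ~80% is a hypothetical value chosen to mimic a drug such as ibogaine, for which CYP2D6 is the dominant route and inhibition prolonged the half-life roughly fourfold.

def relative_exposure(f_cyp2d6: float, activity: float) -> float:
    """Relative AUC (and, with a fixed volume of distribution, relative
    elimination half-life) versus a normal metabolizer, for a drug cleared
    partly by CYP2D6 (fraction f_cyp2d6) and partly by pathways unaffected
    by CYP2D6 genotype. Assumes linear, first-order kinetics.

    activity -- CYP2D6 activity relative to a normal metabolizer
                (0.0 poor, 1.0 normal, 2.0 ultrarapid; illustrative values)."""
    relative_clearance = (1.0 - f_cyp2d6) + f_cyp2d6 * activity
    return 1.0 / relative_clearance  # AUC = dose / CL, so AUC scales as 1 / CL

# Assumed fractional CYP2D6 contributions, for demonstration only.
for label, fraction in [("~30% via CYP2D6 (MDMA-like)", 0.30),
                        ("~80% via CYP2D6 (ibogaine-like)", 0.80)]:
    for phenotype, act in [("PM", 0.0), ("NM", 1.0), ("UM", 2.0)]:
        print(f"{label}: {phenotype} exposure x{relative_exposure(fraction, act):.2f}")

This is consistent with the pattern reported above: phenotype matters little when CYP2D6 is a minor route (e.g., psilocin, MDMA) but substantially more when it is a major route (e.g., ibogaine), and concurrent MAO inhibition can shift a compound from the first situation toward the second.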
Main findings and research perspectives
The current literature discussed in this review suggests that cytochrome P450 enzymes are involved in the metabolism of several psychedelics to varying extents. Most importantly, the highly polymorphic CYP2D6 enzyme has been shown to impact the effects of LSD, ibogaine, and to a lesser extent, MDMA under normal circumstances.
For PMs of CYP2D6, it has been previously suggested to reduce the dose of LSD and ibogaine, notably by halving it. While CYP2D6 also metabolizes DMT, harmine, and harmaline, its role in pharmahuasca is unclear and difficult to estimate due to drug–drug–gene interactions and variations in treatment protocols. When using MAOIs that inhibit the primary metabolic pathway of certain psychedelics, such as 5‐MeO‐DMT or one of the pathways for psilocin, the role of CYP2D6 may become more significant and the enzyme's function could have a more pronounced impact. However, due to limited clinical data, no dosing suggestions can be made at this point. Further research is needed to ascertain granular data on phenotypic group differences for CYP2D6 PMs and UMs, ideally through larger and more diverse cohort studies. These drugs are still under investigation as potential medicines and must successfully complete further clinical trials to gain regulatory approval. If these drugs eventually become available, pre‐emptive pharmacogenomics could potentially improve treatment outcomes, reduce side effects, and provide cost‐saving benefits, particularly in populations with a higher frequency of extremes of CYP2D6 activity. Nevertheless, it is important to note that pharmacogenomics is just one approach to improving treatment efficacy by tailoring the dose to the individual. Other factors, such as the concomitant use of other medicines, can lead to drug–drug interactions and influence the effects of psychedelics. Moreover, the therapeutic setting and the participant's psychological state at the time of dosing are also critical components that can significantly influence psychedelic‐assisted therapy outcomes. All in all, we recommend incorporating pharmacogenomics into clinical trials involving LSD and ibogaine, and exploring other potential drug–gene interactions discussed in this review to provide more insight. Table provides a summary of all the drugs discussed and suggestions for future research.
Factors to consider
To effectively use genetics‐based determination of enzyme phenotypes, and before deciding on any dose adjustment, some additional factors should be considered and/or eliminated. Firstly, the activity of enzymes can be influenced by other drugs, herbs, smoking, pregnancy, comorbidities, and diet, which can lead to phenoconversion. For instance, using LSD or ibogaine together with CYP2D6 inhibitors may result in a stronger response in individuals with functional enzymes, as the enzyme is phenoconverted to a lower activity state, similar to that of PMs. In some cases, the inhibition may be site‐specific. For instance, grapefruit juice has been shown to lead to potent inhibition of intestinal CYP3A4, while the hepatic activity of CYP3A4 remained unaffected, which can be important for orally administered drugs that are substrates for this enzyme. Moreover, individuals who have undergone liver transplants may have experienced phenoconversion. For example, it has been shown that following a liver transplant, CYP2D6 PMs can convert to NMs as well as NMs to PMs, indicating that the genotype of the donor's liver controls the recipient's phenotype. The utility of using enzyme activity‐based recommendations is likely limited for autoinhibitors like MDMA, which is a potent inhibitor of CYP2D6. MDMA causes the phenoconversion of the enzyme from NMs to PMs, with the activity taking longer than 10 days to return to basal levels.
This reduces the effectiveness of CYP2D6‐based dose adjustments, as the differences are likely short‐lived. While PGx testing is relatively inexpensive, with a current cost of around AUD 149 in Australia (USD ~100) per panel that includes CYP2D6, the challenge for clinical research aiming to characterize the differences lies in identifying enough patients with extreme metabolizer phenotypes, such as CYP2D6 PMs and UMs, due to their low frequency in a population.
Limitations and final conclusion
Current clinical research on psychedelics primarily involves healthy participants. This limits our understanding of how these drugs will translate to individuals with comorbidities, such as cancer and liver disease, that can lead to phenoconversion. Data supporting the influence of genetics on the effects of some psychedelics remains limited. The impact of drug‐metabolizing enzymes, as well as other factors such as mutations in drug receptors and molecule transporters, should be further investigated in clinical trials.
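To make the phenotype-guided reasoning above concrete, the following minimal R sketch maps a CYP2D6 metabolizer phenotype to an illustrative dose-adjustment factor. It is purely hypothetical and not clinical guidance: the halving for poor metabolizers only mirrors the suggestion cited earlier for LSD and ibogaine, all other phenotypes are left unadjusted because the reviewed data do not support an adjustment, and the function name and structure are our own.

```r
# Illustrative only: map a CYP2D6 phenotype to a dose-adjustment factor for
# LSD or ibogaine, reflecting the halving suggested in the literature for poor
# metabolizers (PMs). No adjustment is proposed for other phenotypes, mirroring
# the lack of clinical data discussed in the text.
suggest_dose_factor <- function(phenotype = c("NM", "IM", "PM", "UM")) {
  phenotype <- match.arg(phenotype)
  switch(phenotype,
         PM = 0.5,   # previously suggested halving of the dose
         NM = 1.0,   # normal metabolizer: standard dose
         IM = 1.0,   # intermediate metabolizer: no evidence-based adjustment
         UM = 1.0)   # ultrarapid metabolizer: no evidence-based adjustment
}

suggest_dose_factor("PM")   # returns 0.5
```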
No funding was received for this work. DP, JS, and AH hold equity in a commercial entity, Psychae Therapeutics, which is undertaking research with psychedelic compounds. DP and JS are co‐CEOs of the same organization, and AH holds an Advisor role.
Temperature-dependent trophic associations modulate soil bacterial communities along latitudinal gradients
3119296a-b88a-4c31-bbe8-f57427389eb9
11334336
Microbiology[mh]
Understanding how abiotic and biotic factors influence species distribution and community structure is critical for the exploration of biogeography . As the importance of soil microorganisms, especially soil bacteria, in biogeochemical cycles, crop productivity, and human health is increasingly recognized , a growing body of research is emphasizing the elucidation of full microbial diversity, including community structure, assembly, and ecological drivers. Although many studies have shown that multiple abiotic factors control bacterial diversity , we lack studies explicitly incorporating environmental factors with characteristics of biotic interactions to determine the relative contributions of abiotic and biotic factors shaping microbial community diversity. The rice paddy ecosystem is one of the Earth’s largest wetlands and harbors diverse microorganisms that interact strongly with each other due to the aerobic and anaerobic cycles caused by artificial flooding . Among them, protistan predation and viral lysis, as two important top–down control elements, alter the growth, metabolism, and evolutionary strategies of bacterial cells in soil environments , thereby influencing the coexistence of bacterial species. Recently, it has been shown that the simultaneous presence of protists and viruses strongly affects bacterial virulence and diversification . However, empirical studies have mainly focused on the impacts of soil protists and viruses on bacterial communities based on laboratory incubation experiments or simple model systems . Understanding the latitudinal patterns and driving mechanisms of trophic associations remains difficult at continental scales. The classical latitudinal biotic interaction hypothesis (LBIH) posits that biotic interactions are more intense in stable and warm climates and generally decrease in intensity from low to high latitudes . This theoretical concept partially explains the latitudinal diversity gradient of species, which suggests that flourishing biotic interactions at low latitudes promote coevolution and might result in high species richness . Nonetheless, contradicting the LBIH, recent studies have found weak, absent, or even reversed latitudinal patterns in biotic interactions . Consequently, it remains unclear whether biotic interactions are stronger at low latitudes, resulting in higher species richness and how biotic interactions respond to environmental changes. Recent studies have revealed that the variation in ecological networks along environmental gradients may reflect the coexistence mechanisms underlying community assembly . The different impacts of abiotic and biotic filtering processes through multiple reinforcing and conflicting effects can manifest in alternating patterns of network structure . Turnover in species composition is an important source of variation in networks along environmental gradients, as interactions are primarily conditional on species co-occurrence. Therefore, investigations of how networks vary temporally or spatially have the potential to provide new insight into how species interactions vary. Here, we elucidated how putative trophic interactions influence bacterial communities via a cross-latitude field survey and a laboratory microcosm experiment. We characterized the community compositions of bacteria, protists, and T4-like viruses in paddy field soils along a latitudinal gradient ranging from 19°N to 45°N. 
Next, we depicted two types of biotic associations, namely, protist–bacteria (P–B) associations and virus–bacteria (V–B) associations, at the community and species levels, and explored the impacts of putative top–down controls on the bacterial community. We further assessed the latitudinal gradients of biotic associations and elucidated the underlying environmental drivers. Finally, empirical evidence for the observed temperature-modulated latitudinal distribution of putative trophic interactions was obtained from microcosm experiments with artificial manipulations of temperature and soil water content gradients.
Soil sampling
During April and May 2016, we collected soil samples from a 100 × 100 m² plot in a paddy field at 76 sites from 28 provinces across Eastern China (19.27°N–47.41°N, 85.12°E–124.41°E). These samples represented a wide range of environmental gradients (such as the climates associated with latitude). The geographical information of the sampling sites is provided in . Samples were collected from the top 20 cm of paddy soils, and five discrete cores per plot were collected and mixed thoroughly with three replicates. The collected samples were sealed in sterile bags, kept in an icebox, and transported to the laboratory. After field sampling, soils were sieved (<2 mm) and separated into two parts: one was air-dried for physicochemical measurements, and the other was stored at −80°C for DNA extraction.
Abiotic factor data collection
The geographic distance between pairwise sites was calculated according to the Global Positioning System (GPS) coordinates of each site. A total of 19 climatic attributes of each sampling site were obtained from the WorldClim database ( www.worldclim.org ) using the R package "raster." Soil properties were determined according to standardized protocols: soil pH was measured in a 1:2.5 soil-to-water suspension with a pH meter; soil water content (SWC) was determined after oven-drying at 105°C for 12 h; soil organic matter (SOM) was determined by the dichromate digestion colorimetric method; available phosphorus (AP) was extracted with 0.5 M NaHCO3 and determined with the molybdenum blue method; total nitrogen (TN) and total carbon (TC) were determined by dry combustion in a Vario Max CNS analyzer (Elementar Instruments, Mt. Laurel, NJ); and nitrate-nitrogen (NO3−-N) and ammonium-nitrogen (NH4+-N) were extracted with 1 M KCl and measured using a flow injection analyzer (SAN++, Skalar, Holland). All measurements were performed in triplicate. To explore the differences in the diversity and composition of microbial communities across different latitudinal regions, we divided the sample sites into two latitudinal groups according to a previous study: a high-latitude group (latitude >32°, n = 35) and a low-latitude group (latitude <32°, n = 38). This separation corresponds to the Qinling Mountains–Huaihe River Line (latitude ≈ 32°), an important geographical boundary between northern and southern China regarding climate, landform, and soil conditions. All analyses were performed using R 4.0.2 ( http://www.r-project.org ), unless otherwise indicated.
DNA extraction, sequencing, and microbial community characterization
Total microbial genomic DNA was extracted from 0.5 g of soil using the FastDNA SPIN Kit for Soil (MP Biomedicals, LLC, Solon). Three replicates of DNA samples were pooled for amplicon sequencing.
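As a minimal sketch of the climate-data retrieval step described above, the code below pulls the 19 WorldClim bioclimatic variables with the raster package and extracts them at a few placeholder site coordinates. The resolution, coordinates, and object names are illustrative and not those used in the study; getData is the legacy raster interface to WorldClim.

```r
library(raster)

# Hypothetical site coordinates (longitude, latitude); the survey used GPS
# records for 76 paddy-field sites, which are not reproduced here.
sites <- data.frame(lon = c(112.4, 120.1, 126.6),
                    lat = c(22.5, 30.3, 45.7))

# Download the 19 bioclimatic variables (bio1-bio19) from WorldClim at 10-arcmin
# resolution; bio1 is mean annual temperature (MAT) and bio12 is annual precipitation (MAP).
bio <- getData("worldclim", var = "bio", res = 10)

# Extract the climatic attributes at each sampling site.
clim <- extract(bio, sites[, c("lon", "lat")])

# WorldClim v1.4 stores temperatures as degrees C x 10, so rescale bio1.
mat <- clim[, "bio1"] / 10
map <- clim[, "bio12"]
```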
Three genes targeting distinct taxonomic groups with different taxonomic resolutions were amplified and sequenced using the HiSeq System (Illumina): (i) the V4 region of the 16S ribosomal RNA (rRNA) genes for bacteria, (ii) the V9 region of the 18S ribosomal RNA (rRNA) genes for protists, and (iii) a fragment of the major capsid protein-encoding gene g23 of T4-like viruses. Polymerase chain reaction (PCR) amplifications of the 16S rRNA, 18S rRNA, and g23 genes were performed in a 20-μl reaction system, which contained 10 μl of Easy Fast PCR Mix Buffer (TransGen, Beijing, China), 0.5 μM of each primer, 10 ng of DNA template, and Milli-Q water to the final volume. Thermal cycling was conducted as follows: initial denaturation at 95°C for 5 min, followed by 35 cycles of denaturation at 95°C for 1 min, annealing at 55°C for 30 s, and elongation at 72°C for 30 s, with a final step of 72°C for 5 min. After purification with a GeneJET Gel Extraction kit (Thermo Scientific), all three types of PCR products were sequenced on a HiSeq 2500 sequencer (Illumina; Magigen, Guangzhou, China) using a paired-end approach. The sequences acquired were processed according to the protocols described in previous works. Briefly, we used DADA2 (version 1.12.1) to obtain denoised, chimera-free, nonsingleton microbial ASVs based on the default parameters. Taxonomic annotation of ASVs was performed using the SILVA SSU 138.1 database for bacteria and the PR2 SSU 4.12.0 database for protists. To focus on protists, we removed the sequencing reads assigned to Rhodophyta, Streptophyta, Fungi, Metazoa, unclassified Opisthokonta, and ambiguous taxa from the eukaryotic community data. For T4-like viruses, representative nucleotide sequences of the g23 gene fragments were translated using MEGA-X. The closest relatives of the representative sequences were determined using BLAST analysis. The ASV tables for bacteria, protists, and T4-like viruses were rarefied to the minimum number of sequences per sample. A total of 3 915 528 sequences were produced across the three targeted DNA barcoding regions. After quality filtering, 8361, 2564, and 545 ASVs were retained for the bacterial, protistan, and T4-like virus datasets, respectively. On average, bacterial communities were dominated by Proteobacteria (36.1%), Actinobacteria (24.0%), and Acidobacteria (10.8%); protistan communities were dominated by Rhizaria (43.1%), Amoebozoa (28.1%), Stramenopiles (12.4%), and Alveolata (9.3%); and T4-like virus communities were dominated by the Paddy group (63.9%), Paddy clones (17.6%), and Lake groups (12.8%). Not all of the samples passed our rarefaction cutoff, and we obtained information for 73 out of 76 study sites. The rarefaction curves of the passed samples are provided in . This information was used for downstream analysis.
Geographic distribution patterns of microbial communities
We selected species richness (that is, the number of observed ASVs) as a commonly used biodiversity metric to evaluate the diversity of microbes. Richness is the most frequently used and simplest metric of biodiversity. We tested the relationships (linear or quadratic regressions) between microbial richness and latitude using the "lm" function in R. We identified the best model for each regression by the lowest Akaike information criterion (AIC). Nonmetric multidimensional scaling (NMDS) analysis based on Bray–Curtis distance metrics was performed to explore differences in microbial composition at the ASV level (Hellinger transformed) across latitudes.
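A minimal sketch of the regression and ordination steps just described, with simulated data standing in for the rarefied ASV tables and site latitudes; all object names and values are placeholders.

```r
library(vegan)
set.seed(1)

# Simulated inputs: a rarefied ASV table (73 sites x 200 ASVs) and site latitudes.
lat  <- runif(73, 19, 47)
comm <- matrix(rpois(73 * 200, lambda = 2), nrow = 73)
richness <- rowSums(comm > 0)

# Compare linear and quadratic fits of richness against latitude and keep the
# model with the lower AIC, as in the analysis described above.
m_lin  <- lm(richness ~ lat)
m_quad <- lm(richness ~ lat + I(lat^2))
AIC(m_lin, m_quad)

# Hellinger-transform the ASV table and ordinate with NMDS on Bray-Curtis distances.
comm_hel <- decostand(comm, method = "hellinger")
ord <- metaMDS(comm_hel, distance = "bray", k = 2, trymax = 50, trace = FALSE)
```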
The dissimilarity of microbial composition between the latitudinal groups was tested by analysis of similarity (ANOSIM) using the "vegan" package. Distance–decay relationships (DDRs) were evaluated with ordinary least-squares (OLS) regression between the Bray–Curtis similarity and geographic distance matrices. We compared the slopes of the DDRs between the two latitudinal groups (low- and high-latitude groups).
Quantification of assembly processes of bacterial community
We quantified the importance of deterministic processes in the bacterial community using "1 − normalized stochasticity ratio (NST)". NST is an index developed to estimate the ecological stochasticity in the community assembly process [with 50% as the boundary between more deterministic (<50%) and more stochastic (>50%) assembly]. Considering the overall performance of similarity metrics, the NST based on Jaccard distance is recommended for estimating the contribution of stochasticity to community assembly. This analysis was conducted with the "NST" package. We used a moving window analysis to quantify the variation in the bacterial assemblage across latitudinal groups. Moving window analysis is a prominent method for analyzing the spatial variability of landscape patterns at multiple scales. For each focal unit in the landscape, a matrix is used to specify the neighborhood, and the metric value of this local neighborhood is assigned to each focal unit. The windows are therefore allowed to overlap, and the result of a moving window analysis is a raster with an extent identical to the input; each unit describes the neighborhood with regard to the variability of the selected metric. Here, we used the low-latitude group as the first window and then shifted the window by continuously including sampling points at higher latitudes while removing sampling points at lower latitudes, such that the low-latitude group gradually transitioned to the high-latitude group.
Bipartite networks
To evaluate the associations of bacterial, protistan, and T4-like virus communities, we constructed a predator–prey bipartite network based on the ASV relative abundance datasets. Bipartite networks represent relationships between two distinct classes of nodes, such as predator–prey, plant–pollinator, and parasite–host interactions. Identifying patterns in bipartite networks is useful for explaining the formation and function of putative trophic interactions. We constructed bipartite qualitative (binary) networks using the R package "bipartite" v.2.08 and visualized them in Cytoscape v.3.9.0. To retain more information and reduce computational complexity, only ASVs detected in over 30% of sampling sites were kept for the meta-network (of 73 samples) construction. The selection of the 30% threshold is based on previous studies, and our analysis confirmed that the choice of threshold did not affect our main findings. For instance, the latitudinal patterns of network metrics were consistent at filtering thresholds of 20%, 30%, and 40%. Moreover, the network metrics along latitudinal gradients were significantly correlated with each other at filtering thresholds of 20%, 30%, and 40%. Before constructing the bipartite network, the filtered table was used for pairwise correlation calculations between predators (protists and T4-like viruses) and prey (bacteria) using Spearman rank correlations.
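The composition tests and distance–decay regressions described above can be sketched as follows; the community table, group labels, and coordinates are simulated, and the crude degrees-to-kilometers conversion is for illustration only.

```r
library(vegan)
set.seed(2)

# Simulated inputs: a Hellinger-transformed ASV table, latitude-group labels,
# and pairwise geographic distances (km) between the 73 sites.
comm_hel <- decostand(matrix(rpois(73 * 200, 2), nrow = 73), "hellinger")
group    <- factor(rep(c("low", "high"), length.out = 73))
coords   <- cbind(lon = runif(73, 85, 125), lat = runif(73, 19, 47))
geo_km   <- dist(coords) * 111          # crude degrees-to-km conversion, illustration only

# ANOSIM: do low- and high-latitude groups differ in composition?
bray <- vegdist(comm_hel, method = "bray")
anosim(bray, grouping = group, permutations = 999)

# Distance-decay relationship: OLS regression of community similarity
# (1 - Bray-Curtis dissimilarity) against geographic distance.
ddr <- lm(as.vector(1 - bray) ~ as.vector(geo_km))
summary(ddr)$coefficients
```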
This was followed by a Random Matrix Theory (RMT)-based approach to automatically determine the correlation cutoff threshold, implemented using the "RMThreshold" package. This RMT-based approach avoids the arbitrary determination of a transition point (St) commonly used in association-based network methods, thus minimizing the uncertainty in network construction and comparison. The resulting adjacency matrix associated with the bipartite graph filtered out the noncorrelated associations and consisted of 1s and 0s, indicating the presence or absence of the corresponding predator–prey association. To explore the latitudinal patterns of associations among bacteria, protists, and T4-like viruses, a subnetwork for each sampling point was constructed using the "subgraph" function in the "igraph" package. The architectures of the observed protist–bacteria and virus–bacteria networks were calculated using the "bipartite" package. At the meta-network level, the node number, edge number (total links between nodes in the network), and network connectance (C = E/N², where E is the number of edges and N is the number of nodes) were calculated and summarized. At the subnetwork level, the edge number and network connectance were selected to evaluate the species associations.
Procrustes analysis
To explore the congruence between bacterial communities and predatory (protist and T4-like virus) communities across latitudes, we used Procrustes analysis to transform the first two coordinates of the NMDS plot for each microbial community across latitudes with the Bray–Curtis dissimilarity metric. Procrustes analysis is a technique for comparing the relative positions of points (i.e. samples or sites) in two multivariate datasets (in an ordination space). It attempts to stretch and rotate the points in one matrix, such as points obtained from an NMDS, to be as close as possible to the points in another matrix, while preserving the relative distances between points within each matrix. This procedure yields a measure of fit, R², which is the correlation in a symmetric Procrustes rotation. Analogous to a Mantel test, Procrustes analysis is particularly useful for determining how much variance in one matrix (i.e. bacteria) is attributable to the variance in the other (i.e. protists) matrix, or for assessing the statistical significance of the correlation between the two multivariate datasets. In addition, Procrustes analysis has the advantage of providing a Procrustean association metric (i.e. residuals). Pointwise residuals indicate the difference between two community ordinations for each sample and were used to examine predator–prey associations across latitudinal gradients. The statistical significance of the Procrustes analysis (i.e. R²) can be assessed by randomly permuting the data 1000 times. This analysis was performed using the R package "vegan" v.2.4.6.
Multiple factors impact bacterial community diversity and composition
To explore the impact of environmental effects (climatic and edaphic factors) and biotic effects (P–B and V–B associations) on bacterial α-diversity, we used multiple OLS regression to analyze the relationships between the potential explanatory variables and species richness. The best models were identified based on the lowest Akaike information criterion (AIC). OLS was performed using the function "lm" in the "car" package. Distance-based redundancy analysis (dbRDA) based on Bray–Curtis distance was performed to reveal the effects of environmental factors versus biotic associations on the bacterial community. Prior to dbRDA, the attributes were manually selected according to variance inflation factors (VIFs), resulting in VIFs < 10.
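A minimal sketch of how a binary predator–prey adjacency matrix and the connectance metric defined above can be derived from pairwise Spearman correlations; the simulated abundance tables and the fixed correlation cutoff of 0.6 are placeholders for the RMT-derived threshold used in the study.

```r
set.seed(3)

# Simulated relative-abundance tables: predator ASVs (protists or T4-like
# viruses) and prey ASVs (bacteria) across the same 73 sites.
pred <- matrix(runif(73 * 40), nrow = 73)   # 40 predator ASVs
prey <- matrix(runif(73 * 60), nrow = 73)   # 60 bacterial ASVs

# Pairwise Spearman correlations between every predator ASV and prey ASV.
rho <- cor(pred, prey, method = "spearman")

# Binarize into a presence/absence adjacency matrix. The study derived the
# cutoff with an RMT-based procedure (RMThreshold package); a fixed value of
# 0.6 is used here purely for illustration.
adj <- (abs(rho) >= 0.6) * 1

# Network architecture metrics as defined in the text.
E_num <- sum(adj)                    # edge number
N_num <- nrow(adj) + ncol(adj)       # node number (predators + prey)
connectance <- E_num / N_num^2       # C = E / N^2
```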
Redundancy analysis based on Bray–Curtis distance (dbRDA) was performed to reveal the effects of environmental factors versus biotic associations on the bacterial community . Prior to dbRDA, the attributes were manually selected according to variation inflation factors (VIFs), resulting in VIFs < 10. The statistical significance of each explanatory variable was examined with a permutation test (999 random permutations), and only significant variables were retained . To quantify the relative contribution of the environmental effects (climatic and edaphic impacts) and the biotic effects (P–B and V–B associations) on bacterial β-diversity across the latitudinal groups, we adopted a moving window analysis combined with hierarchical partitioning method using the “rdacca.hp” function in the “ rdaaaca.hp ” package . The independent effects correspond to their relative contribution to the total variation. Similarly, we used Mantel tests to determine the correlation between community Bray–Curtis distances (protist–bacteria and virus–bacteria) from low to high latitudinal groups. To disentangle the direct and indirect relationships of environmental and biotic effects on bacterial richness and community structure at three spatial scales (total sites, low latitudinal group, and high latitudinal group), random forest analysis and partial least squares path modeling (PLS-PM) were constructed using “ randomForest ” and “ plspm ” packages, respectively. The bacterial community structure was represented by NMDS1 of NDMS analysis based on Bray–Curtis distance. We first considered a full model that included all reasonable pathways, and then, we eliminated nonsignificant pathways until we obtained the final model whose pathways were all significant. To reduce the model complexity, we constructed composite variables for climatic factors (MAT and MAP), edaphic factors (pH, SWC, and C/N), and biotic associations (edges and connectance between protists/virus and bacteria) . Goodness of fit (GOF) statistics were used to measure the model’s predictive power. We also performed VPA to verify the results of PLS-PM , and the individual R 2 represents the total effects of climatic factors, edaphic factors, and biotic effects on bacterial communities. Microcosm experiment The microcosm experiment was conducted in soils independent of the large-scale survey presented above to assess the relationships between microbial diversity and climatic factors and enable us to explore the latitudinal patterns of biotic associations between bacterial, protistan, and T4-like virus communities independently of the data used. In May 2021, paddy soil for microcosm construction was collected from Guangdong (22.45°N, 112.41°E) in southeastern China. Soil samples were collected from the top 20 cm layer. The local temperature at the time of sampling was 20.7°C. The percentages of soil water content and soil organic matter were 13.7% and 1.4%. The value of pH, NH 4 + -N (mg kg −1 ), NO 3 - -N (mg kg −1 ), and AP (mg kg −1 ) (measured as described above) were 6.5, 4.1, 82.9, and 39.4, respectively. For microcosm preparation, 20 g of soil was added to a serum bottle. Five temperature gradients (5°C, 10°C, 15°C, 20°C, 25°C) and four soil water content gradients (10%, 15%, 20%, 25%) were set up to prepare soil microcosms. For the temperature gradient microcosms, we uniformly adjusted the soil water content to 13.7%. For the soil water content gradient microcosms, the incubation temperature was uniformly set at 20°C. 
We set up eight replicates for each of the five temperature levels and four soil water content levels, giving a total of 72 microcosms incubated in a sterilized incubator for 4 weeks. The soil water content was maintained by adjusting the weight of the serum bottles with the addition of sterilized water every 3 days during the incubation period. After incubation, we extracted soil DNA and performed high-throughput sequencing to evaluate the variation in the diversity and biotic associations of microorganisms along the temperature and soil water content gradients. To guarantee methodological consistency, the sequencing platform and analysis pipeline for the microcosm experiment were the same as those used in the field investigation.
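As an illustration of the variation-partitioning step described in the methods, the sketch below uses vegan's varpart and dbrda on simulated data to separate climatic, edaphic, and biotic contributions to bacterial beta-diversity; the predictor tables and values are placeholders, and varpart is used here in place of the rdacca.hp hierarchical partitioning reported in the study.

```r
library(vegan)
set.seed(4)

# Simulated inputs for 73 sites: a bacterial ASV table and three groups of
# explanatory variables (climatic, edaphic, biotic-association metrics).
bac     <- matrix(rpois(73 * 200, 2), nrow = 73)
bray    <- vegdist(decostand(bac, "hellinger"), method = "bray")
climate <- data.frame(MAT = rnorm(73, 15, 5),  MAP = rnorm(73, 1200, 300))
soil    <- data.frame(pH = rnorm(73, 6, 0.5),  SWC = runif(73, 10, 40), CN = rnorm(73, 10, 2))
biotic  <- data.frame(PB_edges = rpois(73, 30), VB_edges = rpois(73, 20))

# Variation partitioning: unique and shared fractions of bacterial beta-diversity
# explained by the three predictor groups.
vp <- varpart(bray, climate, soil, biotic)
vp$part$indfract

# Permutation test of the marginal biotic fraction, conditioning on climate and soil.
all_pred <- cbind(climate, soil, biotic)
mod <- dbrda(bray ~ PB_edges + VB_edges + Condition(MAT + MAP + pH + SWC + CN),
             data = all_pred)
anova(mod, permutations = 999)
```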
Latitudinal patterns of microbial communities and drivers of bacterial assemblage
We used amplicon sequencing [16S rRNA genes, 18S rRNA genes, and major capsid protein-encoding gene ( g23 ) markers] and assessments of climatic conditions and soil chemistry to explore how putative trophic interactions (P–B and V–B associations) and abiotic parameters affect the bacterial community along a latitudinal gradient. Bacterial richness declined gradually toward high latitudes, confirming the expected latitudinal diversity gradient (LDG), a decline in species richness from the tropics to the poles. Similarly, the species richness of protists was also lowest at high latitudes but peaked at an intermediate latitude of approximately 32°N. However, the richness of T4-like viruses showed a nonsignificant trend toward high latitudes. There was a clear clustering of these three taxonomic groups, showing distinct variations in microbial community composition at low and high latitudes (ANOSIM statistic: P < .001). We tested the correlations of microbial diversities with environmental parameters and found different response patterns in these three groups. Although MAT, MAP, pH, and SWC significantly affected the richness of all three microbial groups, variations in their correlations with C/N and inorganic nitrogen content underscore the critical role of soil nutrient status in influencing speciation rates and community diversity. The distance–decay relationship (DDR) showed a sharper decrease in the compositional similarity of bacteria and protists at low latitudes than at high latitudes, which is contrary to the pattern of T4-like viruses. These results indicate that the contrasting latitudinal diversity patterns in bacteria, protists, and T4-like viruses may result from their differential responses to environmental filters and dispersal limitations. To illustrate the underlying processes that drive bacterial community assembly from low to high latitudes, we defined the relative contribution of deterministic processes using a moving window analysis (see ). The proportion of deterministic processes gradually increased toward high latitudes until reaching a plateau at 32°N. We then quantified the extent to which each independent deterministic effect (including climatic, edaphic, and biotic effects) explained the distribution of the bacterial community. The most important abiotic factors determining bacterial diversity were selected to represent the climatic (mean annual temperature: MAT, and mean annual precipitation: MAP) and edaphic (pH, soil water content: SWC, and C:N ratio: C/N) effects. For the biotic effect, we constructed binary bipartite networks to profile putative top–down controls by protists and T4-like viruses on bacteria and extracted the edge number and network connectance of the subnetworks to represent the biotic association of each site ( and ). The consistency of the latitudinal variation in the proportion of deterministic processes with the overall size of the deterministic factors indicates that both abiotic and biotic relationships influenced the deterministic processes.
In most cases, climatic and edaphic effects played dominant roles in structuring the bacterial assemblage. The influence of edaphic factors showed minimal variation across the latitudinal gradient, whereas the impact of climatic factors significantly increased from 3.0% to 15.1% between latitudes 30°N and 32°N. Furthermore, the putative trophic interactions (P–B and V–B associations) were also essential in shaping the bacterial assemblage, although they explained smaller proportions of the variation in the bacterial assemblage (0.8%–4.5% and 0.3%–3.9% for P–B and V–B associations, respectively) than the environmental effects (3.0%–15.1% and 8.6%–13.4% for climatic and edaphic effects, respectively). The impact of protists on bacterial communities peaked at mid-latitudes (approximately 32°N), whereas the contribution of T4-like viruses to the bacterial assemblage increased toward high latitudes. We found that the correlation between protistan and bacterial communities was greater than that between T4-like viral and bacterial communities. This may be because protists graze on a wider range of bacterial species than specific viruses selectively infect. Additionally, the use of amplicon sequencing may underestimate the correlation between viral and bacterial communities because it focuses only on T4-like viruses. It is widely acknowledged that deterministic factors are mainly composed of abiotic filtering and biotic interactions. However, the impacts of biotic interactions have been understudied, largely due to the challenges in quantifying these interactions and linking them to community assembly processes. Here, we incorporated the species associations of protists–bacteria and viruses–bacteria into ecological models and revealed the importance of species associations for bacterial assemblages at the continental scale. These findings offer a deeper explanation of the assembly processes of bacterial communities in terrestrial ecosystems.
Both abiotic and biotic effects modulate bacterial diversity
Multiple OLS regression and PLS-PM were employed to investigate the abiotic and biotic factors influencing bacterial diversity and community structure, respectively. At the continental scale, the OLS analysis showed that the variation in bacterial richness was best explained by MAT, SWC, the protist–bacteria association (P–B edge number), and pH. Random forest analysis further showed that MAT and SWC were the top two explanatory factors ( P < .05) for the variation in bacterial richness in both low- and high-latitude regions, followed by the protist–bacteria associations and pH. These findings are consistent with previous studies, which showed that climatic factors and pH were crucial in explaining the variation in bacterial richness. The effect of the V–B association was nonsignificant, but it was necessary to improve the final model's fit at high latitudes, though not at low latitudes. This may be because higher enzymatic activity degrades viral capsids in warmer and more humid soils, resulting in a lower impact of viruses on bacterial diversity at low latitudes. Given that T4-like virus richness gradually increases with latitude, it is reasonable to assume that the latitudinal diversity gradients are closely related to the significance of species interactions. Indeed, multiple studies have linked the latitudinal diversity gradient to a presumed gradient in the importance of biotic interactions.
For example, the rate of predation by ants on wasp larvae increased toward the tropics along a latitudinal gradient due to the higher ant diversity in the tropics. These findings suggest that an increase in predator richness may lead to more predation on prey, which, in turn, triggers changes in the intensity of species interactions and ultimately affects prey diversity. We further conducted partial least squares path modeling (PLS-PM) analysis to investigate the associations between abiotic and biotic effects and bacterial community structure across various spatial scales. Generally, climatic and edaphic factors both directly and indirectly affected the bacterial community at the continental scale in this study, which is in agreement with previous global observations. The effect size of the P–B association ( β = 0.29, P < .001) was greater than that of the V–B association ( β = 0.08, P < .05) ( ; and ), which may result from protists grazing on a wider range of bacterial species than specialized viruses. Moreover, focusing on the sequencing of g23 gene fragments rather than directly assessing viruses in soils may also result in a low V–B association. Consistent with the results of the variation partitioning analysis (VPA) and Mantel tests, climatic factors had stronger, direct effects on the community structure of bacteria at high latitudes ( β = 0.14, P < .001). Edaphic factors and P–B associations significantly affected bacterial communities at both the continental and regional scales. V–B associations had a slight effect on the bacterial community at the continental scale ( β = 0.08, P < .05), whereas this effect was stronger at high latitudes ( β = 0.14, P < .01). Together, these results indicate that putative trophic interactions play important roles in modulating the structure of bacterial communities in paddy soils along latitudinal gradients.
Temperature drives latitudinal patterns of putative trophic interactions
We explored the latitudinal patterns of trophic associations between protists, T4-like viruses, and bacteria at the community and species levels based on Procrustes and bipartite network analyses, respectively. Procrustes analysis is a technique for determining how much variance in one matrix (i.e. bacteria) is attributable to the variance in the other (i.e. protists). Both protistan and T4-like virus communities were strongly associated with bacterial communities at the continental scale, and the associations were clearly separated at 32°N. The congruence between protistan and bacterial communities ( R² = 0.75, P < .001) was higher than that between T4-like virus and bacterial communities ( R² = 0.36, P < .001), which implies that the impact of protistan predation on the bacterial community is stronger than the impact of T4-like virus infection. Consistent with the latitudinal patterns of microbial diversity and biotic effects described above, the Procrustes residuals followed a quadratic distribution pattern for the P–B association and a monotonically decreasing pattern for the V–B association. This finding suggests that there are greater putative trophic interactions between protists and bacteria in the mid-latitudinal zone (~32°N), whereas dominant V–B associations occur in the high-latitudinal zone.
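A minimal sketch of the Procrustes comparison underlying these results, assuming two simulated community tables (bacteria and protists); protest() returns the symmetric Procrustes correlation (analogous to the R² values reported above) and residuals() returns the per-site residuals used along the latitudinal gradient.

```r
library(vegan)
set.seed(5)

# Simulated community tables for the same 73 sites: bacteria (prey) and protists (predators).
bac  <- matrix(rpois(73 * 200, 2), nrow = 73)
prot <- matrix(rpois(73 * 100, 2), nrow = 73)

# Ordinate each community separately with NMDS on Bray-Curtis distances.
ord_bac  <- metaMDS(decostand(bac,  "hellinger"), distance = "bray", k = 2, trace = FALSE)
ord_prot <- metaMDS(decostand(prot, "hellinger"), distance = "bray", k = 2, trace = FALSE)

# Symmetric Procrustes rotation with a permutation test of the correlation.
fit <- protest(ord_bac, ord_prot, permutations = 999)
fit$t0                  # Procrustes correlation
res <- residuals(fit)   # one pointwise residual per site
```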
The edge number and network connectance of the binary networks further support these microbial association patterns, showing that the association between protists and bacteria loosens from mid-latitudes toward both the equator and the North Pole, whereas that between T4-like viruses and bacteria gradually tightens with increasing latitude. Numerous studies have shown that the number of links in a network can change with species richness due to the inherent correlation in how links are computed. However, the scaling of link numbers is also influenced by evolutionary constraints, phenological matching, and competition. Herein, we found that the latitudinal patterns of P–B and V–B associations are analogous to those of predator (protist and virus) diversity but not to that of bacterial diversity. A possible reason is that flourishing predation leads to an increase in predator richness and a decrease in prey (bacteria) richness, whereas the supply of underlying nutrient resources allows the rapid growth of soil bacteria. We speculate that the stimulation of soil bacteria by nutrient resources would offset the negative effects of predation to some extent, although this requires more empirical evidence in the future. We found that climatic factors (MAT and MAP) strongly influenced the putative trophic correlations between protists, T4-like viruses, and bacteria, as indicated by linear regression analysis. It is important to note that we examined the community-level relationship dynamics through Procrustes analysis and the species-level correlations using network analysis. We attributed the correlation patterns to environmental factors only when these patterns consistently aligned with specific environmental gradients at both the community and species levels. To understand the driving mechanisms of biotic associations along latitudes in depth, we conducted a microcosm experiment to confirm the observed relationships between climatic factors and trophic associations by setting gradients of temperature and soil water content with independent soil samples. A gradient of soil water contents was generated to represent the variation in MAP, given the strong correlation between SWC and MAP at the continental scale ( R = 0.66, P < .001; ). Consistent with the patterns observed in the survey, the microcosm experiment showed that temperature was a crucial factor driving microbial communities and the biotic associations between protists, T4-like viruses, and bacteria. With rising temperature, the relationship between protists and bacteria initially strengthened and then weakened, whereas the relationship between T4-like viruses and bacteria gradually weakened, as indicated by Procrustes residuals and bipartite network parameters. The edge number of bipartite networks between T4-like viruses and bacteria exhibited a decreasing trend but was highest at 20°C, which is consistent with the observed tendency in network metrics across the latitudinal gradient in the survey. Nevertheless, these results generally showed a consistent pattern of trophic associations with temperature both in the microcosm experiment and across the latitudinal gradient in the survey. Compared to temperature, soil water content exclusively affected the association between T4-like viruses and bacteria, and the V–B relationship gradually weakened as the SWC increased. This trend could be explained by a preference for wetter conditions among the dominant bacteria.
In turn, these taxa occupied more ecological niches and limited the survival of rare species, thus resulting in lower diversity in microbial communities. This phenomenon is supported by an analysis of metagenomes in grassland soils with a precipitation gradient, which showed that virus and host diversities are higher in soils with lower precipitation. This higher diversity of viruses and hosts may result in increased predator–prey interactions. Collectively, these results indicate that the latitudinal patterns of putative trophic interactions are predator-dominant and primarily modulated by temperature-related components of climate conditions. Our data provide empirical information regarding the putative biotic interactions (P–B and V–B associations) underlying the microbial ecological patterns in agroecosystems; these results are novel in revealing the roles of putative trophic interactions in shaping bacterial communities along latitudes. Although our data analyses are rigorous and validated, our sample resources are concentrated in the Northern Hemisphere at mid-latitudes where human activity is intensive. These findings are complementary to those of previous studies in other ecosystems and together emphasize the specificity and complexity of the mid-latitude region. Recent studies have shown that the distribution pattern of global terrestrial microorganisms is markedly complex in the mid-latitude region of the Northern Hemisphere. For instance, a study found a hump-shaped relationship between the protistan Shannon index and absolute latitude in natural soil ecosystems, with the highest diversity in the 30°N region. Another survey on ocean viral biodiversity identified five distinct ecological zones in the global ocean, with a tendency for viral diversity to decrease first and then increase in the Northern Hemisphere. Furthermore, the temperature changes caused by elevation might complicate our findings on the latitudinal patterns. However, most of our sampling sites (over 80%) are located in low-elevation plains (below 500 m), and there was a weaker correlation between MAT and elevation ( R² = 0.08, P = .017) than between MAT and latitude ( R² = 0.83, P < .001) across our data. Consequently, elevation variations appear to have a limited impact on our main findings. Future research should consider the combined effects of temperature variations due to both latitude and elevation on species diversity and associations. We acknowledge that studying viruses using amplicon sequencing only captures part of the viral community, which limits our ability to infer comprehensive virus–bacteria associations based on co-occurrence data. However, the conservation of the sequenced g23 -protein region generally allows for high-resolution identification of T4-like virus communities. Detection and analysis of virus–bacteria co-occurrence relationships at this level of resolution offers valuable insights for predicting virus–host dynamics and understanding the evolution of their interactions. Additionally, the direct and indirect top–down controls captured from our observational co-occurrence data may not provide conclusive evidence of biotic interactions, and these statistical correlations generate testable hypotheses for future experimental work. Future research could integrate network analysis with experimental approaches in synthetic microbial communities to refine the methods used to evaluate species interactions.
Nevertheless, we show the novel influence of biotic associations beyond environmental factors on bacterial communities and discover the drivers of species co-occurrence patterns through microcosm experiments. Our results highlight an important but previously overlooked mechanism of how changing protist–bacteria and virus–bacteria associations that are modulated by climatic conditions could affect bacterial communities along the latitudinal gradients. We used amplicon sequencing [16S rRNA genes, 18S rRNA genes, and major capsid protein-encoding gene ( g23 ) markers] and assessments of climatic conditions and soil chemistry to explore how putative trophic interactions (P–B and V–B associations) and abiotic parameters affect the bacterial community along a latitudinal gradient. Bacterial richness declined gradually toward high latitudes, confirming the expected latitudinal diversity gradient (LDG), a decline in species richness from the tropics to the poles. Comparably, the species richness of protists was also lowest at high latitudes but peaked at the intermediate latitude of approximately 32°N. However, the richness of T4-like viruses showed a nonsignificant trend toward high latitudes. There was a clear clustering of these three taxonomic groups, showing distinct variations in the microbial community composition at low and high latitudes (ANOSIM statistic: P < .001). We tested the correlation of microbial diversities with environmental parameters and found different response patterns in these three groups. Although MAT, MAP, pH, and SWC all significantly affected the richness of the three microbial groups, variations in their correlations with C/N and inorganic nitrogen content underscore the critical role of soil nutrient status in influencing speciation rates and community diversity. The distance–decay relationship (DDR) showed a sharper decrease in the compositional similarity of bacteria and protists at low latitudes than at high latitudes, which is contrary to the pattern of T4-like viruses. These results indicate that the contrasting latitudinal diversity patterns in bacteria, protists, and T4-like viruses may result from their differential responses to environmental filters and dispersal limitations. To illustrate the underlying processes that drive bacterial community assembly from low to high latitudes, we defined the relative contribution of deterministic processes using a moving window analysis. The proportion of deterministic processes gradually increased toward high latitudes until reaching a plateau at 32°N. We then quantified the extent to which each independent deterministic effect (including climatic, edaphic, and biotic effects) explained the distribution of the bacterial community. The most important abiotic factors determining bacterial diversity were selected to represent the climatic (mean annual temperature: MAT, and mean annual precipitation: MAP) and edaphic (pH, soil water content: SWC, and C:N ratio: C/N) effects. For the biotic effect, we constructed binary bipartite networks to profile putative top–down controls by protists and T4-like viruses on bacteria and extracted the edge number and network connectance of subnetworks to represent the biotic association of each site. The consistent latitudinal variation in the proportion of deterministic processes with the overall size of deterministic factors indicates that both abiotic and biotic relationships influenced deterministic processes.
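The per-site edge number and connectance metrics mentioned above can be sketched as follows; the binary protist-by-bacterium incidence matrix is built here by thresholding Spearman correlations on synthetic abundance tables, which is an illustrative construction rule rather than the study's actual network-inference procedure.

```python
# Sketch of the bipartite edge-number / connectance summaries used as per-site
# biotic-association metrics. Data and the link-detection rule are illustrative.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
protist_abund = rng.poisson(5, size=(40, 15))    # 40 samples x 15 protist taxa (synthetic)
bacteria_abund = rng.poisson(5, size=(40, 60))   # 40 samples x 60 bacterial taxa (synthetic)
bacteria_abund[:, :10] += protist_abund[:, :10]  # induce a few genuine associations

# Spearman correlation between every protist-bacterium pair; keep strong positive links.
rho, pval = spearmanr(protist_abund, bacteria_abund)   # columns of both tables are variables
cross = rho[:15, 15:]      # protist rows vs bacterial columns
cross_p = pval[:15, 15:]
binary = (cross > 0.6) & (cross_p < 0.05)

edge_number = int(binary.sum())
connectance = edge_number / binary.size   # realized links / possible links
print(f"edges = {edge_number}, connectance = {connectance:.3f}")
```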
In most cases, climatic and edaphic effects played dominant roles in structuring the bacterial assemblage. The influence of edaphic factors showed minimal variation across the latitudinal gradient, whereas the impact of climatic factors significantly increased from 3.0% to 15.1% along the latitude of 30°N to 32°N. Furthermore, the putative trophic interactions (P–B and V–B associations) were also essential in shaping the bacterial assemblage, although they explained smaller proportions of the variation in the bacterial assemblage (0.8%–4.5% and 0.3%–3.9% for P–B and V–B associations, respectively) than the environmental effects (3.0%–15.1% and 8.6%–13.4% for climatic and edaphic effects, respectively). The impact of protists on bacterial communities peaked at mid-latitudes (approximately 32°N), whereas the contribution of T4-like viruses to the bacterial assemblage increased toward high latitudes. We found that the correlation between protists and bacterial communities was greater than that between T4-like virus and bacterial communities. This may be caused by the fact that protists graze on a wider range of bacterial species than the selective infections by specific viruses. Additionally, the usage of amplicon sequencing may underestimate the correlation between viral and bacterial communities by focusing on T4-like viruses. It is widely acknowledged that deterministic factors are mainly composed of abiotic filtering and biotic interactions. However, the impacts of biotic interactions have been understudied, largely due to the challenges in quantifying these interactions and linking them to community assembly processes. Here, we incorporated the species associations of protists–bacteria and virus–bacteria into ecological models and revealed the importance of species associations for bacterial assemblages at the continental scale. These findings offer a deeper explanation of the assembly processes of bacterial communities in terrestrial ecosystems. Multiple OLS regression and PLS-PM were employed to investigate the abiotic and biotic factors influencing bacterial diversity and community structure, respectively. At the continental scale, the OLS analysis showed that the variation in bacterial richness was best explained by MAT, SWC, the protist–bacteria association (P–B edge number), and pH. Random forest analysis further proved that MAT and SWC were the top two explanatory factors ( P < .05) for the variation in bacterial richness in both low- and high-latitude regions, followed by the protist–bacteria associations and pH. These findings are consistent with previous studies, which showed that climate factors and pH were crucial in explaining the variation of bacterial richness. The effect of the V–B association was nonsignificant but was necessary to improve the final model's fit at high latitudes, though not at low latitudes. This may be because higher enzymatic activity degrades viral capsids in warmer and more humid soils, resulting in a lower impact of viruses on bacterial diversity at low latitudes. Given that T4-like virus richness gradually increases with latitude, it is reasonable to assume that the latitudinal diversity gradients are closely related to the significance of species interactions. Indeed, multiple studies have linked the latitudinal diversity gradient to a presumed gradient in the importance of biotic interactions.
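A hedged sketch of the variable-importance screening described above (OLS plus random forest) is given below; the predictors follow the names used in the text (MAT, SWC, P–B edge number, pH), but the data are synthetic and the model settings are illustrative rather than those used in the study.

```python
# Sketch of a random-forest importance screen for drivers of bacterial richness.
# Synthetic data; only the workflow is illustrated.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 80
X = pd.DataFrame({
    "MAT": rng.uniform(0, 25, n),          # mean annual temperature, degrees C
    "SWC": rng.uniform(10, 60, n),         # soil water content, %
    "PB_edges": rng.integers(5, 80, n),    # protist-bacteria edge number per site
    "pH": rng.uniform(4.5, 8.5, n),
})
# Synthetic response loosely driven by MAT and SWC plus noise.
richness = 1500 + 40 * X["MAT"] + 10 * X["SWC"] + rng.normal(0, 150, n)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X, richness)

# Rank predictors by impurity-based importance.
for name, imp in sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:10s} importance = {imp:.3f}")
```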
Combining a large-scale field survey and microcosm experiments, we provided a comprehensive picture of the latitudinal patterns and driving mechanisms of microbial diversity and putative trophic interactions. We demonstrated that the latitudinal diversity pattern of microbes was kingdom-dependent and depended on the responses of microbes to environmental factors. We found that both environmental factors and biotic associations affected bacterial communities along latitudes, where the intensity of climatic effects sharply increased at intermediate latitude (30°N to 32°N), whereas the intensity of edaphic effects was generally crucial and stable. The putative top–down controls such as protist–bacteria and virus–bacteria associations played a vital role in shaping bacterial structure, although the impacts were less dominant than environmental effects. Furthermore, we provided empirical evidence to prove that latitudinal patterns of putative trophic interactions were primarily modulated by the temperature-related components of climate conditions. These findings enhance our ability to address important ecological theories, as well as to promote soil microbial interconnectedness for performing ecosystem functions and services.
Improving completion rate of advance care planning at a tertiary rheumatological centre in Singapore: a quality improvement project
a2721d6a-03ba-458a-9509-521d32d87c1f
11474902
Internal Medicine[mh]
Advanced care planning (ACP) is a series of ongoing voluntary discussions between patients, families and healthcare professionals to plan for their future healthcare needs. ACP has been shown to improve end-of-life care, but rates of ACP completion have been dismal in patients with rheumatological disorders. In this quality improvement project, we were able to achieve a statistically significant increase in ACP completion across 1 year with a multimodal intervention involving the education of rheumatologists, active referral of patients to ACP coordinators, providing ACP collaterals to patients and bridging the communication between the rheumatologist and ACP coordinators. Through lessons learnt through this project, we were able to increase the rates of ACP completion in patients with rheumatological diseases. We hope that more patients with rheumatological diseases will be able to benefit from the increased uptake of ACP. Advanced care planning (ACP) is a series of ongoing voluntary discussions between patients, families and healthcare professionals to plan for their future healthcare needs. The European Association of Palliative Care recommended that healthcare providers should initiate this conversation. Subsequently, a certified ACP facilitator, medical provider or social worker explores the goals of care and the values of an individual to help craft and document their future healthcare preferences in an ACP Form. These coordinators are professionals employed by healthcare institutions and receive training to become certified ACP facilitators. Detering et al demonstrated that ACP improves end-of-life care and reduces anxiety and stress levels in both patients and their families. Additionally, numerous studies have also documented positive impacts on patient care, including an increase in satisfaction and quality of life. Patients with rheumatic diseases have a high symptom burden with disease complications, which may lead to multiple admissions and recurrent infections. Patients with systemic sclerosis, dermatomyositis, lupus or vasculitis may have high mortality and morbidity rates if there is cardiopulmonary involvement, such as pulmonary hypertension, interstitial lung disease or myocarditis. Therefore, it is imperative for ACP discussions to be held between patients and the healthcare team to ensure that their care preferences can be made known to clinicians involved in their care. Unfortunately, rates of ACP discussions have been low, with one study documenting this to be as low as 4.2%, but measures have been taken to change this. In Singapore, a study in the primary care sector showed that the use of both brochures and active counselling resulted in an absolute increase in completed advanced medical directives. To our knowledge, there have been no studies conducted on rheumatic patients. Therefore, this quality improvement project (QIP) aims to increase the number of completed ACPs in patients with rheumatic diseases. Study setting and period This QIP was conducted in the Department of Rheumatology and Immunology in the Singapore General Hospital from 1 August 2022 to 31 April 2024. This project was conducted by a multidisciplinary team from the process transformation and improvement, medical social work, specialty nursing, and rheumatology and immunology departments. The team comprised five physicians, two specialty nurses, two ACP coordinators, two research coordinators and one quality improvement coach. 
This project was led by the junior and senior residents in the Department of Rheumatology and Immunology. This project was implemented in the inpatient setting, whereby patients were admitted and there was an opportunity for intervention. We invited all patients with rheumatic diseases who were older than 65 years to participate in the project. We also included younger patients with rheumatic diseases who had significant lung pathology (including pulmonary hypertension and interstitial lung disease) and/or significant cardiac pathology (including ischaemic heart disease or heart failure). Our inpatient registrars (senior residents) identified these patients during daily ward rounds and informed the specialty nurses to engage identified patients during their inpatient stay. These specialty nurses are specifically trained in rheumatology and have been working with these patients for at least 3 years; therefore, they have developed a good rapport with them. Prior to the commencement of this project, we had registered the project with the office in charge of quality improvement at Singapore General Hospital.
Study design
A specialty-based multidisciplinary interventional study was conducted.
Data collection and analysis
We retrieved the electronic medical records of all patients in the Department of Rheumatology and Immunology, Singapore General Hospital, to determine if there was any completed documentation of ACP. We also confirmed the records with the medical social work office, who were able to double-check the records on documented ACPs separately. During each Plan-Do-Study-Act (PDSA) cycle, every referral for a patient planned for an ACP discussion was tracked, along with the completion status and the documented reason for not being able to complete the ACP discussion. Our primary outcome was the number of months with at least one documented ACP discussion.
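As a small, hypothetical illustration of how this primary outcome could be tallied from a referral log — field names and records below are invented, not drawn from the project's actual database — each month is counted once if it contains at least one completed ACP discussion.

```python
# Minimal sketch of tallying the primary outcome (number of months with at
# least one documented ACP discussion) from a referral log. Hypothetical data.
import pandas as pd

referrals = pd.DataFrame({
    "referral_date": pd.to_datetime(
        ["2023-02-10", "2023-03-05", "2023-03-22", "2023-05-14", "2023-06-30"]),
    "acp_completed": [True, False, True, True, False],
})

referrals["month"] = referrals["referral_date"].dt.to_period("M")
monthly = referrals.groupby("month")["acp_completed"].any()   # any completed ACP that month?

months_with_completed_acp = int(monthly.sum())
print(monthly)
print("months with >=1 completed ACP:", months_with_completed_acp)
```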
The results of our baseline measurement found that there were zero completed ACP discussions over 6 months from July 2022 to February 2023. The multidisciplinary team did a root cause analysis using the fishbone diagram as described in . We plotted possible intervention packages and designed an implementation plan as described in . We then conducted two PDSA cycles and data were collected and analysed. The results were subsequently presented in the medicine division quality improvement sharing session for dissemination. Root cause analysis describes the root causes of the reasons for poor ACP completion rate in patients with rheumatic diseases. 20 identified causes that may result in poor ACP completion rates were classified into reasons related to doctors, disease, patient, materials, system and family. Examples of such causes included a lack of confidence in leading ACP discussions by doctors and a lack of awareness of ACP by patients and their families. Interventions and change ideas describes the change ideas that were targeted to increase the ACP completion rate. With the use of a prioritisation matrix, we focused on six specific changes out of a potential of 10 change ideas. The proposed interventions were (1) team physicians to broach discussion about ACP as well as individual goals and plans during admission and document the discussion to facilitate outpatient discussion, (2) ACP facilitators to remind patients to let their primary physician know during the next appointment about the ACP discussion so that the primary physician can address any outstanding questions or ACP facilitators will document in the electronic medical records for patients with outstanding questions to their primary physicians, (3) ACP trained physicians to conduct ACP presentations during grand ward rounds to allow primary rheumatologists to understand ACP, along with regular reminders to the inpatient rheumatology team, (4) rheumatology inpatient team broaches ACP to rheumatology inpatients and to refer them accordingly to ACP facilitators using the computerised physician order entry, (5) rheumatology specialty nurses to provide ACP brochures when initiating ACP discussion in the ward and clinic settings and (6) rheumatology specialty nurses to provide Quick Response (QR) codes to patients to scan when doing ACP initiations, which would direct patients to an online workbook on ACP designed by the Agency of Integrated Care. This booklet is written in layman's terms and guides the patient in reflecting on their values, concerns and views towards their health and end-of-life care. The link to the ACP resources is provided for reference ( https://www.aic.sg/care-services/acp-resources/ ).
All six interventions described above were implemented for the patients identified by our team registrars. Two PDSA cycles were conducted over a 12-month period. In each PDSA cycle, an intervention was implemented and studied for 6 months. Based on the result of the first PDSA cycle, further interventions were included in the second PDSA cycle along with the first cycle PDSA interventions.
PDSA cycle 1
In the first PDSA cycle, the proposed interventions as shown in were implemented. Results were monitored across a 6-month period (February 2023–August 2023). In this cycle, we restricted ACP referrals to a maximum of 2 per month due to manpower limitations among the ACP facilitators.
PDSA cycle 2
In the second PDSA cycle, on top of the interventions in the first PDSA cycle, we did not set a maximum limit on the number of ACP referrals. This was done after consultation with the ACP service, which provided feedback that the number of referrals to them could still be increased. A second 6-month period (September 2023–March 2024) was monitored to track the number of ACPs completed.
Fisher's exact test was used to analyse the primary outcome of months with completed ACP. The comparison group used was the 6-month preintervention period. The significance level for all tests was set at p<0.05. Statistical analysis was performed using IBM SPSS Statistics for Windows, V.20 (IBM). A total of 22 patients were referred for ACP discussion. summarises the characteristics of patients being referred for ACP discussion. summarises the results of our PDSA cycles. During the preintervention period, there were five ACP referrals, but none were completed. During PDSA cycle 1, eight patients were referred for ACP discussion, and we were able to achieve 5 out of 6 months with at least one completed ACP discussion; this was statistically significant (p=0.015). During this time, the median number of completed ACPs per month increased from a baseline of 0 to 1. However, in PDSA cycle 2, 14 patients were referred for ACP discussion. The number of months with at least one completed ACP discussion fell to 2 out of 6 months (p=0.455). As such, the median number of completed ACPs per month was 0 for this period, which was the same as the preintervention period. summarises the outcomes of ACP referrals and the reasons for ACP non-completion. Among patients who declined an ACP discussion, the top reasons were that they (1) preferred to do advanced medical directives instead of ACP, (2) felt stressed discussing ACP and (3) preferred to read through ACP brochures on their own. Of note, some patients in PDSA cycle 2 (which took place between September 2023 and March 2024) also cited the reason that they would like to defer the discussion until after significant holidays, such as the Chinese New Year (CNY), which took place in February 2024. We are one of the first to present a rigorous approach to explore the improvement of ACP completion rate in patients with rheumatic diseases. We were able to significantly increase our ACP completion numbers from 0 to 1 across 6 months in PDSA cycle 1, although this was not reproduced in PDSA cycle 2. In contrast to our study, other QIPs for ACPs involving patients with advanced cancer have shown increases in documentation of 12%. This may be because our study targeted a different population of patients. Patients with rheumatological diseases are younger, and thus, ACP is not something that they will consider, especially when they are not in an acute flare episode. Further studies are needed to assess the understanding of ACP during non-flare periods in the outpatient setting. This study revealed similar findings to those of Ng et al. Our patients have an interest in understanding more about ACP. However, when it comes to ACP completion, they were less keen, citing a lack of readiness and the need to first discuss the ACP with their families. Completion of ACP may be perceived as unnecessary when patients are physically well.
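The reported Fisher's exact comparisons can be re-checked from the month counts given above (0/6 months preintervention, 5/6 in PDSA cycle 1, 2/6 in PDSA cycle 2); the sketch below uses scipy rather than the SPSS workflow described in the paper, but yields the same two-sided p values.

```python
# Re-checking the reported Fisher's exact comparisons of months with at least
# one completed ACP (preintervention 0/6 vs PDSA cycle 1 5/6 and cycle 2 2/6).
from scipy.stats import fisher_exact

# rows: [months with >=1 completed ACP, months without]
pre = [0, 6]
cycle1 = [5, 1]
cycle2 = [2, 4]

_, p1 = fisher_exact([cycle1, pre])  # two-sided, ~0.015 as reported
_, p2 = fisher_exact([cycle2, pre])  # two-sided, ~0.455 as reported
print(f"PDSA cycle 1 vs preintervention: p = {p1:.3f}")
print(f"PDSA cycle 2 vs preintervention: p = {p2:.3f}")
```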
Additionally, there is an element of the taboo of discussing ACP in Asian culture. Such taboo is particularly seen across festive seasons, especially during CNY, which could have contributed to the decrease in the number of completed ACPs in our PDSA cycle 2. More studies are needed to explore the barriers towards ACP and how to implement strategies to overcome the relevant barriers. During this QIP, the team highlighted certain interventions to be particularly beneficial in improving ACP completion rates. First, it was important that a clinician with good rapport with the patient initiated the conversation on ACP. Examples of such clinicians included the patient’s primary rheumatologist or the advanced practice nurse taking care of the patient longitudinally. This agrees with previous studies, which have shown that the involvement of the patient’s primary provider is important in facilitating such discussions. Additionally, the ACP discussion should be paced well, according to the readiness of the patient. A systematic review on conditions for a successful ACP discussion found that it is essential that patients and their families are willing to participate in ACP. As such, future interventions can include improving clinicians’ skills in assessing a patient’s perceptions towards ACP discussion and strategies for a well-paced discussion. Lastly, patients and their families were able to discuss their ACP when patients were well through the ACP workbook provided to them through QR codes given as part of the QIP. This workbook encouraged patients to reflect on their current concerns and health status and think about their views on treatment and goals of care. Studies have shown that ACP discussions should be iterative and repetitive to increase effectiveness. By allowing patients to reflect on ACP after the initial discussion, patients and their families were able to discuss key aspects of their goals of care when they were outside of the acute flare episode. While such information leaflets cannot replace a facilitated discussion with a trained healthcare professional, they helped to facilitate ACP discussions. This report is not without limitations. Two main factors contributed to the lower ACP completion rate in the second PDSA cycle. First, since our junior doctors are deployed to the department on a rotational basis of 3–6 months, adequate training may not have been achieved during their transitions; therefore, this limited our referral numbers in the second PDSA cycle. This reason was also one of the reasons cited by Johari et al for the lack of internal ACP referrals in their QIP study focusing on increasing the number of ACPs for patients with chronic obstructive pulmonary disease in the emergency department. Additional studies are needed to explore methods to improve the continuity of QIP efforts across junior doctors’ rotation periods. Second, our interventions in the second PDSA cycle cut across holiday seasons, such as Chinese New Year, which made the discussion of ACP more difficult as patients preferred to defer the discussion to after the festive period. Further efforts need to be explored on the communications of ACP during festive seasons. Third, our QIP was conducted in a single tertiary centre in a multiethnic Asian country. We expect the results to differ when conducted in other countries due to varying cultures and beliefs. Last, we did not seek feedback from our patients on their views regarding each intervention, for example, their perspective on the ACP workbook. 
Moving forward, we will monitor the sustainability of the impact of our interventions on ACP uptake in patients with rheumatic diseases. Additionally, we hope to understand the patients' perspective on each individual component of our interventions to further improve our efforts towards increasing ACP uptake. In this project, we were able to achieve a statistically significant increase in the number of months with at least one completed ACP across a 6-month period. We recommend further studies into factors that can encourage further interest in ACP. This is especially important as the short period during inpatient care may be too limited to fully explore the interests of patients in ACP.
Cephalobturator Neoacetabuloplasty: A Therapeutic Solution in Vicious Ankylosis After Developmental Dislocation of the Operated Hip—Case Study
77c18ce7-1a1d-470a-b407-3cc7ae9078ca
11905965
Musculoskeletal System[mh]
An 18-year-old female patient was diagnosed with DDH after the age of 1 year. Although she could have benefited from an early diagnosis of DDH within the National Programme for the Eradication of Developmental Dislocation of the Hip, the patient did not undergo a screening test for DDH prophylaxis. She underwent surgery at the age of 3 years for dislocation and at 4 years for hip reluxation. The patient was referred to our hospital 6 months after the second surgery. Her hip was rigid without mobility and was fixed at 40° flexion, 30° abduction, and 25° internal rotation. Radiography revealed hip dislocation and a femoral head whose size was unacceptable for achieving congruence through supraacetabular osteotomy (Figure ). After a long period of dynamic abstention and immobilization in a plaster cast, the patient underwent physical therapy to partially recover the mobility of the redislocated hip that had been subjected to surgery twice; however, her hip remained stiff. Radiological imaging of the dislocated hip revealed synostosis between the femoral epiphysis and acetabular portion of the iliac bone. Total hip ankylosis and the presence of possible synostosis suggested the possibility of securing the proximal end of the femur only on the bearing surface corresponding to the acetabular portion of the iliac bone to avoid considerable hypoplasia of the hemipelvis and to secure the pelvic limb in an anatomical position. A previous study discussed adjacent neoarticulation: cephalobturator neoacetabuloplasty. The only anatomical configuration capable of providing stable support is the obturator ring, in which the muscle group is inserted hemicircumferentially (Figure ). The muscles, arranged in their anatomical positions, form a strong pericephalic muscle sleeve, which increases the stability provided to the femoral head by the obturator ring. After a thorough preoperative assessment, we decided to perform reconstruction or neoacetabuloplasty with an anterior approach. The Smith-Petersen approach offers good visibility in preschool children. Access is made directly on the anterior side of the hip. Start the skin incision at the anterior superior iliac spine. Continue the incision 5 to 8 cm distally. For hip reconstruction, the incision is extended proximally from the anterior superior iliac spine into the portion corresponding to the proximal third of the iliac crest. Distally, the incision can be directed laterally to better visualize the acetabular notch and the muscular interstice for cephalobturator neoacetabuloplasty. Through superficial dissection, the femoral fascia is exposed and the lateral femoral cutaneous nerve is identified. The medial space of the sartorius is exposed, and its medial edge is delineated. The tensor fasciae latae is retracted laterally to avoid nerve damage. Deep surgical dissection exposes the rectus femoris, and the direct tendon is released from the anterior inferior iliac spine, along with the reflected tendon originating supraacetabularly. After releasing the proximal origins, they are fixed with a stay suture or atraumatic clamp and retracted distally. The cleavage plane is identified laterally between the capsule and the pelvitrochanteric muscles and medially between the capsule and the adductors. The Hohmann retractors are positioned, and the capsule is circumferentially sectioned 1 to 1.5 cm from the acetabular insertion to avoid vascular injury. Excess capsule tissue is excised sparingly for the same reasons.
During surgery, the articular surface of the acetabulum was no longer covered by the articular cartilage, whereas that of the femoral head showed dyschondroplasia and multiple areas of chondrolysis. Under these conditions, dislocation reduction was equivalent to hip arthrodesis at 6 years of age. Therefore, cephalobturator neoacetabuloplasty was performed. The round ligament is a landmark for individualizing acetabular incisions. We individualized the interstitium between the dorsal and anterior muscle groups and placed the femoral head on the external obturator, under the obturator ring, with the pectineus muscle anteriorly; the quadratus femoris, biceps, and semimembranosus and semitendinosus muscles posteriorly; and the adductor longus, brevis, and gracilis muscles medially. Stability and mobility are also ensured by the “cords” that anchor the proximal femoral end, represented by the insertions of the obturator internus, externus, and piriformis muscles on the anterior crest of the greater trochanter. A proximal threaded Kirschner wire with an appropriate size of 2 to 3 mm is introduced through the femoral neck to position the femoral head under the obturator foramen. Through a minimal musculoperiosteal window, a subtrochanteric osteotomy for shortening is performed. The resected femoral segment is used as a graft to horizontally position the obturator foramen. The shortening length calculated preoperatively can also be assessed intraoperatively by placing the femoral head under the obturator foramen and maintaining it with the Kirschner wire while the pelvic limb is lightly tractioned with the foot in a neutral position and the femoral diaphysis tangent to the trochantero-intertrochanteric segment. These two sizes must be approximately equal. The proximal musculoperiosteal sleeve of the osteotomy is kept intact. A supraacetabular osteotomy is performed, and in the distal segment (acetabulo-obturator), two proximal threaded Kirschner wires of 2 to 3 mm are introduced to horizontally position the obturator foramen and ensure good containment between the femoral head and the obturator foramen. The supraacetabular osteotomy and graft application were lowered, horizontally rotated, and anteriorly rotated to the obturator ring. Resection-shortening osteotomy, residual provisional fixation, and fixation with a nail plate and screws also strengthened the stabilization of the mobile hip. Intraoperative hip mobility testing revealed stable neoarticulation and normal range of motion (ROM) for all movements. Immediately postoperatively, the radiograph showed the femoral head placed at the obturator foramen (Figure ). Three months postoperatively, after 6 weeks of immobilization and two physical therapy sessions of 2 weeks each, the patient could walk without pain or restriction. One year postoperatively, the decision was made to remove the osteosynthesis materials after a pelvic radiograph had been taken (Figure ). She attended primary, middle, and high schools without any restrictions and engaged in sports, dancing, and practical activities with her peers, wearing a 1.5-cm right plantar riser. Wearing the plantar riser in the distal part of the shoe of the right foot for 12 years induced Achilles tendon retraction and shortening and decreased the foot dorsiflexion by 10°. Twelve years postoperatively, the pelvis is balanced and the femoral head is positioned at the level of the obturator ring (Figure ).
She endured distortion of the left femur with stoicism and was scheduled for derotation and a 1.5-cm left femur-shortening osteotomy after her college entrance examination. The 1.5-cm shortening present postoperatively remained after 12 years, and the femur had a normal growth rate, similar to that of the opposite side. The last assessment conducted at the age of 18 years, that is, 12 years after surgery, showed that all hip joint movements were within normal limits (Figure , A–D). Hip mobility was assessed by testing all the active and passive movements. Normal values (N) encompassed the quasiunanimously accepted ROM. The flexion was 120° bilaterally (N 110° to 120°), abduction 30° right and 35° left (N 30° to 50°; Figure , A–C, and E), extension 10° bilaterally (N 10° to 15°), adduction 30° bilaterally (N 20° to 30°), external rotation 45° bilaterally (N 40° to 60°), and internal rotation 30° right and 40° left (N 30° to 40°), and circumduction had a smaller amplitude on the right side. It should be noted that she performed squats with foreleg support because she could only perform them on the right pelvic limb and their amplitude was the same as that of the left pelvic limb. Abduction of the right thigh was normal at the lower limit of ROM, which was 5° less than that on the contralateral side (Figure , E). Clinical movement screening tests for opposability were also conducted on the flexor, extensor, and abductor muscles. The movement test for the main abductor muscles, gluteus medius, and gluteus minimus revealed decreased muscle strength compared with the opposite. The patient did not have an insufficient gluteus medius due to nerve damage to the L4, L5, and S1 branches; the gait was normal, and the Trendelenburg sign was negative. The gluteus maximus muscle, the most important extensor, had an action similar to that on the opposite side (10°); branches L5, S1, and S2 and the inferior gluteal artery remained anatomically and functionally intact. Hip ankylosis is a severe and disabling complication in preschool children. For the treatment of hip fibrous ankylosis in children, arthroplasty procedures using autografts, allografts, xenografts, or polymers have been used. However, the stiffness gradually returns in 27% to 50% of cases 1 to 2 years postoperatively. Codivilla-Hey Groves-Colonna capsular arthroplasty, modified by Gantz in 2012, preserves the hip with progressively limited mobility and delays total hip arthroplasty. Simultaneously, the hip is prepared for total hip arthroplasty and the risk of traction nerve injuries is reduced. Currently, for early or limited lesions, chondrocyte transplantation with autologous or allogeneic cells is practiced and xenotransplantation is being explored as a solution owing to genetic engineering. Autologous and allogeneic chondrocyte implantation has a moderate effect and may delay the onset of rigid fibrous ankylosis. Minimally invasive anatomical reconstruction is a concept that allows mobilization with the help of devices integrated into the biomechanics of each joint. Hip endoprostheses were used in teenagers. Preliminary results were good; however, in the medium and long term, pelvic hypoplasia and shortening of the operated limb, repeated endoprosthetic replacement surgery, and hip dysplasia with bone tissue deficiency were observed, complicating the choice and installation of the implant. Major complications after surgical treatment of DDH lead to ankylosis. Recurrent dislocation and avascular necroses are pathogenic. 
To prevent these terrible complications that include hip ankylosis as a corollary, osteoarticular reconstruction performed with refinement and elegance is also indicated in children aged 1 to 4 years, especially in cases where a second or third surgical intervention is needed after a dislocation complicated by recurrent dislocation and osteonecrosis. Osteonecrosis of the femoral head after reduction of a failed dislocation by relaxation had the highest probability (94.4%). Little attention has been paid in scholarly literature to patients with failed open reduction in DDH. The surgeries that can be used for the treatment of rigid fibrous ankyloses in preschool children are subtrochanteric osteotomy, which reorients the pelvic limb for effective support during movement but presents multiple inconveniences; osteoarthroplasty reconstruction of the hip; and cephalobturator neoacetabuloplasty. In this case, both articular surfaces were damaged. Chondrolysis of articular surfaces is common after one or two failed hip surgeries. Delaying surgery in preschool children with vicious ankylosis until endoprosthesis placement involves serious restrictions that the patient and parents find difficult to bear because the condition is extensive and disabling. Malpositioning of the stiff limb and difficulties encountered while walking cause the child to remain isolated and often present a psychological disability. Neoacetabuloplasty allows integration into the environment. In this case, the patient walked normally 3 months after the surgery. The growth rate of the operated pelvic limb was normal or slightly affected. The 1.5-cm shortening was associated with distortion of the femur through anteversion of the femoral neck. As with the coxofemoral joint, in cephalobturator neoacetabuloplasty, flexion is performed by rotating the femoral head around an axis that passes through the center of the epiphysis of the femoral head. The abduction-adduction movement is similar; the head rotates around an axis that passes from the anterior to the posterior through the epiphysis. These cephalobturator biomechanics, identical to the cephaloacetabular one, cause distortion of the femur through neck anteversion, including the same manifestations in standing and walking. Flexion-extension and abduction-adduction allow femoral rotation; the head with increased anteversion is placed in the center of the obturator foramen, and the distal extremity rotates medially while walking and remains in the same position while standing. If cephalobturator neoacetabuloplasty is performed in a child with intraacetabular ankylosis, shortening does not produce the effects of an LLD.
Looking Back: International Practice Patterns in Breast Radiation Oncology From a Case-Based Survey Across 54 Countries During the First Surge of the COVID-19 Pandemic
0f8886ad-84cc-429d-a77b-afb6a0e1318f
10581620
Internal Medicine[mh]
The COVID-19 pandemic affected radiation therapy (RT) for breast cancer (BC) delivery worldwide. To maximize clinical resources and minimize COVID-19 transmission, radiation oncologists (ROs) modified BC treatments as international professional societies established guidelines. These guidelines reflected patterns encouraging delayed RT for low-risk BC patients (less so for advanced-stage BC), abbreviating treatment regimens, and decreasing systemic therapy compared with surgery. Oncologists also reduced patient visitation, recommending initial surgery over preoperative chemotherapy, and delayed reconstructive surgery after mastectomy. While institutions, nations, and regions reported treatment modifications during COVID-19's peak, the global impact on BC treatment modification has not been collectively assessed. Our study is the only case-based global survey evaluating changes to RT recommendations for BC during the pandemic's first surge, which varied by country.

CONTEXT

Key Objective: To examine the international evolution of breast radiation therapy (RT) practices during the early stages of the COVID-19 pandemic and identify differences in treatment recommendations between countries.

Knowledge Generated: A survey conducted between July and November 2020 involving 1,103 radiation oncologists (ROs) from 54 countries found that approximately 60% of respondents reported no change in their treatment recommendations during the pandemic. The most frequent changes included omitting, delaying, or adopting short-course RT, with many transitioning to moderate hypofractionation.

Relevance: The pandemic significantly influenced RT delivery for breast cancer, as ROs worldwide swiftly embraced shorter fractionation courses. Alongside the publication of relevant clinical trials during the pandemic and ongoing studies, the trend toward widespread adoption of hypofractionation appears increasingly likely.

The BC radiation oncology team at Massachusetts General Hospital and Dana Farber Cancer Institute (Boston, MA) initiated an international collaboration of ROs to develop a case-based survey evaluating BC RT decision-making changes during the pandemic's surge across six scenarios, meeting regularly by teleconference.
Consisting of 6 cases and 58 questions (Data Supplement), the survey was approved by Dana Farber/Harvard Cancer Center's institutional review board and was distributed to ROs who self-identified as having treated at least one patient with BC annually, with an international network of radiation oncology professional societies augmenting distribution (Table ). It contained the following scenarios: (1) low-grade ductal carcinoma in situ (DCIS), (2) low-risk invasive BC after breast-conserving surgery, (3) early-stage invasive BC after mastectomy with immediate reconstruction, (4) invasive BC after neoadjuvant chemotherapy (NAC) and mastectomy without reconstruction, (5) invasive BC after mastectomy without reconstruction and with adjuvant chemotherapy, and (6) metastatic BC with an enlarging and bleeding breast mass. Respondents provided recommendations for two scenarios: (1) prepandemic and (2) during the pandemic's surge. Conventional fractionation was defined as 1.8-2.3 Gy per fraction, moderate hypofractionation as 2.31-3.0 Gy, and ultrahypofractionation as >5 Gy. The survey was translated into Spanish, Russian, and Mandarin, and distributed through REDCap on July 17, 2020, closing on November 8, 2020. Anonymous responses were compiled into a secure central database (incomplete responses were excluded [n = 254]). Categorical variables were described as counts and percentages, with chi-square and McNemar-Bowker tests used to examine the significance of changes between prepandemic and surge. P values are reported with statistical significance defined as <0.05. Statistical analysis was performed with R Studio, v. 2021.09.0 + 351 (Posit PBC, Boston, MA), and Excel 365, v. 2021 (Microsoft, Redmond, WA). This study was approved by Partners IRB (Protocol no.: 2020P001416) and nonverbal informed consent was obtained from participants before taking the survey by attesting on the webpage. Overall, 1,103 ROs from 54 countries completed the survey (Fig ), with the most respondents from 13 countries: United States (n = 285), Japan (n = 117), Italy (n = 63), Canada (n = 58), Brazil (n = 56), France (n = 48), Spain (n = 44), Russia (n = 43), China (n = 42), Thailand (n = 38), South Korea (n = 38), United Kingdom (n = 35), and Saudi Arabia (n = 31). ROs practiced in urban (69.8%; n = 770), suburban (19.4%; n = 214), rural (9.6%; n = 106), and other settings (1.2%; n = 13). Additionally, 49.8% (n = 549) practiced in university-affiliated hospitals, 25.7% (n = 283) in private practice, 21.1% (n = 233) in government hospitals, and 3.4% (n = 38) in other centers. Most (74.4%; n = 821) reported treating <200 patients with BC annually, while 45.6% (n = 503) reported >500 patients. In addition, 311 (28.2%) reported ≥1 patient with BC who was COVID-19–positive between November 1, 2019, and July 1, 2020. Herein, we describe treatment recommendation changes during the pandemic's surge as analyzed within six clinical cases. Case 1 DCIS A 52-year-old woman was diagnosed with 1.5-cm grade 2 ER+/PR+ DCIS and treated with left lumpectomy with final surgical margins >2 mm. Adjuvant endocrine therapy was initiated (Fig A). Prepandemic, 80.8% of respondents recommended adjuvant whole-breast RT (WBRT), 12.4% (n = 137) partial breast irradiation (PBI), 6.0% (n = 66) RT omission, and 0.8% (n = 9) delayed RT. In comparison, during the pandemic's surge, significantly more recommended delaying (22.3%, n = 246; P < .005) or omitting RT (12.9%, n = 142; P < .005). 
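As a minimal illustration of the paired prepandemic-versus-surge comparison described in the Methods above (and applied to each of the case scenarios that follow), the sketch below runs the McNemar-Bowker test of symmetry on a square table of paired recommendations for a single scenario. The category ordering and all counts are invented placeholders rather than survey data, and the availability of statsmodels is assumed.

```python
# Illustrative sketch only: paired prepandemic vs. during-surge recommendations
# for one clinical scenario, tested with the McNemar-Bowker test of symmetry.
# Counts and category ordering are hypothetical, not taken from the survey.
import numpy as np
from statsmodels.stats.contingency_tables import SquareTable

# Rows: prepandemic recommendation; columns: recommendation during the surge.
# Hypothetical order: WBRT, PBI, omit RT, delay RT.
paired_counts = np.array([
    [600,  20,  60, 200],
    [ 10, 100,   5,  15],
    [  2,   1,  50,   5],
    [  1,   0,   2,   6],
])

result = SquareTable(paired_counts).symmetry(method="bowker")
print(f"Bowker chi-square = {result.statistic:.2f}, "
      f"df = {result.df}, p = {result.pvalue:.4g}")
```

For a 2 × 2 question (for example, bolus versus no bolus), this symmetry test reduces to the standard McNemar test of marginal change.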
ROs from United States (40.7%), Saudi Arabia (25.8%), Canada (25.4%), and Brazil (23.2%) were most likely to delay RT during the surge, while ROs in Russia (39.5%) and Thailand (20.5%) would omit RT. Among those recommending delayed RT, most recommended an 11- to 16-week delay (22.2%, 2/9, prepandemic v 52.8%, 131/248, during surge; P < .005), while others a 17- to 24-week delay (11.1%, 1/9, prepandemic v 29.4%, 73/248, during surge; P < .005). Of those recommending WBRT prepandemic (n = 891), 77.1% (n = 687) chose moderate hypofractionation, and 67.9% (n = 605) omitted a lumpectomy site boost. During the surge, significantly more recommended ultrahypofractionation (1.2%, 11/891, prepandemic to 10.5%, 61/581, during the surge; P < .005). Changes in fractionation varied widely, with ROs in United Kingdom (90.5%), Canada (38.9%), Spain (32.0%), and Saudi Arabia (16.7%) reporting the highest ultrahypofractionated breast RT rates during the surge. By contrast, ultrahypofractionation for DCIS was infrequent in China (6.7%), United States (3.8%), and Italy (2.3%). No respondents from Japan, France, Russia, South Korea, or South Africa recommended ultrahypofractionation for DCIS. The most common PBI modality recommended for DCIS was external-beam RT (72.2%, 99/137), with 31.3% (31/99) favoring 30 Gy in five fractions over 2 weeks (Florence schedule), while 47.5% (47/99) favored >10 fraction regimen. This proportion shifted during the surge, with 31.3% (31/99 prepandemic) versus 58.6% (58/99 during surge; P < .005) recommending a five-fraction regimen. Case 2 Early-Stage Invasive BC After Breast-Conserving Surgery A 61-year-old woman underwent a right lumpectomy revealing a 2-cm grade 2 ER+/PR+/HER2– invasive lobular carcinoma with no evidence of lymphovascular invasion. Out of two sentinel nodes, zero contained malignancy. Her Oncotype Dx recurrence score was 8. Endocrine therapy was initiated (Fig B). Prepandemic, 83% (n = 915) recommended WBRT, 14.7% (n = 162) PBI, 1.6% (n = 18) RT omission, and 0.6% (n = 7) RT delay. A significant increase recommended delayed RT during the surge (19.0%, n = 210; P < .005). Respondents in United States, Thailand, Canada, Saudi Arabia, and Japan reported the highest delayed-RT rates (35.0%, 20.5%, 20.3%, 19.4%, and 17.9%, respectively). There was a slight increase in omitting RT across all countries during the surge (4.6%, n = 51; P < .005); however, ROs in Russia (18.6%), Saudi Arabia (9.7%), United Kingdom (8.9%), Thailand (7.7%), and Brazil (7.1%) favored omitting RT. Moderate hypofractionation was the most popular WBRT regimen, with a significant change between prepandemic and surge (80.8%, 739/915, and 69.4%, 459/661; P < .005, respectively); during the surge, there was a significant increase in ultrahypofractionation (2.6%, 24/915, prepandemic v 16.5%, 109/661, during surge; P < .005). Respondents in United Kingdom (89.3%), Spain (58.6%), Canada (51.5%), and Saudi Arabia (45.0%) recommended ultrahypofractionated WBRT during the surge, while those in South Africa (11.1%), Italy (10.8%), China (5.9%), United States (3.8%), and South Korea (3.4%) infrequently recommended it (no respondents in Japan, France, or Russia recommended it). For those recommending PBI, there was an increase in ≤5 fractions during the surge compared with prepandemic (61.3%, 84/137, v 36.8%, 43/117, respectively; P < .005). 
Case 3 Invasive BC After Mastectomy With Immediate Reconstruction A 54-year-old woman underwent a total simple mastectomy with immediate tissue expander reconstruction revealing a 3.4-cm grade 2 ER+/PR+/HER2– invasive ductal carcinoma and no lymphovascular invasion. Of three sentinel lymph nodes, one was positive (8-mm focus) without extranodal extension. The Oncotype Dx recurrence score was 4. Adjuvant chemotherapy was not recommended. An adjuvant aromatase inhibitor was planned (Fig ). Prepandemic, most (69.0%, 761) recommended postmastectomy RT (PMRT), whereas a minority favored complete axillary dissection (18.1%, 200) or no further local-regional treatment (12.9%; n = 142). ROs in Spain (81.8%), Canada (81.4%), Brazil (80.4%), Thailand (79.5%), and South Korea (78.9%) favored PMRT, while those in Italy (44.4%) and Russia (41.9%) favored complete axillary dissection. During the surge, there was an increase in no further local-regional treatment (20.8%, n = 229; P < .005). However, PMRT (63.0%, n = 695; P < .005) and complete axillary dissection (16.2%, n = 179; P = .106) recommendations decreased, but only the former was statistically significant. ROs in Japan (35.0%), Italy (33.3%), Russia (32.5%), and China (31.0%) most recommended no further local-regional treatment. Most ROs recommending PMRT chose conventional fractionation, regardless of prepandemic or surge (67.5%, 534/791, and 52.1%, 362/695, respectively). However, during the surge, recommendations significantly increased for moderate hypofractionation (from 28.5%, 217/761, to 43.7%, 304/695; P < .005) and ultrahypofractionation (from 0.4%, 3/761, to 3.3%, 23/695; P < .005). ROs in Canada, Spain, Brazil, United Kingdom, and Saudi Arabia most recommended moderate hypofractionation prepandemic (39.6%, 55.6%, 46.7%, 87.0%, and 60.9%, respectively) and during surge (70.8%, 66.7%, 63.6%, 63.6%, and 57.1%, respectively). Overall, ROs in United Kingdom (36.4%), Spain (12.1%), and Saudi Arabia (48.0%) had the highest rate of recommending an ultrahypofractionation regimen for PMRT during the surge. Case 4 Invasive BC After NAC and Mastectomy A 55-year-old woman with cT2N1 grade 3 triple-negative BC underwent NAC with doxorubicin, cyclophosphamide, and paclitaxel, followed by total mastectomy and sentinel lymph node biopsy. Reconstruction was not performed. A pathologic complete response was achieved, with no residual disease seen in the breast and three sentinel nodes (ypT0N0; Fig ). Prepandemic, most recommended PMRT using conventional fractionation (62.3%, n = 687) compared with moderate hypofractionation (27.9%, n = 308), 3.1-5.0 Gy (0.7%, 8), ultrahypofractionation (0.6%, n = 7), or no PMRT (8.4%, 93). However, during the surge, moderate hypofractionation (40.9%, n = 451; P < .005), no PMRT (13.1%, n = 144; P < .005), and ultrahypofractionated PMRT (3.5%, n = 39; P < .005) were recommended. During the surge, respondents from Canada (86.5%), Saudi Arabia (77.8%), Spain (77.5%), Brazil (69.8%), and Russia (63.6%) mostly recommended moderate hypofractionation, while those in China (23.4%), Japan (21.9%), and Saudi Arabia (14.8%) mostly omitted PMRT. In this scenario, ROs in United Kingdom reported the highest ultrahypofractionation use (66.7%). Case 5 Invasive BC After Mastectomy Without Reconstruction and Adjuvant Chemotherapy A 45-year-old woman underwent a left modified radical mastectomy without immediate reconstruction. 
Pathology revealed a 5-cm, grade 2 ER+/PR+/HER– invasive ductal carcinoma with evidence of lymphovascular invasion and five out of 15 positive axillary nodes. She completed adjuvant dose-dense doxorubicin, cyclophosphamide, and paclitaxel. An aromatase inhibitor was planned (Fig A). Prepandemic, most ROs (81.6%, n = 900) preferred to begin PMRT ≤6 weeks after surgery, while 16.5% (n = 182) would initiate PMRT >6-10 weeks after surgery. During the surge, recommendations increased for PMRT to start within 6-10 weeks after surgery (23.5%, n = 259; P < .005) or delay by 11-16 weeks after (5.2%, n = 57; P < .005). Most did not change their recommendation to delay RT during the surge, preferring to start <6 weeks (70.2%, n = 774). The surge did not change bolus or boost fractionation or use. Most recommended conventional fractionation (73.3%, n = 808, and 69.0%, n = 761), using a bolus (55.1%, n = 608, and 53.5%, n = 590), and preferring not to boost the mastectomy scar (75.6%, n = 834, and 78.2%, n = 863). Recommended target volume(s) included the chest wall, axillary nodes, and supraclavicular nodes (46.5%, n = 512), with 52.6% (n = 580) also including the internal mammary nodes, which remained relatively consistent during the surge (49.3%, n = 543, and 49.6%, n = 547, respectively). When the same hypothetical patient underwent immediate breast reconstruction with an implant or tissue expander, most recommended conventional fractionation (81.7%, n = 901, and 69.0%, n = 761) compared with moderate hypofractionation (17.3%, n = 191, and 28.9%, n = 319), prepandemic and surge, respectively. Case 6 Metastatic BC With an Enlarging Breast Mass A 75-year-old woman with metastatic ER+/PR+/HER2– invasive ductal carcinoma resistant to several lines of systemic therapy presents with an enlarging and bleeding 6-cm right breast mass. Karnofsky performance status is 80. Surgical resection is not planned because of the presence of multiple lung metastases (Fig ). Most (60.3%, n = 665) recommended palliative RT delivered in at least 10 fractions prepandemic, specifically 50 Gy in 25 fractions (8.1%, n = 89), 45 Gy in 18 fractions (18.7%, n = 206), and 30 Gy in 10 fractions (33.6%, n = 370). However, during the surge, most recommended palliative RT delivered in ≤5 fractions (63.9%, n = 705; P < .0005): 26 Gy in five fractions (18.5%, 204/1,103), 20 Gy in five fractions (26.4%, 291/1,103), and 8 Gy in one fraction (19.0%, 210/1,103).
Our study is unique in its diverse representation and strong global collaboration between experts reporting treatment recommendations concerning the pandemic's first surge in their respective nations. It aims to determine whether the pandemic acutely affected practice patterns for patients with BC receiving RT relative to prepandemic times. Participation was robust, with ROs from 54 countries fully completing the survey, demonstrating wide variations in international BC treatment recommendations prepandemic and during the surge. In cases 1 and 2, minimal change was observed, with many recommending WBRT delivered with moderate hypofractionation and no boost, prepandemic and during the surge. Most recommendation changes during the surge indicated delaying, omitting, or abbreviating RT fractionation. This aligned with the HYPO trial publication and published treatment guidelines for physicians prescribing RT during the pandemic, although the distribution was not uniform. ROs in United States, Saudi Arabia, Canada, and Brazil most recommended delayed RT, while most recommended omitting RT in Russia and Thailand. Similarly, most ultrahypofractionated RT recommendations during the surge were in the United Kingdom, Canada, Spain, and Saudi Arabia. By contrast, respondents in China, United States, South Korea, and Italy infrequently recommended ultrahypofractionation, and some countries did not recommend ultrahypofractionation at all (Japan, France, and Russia). Ultrahypofractionated RT recommendations for low-risk BC by ROs in United Kingdom are informed by the FAST trial's 10-year outcomes and the FAST FORWARD trial's 5-year outcomes, published during the surge. These showed noninferior outcomes compared with standard fractionation (FAST) or moderate hypofractionation (FAST FORWARD). In United States, where practice patterns can vary significantly by geography and practice type, a notable increase in recommendations for ultrahypofractionated RT for early-stage BC was reported (although lower than in United Kingdom, where practice is uniform with the same dose/fractionation). ROs recommending PBI for early-stage BC favored increasing to ≤5 fractions during the first surge compared with prepandemic. This change toward accelerated PBI is attributed to the Florence Trial, which published 10-year outcomes during the initial surge and survey period. Its findings demonstrated favorable cosmetic outcomes, similar local recurrence, and similar survival compared with WBRT. In the first high-risk BC scenario of a patient undergoing mastectomy and sentinel lymph node biopsy with reconstruction for pT2N1 hormone receptor–positive disease, most favored PMRT prepandemic.
ROs in Russia and Italy recommended complete axillary dissection prepandemic (a controversial approach since ACOSOG Z11's publication, which provides evidence against such). For this scenario, during the first surge, a notable increase was observed in recommendations for no further local-regional treatment in a pathologically node-positive mastectomy setting. Those favoring complete axillary dissection prepandemic most recommended no further local-regional treatment. The second high-risk BC scenario of a patient with complete pathologic response in the breast and nodes to NAC highlights ROs' comfort with moderate fractionation and ultrahypofractionation in the mastectomy setting during the surge. Notably, the willingness to omit postmastectomy radiation in the pathologic complete response setting was observed among 8.4% of respondents prepandemic and increased to 13.1% during the surge. Although most ROs recommended PMRT delivered in conventional fractionation before and during the surge for patients with high-risk BC, our survey observed a rapid uptake in moderate hypofractionation. Respondents in Canada, Saudi Arabia, Spain, Brazil, and Russia favored moderate hypofractionation, while ROs in United Kingdom mostly recommended ultrahypofractionation. The latter is likely because of United Kingdom oncologists' ongoing experience with patients enrolled in FAST FORWARD's nodal planning study and coordinated breast RT consensus process. , This comfort with moderate hypofractionation is also likely influenced by a large randomized trial conducted in China comparing conventional fractionation to hypofractionation in the nonreconstructive postmastectomy setting. The long-term results from this trial and findings from similar US clinical trials , evaluating moderate hypofractionation in the mastectomy setting (including reconstruction) will likely influence widespread global adoption of shorter course treatments. Our survey also revealed that during the surge, recommendations to start PMRT 6-10 weeks after surgery (up to 11-16 weeks) slightly increased compared with typical time frames (within 6 weeks). Notably, for the highest-risk patient with pT3N2 hormone receptor–positive left invasive BC after mastectomy and adjuvant chemotherapy, only half recommended target volumes inclusive of internal mammary nodes (prepandemic and surge), suggesting no worldwide consensus. In the prepandemic palliative scenario, most recommended palliative RT prescribed in ≥10 fractions. However, during the surge, most recommended ≤5 fractions, reflecting a significant change influenced by the pandemic, likely in response to protecting patients from COVID-19 exposure and mindful of their quality of life. Although we cannot assess the economic impact of this by country, it highlights physicians' willingness to recommend shorter treatment courses for terminal BC patients and raises questions about routine practice deficits in nonpandemic periods. We must acknowledge several limitations in this study. Recall bias and well-documented survey limitations may have affected answers about prepandemic recommendations (such questions referenced practices 7-11 months before survey distribution). Recall bias may also apply to treatment recommendations for the country-specific surge scenario, which varied among nations and may not have been reached during survey distribution. Additionally, updated treatment guidelines were published during survey distribution, which may have influenced answers. 
Thus, for the questions related to treatment recommendations during the surge, respondents may have been unable to separate their choice from guideline recommendations. This is salient for hypofractionated RT recommendations during the pandemic's height, as several clinical trials , validated it, thus making it challenging to attribute recommendation increases to COVID-19 alone. Another limiting factor is the over-representation of countries with high response rates. Many countries (n = 40) had fewer than 25 ROs complete surveys, and 31 had ≤5 respondents. Additionally, ROs in United States were over-represented (25.8% of respondents), while ROs in Africa were under-represented. We also cannot overlook selection bias because of the unequal response rate, with only 81.3% completing the entire survey. To minimize this, our analysis of country-specific recommendations is limited to countries with >25 respondents. Finally, each country's culture and its impact on treatment heterogeneity were impossible to factor in. It is unclear if these recommendations represent lasting changes in BC management 2 years into the pandemic. Nevertheless, as the first of its kind in breast radiation oncology during an unprecedented global health emergency, this survey has numerous strengths, including manifold responses and robust international participation. Historically there have been worldwide differences concerning volume and dose fractionation for BC radiotherapy. - Our study uniquely provides a snapshot of case-specific treatment recommendations and builds upon published COVID-19–related surveys and experiences. - Specifically, it demonstrates how the pandemic affected treatment, providing insights into how management varies greatly globally. Lessons gained from this experience will inform consensus guidelines for breast RT and preparedness against future pandemics. Longitudinal surveillance will reveal whether the patterns observed persist after the pandemic and, more importantly, how these changes affect outcomes.
A robust deep learning approach for segmenting cortical and trabecular bone from 3D high resolution µCT scans of mouse bone
45c091aa-9394-4335-b7cf-587f9d1cef25
11906900
Musculoskeletal System[mh]
Preclinical studies are essential for exploring the biological regulation of the musculoskeletal system. They are required in all drug development pipelines, where both small and large animal models are used to assess the efficacy and side effects of treatments on bone remodeling , . Bone remodeling is a dynamic, continuous process where old bone tissue is resorbed by osteoclasts and new bone tissue is formed by osteoblasts . This cycle is influenced by mechanical loading, hormonal regulation, drug treatments, and diseases such as osteoporosis . A precise understanding of how cortical and trabecular bones respond to these factors is crucial for biomechanical analysis, as it helps to better understand bone remodeling dynamics and aids in the development of new therapeutic drugs. Cortical bone provides structural support and strength, while trabecular bone, with its spongy structure, plays a key role in energy absorption and metabolic activities , . Therefore, segmenting these two regions separately is essential for accurately assessing the effects of various treatments and stimuli on the two different compartments. Microcomputed tomography (µCT) is the gold standard for quantifying skeletal structure-function relationships, disease progression, and regeneration in preclinical models. With the use of µCT, major scientific advancements in osteoporosis, bone fracture healing, bone scaffold tissue engineering, and bone cancer metastasis have been made , . A major drawback of µCT is the acquisition of large volumes of data, which requires significant manual labor for processing and statistical analysis. Following data acquisition, the segmentation of bone structures is crucial for subsequent quantitative analysis. Manual segmentation is widely used but poses challenges due to its labor-intensive nature and potential for variability and bias among annotators . This variability can result in inconsistencies across studies, impacting the generalization of findings . The urgent requirement for automated segmentation processes is evident, aiming to increase both reproducibility and efficiency in bone analysis. Semiautomatic manual segmentation of large 3D µCT scans frequently uses interpolation algorithms , segmenting sampled slices from the 3D scan and interpolating over the remaining slices. However, the effectiveness of this approach depends heavily on the sampling step size. A larger step size can lead to segmentation errors in interpolated slices, affecting the overall accuracy. It is essential to strike a balance, as decreasing the step size improves accuracy but incurs a higher time cost in the segmentation process. Moreover, the literature reveals a notable gap concerning the development of a robust algorithm capable of accurately segmenting diverse types of bone scans. Conventional methods often struggle to generalize across various bone scans, underscoring the need for more adaptable and versatile segmentation approaches. While traditional automated segmentation algorithms utilizing classical morphological filtering and 3D image-processing techniques have proven effective , , their performance is often hindered by the morphological variations resulting from different experimental conditions. Despite their accuracy, segmentation techniques based on dual-thresholding of cortical and trabecular bone require calibration phantoms and precise voxel values for effective segmentation, leading to larger scan sizes. 
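As a rough illustration of the slice-wise interpolation idea behind the semiautomatic workflows mentioned above, the sketch below fills in the masks of unannotated slices between two manually segmented slices by linearly blending signed distance maps (shape-based interpolation). It is a simplified, hypothetical example under the assumption of NumPy/SciPy availability, not a reproduction of any specific tool cited here.

```python
# Hypothetical sketch: shape-based interpolation of binary masks between two
# manually annotated slices of a 3D scan, as used in semiautomatic pipelines.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Negative inside the object, positive outside (zero near the boundary)."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

def interpolate_slices(mask_a: np.ndarray, mask_b: np.ndarray, n_between: int):
    """Generate n_between intermediate binary masks between two annotated slices."""
    sd_a, sd_b = signed_distance(mask_a), signed_distance(mask_b)
    out = []
    for k in range(1, n_between + 1):
        t = k / (n_between + 1)            # fractional position between the slices
        sd = (1.0 - t) * sd_a + t * sd_b   # linear blend of signed distance maps
        out.append(sd <= 0)                # threshold back to a binary mask
    return out

# Example: annotate every 5th slice and fill the 4 slices in between.
a = np.zeros((64, 64), bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), bool); b[25:45, 25:45] = True
between = interpolate_slices(a, b, n_between=4)
```

The larger the gap between annotated slices, the more the blended shapes can deviate from the true anatomy, which is the step-size trade-off described above.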
This increase necessitates significant computational resources for data storage and processing. Furthermore, these methods face difficulties in maintaining consistent volume segmentation quality in the presence of noise and varying recording conditions, affecting their ability to accurately segment bone structures and maintain connectivity. Segmenting cortical and trabecular bone is particularly challenging in the metaphysis region near the growth plate of long bones, where even experts find it difficult to manually delineate these structures owing to their complexity and intermingling. This complexity is exacerbated by various factors, such as drug treatments with anabolic or anticatabolic (or antiresorptive) effects, such as intermittent parathyroid hormone (PTH), risedronate, or mechanical loading (ML). These factors induce significant bone remodeling – and significantly complicate the segmentation process. Deep learning has revolutionized biomedical imaging segmentation, particularly in high-resolution µCT scans of bones – . Models based on deep learning, especially convolutional neural networks (CNNs), have demonstrated exceptional capability in automatically segmenting medical images with remarkable accuracy and speed. This success is attributed to the ability of CNNs to capture local features and dependencies between voxels, optimizing filters to detect and recognize relevant features effectively. These advancements in deep-learning-based image analysis techniques have the potential to automate µCT data processing, providing rapid research outcomes. Neeteson et al. developed and validated a fully automated segmentation algorithm for human high-resolution peripheral quantitative computed tomography (HR-pQCT) images of the distal radius and tibia, employing a U-Net-based architecture . This method achieved high precision, even in images with significant cortical porosity. Similarly, Klein et al. introduced a dependable, fully automated method for bone segmentation in whole-body CT scans of patients with multiple myeloma, utilizing a U-Net-based framework . Schoppe et al. presented a deep learning pipeline called AIMOS , comprising a preprocessing module, a deep learning backbone, and a postprocessing module that automatically segments major organs in whole-body mouse scans . Malimban et al. (2022) demonstrated that 3D models of nnU-Net achieve superior segmentation accuracy and are more robust to unseen data than 2D models . They also compared the performance of nnU-Net with that of AIMOS on µCT images of the thorax in mouse µCT images. Integrating robust hybrid neural network architectures into the nnUNet framework has proven to work better for medical imaging, as seen with nnFormer . Since their popular launch in the natural language processing field, attention mechanisms and transformers have become prevalent in computer vision , . Self-attention has proven to be highly effective for developing neural network architectures for image processing. Oktay et al. introduced attention gates, which filter feature maps generated in the encoder and transmitted through skip connections to the decoder . Transformers are increasingly popular in computer vision tasks because of their ability to capture long-range dependencies within a 3D scan. The Vision Transformer (ViT) was introduced in 2020 , marking the first fully transformer-based architecture to achieve state-of-the-art performance. 
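As a purely generic illustration of the self-attention operation referenced above, the sketch below implements scaled dot-product self-attention over a sequence of patch embeddings in PyTorch. The dimensions are arbitrary, and this is not code from ViT, Swin Transformer, or any of the cited architectures.

```python
# Minimal, generic scaled dot-product self-attention over patch embeddings.
# Shapes and dimensions are illustrative only.
import torch
import torch.nn.functional as F

def self_attention(x: torch.Tensor, w_q, w_k, w_v) -> torch.Tensor:
    """x: (batch, n_tokens, dim); w_q/w_k/w_v: (dim, dim) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)  # (B, N, N) similarities
    weights = F.softmax(scores, dim=-1)   # each token attends to every other token
    return weights @ v                    # long-range, content-dependent mixing

dim = 96                                  # e.g., an embedding size
x = torch.randn(2, 128, dim)              # 2 scans, 128 patch tokens each
w_q, w_k, w_v = (torch.randn(dim, dim) for _ in range(3))
y = self_attention(x, w_q, w_k, w_v)      # (2, 128, 96)
```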
To improve efficiency, Swin Transformers employ shifted windows for enhanced global attention by applying self-attention to nonoverlapping windows and enabling cross-window connections in subsequent layers. There is growing interest in hybrid architectures that combine transformers and CNNs, such as UNETR and SwinUNETR . These hybrid architectures excel at capturing both the global and the local context within a 3D scan, providing a more comprehensive data representation. Given the heterogeneity of bone imaging data sources and the variability introduced by different treatments, there is an urgent need to automate the segmentation process and ensure its generalization. Generalization is critical for applying models to unseen data and relies on strategies such as data augmentation to increase model robustness and adaptability . Importantly, the ability to generalize is essential to ensuring that a model trained on limited data can accurately segment images under new biological conditions, scanned at different resolutions, and produced by different µCT devices. This capability ensures that the model maintains high accuracy and reliability across a wide array of preclinical scenarios, broadening its applicability in biomedical research, particularly in understanding bone adaptation processes and studying bone diseases, such as osteoporosis. In this paper, we proposed a novel hybrid deep learning architecture, the Dual-Branch Attention-based Hybrid Network (DBAHNet) , for high-resolution µCT mouse tibia image segmentation. DBAHNet yields excellent segmentation results on a diverse µCT mouse bone dataset, even when trained on a limited control set of high-resolution µCT (4.8–5 µm) mouse tibia images. Our objective was to develop a robust and generalizable model capable of accurately segmenting the cortical and trabecular compartments in high-resolution µCT images across various conditions. This model can be integrated into an automated pipeline for high-resolution µCT mouse bone assessment, automating the analysis in preclinical studies. This advancement is expected to increase our understanding of bone remodeling dynamics and the effects of drugs and mechanical loading on bones. We collected a large dataset from seven different research studies – , – , encompassing various imaging resolutions (4.8–13.7 µm), mouse strains, and experimental conditions (drug treatments and mechanical loading). This highlighted the robustness and generalization capabilities of our deep learning architecture in handling unseen scenarios. Specifically, we trained our model on only 74 control mouse tibia µCT 3D scans and evaluated its performance across a large, diverse dataset. Our model achieved high accuracy in segmenting cortical and trabecular compartments and performed effectively in extreme cases where the bone shape deviated significantly from that of control bones, underscoring its ability to generalize from a limited dataset. Our findings suggest that DBAHNet can achieve robust and precise segmentation even in challenging and previously unseen scenarios. This capability is attributed to the neural network’s ability to effectively learn and differentiate the hidden features of the cortical and trabecular bone. Experimental design The main training and experiments were conducted with 4 NVIDIA V100 32 GB GPUs. 
We employed the stochastic gradient descent (SGD) optimizer with a momentum of 0.99, a batch size of 4, and a cosine annealing learning rate scheduler starting at $1 \times 10^{-4}$. The 3D scans of the bone were each divided into 10 subsets along the z-axis (i.e., the long bone axis). This division is crucial because of the nature of the high-resolution data, as it reduces the size of the scans during data loading. The input scans were randomly cropped into subvolumes of size (320, 320, 32) and subjected to data augmentation and preprocessing. We evaluated the performance of our model via the Sørensen-Dice score coefficient (DSC) and the 95th percentile of the Hausdorff distance (HD95). The DSC measures the overlap between the predicted segmentation and the ground truth and is calculated using Eq. (1):

$$\text{DSC} = \frac{2\,|X \cap Y|}{|X| + |Y|} \qquad (1)$$

where $X$ is the set of predicted segmentation pixels and $Y$ is the set of ground truth segmentation pixels. The Hausdorff distance (HD) measures the distance between two sets of points and is defined in Eq. (2):

$$d_H(X, Y) = \max \left\{ \sup_{x \in X} \inf_{y \in Y} d(x, y),\ \sup_{y \in Y} \inf_{x \in X} d(x, y) \right\} \qquad (2)$$

where $d(x, y)$ is the Euclidean distance between points $x$ and $y$. The 95th percentile of the Hausdorff distance (HD95) is used to mitigate the effect of outliers and is calculated as the 95th percentile of all the distances.
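A minimal sketch of how the DSC and HD95 of Eqs. (1)-(2) might be computed for binary 3D masks with NumPy/SciPy is shown below. The isotropic voxel spacing (5 µm, expressed in mm) and the choice to pool both directed surface distances before taking the 95th percentile are assumptions made for illustration; this is not the authors' evaluation code.

```python
# Illustrative (not from the paper): DSC and HD95 for binary 3D masks,
# following Eqs. (1)-(2). Voxel spacing is assumed isotropic (5 µm = 0.005 mm).
import numpy as np
from scipy import ndimage

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Sørensen-Dice coefficient between two binary masks (Eq. 1)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def _surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: the mask minus its morphological erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(0.005, 0.005, 0.005)) -> float:
    """95th-percentile symmetric Hausdorff distance (Eq. 2), in mm."""
    pred_s, gt_s = _surface(pred.astype(bool)), _surface(gt.astype(bool))
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dt_gt = ndimage.distance_transform_edt(~gt_s, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~pred_s, sampling=spacing)
    d_pred_to_gt = dt_gt[pred_s]    # predicted surface -> ground-truth surface
    d_gt_to_pred = dt_pred[gt_s]    # ground-truth surface -> predicted surface
    return float(np.percentile(np.hstack([d_pred_to_gt, d_gt_to_pred]), 95))
```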
We used a combined Dice cross-entropy loss function for training, which combines the Dice loss and cross-entropy (CE) loss to optimize the model's segmentation performance. The loss function is defined in Eq. (3):

$$\begin{aligned} \text{Dice Loss} &= 1 - \frac{2 \sum_{i=1}^{N} p_i g_i}{\sum_{i=1}^{N} p_i + \sum_{i=1}^{N} g_i} \\ \text{Cross-Entropy Loss} &= -\sum_{i=1}^{N} \left[ g_i \log(p_i) + (1 - g_i) \log(1 - p_i) \right] \\ \text{Loss} &= \alpha \times \text{Dice Loss} + \beta \times \text{Cross-Entropy Loss} \end{aligned} \quad (3)$$

where $p_i$ is the predicted binary mask, $g_i$ is the ground truth binary mask, and $N$ is the total number of pixels. The weighting factors $\alpha$ and $\beta$ are both set to 0.5 to balance the contributions of the two loss components. The numbers of attention heads used for the transformers at each hierarchical level were 6, 12, 24, and 48. The embedding dimension $C$ in the final model was set to $C = 96$. The main control dataset used for training and comparison with other state-of-the-art architectures was split into 70% for training, 10% for validation, and 20% for testing. We used the validation set to monitor the training and conduct the ablation study experiments, whereas the test set was kept isolated to evaluate the model's performance against popular state-of-the-art architectures. For the ablation study and assessment of model complexity, we used the number of parameters (N Params) in millions and the giga floating point operations per second (GFLOPS).
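As an illustration, the following PyTorch sketch implements the combined loss of Eq. (3) with $\alpha = \beta = 0.5$ for a single foreground channel, together with an SGD optimizer (momentum 0.99) and a cosine annealing schedule starting at $1 \times 10^{-4}$ as described above; the placeholder network, the toy tensor sizes, and the `T_max` value are illustrative assumptions rather than the authors' training code.

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, alpha=0.5, beta=0.5, eps=1e-6):
    """Combined Dice + cross-entropy loss (Eq. 3) for a binary mask.
    `logits` and `target` have shape (B, 1, D, H, W); `target` is 0/1."""
    probs = torch.sigmoid(logits)
    target = target.float()
    # Soft Dice loss over the whole batch.
    intersection = (probs * target).sum()
    dice = 1.0 - (2.0 * intersection + eps) / (probs.sum() + target.sum() + eps)
    # Voxel-wise binary cross-entropy.
    ce = F.binary_cross_entropy_with_logits(logits, target)
    return alpha * dice + beta * ce

# Optimizer and scheduler matching the reported hyperparameters:
# SGD with momentum 0.99 and cosine annealing starting at 1e-4.
model = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)      # placeholder network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.99)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)

x = torch.randn(2, 1, 64, 64, 32)          # in practice, crops of size (320, 320, 32)
y = (torch.rand(2, 1, 64, 64, 32) > 0.5)
loss = dice_ce_loss(model(x), y)
loss.backward()
optimizer.step()
scheduler.step()
```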
Performance comparison with other state-of-the-art models

The proposed DBAHNet model achieved state-of-the-art performance with an average DSC of 98.41%, a DSC of 99.13% for the cortical bone, and a DSC of 97.69% for the trabecular bone, prior to any postprocessing. DBAHNet also demonstrated the best results in terms of boundary precision, with an average HD95 of 0.0095 mm, including 0.0080 mm for the cortical bone and 0.0110 mm for the trabecular bone. As shown in Table , DBAHNet consistently outperformed other well-established architectures, including UNet, Attention UNet, UNETR, and SwinUNETR, across both the cortical and trabecular compartments. In addition to achieving the highest DSC scores, DBAHNet exhibited the lowest HD95 across all models. For instance, while SwinUNETR performed well with a DSC of 99.02% for the cortical compartment, its HD95 value of 0.0498 mm is notably higher than DBAHNet's 0.0080 mm, highlighting DBAHNet's superior segmentation accuracy and boundary precision.

Quantitative and qualitative segmentation results

We presented both quantitative and qualitative evaluations of the segmentation results obtained using DBAHNet in this section. The model was trained primarily on control datasets and tested on 3D high-resolution µCT scans of mouse tibiae under various medical treatments. We assessed performance separately for the cortical (C) and trabecular (T) compartments within the specified regions of interest.

Quantitative evaluation

We presented the segmentation results for our main datasets – in Table . These results comprehensively demonstrated the model's effectiveness in delineating bone structures under diverse and extreme drug treatments. The model exhibited robust performance under challenging conditions, including high doses of PTH and risedronate, as well as mechanical loading and their interactions, which significantly influenced bone remodelling and porosity. Extreme cases further highlighted the model's generalization. For example, in Dataset 1 , the DSC for PTH 80 µg/kg/day combined with mechanical loading was 0.9498, with 0.9907 for the cortical compartment and 0.9088 for the trabecular compartment. The corresponding HD95 values were 0.0424, 0.0107, and 0.0742, respectively. This performance underscored the model's ability to handle significant changes in bone morphology. We presented the segmentation results for the secondary datasets [ – ] in Table . For example, the average DSC for Dataset 4 was 0.9620, with 0.9963 for the cortical compartment and 0.9270 for the trabecular compartment. The HD95 values were 0.0342, 0.0071, and 0.0612, respectively. However, the segmentation results for human stem cell implants in young mice (8 weeks old) were lower, with an average DSC of 0.7691 and an HD95 of 0.1365. This outcome is due to the extremely young and nondense bone structure of the ovariectomized mice, which is very different from that of the mature control mice used for training.

Qualitative evaluation

We conducted a visual inspection of the entire region of interest of the bone, from the middle of the metaphysis to the proximal region of long bones near the growth plate. Figure displays qualitative segmentation results for all the datasets. The visual inspection showed that the segmentation of the cortical and trabecular compartments is highly accurate, with a smooth transitional region between the two compartments.
We also observed that our approach produces a much cleaner segmentation compared to semi-automatic manual segmentation, which involves interpolation and often leads to small segmentation errors in the interpolated cross-sectional slices. The segmentation images obtained under different experimental conditions demonstrated the model's ability to delineate complex bone structures. For high doses of PTH and risedronate, visual inspection revealed precise segmentation even in regions with significant porosity. A qualitative comparison of DBAHNet with the current gold standard, the dual threshold method , for four different dose groups is shown in the supplementary materials. The dual threshold method fails to properly segment the trabecular compartments in certain regions for the PTH-treated groups from Dataset (see Supplementary Fig. S1).

Performance across various treatments

Our model's proficiency extended across different treatments, highlighting its adaptability. Specifically, under high PTH and risedronate treatments, the model accurately segmented bone structures, maintaining high DSC and low HD95 values. The mean DSC for Dataset 1 , which involves iPTH and mechanical loading, was 0.9278, with 0.9661 for the cortical compartment and 0.8894 for the trabecular compartment. For Dataset 2 , which involves risedronate, the mean DSC was higher, at 0.9738, with 0.9860 for the cortical compartment and 0.9616 for the trabecular compartment. This difference can be attributed to the greater anabolic effect of PTH on bone, which leads to increased porosity and merging between cortical and trabecular bone, compared to the anticatabolic effect of risedronate. These differences underscored the model's ability to generalize across different treatment regimens, handling significant morphological changes induced by these treatments. Similarly, for Dataset 3 , which involves mechanical loading combined with sciatic neurectomy, the mean DSC was 0.9728, with 0.9842 for the cortical compartment and 0.9614 for the trabecular compartment. These findings indicated that the model effectively handled the structural changes induced by the combined mechanical and neurosurgical interventions. For Dataset 4 , which involved ovariectomized mice under PTH treatment and mechanical loading, the model achieved a mean DSC of 0.9620, with 0.9963 for the cortical compartment and 0.9270 for the trabecular compartment. The HD95 values were 0.0342, 0.0071, and 0.0612, respectively. These results highlighted the model's robustness in segmenting bone structures affected by hormonal changes and physical interventions, further demonstrating its versatility and effectiveness across various medical treatments.

Evaluation across different resolutions, ages, and mouse strains

The model exhibited strong performance across different resolutions, particularly excelling at the 5 µm resolution, which aligned with the training resolution of the control set. We also observed success at other unseen resolutions, such as 10.4 µm and 13.7 µm, recorded both in vivo and ex vivo with different µCT scanners, further highlighting the model's robustness. Specifically, for the datasets with 10.4 µm resolution (Dataset 4 and Dataset 6 ), the average DSCs were 0.9620 and 0.9494, respectively, and the HD95 values were 0.0342 and 0.0155, respectively, indicating good segmentation performance. The model also performed well across a range of ages, from 8 weeks to 24 weeks.
Dataset 5 , which involves homozygous oim mice with fragile and deformed bones at a very young age of 8 weeks, characterized by low bone density and active growth, achieved an average DSC of 0.7691 and an HD95 of 0.1365. Although this value was lower than that of other datasets, it still indicated good performance, particularly given the young age and genetic disorders associated with osteogenesis imperfecta. In contrast, Dataset 6 , involving BALB/c mice aged 24 weeks, presented an average DSC of 0.9494 and an average HD95 of 0.0155, indicating strong segmentation performance despite the age difference. Additionally, the model demonstrated strong performance across different mouse strains, showing its ability to generalize beyond the strain used for training. Dataset 6 , which included BALB/c mice, presented an average DSC of 0.9494 and an average HD95 of 0.0155, comparable to those of the C57BL/6J strain used in other datasets. This adaptability to different resolutions, ages, and mouse strains, including C57BL/6J, BALB/c, and homozygous oim mouse strains, suggests the model's potential to be generalized across rodents with similar anatomical features, enhancing its utility in diverse experimental settings.

Adaptability to different bone types

Notably, the model, which was originally trained on control tibia scans, demonstrated the ability to generalize to a different bone type, the femur. This observation underscores the model's comprehensive understanding of anatomical features, enabling it to adapt to the unique characteristics of new bone types. This versatility contributed to the model's applicability in a wide range of bone segmentation tasks. Specifically, for Dataset 7 , which involved femur scans (OVX) at a resolution of 13.7 µm, the model achieved an average DSC of 0.9110, with 0.9731 for the cortical compartment and 0.8489 for the trabecular compartment. The corresponding HD95 values were 0.0216, 0.0154, and 0.0278, respectively. These results highlighted the model's robust performance in segmenting femur bones, despite being trained only on tibia scans.

Generalization over unseen data

The model's capacity to generalize over new, unseen data under different conditions, including variations in age, species, bone type, and experimental medical treatments, was excellent. Although the model was trained solely on control scan data, its broad generalization emphasized its ability to function effectively in practical preclinical cases. This highlighted the power of deep learning models in providing accurate results across a spectrum of unforeseen scenarios and the model's proficiency when trained on a limited dataset, capturing both long-range dependencies and local features within the 3D volume. This comprehensive understanding allowed the model to recognize and generalize over unseen data recorded under very different experimental setups.

Impact of postprocessing on segmentation performance

In this section, we evaluated the impact of the postprocessing module on segmentation performance. The postprocessing was applied to the high-dose drug treatment groups: 40 µg/kg/day and 80 µg/kg/day in Dataset 1 , as well as risedronate alone at 150 µg/kg/day and in combination with mechanical loading (ML) in Dataset 2. The results are summarized in Table . Overall, the postprocessing step improved segmentation performance across all datasets and groups, as indicated by an increase in the DSC and a reduction in the HD95.
The most significant improvement was observed in the 80 µg/kg/day PTH group, where the DSC increased from 0.9498 to 0.9671, and in the risedronate + ML group, with an increase in DSC from 0.9682 to 0.9838. These results demonstrated the ability of postprocessing to enhance segmentation performance, particularly in groups showing the strongest anabolic response, such as those treated with PTH or subjected to mechanical loading. This highlights the potential of postprocessing to further refine the performance of deep learning models for segmenting cortical and trabecular compartments in µCT scans of mouse tibiae. The results may be further improved by fine-tuning the parameters of the postprocessing pipeline for each specific dataset or incorporating additional image processing steps tailored to the characteristics of the dataset.

Model configuration study

In this study, we evaluated the performance of our proposed architecture by varying key parameters such as the embedding dimension $C$ and the reduction embedding vector $E$. The embedding dimension $C$ defines the size of the vector space into which the input features are mapped. Increasing $C$ generally enhances the model's ability to capture complex patterns, but it also increases computational complexity. The reduction embedding vector $E$ specifies the downsampling ratio for the feature maps along the $x$, $y$, and $z$ dimensions in the patch embedding block.
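To make the roles of $C$ and $E$ concrete, the sketch below shows a generic 3D patch-embedding block in which a strided convolution downsamples the volume by $E = [E_x, E_y, E_z]$ and projects each patch to $C$ channels; this is a minimal illustration under our own assumptions about the block's layout, not DBAHNet's actual implementation.

```python
import torch
import torch.nn as nn

class PatchEmbedding3D(nn.Module):
    """Generic 3D patch embedding: a strided Conv3d downsamples the volume
    by the reduction vector E and projects each patch to C channels."""
    def __init__(self, in_channels=1, embed_dim=96, reduction=(4, 4, 2)):
        super().__init__()
        self.proj = nn.Conv3d(in_channels, embed_dim,
                              kernel_size=reduction, stride=reduction)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):                    # x: (B, 1, X, Y, Z)
        x = self.proj(x)                     # (B, C, X/Ex, Y/Ey, Z/Ez)
        x = x.flatten(2).transpose(1, 2)     # (B, num_tokens, C)
        return self.norm(x)

# A (320, 320, 32) subvolume with C = 96 and E = [4, 4, 2] yields
# 80 x 80 x 16 = 102,400 tokens of dimension 96.
tokens = PatchEmbedding3D()(torch.zeros(1, 1, 320, 320, 32))
print(tokens.shape)   # torch.Size([1, 102400, 96])
```

In this view, a larger $C$ increases the token dimensionality, and hence the parameter count and GFLOPS, whereas larger reduction factors shorten the token sequence and lower the computational load, which mirrors the accuracy-versus-complexity trade-off described above.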
The results, summarized in Table , show that increasing $C$ improves the DSC, indicating better segmentation performance. Specifically, the configuration with $C = 96$ and $E = [4, 4, 2]$ achieves the highest DSC of 0.9872. However, this improvement comes with increased model complexity, as indicated by the greater number of parameters and GFLOPS. Conversely, smaller embedding dimensions reduce computational demands but also slightly decrease performance. The choice of $E$ also impacts the model's performance and efficiency, with $E = [4, 4, 2]$ offering a good balance between accuracy and computational load. Furthermore, we demonstrated that a lighter version of the model, with reduced embedding dimensions and downsampling ratios, still achieves excellent performance. This suggested that our architecture can be adapted for environments with limited computational resources while maintaining high segmentation accuracy.

Ablation study

We conducted an ablation study to analyze the impact of various components of our proposed architecture. An ablation study systematically removes or alters components of a model to understand each component's contribution to the overall performance. We tested the following configurations:

Configuration 1: DBAHNet without a bottleneck, which uses a standard convolution block (3D convolution, batch normalization, and GeLU activation) in both the encoder and decoder.

Configuration 2: DBAHNet with the Channel-wise Attention-Based Convolution Module (CACM) in both the encoder and decoder, without a bottleneck.

Configuration 3: DBAHNet with the CACM in the encoder and the Spatial-Wise Attention-Based Convolution Module (SACM) in the decoder, without a bottleneck.

Configuration 4: The full DBAHNet with all the components.

Table presents the results of this ablation study, highlighting the importance of each component in our architecture. The baseline configuration (Configuration 1), which uses standard convolution blocks, achieves a DSC of 0.9487. Introducing CACM to both the encoder and decoder (Configuration 2) significantly improves the DSC to 0.9846, highlighting the effectiveness of attention mechanisms in improving feature representation.
Applying CACM in the encoder and SACM in the decoder (Configuration 3) results in the highest DSC of 0.9876, indicating the complementary advantages of these modules. The full DBAHNet, which includes the bottleneck, achieves a DSC of 0.9872, closely matching the performance of Configuration 3. This suggested that while the inclusion of a bottleneck does not significantly improve performance, it still contributes to feature encoding for the decoder and helps prevent overfitting.
The results from our study demonstrated the significant ability of DBAHNet to segment cortical and trabecular bone compartments from high-resolution µCT scans of the mouse tibia. By employing our large diverse dataset summarized in Table , we ensured a robust evaluation across a multitude of conditions, including variations in resolution, age, strain, drug treatments, surgical procedures, and mechanical loading. This diversity provided a comprehensive landscape for assessing the generalizability of our model. A key strength of our study is the model's ability to generalize effectively, even though it was trained exclusively on a limited set of control scans. This underscored the potential of deep learning models to be trained on restricted datasets while maintaining high performance across diverse and unseen scenarios. DBAHNet achieved excellent segmentation accuracy not only on the control dataset but also across various challenging conditions where bone morphology was significantly altered due to drug treatments, surgical procedures, or mechanical loading. Moreover, our automated approach outperformed the manual and semiautomatic algorithms used to label the ground truth, which involve manual segmentation of sampled slices followed by interpolation. The semiautomatic method often results in noisy interpolated slices, as segmentation quality decreases with fewer manually segmented slices. In contrast, our model performs segmentation on the basis of the characteristic features of the two types of bone compartments, resulting in more accurate, faster, and smoother segmentation. Importantly, the computational complexity of DBAHNet requires considerable computational resources for training. This highlights the need for adequate hardware to fully leverage the model's potential. Additionally, the use of high-resolution 3D µCT scans necessitates significant data storage and processing power, potentially posing challenges for large-scale studies. These issues can be mitigated by employing a lighter version of the DBAHNet model, which we have previously demonstrated to be functional, reducing the input cropping size, or decreasing the complexity of the architecture by reducing the sequence length of the transformer, as employed in P2T . Notably, the model's performance on Dataset 5 was relatively lower than that of the other datasets, primarily because the undeveloped young mice (8 weeks of age) of the oim strain exhibit skeletal deformities, fractures, and cortical thinning. This dataset exhibited an average DSC of 0.7691, highlighting the limitation of training a deep learning model on a limited set of control scans and expecting it to generalize effectively across extreme scenarios. This underperformance underscores the necessity of training on a larger and more varied dataset to capture the full spectrum of bone morphology variations. While the model's performance is robust, it is still subject to the quality and diversity of the training data. Scenarios with extreme deviations from the control dataset, such as very young or old mice or highly specialized medical treatments not included in the training set, may yield less accurate results. Expanding the training dataset to include a wider variety of conditions could enhance the model's robustness and generalizability. Future improvements will focus on retraining the model on a more comprehensive and varied dataset that includes a broad spectrum of conditions.
We are currently collecting a comprehensive dataset and preparing manual labels for the training of our final model. This model, trained on a large comprehensive dataset, will be integrated into our automated pipeline for µCT mouse tibia assessment. This approach aims to develop a truly robust and accurate model capable of segmenting the cortical and trabecular compartments of µCT scans of the mouse tibia under any preclinical experimental setup. Additionally, by integrating our deep learning model into our global segmentation pipeline, we can refine the segmentation results and address the limitations posed by the variability of diverse preclinical animal experiments. The global segmentation pipeline ensures a general improvement in the segmentation results of our deep learning model and prepares the cortical and trabecular compartments for the subsequent steps of morphological and statistical analysis. In summary, our study validated the effectiveness of DBAHNet in achieving high segmentation performance across diverse scenarios. Despite these challenges, our work highlighted the exciting potential of DBAHNet and the global pipeline to transform bone segmentation tasks. By expanding and diversifying the training dataset in future work, we anticipate creating an even more robust and generalizable model. This will advance our understanding of bone structure and significantly improve segmentation accuracy in preclinical and experimental settings, highlighting the large impact of deep learning in high-resolution biomedical imaging. This segmentation module is part of a larger project aimed at developing an automated end-to-end pipeline for analyzing the microarchitecture of the mouse tibia using high-resolution µCT scans in a preclinical setting. The goal of this pipeline is to provide fast, accurate, and fully automated analysis of the effects of various experimental conditions, such as drug treatments, surgical interventions, mechanical loading, and aging, on bone remodeling and dynamics. This will help accelerate research in biomechanics and improve our understanding of bone remodeling and diseases like osteoporosis.
Datasets
We evaluated the effectiveness of our deep learning segmentation architecture, DBAHNet, across various experimental studies (see Fig. ). Our extensive dataset contains a total of 163 tibia scans derived from seven experimental studies – , – . These scans exhibit varied bone morphology due to differences in scanning resolutions, mouse strains, ages, drug treatments, surgical procedures, and mechanical loading. The dataset includes four mouse strains: C57BL/6, BALB/c, C57BL/6JOlaHsd, and homozygous oim, focusing on young and mature animals ranging from 8 to 24 weeks of age. These animals received a variety of treatments, including ovariectomy (OVX), human amniotic fluid stem cells (hAFSC), sciatic neurectomy (SN), risedronate (Ris), and parathyroid hormone (PTH) treatments at different doses. Additionally, some studies applied mechanical loading (ML) to investigate the individual and combined effects of these treatments on bone structure. The mouse tibiae were imaged via µCT at resolutions ranging from 4.8 µm to 13.7 µm. This high resolution enabled a detailed assessment of trabecular and cortical bone structures. The dataset covered various responses to drug interventions, with mechanical loading experiments designed to mimic physiological stress and explore bone adaptation responses.
We manually segmented the scans following standard guidelines by sampling sectional 2D slices with a fixed step tailored to the specific bone region. This involved manual segmentation of both the cortical and trabecular compartments at these specific cross-sectional slices. We subsequently employed the Biomedisa interpolation platform for semiautomated segmentation, which uses weighted random walks for interpolation and considers both the presegmented slices and the entire original volumetric image data. We performed postprocessing on the interpolated 3D labels to smooth and remove outliers, followed by visual inspection to validate the final ground truth labels. The datasets used in this study are described as follows:
Control dataset – This dataset is based on three separate studies and includes tibiae from C57BL/6 virgin female mice aged 19–22 weeks. It comprises 74 control tibiae that were not subjected to any treatments in the referenced preclinical experiments. High-resolution µCT scans were performed via SkyScan 1172 (SkyScan, Kontich, Belgium), with resolutions ranging from 4.8 µm to 5 µm (ex vivo).
Dataset 1 This study investigated the impact of the bone anabolic drug intermittent PTH on bone adaptation in virgin female C57BL/6 mice. The treatment doses used were 20, 40, and 80 µg/kg/day, both alone and in combination with ML. The dataset includes images of four groups: PTH 20 (N=6), PTH 20 + ML (N=6), PTH 40 (N=8), and PTH 80 (N=10), all aged 19 weeks. Images were captured at a resolution of 5 µm (ex vivo). Both mechanical loading and PTH treatment have anabolic effects on bone, promoting bone formation and increasing bone mass. Their combined effects result in more pronounced anabolic responses, further complicating segmentation due to increased bone remodeling and porosity, especially near the growth plate.
Dataset 2 This study examined the effects of the anticatabolic drug risedronate on bone adaptation in virgin female C57BL/6 mice. The dataset includes three risedronate dose groups (0.15, 1.5, and 15 µg/kg/day) with and without mechanical loading, each with N=5 samples, and a risedronate 150 µg/kg/day group with one loaded and one nonloaded sample (N=1), all aged 19 weeks. Images were captured at a resolution of 4.8 µm (ex vivo). This segmentation is challenging because of the combined effects of the anticatabolic risedronate and the anabolic effect of ML. Compared with the control, risedronate reduces bone resorption, resulting in greater trabecular bone volume and trabecular number, whereas ML increases trabecular and cortical bone mass.
Dataset 3 This study assessed the impact of mechanical loading on bone adaptation in C57BL/6 mice subjected to right sciatic neurectomy to minimize natural loading in their right tibiae. The dataset includes images of two groups, 4 N (N=5) and 8 N (N=5), aged 20 weeks. Images were captured at a resolution of 5 µm (ex vivo). The segmentation challenges arise from localized bone loss due to neurectomy and the subsequent anabolic bone changes induced by mechanical loading.
Dataset 4 This study provides high-resolution in vivo µCT images of tibiae from female C57BL/6 mice subjected to OVX, which mimics postmenopausal osteoporosis characterized by increased bone remodeling, followed by combined PTH (100 µg/kg/day) and ML interventions. The dataset includes wild-type (WT) female C57BL/6 OVX (N=4) mice recorded at weeks 14, 18, 20, and 22. Images were captured at a resolution of 10.4 µm (in vivo). OVX increases porosity and bone remodeling, presenting significant segmentation challenges. The combination of PTH and ML further complicates segmentation due to their anabolic effects, enhancing bone formation and altering bone architecture. Additionally, the changes in the resolution and age of the mice compared with those in the control dataset complicate the generalization of segmentation techniques.
Dataset 5 This study conducted high-resolution µCT analysis of bone microarchitecture in 8-week-old homozygous oim mice treated with human amniotic fluid stem cells (hAFSC). The dataset includes images of the Oim (N=3) and Oim + hAFSC (N=3) groups. Images were captured at a resolution of 5 µm (ex vivo). Osteogenesis imperfecta (OI) in oim mice is characterized by reduced size, skeletal fragility, frequent fractures, and abnormal bone microarchitecture. Treatment with hAFSC improved bone strength, quality, and remodeling. The young age of the mice, combined with their deformed shape due to the nature of the mouse strain (homozygous oim) and the effects of hAFSC treatment, presents significant segmentation challenges. Their bones are not fully mature and are less dense, complicating the generalization of segmentation techniques from the control dataset of untreated mature bones.
Dataset 6 This study explored the impact of ovariectomy on bone structure and density in female C57BL/6 and BALB/c mice by comparing the WT and OVX groups. The dataset includes four groups: C57BL/6 WT (N=1), C57BL/6 OVX (N=1), BALB/c WT (N=1), and BALB/c OVX (N=1). Images were captured at a resolution of 10.4 µm (in vivo) at the age of 24 weeks. The differences in the structure of the strains (C57BL/6 and BALB/c), OVX bone loss effects, and lower resolution make it difficult to generalize segmentation techniques from the control dataset.
Dataset 7 This study focused on a murine model of osteoporosis in C57BL/6JOlaHsd OVX female mice. The dataset includes images of femurs from C57BL/6JOlaHsd female mice (N=4) that underwent OVX at the age of 14 weeks. Images were captured at a resolution of 13.7 µm (ex vivo) at the age of 17 weeks. Compared with tibiae, the combination of femur bones, which have different structural characteristics, a much lower resolution of 13.7 µm, and OVX-induced bone loss presents substantial segmentation challenges.
This diverse dataset encompasses a wide range of conditions, including different ages, strains, resolutions, drug treatments, surgical procedures, and mechanical loading, providing a rich resource for a robust validation of our deep learning model. A summary of the datasets used in this study is presented in Table . The main datasets – are our own extensive collections, which contain very high-resolution µCT scans at 5 µm.
The secondary datasets – consist of publicly available samples collected to test the segmentation under new, unseen experimental conditions. The datasets used in this study were obtained from independent experiments conducted by their respective institutions. For Dataset 1 and Dataset 2, all procedures complied with the UK Animals (Scientific Procedures) Act 1986, with ethical approval from the ethics committee of The Royal Veterinary College (London, UK). Dataset 3 was similarly approved by the ethics committee of the University of Bristol (Bristol, UK). Dataset 4 followed the ARRIVE guidelines and was approved by the local Research Ethics Committee of the University of Sheffield (Sheffield, UK). Dataset 5 was conducted under UK Home Office project licence PPL 70/6857, and Dataset 6 under project licence PPL 40/3499, both overseen by the University of Sheffield. Finally, Dataset 7 received approval from the Local Ethical Committee for Animal Research of the University of the Basque Country (UPV/EHU, ref M20/2019/176), adhering to European Directive 2010/63/EU and Spanish Law RD 53/2013. All original studies ensured compliance with relevant ethical guidelines, and our use of these datasets strictly followed their established approvals. The region of interest in the mouse tibia used in this research was cropped from the metaphysis, starting just below the growth plate (approximately 6–8% of the bone length from the proximal region, where trabecular bone is highly present and active) and extending to approximately 60–65% of the bone length. Additionally, the control scans were obtained from a slightly deeper region within the metaphysis, where trabecular bone is not excessively present in the medullary area. The reason for choosing this deeper region is to detect any potential trabecularization of the cortical bone or growth of the trabecular bone, which can occur under certain conditions such as drug treatments or aging.
General segmentation pipeline
This section describes our automated, robust, deep learning-based pipeline developed for 3D high-resolution µCT segmentation, which specifically targets the cortical and trabecular compartments of the mouse tibia, as illustrated in Fig. . The general segmentation pipeline begins with preprocessing the raw 3D µCT scans via image processing techniques to isolate the mouse tibia. These preprocessed scans serve as the input for training the deep learning model. During training, data augmentation automatically expands the dataset, creating variations that improve model accuracy. The model is trained iteratively on the training set, with continuous monitoring of the validation mean Dice score and loss until convergence is achieved. After training, the model produces segmentation masks for both the cortical and the trabecular compartments. A postprocessing step further refines the segmentation to enhance the extraction of the cortical and trabecular bone. The detailed steps of the pipeline are outlined below.
Preprocessing
We subjected the raw 3D µCT scans to a series of preprocessing steps to prepare the data for segmentation.
Thresholding: We applied the Otsu thresholding algorithm to automatically separate the bone from the background, which includes experimental materials such as the sample holder and the resin.
To ensure the retention of actual bone voxels, particularly for the trabecular bone, a threshold margin of $M = 5$ was subtracted from the threshold obtained by the algorithm to maintain connectivity.
Artifact removal: We eliminated any remaining noise by retaining the largest connected component, which represents the bone.
These two preprocessing steps are crucial, as they not only clean the bone from the experimental background, allowing the model to focus on segmenting the cortical and trabecular compartments, but also significantly reduce the size of the µCT scans. Working with 3D µCT scans at very high resolution requires careful consideration of efficiency, as training complex deep learning architectures becomes computationally demanding with larger input data. Performing these steps substantially reduces the size of the input images. For instance, a raw scan of the full mouse tibia recorded at 5 µm from Dataset 1 is approximately 2.4 GB. After background removal and autocropping, the file size is reduced to approximately 150 MB (both sizes are reported for the compressed NIfTI format).
Fibula removal: We removed the second-largest component, representing the fibula, at each cross-sectional slice along the $z$-axis.
Normalization: We normalized the voxel values via z-score normalization, transforming the image intensities so that the resulting distribution has a mean of zero and a standard deviation of one. The z-score normalization is defined as $Z = \frac{X - \mu}{\sigma}$, where $Z$ is the normalized intensity value, $X$ is the original intensity value, $\mu$ is the mean intensity value of the image, and $\sigma$ is the standard deviation of the intensity values of the image.
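As a concrete illustration of these steps, the sketch below combines Otsu thresholding with the margin, largest-connected-component cleanup, slice-wise fibula removal, and z-score normalization using NumPy, SciPy, and scikit-image. The function name, the assumption of integer-like grayscale intensities for the margin $M = 5$, and the choice of the first array axis as the scan axis are illustrative assumptions rather than details of our released code.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def preprocess_scan(volume: np.ndarray, margin: float = 5) -> np.ndarray:
    """Illustrative preprocessing: Otsu threshold minus a margin, largest
    connected component, slice-wise fibula removal, z-score normalization."""
    # 1. Otsu threshold, lowered by a small margin to keep thin trabeculae connected.
    bone = volume > (threshold_otsu(volume) - margin)

    # 2. Artifact removal: keep only the largest 3D connected component (the bone).
    labels, n = ndimage.label(bone)
    if n > 1:
        sizes = ndimage.sum(bone, labels, index=range(1, n + 1))
        bone = labels == (int(np.argmax(sizes)) + 1)

    # 3. Fibula removal: in each cross-sectional slice along the scan axis
    #    (assumed to be axis 0 here), keep only the largest in-plane component.
    for z in range(bone.shape[0]):
        sl_labels, sl_n = ndimage.label(bone[z])
        if sl_n > 1:
            sl_sizes = ndimage.sum(bone[z], sl_labels, index=range(1, sl_n + 1))
            bone[z] = sl_labels == (int(np.argmax(sl_sizes)) + 1)

    # 4. z-score normalization: Z = (X - mu) / sigma over the masked volume.
    masked = volume * bone
    return (masked - masked.mean()) / (masked.std() + 1e-8)
```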
Data augmentation
To enhance the model's generalization ability, we employed various data augmentation techniques applied to the original 3D scans during each batch generation throughout the training.
Random affine transformations: We applied rotations and scaling to simulate changes in the orientation and scale of the bone relative to the scanner. The rotation range is $[0, \pi]$ along the $z$-axis, and the scaling factor range is $s \in [0.85, 1.25]$.
3D elastic deformations: We introduced nonlinear distortions to mimic natural bone variability via the following formula: $x' = x + \alpha \cdot \mathcal{G}(\sigma)$, where $\mathcal{G}(\sigma)$ is a random Gaussian displacement field with a standard deviation $\sigma \in [9, 13]$ and magnitude $\alpha \in [0, 900]$.
Random Gaussian noise: We added random Gaussian noise to simulate varying scanner qualities. The noise addition is given by $x' = x + \mathcal{N}(0, \sigma^2)$, where $\mathcal{N}(0, \sigma^2)$ is Gaussian noise with zero mean and variance $\sigma^2 = 0.1$.
Random intensity scaling: We scaled the intensity of the images to account for differences in imaging conditions. The intensity scaling is given by $x' = x \cdot (1 + f)$, where the scaling factor $f$ ranges from $-0.1$ to $0.1$.
Random contrast adjustment: We adjusted the contrast of the images to account for differences in imaging conditions. The contrast adjustment is expressed as $x' = x^{\gamma}$ with $\gamma \in [0.5, 4.5]$.
These transformations ensure the robustness and accuracy of the deep learning model by providing diverse and realistic variations in the training data. This approach generates new, artificially augmented data during training, where data augmentation is applied live to each batch with a small probability ($p = 0.1$), simulating scans under different experimental setups for the training of our deep learning model.
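As an illustration of how such on-the-fly augmentation can be implemented, the sketch below applies a rotation about the scan axis, an elastic deformation built from a Gaussian-smoothed random displacement field, additive Gaussian noise, intensity scaling, and gamma contrast using NumPy and SciPy only. All names, the per-transform probability handling, and the clipping before the gamma transform are illustrative assumptions rather than our exact training code.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng()

def elastic_deform(vol, alpha=900.0, sigma=11.0):
    """Nonlinear distortion x' = x + alpha * G(sigma): displace voxel
    coordinates by a Gaussian-smoothed random field and resample."""
    shape = vol.shape
    disp = [ndimage.gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
            for _ in range(3)]
    coords = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    warped = [c + d for c, d in zip(coords, disp)]
    return ndimage.map_coordinates(vol, warped, order=1, mode="nearest")

def augment(vol, p=0.1):
    """Apply each transform independently with probability p."""
    if rng.random() < p:  # rotation about the scan axis, angle in [0, pi]
        vol = ndimage.rotate(vol, rng.uniform(0, 180), axes=(1, 2),
                             reshape=False, order=1)
    if rng.random() < p:  # 3D elastic deformation
        vol = elastic_deform(vol, alpha=rng.uniform(0, 900),
                             sigma=rng.uniform(9, 13))
    if rng.random() < p:  # additive Gaussian noise with variance 0.1
        vol = vol + rng.normal(0.0, np.sqrt(0.1), vol.shape)
    if rng.random() < p:  # intensity scaling x' = x * (1 + f)
        vol = vol * (1.0 + rng.uniform(-0.1, 0.1))
    if rng.random() < p:  # gamma contrast x' = x ** gamma (on non-negative values)
        vol = np.clip(vol, 0, None) ** rng.uniform(0.5, 4.5)
    return vol
```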
Segmentation
In this study, we employed a novel deep neural network architecture, DBAHNet (dual branch attention-based hybrid network), which was previously validated by comparing its performance with popular state-of-the-art architectures on the control dataset . DBAHNet is specifically designed for high-resolution 3D µCT bone image segmentation and focuses on the cortical and trabecular compartments. This architecture advances deep learning approaches by integrating both transformers and convolutional neural networks to effectively capture local features and long-range dependencies. The hybrid design of DBAHNet leverages the ability of convolutional layers for local feature analysis and the attention mechanism of transformers. In this work, we apply DBAHNet within a comprehensive pipeline to evaluate its robustness across various conditions and datasets, demonstrating its utility beyond the initial conference presentation. The complete architecture of DBAHNet is detailed in the subsequent sections.
Postprocessing
The final phase involved applying postprocessing techniques to increase the quality of the segmentation masks and mitigate the inherent imperfections in the segmentation process:
Noise removal: We removed any segmentation noise and outliers by retaining the largest connected component.
Transitional region smoothing: We used morphological opening filters to remove small openings at the endosteum surface of the cortical bone and assign them to the trabecular bone. The morphological opening filter is defined as $\text{Opening}(A, B) = (A \ominus B) \oplus B$, where $A$ is the set of foreground voxels in the binary image, $B$ is the structuring element (a sphere with radius $K_o$), $\ominus$ denotes the erosion filter, which removes pixels from the boundaries of objects, eliminating small openings at the endosteum surface, and $\oplus$ denotes the dilation filter, which adds pixels to the boundaries of objects, restoring the original size of the cortical surface while maintaining a smooth transition to the trabecular bone. The kernel value $K_o$ is set to 3.
Trabecular structure connectivity: We ensured the connectivity of the trabeculae for accurate morphometry in subsequent steps. For this, we perform connected component analysis by identifying and labeling all connected components in the binary mask of the trabecular bone and merging components that are close to each other.
Merging is performed via a morphological closing filter with a kernel radius $R_c = 1$, corresponding to the minimum distance required to merge disconnected trabeculae. The morphological closing filter can be defined as follows: $\text{Closing}(A, B) = (A \oplus B) \ominus B$, where $A$ is the set of foreground voxels in the binary image, $B$ is the structuring element (a sphere with radius $R_c$), $\oplus$ denotes the dilation filter, which adds pixels to the boundaries of objects, potentially bridging small gaps caused by segmentation errors, and $\ominus$ denotes the erosion filter, which removes pixels from the boundaries of objects and restores the original object size while maintaining new connections. The different modules of the general segmentation pipeline facilitated the extraction and subsequent morphological analysis of both cortical and trabecular bone from three-dimensional µCT scans, enabling their visualization and assessment of their respective morphological parameters for preclinical skeletal studies.
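The morphological cleanup described above can be sketched as follows with SciPy and scikit-image; the ball structuring elements and radii ($K_o = 3$ for opening, $R_c = 1$ for closing) follow the values given in the text, while the function name and the exact handling of reassigned voxels are illustrative.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import ball

def postprocess_masks(cortical: np.ndarray, trabecular: np.ndarray):
    """Illustrative postprocessing: largest-component noise removal, opening
    of the cortical mask (K_o = 3) with reassignment of removed voxels to the
    trabecular mask, and closing of the trabecular mask (R_c = 1)."""
    cortical = cortical.astype(bool)
    trabecular = trabecular.astype(bool)

    # Noise removal: keep the largest connected component of the cortical mask.
    labels, n = ndimage.label(cortical)
    if n > 1:
        sizes = ndimage.sum(cortical, labels, index=range(1, n + 1))
        cortical = labels == (int(np.argmax(sizes)) + 1)

    # Transitional region smoothing: opening = erosion followed by dilation.
    opened = ndimage.binary_opening(cortical, structure=ball(3))
    trabecular |= cortical & ~opened   # reassign removed endosteal voxels
    cortical = opened

    # Trabecular connectivity: closing = dilation followed by erosion,
    # merging trabeculae separated by roughly R_c voxels.
    trabecular = ndimage.binary_closing(trabecular, structure=ball(1))
    return cortical.astype(np.uint8), trabecular.astype(np.uint8)
```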
Architecture of DBAHNet
The proposed architecture, the Dual-Branch Attention-based Hybrid Network (DBAHNet), features a dual-branch hybrid design that incorporates both convolutional neural networks (CNNs) and transformers in the encoder and decoder pathways (see Fig. A). The patch embedding block projects the 3D scan into an embedding space with $C = 96$ channels via successive convolutions. This process results in a reduced-dimensionality space, defined by the reduction embedding vector $E = [4, 4, 4]$, creating a patch embedding of size $(C, \frac{H}{4}, \frac{W}{4}, \frac{D}{4})$, where $H$, $W$, and $D$ represent the height, width, and depth of the input 3D scan, respectively. This embedding serves as the input to both the transformer and convolutional branches, each consisting of three hierarchical levels. In the encoder pathway, each level comprises two sequential Swin transformer blocks in the transformer branch and a Channel-wise Attention-Based Convolution Module (CACM) in the convolution branch. The transformer branch uses 3D-adapted Swin transformers to process feature maps at multiple scales, capturing global long-range dependencies within the volume. Each transformer block consists of two layers; the first employs regular volume partitioning, whereas the second uses shifted partitioning to increase the connectivity between layers. In the convolution branch, the CACM enhances cross-channel interaction by concatenating the outputs of global average pooling and maximum pooling, followed by two GeLU-activated 3D convolutions to create an attention map. This map modulates the initial feature map through elementwise multiplication, and a final 3D convolution further encodes the output for subsequent layers. The outputs from the transformer and convolution branches at each level are fused via the Transformer-Convolution Feature Fusion Module (TCFFM). The TCFFM performs downsampling in the encoder by applying channelwise average pooling to $x_{\text{Tr}}$ and $x_{\text{C}}$ (the feature maps from the transformer and convolution branches), followed by a sigmoid function to generate an attention mask that filters the channels.
The results are then concatenated and encoded through a 3D convolution layer. After encoding, the resulting feature maps are downscaled to $(8C, \frac{H}{32}, \frac{W}{32}, \frac{D}{32})$ and passed to the bottleneck. The bottleneck consists of four global 3D transformer blocks that perform global attention over all the downsampled feature maps, aggregating information to provide a comprehensive representation for the decoder. The decoder mirrors the encoder symmetrically. It uses the Spatial-wise Attention-Based Convolution Module (SACM) instead of the CACM to enhance relevant spatial features for focused reconstruction of the segmentation mask. The SACM applies max-pooling and average-pooling, concatenates the results, and uses a $1 \times 1 \times 1$ convolution to create an attention map. This attention map modulates the input feature map, which is further processed by a final 3D convolution. The TCFFM module in the decoder performs upsampling, restoring the original volume size. Throughout the decoder, feature maps from all layers are filtered via attention gates and residual skip connections from the encoder. Finally, a transpose convolution reconstructs the segmentation masks. All internal components of DBAHNet are illustrated in Fig. B.
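To make the overall data flow more tangible, the following PyTorch-style sketch shows a patch embedding stem of the kind described above (successive strided 3D convolutions projecting the scan to $C = 96$ channels at one quarter of the spatial resolution) and how one encoder level could combine a transformer branch, a convolution branch, and a fusion module. The class names and internal details are illustrative placeholders under these assumptions, not the exact released implementation.

```python
import torch
import torch.nn as nn

class PatchEmbedding3D(nn.Module):
    """Project a (B, 1, H, W, D) volume to (B, C, H/4, W/4, D/4) with two
    strided 3D convolutions, as an illustrative stand-in for the stem."""
    def __init__(self, in_ch=1, embed_dim=96):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv3d(in_ch, embed_dim // 2, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
            nn.Conv3d(embed_dim // 2, embed_dim, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.proj(x)

class EncoderLevel(nn.Module):
    """One hierarchical level: transformer branch + convolution branch,
    fused (and downsampled) by a fusion module supplied by the caller."""
    def __init__(self, transformer_branch, conv_branch, fusion):
        super().__init__()
        self.transformer_branch = transformer_branch  # e.g. two Swin-style blocks
        self.conv_branch = conv_branch                # e.g. a CACM block
        self.fusion = fusion                          # e.g. a TCFFM block

    def forward(self, x):
        x_tr = self.transformer_branch(x)
        x_c = self.conv_branch(x)
        return self.fusion(x_tr, x_c)  # fused and downsampled features
```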
Transformer block
We leveraged a 3D adaptation of Swin transformers , which perform self-attention within a local volume of feature maps at each hierarchical level to capture enriched contextual representations of the data. Each transformer unit consists of two consecutive transformers. The first transformer employs regular volume partitioning, whereas the second transformer introduces shifted local volume partitioning to ensure connectivity with the preceding layer's local volumes. For a given layer $l$, the input $\mathbf{x}^{l-1}$ first undergoes layer normalization (LN) and is then processed by a multihead self-attention (MHSA) mechanism. The output of the MHSA is added to the original input via a residual connection, resulting in the intermediate output $\hat{\mathbf{x}}^l$. Next, $\hat{\mathbf{x}}^l$ is normalized again and passed through a multilayer perceptron (MLP), with another residual connection to produce the output $\mathbf{x}^l$. The second transformer, which uses shifted partitioning, applies a shifted multihead self-attention (SMHSA) mechanism. This shifted transformer increases the connectivity between layers. The normalized output $\mathbf{x}^l$ from the previous step is processed by the SMHSA with a residual connection, resulting in the intermediate output $\hat{\mathbf{x}}^{l+1}$. Finally, $\hat{\mathbf{x}}^{l+1}$ undergoes another normalization and passes through an MLP, with a residual connection to yield the output $\mathbf{x}^{l+1}$. The Swin transformer block is expressed by the system of equations in Eq. (4):

$$\begin{aligned} \hat{\mathbf{x}}^l &= \text{MHSA}\left(\text{LN}\left(\mathbf{x}^{l-1}\right)\right) + \mathbf{x}^{l-1}, \\ \mathbf{x}^l &= \text{MLP}\left(\text{LN}\left(\hat{\mathbf{x}}^l\right)\right) + \hat{\mathbf{x}}^l, \\ \hat{\mathbf{x}}^{l+1} &= \text{SMHSA}\left(\text{LN}\left(\mathbf{x}^l\right)\right) + \mathbf{x}^l, \\ \mathbf{x}^{l+1} &= \text{MLP}\left(\text{LN}\left(\hat{\mathbf{x}}^{l+1}\right)\right) + \hat{\mathbf{x}}^{l+1} \end{aligned} \quad (4)$$

The self-attention mechanism is computed using Eq. (5).
$$\text{Attention}(Q, K, V) = \text{Softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V \quad (5)$$

Here, $Q$, $K$, and $V$ represent queries, keys, and values, respectively, and $d_k$ is the dimension of the key and query.
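For readers less familiar with Eqs. (4) and (5), the sketch below implements scaled dot-product self-attention and the pre-norm residual structure of one transformer layer in PyTorch; window partitioning, shifting, and multihead bookkeeping are omitted, and all names are illustrative rather than taken from our implementation.

```python
import torch
import torch.nn as nn

def attention(q, k, v):
    """Eq. (5): Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    return torch.softmax(scores, dim=-1) @ v

class TransformerLayer(nn.Module):
    """One pre-norm layer of Eq. (4): x = x + MHSA(LN(x)); x = x + MLP(LN(x)).
    Single head, no volume partitioning or shifting, for illustration only."""
    def __init__(self, dim, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.qkv = nn.Linear(dim, 3 * dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim),
        )

    def forward(self, x):                # x: (batch, tokens, dim)
        q, k, v = self.qkv(self.norm1(x)).chunk(3, dim=-1)
        x = x + attention(q, k, v)       # residual around self-attention
        x = x + self.mlp(self.norm2(x))  # residual around the MLP
        return x
```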
Channel-wise attention-based convolution module (CACM)
In the encoder, we utilized a convolution unit based on channelwise attention, assigning distinct levels of importance to different channels, thereby enhancing feature representation. Let $x \in \mathbb{R}^{C \times H \times W \times D}$ be the input feature map. We first apply both global average pooling and maximum pooling channelwise, each yielding a $(C, 1, 1, 1)$ vector, which are then concatenated. This concatenated vector undergoes a 3D convolution to an intermediate dimension, resulting in a $(\frac{C}{2}, 1, 1, 1)$ size, followed by a GeLU activation function. This output is further processed through a second 3D convolution to restore the original channel dimension. An attention map is subsequently generated via a sigmoid activation function, which is then elementwise multiplied with the initial feature map, modulating it on the basis of channelwise attention. Finally, a third convolution is applied, downsampling the dimensions to $(2C, \frac{H}{2}, \frac{W}{2}, \frac{D}{2})$, to be used in subsequent layers.
Spatial-wise attention-based convolution module (SACM)
In the decoder, we employed a convolution module that ensures spatial attention; this module focuses selectively on the salient features and regions during the reconstruction of the segmentation mask, aiding in the preservation of detailed structures and enhancing accuracy. Let $x$ be the input feature map such that $x \in \mathbb{R}^{C \times H \times W \times D}$. Initially, we apply both max-pooling and average-pooling to extract two robust feature descriptors. These descriptors are concatenated along the channel axis before undergoing a $1 \times 1 \times 1$ convolution to yield a feature map of dimensions $(1, H, W, D)$. Next, a sigmoid activation function derives the attention map, which is then elementwise multiplied with the original input to obtain a feature map of dimensions $(C, H, W, D)$. Considering the necessity of upsampling the feature maps during the decoding phase, a transpose 3D convolution operation with a stride of 2 is utilized to upsample the features, resulting in the final feature maps of dimensions $(\frac{C}{2}, 2H, 2W, 2D)$.
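A compact PyTorch sketch of these two attention-based convolution modules is given below; it follows the pooling-then-gating structure from the text, but the kernel sizes and exact layer choices are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class CACM(nn.Module):
    """Channel-wise attention: gate channels with pooled descriptors,
    then downsample to (2C, H/2, W/2, D/2)."""
    def __init__(self, channels):
        super().__init__()
        self.reduce = nn.Conv3d(2 * channels, channels // 2, kernel_size=1)
        self.expand = nn.Conv3d(channels // 2, channels, kernel_size=1)
        self.act = nn.GELU()
        self.down = nn.Conv3d(channels, 2 * channels, kernel_size=3, stride=2, padding=1)

    def forward(self, x):                          # x: (B, C, H, W, D)
        avg = x.mean(dim=(2, 3, 4), keepdim=True)  # global average pooling
        mx = x.amax(dim=(2, 3, 4), keepdim=True)   # global max pooling
        gate = self.expand(self.act(self.reduce(torch.cat([avg, mx], dim=1))))
        return self.down(x * torch.sigmoid(gate))  # channel-gated, then downsampled

class SACM(nn.Module):
    """Spatial attention: gate voxels with a pooled map, then upsample
    to (C/2, 2H, 2W, 2D) with a transposed convolution."""
    def __init__(self, channels):
        super().__init__()
        self.attn_conv = nn.Conv3d(2, 1, kernel_size=1)
        self.up = nn.ConvTranspose3d(channels, channels // 2, kernel_size=2, stride=2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)          # average over channels -> (B, 1, H, W, D)
        mx = x.amax(dim=1, keepdim=True)           # max over channels -> (B, 1, H, W, D)
        attn = torch.sigmoid(self.attn_conv(torch.cat([avg, mx], dim=1)))
        return self.up(x * attn)                   # spatially gated, then upsampled
```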
Transformer-convolution feature fusion module (TCFFM)
In the TCFFM block, the feature maps obtained from both the transformer and convolution pathways, denoted as $x_{\text{Tr}}$ and $x_{\text{C}}$, each belonging to the space $\mathbb{R}^{C \times H \times W \times D}$, are fused at each hierarchical level. Here, $H$, $W$, and $D$ represent the dimensions of the feature maps, and $C$ is the number of channels. Initially, channel-wise average pooling is applied to $x_{\text{Tr}}$ and $x_{\text{C}}$ to extract a representative value for each channel of the feature maps. These values are transformed into weights using a sigmoid function, generating an attention mask that enhances significant channels and suppresses less relevant channels. The results are subsequently concatenated and passed through a downsampling convolution layer, followed by a local-volume transformer block, to perform the fusion and leverage the combined strengths of both pathways in the subsequent layers.
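The fusion step can be sketched as follows in PyTorch, with the channel gating, concatenation, and strided fusion convolution following the description above; the trailing local-volume transformer block is represented by a caller-supplied placeholder, and all names are illustrative.

```python
import torch
import torch.nn as nn

class TCFFM(nn.Module):
    """Fuse transformer features x_tr and convolution features x_c:
    gate each branch's channels, concatenate, then downsample (encoder variant)."""
    def __init__(self, channels, transformer_block: nn.Module):
        super().__init__()
        self.fuse = nn.Conv3d(2 * channels, 2 * channels, kernel_size=3, stride=2, padding=1)
        self.transformer_block = transformer_block  # placeholder local-volume transformer

    @staticmethod
    def channel_gate(x):
        # Channel-wise average pooling -> sigmoid weights -> gated features.
        w = torch.sigmoid(x.mean(dim=(2, 3, 4), keepdim=True))
        return x * w

    def forward(self, x_tr, x_c):
        fused = torch.cat([self.channel_gate(x_tr), self.channel_gate(x_c)], dim=1)
        fused = self.fuse(fused)              # downsampling fusion convolution
        return self.transformer_block(fused)  # e.g. nn.Identity() in a quick test
```

In the decoder variant described in the text, the strided fusion convolution would be replaced by an upsampling (transposed) convolution so that the fused features are restored to the higher resolution.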
Bottleneck
In the bottleneck, we reduce the dimensionality of the resulting feature maps from the encoder and employ a series of four global 3D transformer blocks, similar to those used in the Vision Transformer (ViT) . These blocks perform global attention over all the downsampled feature maps. They excel at aggregating information from the entire feature map, enabling an understanding of the global context and providing a comprehensive representation to the decoder.
Attention gate
Instead of using regular concatenation in the skip connections such as those in U-Net , we employed attention gates (AGs) to enhance the model's ability to focus on target structures of varying shapes and sizes. Attention gates automatically learn to suppress irrelevant regions in an input image while highlighting salient features relevant to a specific task. Specifically, the output of the $l^e$-th TCFFM of the encoder, $X_l^e$, is transformed via a linear projection into a key matrix $K_l^e$ and a value matrix $V_l^e$. This transformation encodes the spatial and contextual information necessary for the attention mechanism. The output feature maps after the $l^d$-th upsampling layer of the TCFFM in the decoder, denoted $X_l^d$, serve as the query $Q_l^d$.
We apply one layer of the transformer block to $Q_l^d$, $K_l^e$, and $V_l^e$ in the decoder, computing self-attention as previously described for the transformer block.
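As a final illustration, the attention-gated skip connection can be sketched as a cross-attention step in which the decoder features provide the query and the encoder features provide the keys and values; the flattening of voxels into tokens, the single-head formulation, and the layer choices below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Skip connection as cross-attention: Q from the decoder feature map,
    K and V from the corresponding encoder feature map (single head)."""
    def __init__(self, dim):
        super().__init__()
        self.to_kv = nn.Linear(dim, 2 * dim)   # linear projection of encoder features
        self.to_q = nn.Linear(dim, dim)        # projection of decoder features
        self.norm = nn.LayerNorm(dim)

    def forward(self, enc, dec):
        # enc, dec: (B, C, H, W, D) with matching shapes at this level.
        # In practice attention would be restricted to local volumes; the dense
        # (HWD x HWD) attention below is only for illustration.
        b, c, h, w, d = enc.shape
        tokens_e = enc.flatten(2).transpose(1, 2)   # (B, HWD, C)
        tokens_d = dec.flatten(2).transpose(1, 2)   # (B, HWD, C)
        k, v = self.to_kv(self.norm(tokens_e)).chunk(2, dim=-1)
        q = self.to_q(self.norm(tokens_d))
        attn = torch.softmax(q @ k.transpose(-2, -1) / c ** 0.5, dim=-1)
        gated = tokens_d + attn @ v                 # residual cross-attention
        return gated.transpose(1, 2).reshape(b, c, h, w, d)
```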
The treatment doses used were 20, 40, and 80 µg/kg/day, both alone and in combination with ML. The dataset includes images of four groups: PTH 20 (N=6), PTH 20 + ML (N=6), PTH 40 (N=8), and PTH 80 (N=10), all aged 19 weeks. Images were captured at a resolution of 5 µm (ex vivo). Both mechanical loading and PTH treatment have anabolic effects on bone, promoting bone formation and increasing bone mass. Their combined effects result in more pronounced anabolic responses, further complicating segmentation due to increased bone remodeling and porosity, especially near the growth plate. Dataset 2 This study examined the effects of the anticatabolic drug risedronate on bone adaptation in virgin female C57BL/6 mice. The dataset includes three risedronate dose groups (0.15, 1.5, and 15 µg/kg/day) with and without mechanical loading, each with \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$N=5$$\end{document} samples, and a risedronate 150 µg/kg/day group with one loaded and one nonloaded sample (N = 1), all aged 19 weeks. Images were captured at a resolution of 4.8 µm (ex vivo). This segmentation is challenging because of the effects of the anticatabolic risedronate and the anabolic effect of ML. Compared with the control, risedronate reduces bone resorption, resulting in greater trabecular bone volume and trabecular number, whereas ML increases trabecular and cortical. Dataset 3 This study assessed the impact of mechanical loading on bone adaptation in C57BL/6 mice subjected to right sciatic neurectomy to minimize natural loading in their right tibiae. The dataset includes images of two groups, 4 N (N=5) and 8 N (N=5), aged 20 weeks. Images were captured at a resolution of 5 µm (ex vivo). The segmentation challenges arise from localized bone loss due to neurectomy and the subsequent anabolic bone changes induced by mechanical loading. Dataset 4 This study provides high-resolution in vivo µCT images of tibiae from female C57BL/6 mice subjected to OVX, which mimics postmenopausal osteoporosis characterized by increased bone remodeling, followed by combined PTH (100 µg/kg/day) and ML interventions. The dataset includes wild-type (WT) female C57BL/6 OVX (N=4) mice recorded at weeks 14, 18, 20, and 22. Images were captured at a resolution of 10.4 µm (in vivo). OVX increases porosity and bone remodeling, presenting significant segmentation challenges. The combination of PTH and ML further complicates segmentation due to their anabolic effects, enhancing bone formation and altering bone architecture. Additionally, the changes in the resolution and age of the mice compared with those in the control dataset complicate the generalization of segmentation techniques. Dataset 5 This study conducted high-resolution µCT analysis of bone microarchitecture in 8-week-old homozygous oim mice treated with human amniotic fluid stem cells (hAFSC). The dataset includes images of the Oim (N=3) and Oim + hAFSC (N=3) groups. Images were captured at a resolution of 5 µm (ex vivo). Osteogenesis imperfecta (OI) is characterized by severe characteristics, such as reduced size, skeletal fragility, frequent fractures, and abnormal bone microarchitecture, in OIM mice. Treatment with hAFSC improved bone strength, quality, and remodeling. 
The young age of the mice, combined with their deformed shape due to the nature of the mouse strain (homozygous oim) and hAFSC treatment effects, presents significant segmentation challenges. Their bones are not fully mature and are less dense, complicating the generalization of segmentation techniques from the control dataset of untreated mature bones.

Dataset 6
This study explored the impact of ovariectomy on bone structure and density in female C57BL/6 and BALB/c mice by comparing the WT and OVX groups. The dataset includes four groups: C57BL/6 WT (N=1), C57BL/6 OVX (N=1), BALB/c WT (N=1), and BALB/c OVX (N=1). Images were captured at a resolution of 10.4 µm (in vivo) at the age of 24 weeks. The differences in the structure of the strains (C57BL/6 and BALB/c), OVX bone loss effects, and lower resolution make it difficult to generalize segmentation techniques from the control dataset.

Dataset 7
This study focused on a murine model of osteoporosis in C57BL/6JOlaHsd OVX female mice. The dataset includes images of femurs from C57BL/6JOlaHsd female mice (N=4) that underwent OVX at the age of 14 weeks. Images were captured at a resolution of 13.7 µm (ex vivo) at the age of 17 weeks. Compared with tibiae, the combination of femur bones, which have different structural characteristics, a much lower resolution of 13.7 µm, and OVX-induced bone loss presents substantial segmentation challenges.

This diverse dataset encompasses a wide range of conditions, including different ages, strains, resolutions, drug treatments, surgical procedures, and mechanical loading, providing a rich resource for a robust validation of our deep learning model. A summary of the datasets used in this study is presented in Table . The main datasets – are our own extensive collections, which contain very high-resolution µCT scans at 5 µm. The secondary datasets – consist of publicly available samples collected to test the segmentation under new unseen experimental conditions. The datasets used in this study were obtained from independent experiments conducted by their respective institutions. For Dataset 1 and Dataset 2, all procedures complied with the UK Animals (Scientific Procedures) Act 1986, with ethical approval from the ethics committee of The Royal Veterinary College (London, UK). Dataset 3 was similarly approved by the ethics committee of the University of Bristol (Bristol, UK). Dataset 4 followed the ARRIVE guidelines and was approved by the local Research Ethics Committee of the University of Sheffield (Sheffield, UK). Dataset 5 was conducted under UK Home Office project licence PPL 70/6857, and Dataset 6 under project licence PPL 40/3499, both overseen by the University of Sheffield. Finally, Dataset 7 received approval from the Local Ethical Committee for Animal Research of the University of the Basque Country (UPV/EHU, ref M20/2019/176), adhering to European Directive 2010/63/EU and Spanish Law RD 53/2013. All original studies ensured compliance with relevant ethical guidelines, and our use of these datasets strictly followed their established approvals.
The region of interest in the mouse tibia used in this research was cropped from the metaphysis, starting just below the growth plate (approximately 6–8% of the bone length from the proximal region, where trabecular bone is highly present and active) and extending to approximately 60–65% of the bone length. Additionally, the control scans were obtained from a slightly deeper region within the metaphysis, where trabecular bone is not excessively present in the medullary area. The reason for choosing this deeper region is to detect any potential trabecularization of the cortical bone or growth of the trabecular bone, which can occur under certain conditions such as drug treatments or aging.

This section describes our automated, robust, deep learning-based pipeline developed for 3D high-resolution µCT segmentation, which specifically targets the cortical and trabecular compartments of the mouse tibia, as illustrated in Fig. . The general segmentation pipeline begins with preprocessing the raw 3D µCT scans via image processing techniques to isolate the mouse tibia. These preprocessed scans serve as the input for training the deep learning model. During training, data augmentation automatically expands the dataset, creating variations that improve model accuracy. The model is trained iteratively on the training set, with continuous monitoring of the validation mean Dice score and loss until convergence is achieved. After training, the model produces segmentation masks for both the cortical and the trabecular compartments. A postprocessing step further refines the segmentation to enhance the extraction of the cortical and trabecular bone. The detailed steps of the pipeline are outlined below.

Preprocessing
We subjected the raw 3D µCT scans to a series of preprocessing steps to prepare the data for segmentation.

Thresholding: We applied the Otsu thresholding algorithm to automatically separate the bone from the background, which includes experimental materials such as the sample holder and the resin. To ensure the retention of actual bone voxels, particularly for the trabecular bone, a threshold margin of $M = 5$ was subtracted from the threshold obtained by the algorithm to maintain connectivity.

Artifact removal: We eliminated any remaining noise by retaining the largest connected component, which represents the bone.

These two preprocessing steps are crucial, as they not only clean the bone from the experimental background, allowing the model to focus on segmenting the cortical and trabecular compartments, but also significantly reduce the size of the µCT scans. Working with 3D µCT scans at very high resolution requires careful consideration of efficiency, as training complex deep learning architectures becomes computationally demanding with larger input data. Performing these steps substantially reduces the size of the input images. For instance, a raw scan of the full mouse tibia recorded at 5 µm from Dataset 1 is approximately 2.4 GB. After background removal and autocropping, the file size is reduced to approximately 150 MB (both sizes are reported for the compressed Nifti format).

Fibula removal: We removed the second-largest component, representing the fibula, at each cross-sectional slice along the $z$-axis.

Normalization: We normalized the voxel values via z-score normalization, transforming the image intensities so that the resulting distribution has a mean of zero and a standard deviation of one. The z-score normalization is defined as $Z = \frac{X - \mu}{\sigma}$, where $Z$ is the normalized intensity value, $X$ is the original intensity value, $\mu$ is the mean intensity value of the image, and $\sigma$ is the standard deviation of the intensity values of the image.
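To make these steps concrete, the sketch below shows one possible implementation of the preprocessing stage in Python (NumPy/SciPy/scikit-image). It is illustrative only: the function name, the (z, y, x) array layout, and normalizing over the masked volume are assumptions rather than the exact implementation used in our pipeline.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def preprocess_scan(volume: np.ndarray, margin: int = 5) -> np.ndarray:
    """Isolate the tibia from a raw µCT volume and z-score normalize it (illustrative sketch)."""
    # Thresholding: Otsu threshold minus a margin M = 5 to keep thin trabecular voxels connected.
    bone = volume > (threshold_otsu(volume) - margin)

    # Artifact removal: keep only the largest connected component (the bone).
    labels, n = ndimage.label(bone)
    if n > 1:
        sizes = ndimage.sum(bone, labels, index=range(1, n + 1))
        bone = labels == (int(np.argmax(sizes)) + 1)

    # Fibula removal: keep only the largest component in each cross-sectional slice
    # along the z-axis, which discards the fibula.
    for z in range(bone.shape[0]):
        sl_labels, sl_n = ndimage.label(bone[z])
        if sl_n > 1:
            sl_sizes = ndimage.sum(bone[z], sl_labels, index=range(1, sl_n + 1))
            bone[z] = sl_labels == (int(np.argmax(sl_sizes)) + 1)

    # Normalization: Z = (X - mu) / sigma, computed here over the masked volume.
    masked = volume * bone
    mu, sigma = masked.mean(), masked.std()
    return (masked - mu) / sigma
```

Subtracting the margin from the Otsu threshold trades a slightly noisier background for better trabecular connectivity, which the subsequent largest-component step cleans up; in the actual pipeline, the resulting mask also drives the autocropping that yields the file-size reduction noted above.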
Data augmentation
To enhance the model's generalization ability, we employed various data augmentation techniques applied to the original 3D scans during each batch generation throughout the training.

Random affine transformations: We applied rotations and scaling to simulate changes in the orientation and scale of the bone relative to the scanner. The rotation range is $[0, \pi]$ along the $z$-axis, and the scaling factor range is $s \in [0.85, 1.25]$.

3D elastic deformations: We introduced nonlinear distortions to mimic natural bone variability via the following formula: $x' = x + \alpha \cdot \mathcal{G}(\sigma)$, where $\mathcal{G}(\sigma)$ is a random Gaussian displacement field with a standard deviation $\sigma \in [9, 13]$ and magnitude $\alpha \in [0, 900]$.

Random Gaussian noise: We added random Gaussian noise to simulate varying scanner qualities. The noise addition is given by $x' = x + \mathcal{N}(0, \sigma^2)$, where $\mathcal{N}(0, \sigma^2)$ is Gaussian noise with zero mean and variance $\sigma^2 = 0.1$.

Random intensity scaling: We scaled the intensity of the images to account for differences in imaging conditions. The intensity scaling is given by $x' = x \cdot (1 + f)$, where the scaling factor $f$ ranges from $-0.1$ to $0.1$.

Random contrast adjusting: We adjusted the contrast of the images to account for differences in imaging conditions. The contrast adjustment is expressed as $x' = x^{\gamma}$ with $\gamma \in [0.5, 4.5]$.

These transformations ensure the robustness and accuracy of the deep learning model by providing diverse and realistic variations in the training data. This approach generates new, artificially augmented data during training, where data augmentation is applied live to each batch with a small probability ($p = 0.1$), simulating scans under different experimental setups for the training of our deep learning model.
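The following sketch illustrates how such on-the-fly augmentations can be composed; it is a hedged illustration, not our exact implementation. The affine transforms are omitted for brevity, the way the magnitude parameter scales the displacement field differs between libraries (here it is applied directly), and the gamma adjustment is applied to voxel magnitudes because the inputs may already be z-scored.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng()

def elastic_deform(x: np.ndarray, sigma_range=(9, 13), alpha_range=(0, 900)) -> np.ndarray:
    """Warp the volume with a smoothed random displacement field: x' = x + alpha * G(sigma)."""
    sigma = rng.uniform(*sigma_range)
    alpha = rng.uniform(*alpha_range)
    # One Gaussian-smoothed random displacement per axis, scaled by the magnitude alpha.
    disp = [ndimage.gaussian_filter(rng.standard_normal(x.shape), sigma) * alpha for _ in range(3)]
    coords = np.meshgrid(*[np.arange(s) for s in x.shape], indexing="ij")
    warped = [c + d for c, d in zip(coords, disp)]
    return ndimage.map_coordinates(x, warped, order=1, mode="nearest")

def augment(x: np.ndarray, p: float = 0.1) -> np.ndarray:
    """Apply each augmentation independently with probability p, live on each batch element."""
    if rng.random() < p:
        x = elastic_deform(x)
    if rng.random() < p:                              # Gaussian noise, variance 0.1
        x = x + rng.normal(0.0, np.sqrt(0.1), size=x.shape)
    if rng.random() < p:                              # intensity scaling, f in [-0.1, 0.1]
        x = x * (1.0 + rng.uniform(-0.1, 0.1))
    if rng.random() < p:                              # gamma contrast, gamma in [0.5, 4.5]
        x = np.sign(x) * np.abs(x) ** rng.uniform(0.5, 4.5)
    return x
```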
Segmentation
In this study, we employed a novel deep neural network architecture, DBAHNet (dual-branch attention-based hybrid network), which was previously validated by comparing its performance with popular state-of-the-art architectures on the control dataset. DBAHNet is specifically designed for high-resolution 3D µCT bone image segmentation and focuses on the cortical and trabecular compartments. This architecture advances deep learning approaches by integrating both transformers and convolutional neural networks to effectively capture local features and long-range dependencies. The hybrid design of DBAHNet leverages the ability of convolutional layers for local feature analysis and the attention mechanism of transformers. In this work, we apply DBAHNet within a comprehensive pipeline to evaluate its robustness across various conditions and datasets, demonstrating its utility beyond the initial conference presentation. The complete architecture of DBAHNet is detailed in the subsequent sections.

Postprocessing
The final phase involved applying postprocessing techniques to increase the quality of the segmentation masks and mitigate the inherent imperfections in the segmentation process:

Noise removal: We removed any segmentation noise and outliers by retaining the largest connected component.

Transitional region smoothing: We used morphological opening filters to remove small openings at the endosteum surface of the cortical bone and assign them to the trabecular bone. The morphological opening filter is defined as $\text{Opening}(A, B) = (A \ominus B) \oplus B$, where $A$ is the set of foreground voxels in the binary image, $B$ is the structuring element (a sphere with radius $K_o$), $\ominus$ denotes the erosion filter, which removes pixels from the boundaries of objects, eliminating small openings at the endosteum surface, and $\oplus$ denotes the dilation filter, which adds pixels to the boundaries of objects, restoring the original size of the cortical surface while maintaining a smooth transition to the trabecular bone. The kernel value $K_o$ is set to 3.

Trabecular structure connectivity: We ensured the connectivity of the trabeculae for accurate morphometry in subsequent steps. For this, we perform connected component analysis by identifying and labeling all connected components in the binary mask of the trabecular bone and merging components that are close to each other. Merging is performed via a morphological closing filter with a kernel radius $R_c = 1$, corresponding to the minimum distance required to merge disconnected trabeculae. The morphological closing filter can be defined as $\text{Closing}(A, B) = (A \oplus B) \ominus B$, where $A$ is the set of foreground voxels in the binary image, $B$ is the structuring element (a sphere with radius $R_c$), $\oplus$ denotes the dilation filter, which adds pixels to the boundaries of objects, potentially bridging small gaps caused by segmentation errors, and $\ominus$ denotes the erosion filter, which removes pixels from the boundaries of objects and restores the original object size while maintaining new connections.

The different modules of the general segmentation pipeline facilitated the extraction and subsequent morphological analysis of both cortical and trabecular bone from three-dimensional µCT scans, enabling their visualization and the assessment of their respective morphological parameters for preclinical skeletal studies.
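As an illustration, these postprocessing operations can be written directly with SciPy and scikit-image. The sketch below assumes boolean cortical and trabecular masks and spherical structuring elements of radius 3 and 1; the function name and mask handling are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import ball

def postprocess(cortical: np.ndarray, trabecular: np.ndarray):
    """Refine boolean cortical/trabecular masks (illustrative sketch)."""
    # Noise removal: keep only the largest connected component of the combined bone mask.
    bone = cortical | trabecular
    labels, n = ndimage.label(bone)
    if n > 1:
        sizes = ndimage.sum(bone, labels, index=range(1, n + 1))
        bone = labels == (int(np.argmax(sizes)) + 1)
        cortical &= bone
        trabecular &= bone

    # Transitional region smoothing: morphological opening of the cortical mask with a
    # spherical structuring element of radius K_o = 3; voxels removed at the endosteum
    # are reassigned to the trabecular mask.
    opened = ndimage.binary_opening(cortical, structure=ball(3))
    trabecular |= cortical & ~opened
    cortical = opened

    # Trabecular connectivity: morphological closing with radius R_c = 1 to merge
    # trabeculae separated by small segmentation gaps.
    trabecular = ndimage.binary_closing(trabecular, structure=ball(1))
    return cortical, trabecular
```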
The proposed architecture, the Dual-Branch Attention-based Hybrid Network (DBAHNet), features a dual-branch hybrid design that incorporates both convolutional neural networks (CNNs) and transformers in the encoder and decoder pathways (see Fig. A). The patch embedding block projects the 3D scan into an embedding space with $C = 96$ channels via successive convolutions. This process results in a reduced-dimensionality space, defined by the reduction embedding vector $E = [4, 4, 4]$, creating a patch embedding of size $(C, \frac{H}{4}, \frac{W}{4}, \frac{D}{4})$, where $H$, $W$, and $D$ represent the height, width, and depth of the input 3D scan, respectively. This embedding serves as the input to both the transformer and convolutional branches, each consisting of three hierarchical levels. In the encoder pathway, each level comprises two sequential Swin transformer blocks in the transformer branch and a Channel Attention-Based Convolution Module (CACM) in the convolution branch. The transformer branch uses 3D-adapted Swin transformers to process feature maps at multiple scales, capturing global long-range dependencies within the volume. Each transformer block consists of two layers; the first employs regular volume partitioning, whereas the second uses shifted partitioning to increase the connectivity between layers. In the convolution branch, the CACM enhances cross-channel interaction by concatenating the outputs of global average pooling and maximum pooling, followed by two GeLU-activated 3D convolutions to create an attention map. This map modulates the initial feature map through elementwise multiplication, and a final 3D convolution further encodes the output for subsequent layers. The outputs from the transformer and convolution branches at each level are fused via the Transformer-Convolution Feature Fusion Module (TCFFM). The TCFFM performs downsampling in the encoder by applying channelwise average pooling to $x_{\text{Tr}}$ and $x_{\text{C}}$ (the feature maps from the transformer and convolution branches), followed by a sigmoid function to generate an attention mask that filters the channels. The results are then concatenated and encoded through a 3D convolution layer. After encoding, the resulting feature maps are downscaled to $(8C, \frac{H}{32}, \frac{W}{32}, \frac{D}{32})$ and passed to the bottleneck. The bottleneck consists of four global 3D transformer blocks that perform global attention over all the downsampled feature maps, aggregating information to provide a comprehensive representation for the decoder. The decoder mirrors the encoder symmetrically. It uses the spatial attention-based convolution module (SACM) instead of the CACM to enhance relevant spatial features for focused reconstruction of the segmentation mask. The SACM applies max-pooling and average-pooling, concatenates the results, and uses a $1 \times 1 \times 1$ convolution to create an attention map. This attention map modulates the input feature map, which is further processed by a final 3D convolution. The TCFFM module in the decoder performs upsampling, restoring the original volume size. Throughout the decoder, feature maps from all layers are filtered via attention gates and residual skip connections from the encoder. Finally, a transpose convolution reconstructs the segmentation masks. All internal components of DBAHNet are illustrated in Fig. B.
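For orientation, a patch-embedding stem matching this description could look as follows in PyTorch. The two-stage stride-2 convolution design and the normalization layer are assumptions of this sketch, chosen only so that the overall reduction factor $E = [4, 4, 4]$ and $C = 96$ are reproduced.

```python
import torch
from torch import nn

class PatchEmbedding3D(nn.Module):
    """Project a 3D scan (B, 1, H, W, D) to (B, C, H/4, W/4, D/4) via successive convolutions."""
    def __init__(self, in_channels: int = 1, embed_dim: int = 96):
        super().__init__()
        self.proj = nn.Sequential(
            # Two stride-2 convolutions give the overall spatial reduction of 4 per axis.
            nn.Conv3d(in_channels, embed_dim // 2, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
            nn.Conv3d(embed_dim // 2, embed_dim, kernel_size=3, stride=2, padding=1),
        )
        self.norm = nn.InstanceNorm3d(embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(self.proj(x))

# Example: a (1, 1, 128, 128, 128) patch becomes (1, 96, 32, 32, 32).
emb = PatchEmbedding3D()(torch.zeros(1, 1, 128, 128, 128))
```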
Transformer block
We leveraged a 3D adaptation of Swin transformers, which perform self-attention within a local volume of feature maps at each hierarchical level to capture enriched contextual representations of the data. Each transformer unit consists of two consecutive transformers. The first transformer employs regular volume partitioning, whereas the second transformer introduces shifted local volume partitioning to ensure connectivity with the preceding layer's local volumes. For a given layer $l$, the input $\mathbf{x}^{l-1}$ first undergoes layer normalization (LN) and is then processed by a multihead self-attention (MHSA) mechanism. The output of the MHSA is added to the original input via a residual connection, resulting in the intermediate output $\hat{\mathbf{x}}^l$. Next, $\hat{\mathbf{x}}^l$ is normalized again and passed through a multilayer perceptron (MLP), with another residual connection to produce the output $\mathbf{x}^l$. The second transformer, which uses shifted partitioning, applies a shifted multihead self-attention (SMHSA) mechanism. This shifted transformer increases the connectivity between layers. The normalized output $\mathbf{x}^l$ from the previous step is processed by the SMHSA with a residual connection, resulting in the intermediate output $\hat{\mathbf{x}}^{l+1}$. Finally, $\hat{\mathbf{x}}^{l+1}$ undergoes another normalization and passes through an MLP, with a residual connection to yield the output $\mathbf{x}^{l+1}$. The Swin transformer block is expressed by the system of equations in Eq. (4):

$$\begin{aligned} \hat{\mathbf{x}}^{l} &= \text{MHSA}\left(\text{LN}\left(\mathbf{x}^{l-1}\right)\right) + \mathbf{x}^{l-1}, \\ \mathbf{x}^{l} &= \text{MLP}\left(\text{LN}\left(\hat{\mathbf{x}}^{l}\right)\right) + \hat{\mathbf{x}}^{l}, \\ \hat{\mathbf{x}}^{l+1} &= \text{SMHSA}\left(\text{LN}\left(\mathbf{x}^{l}\right)\right) + \mathbf{x}^{l}, \\ \mathbf{x}^{l+1} &= \text{MLP}\left(\text{LN}\left(\hat{\mathbf{x}}^{l+1}\right)\right) + \hat{\mathbf{x}}^{l+1} \end{aligned} \tag{4}$$

The self-attention mechanism is computed using Eq. (5):

$$\text{Attention}(Q, K, V) = \text{Softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right) V \tag{5}$$

Here, $Q$, $K$, and $V$ represent queries, keys, and values, respectively, and $d_k$ is the dimension of the key and query.
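A compact PyTorch rendering of Eq. (4) is sketched below. It applies plain multihead self-attention to the tokens of one local volume and omits the volume partitioning, shifting, and relative position bias of the full Swin design, so it should be read as a simplified illustration rather than the exact block used in DBAHNet.

```python
import torch
from torch import nn

class TransformerBlock3D(nn.Module):
    """Pre-norm attention + MLP pair from Eq. (4): x_hat = MHSA(LN(x)) + x; x = MLP(LN(x_hat)) + x_hat."""
    def __init__(self, dim: int = 96, heads: int = 3, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(), nn.Linear(mlp_ratio * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, N, dim), N = tokens of one local volume
        h = self.norm1(x)
        x_hat = self.attn(h, h, h, need_weights=False)[0] + x     # MHSA(LN(x)) + x
        return self.mlp(self.norm2(x_hat)) + x_hat                # MLP(LN(x_hat)) + x_hat

# A Swin-style unit applies two such blocks in sequence, the second on shifted local volumes (SMHSA).
tokens = torch.zeros(2, 4 * 4 * 4, 96)   # e.g. a 4x4x4 local volume flattened to 64 tokens
out = TransformerBlock3D()(TransformerBlock3D()(tokens))
```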
Channel-wise attention-based convolution module (CACM)
In the encoder, we utilized a convolution unit based on channelwise attention, assigning distinct levels of importance to different channels, thereby enhancing feature representation. Let $x \in \mathbb{R}^{C \times H \times W \times D}$ be the input feature map. We first apply both global average pooling and maximum pooling channelwise, each yielding a $(C, 1, 1, 1)$ vector, and concatenate the results. This concatenated vector undergoes a 3D convolution to an intermediate dimension, resulting in a $(\frac{C}{2}, 1, 1, 1)$ size, followed by a GeLU activation function. This output is further processed through a second 3D convolution to restore the original channel dimension. An attention map is subsequently generated via a sigmoid activation function, which is then elementwise multiplied with the initial feature map, modulating it on the basis of channelwise attention. Finally, a third convolution is applied, downsampling the dimensions to $(2C, \frac{H}{2}, \frac{W}{2}, \frac{D}{2})$, to be used in subsequent layers.

Spatial-wise attention-based convolution module (SACM)
In the decoder, we employed a convolution module that ensures spatial attention; this module focuses selectively on the salient features and regions during the reconstruction of the segmentation mask, aiding in the preservation of detailed structures and enhancing accuracy. Let $x \in \mathbb{R}^{C \times H \times W \times D}$ be the input feature map. Initially, we apply both max-pooling and average-pooling to extract two robust feature descriptors. These descriptors are concatenated along the channel axis before undergoing a $1 \times 1 \times 1$ convolution to yield a feature map of dimensions $(1, H, W, D)$. Next, a sigmoid activation function derives the attention map, which is then elementwise multiplied with the original input to obtain a feature map of dimensions $(C, H, W, D)$. Considering the necessity of upsampling the feature maps during the decoding phase, a transpose 3D convolution operation with a stride of 2 is utilized to upsample the features, resulting in the final feature maps of dimensions $(\frac{C}{2}, 2H, 2W, 2D)$.
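The two attention modules can be sketched as follows. This is an illustrative PyTorch reading of the text above; the kernel sizes of the channel-mixing convolutions and of the downsampling and upsampling convolutions are assumptions.

```python
import torch
from torch import nn

class CACM(nn.Module):
    """Channel attention followed by a stride-2 encoding convolution: (C, H, W, D) -> (2C, H/2, W/2, D/2)."""
    def __init__(self, channels: int):
        super().__init__()
        self.avg = nn.AdaptiveAvgPool3d(1)
        self.max = nn.AdaptiveMaxPool3d(1)
        self.mix = nn.Sequential(
            nn.Conv3d(2 * channels, channels // 2, kernel_size=1), nn.GELU(),
            nn.Conv3d(channels // 2, channels, kernel_size=1),
        )
        self.down = nn.Conv3d(channels, 2 * channels, kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        attn = torch.sigmoid(self.mix(torch.cat([self.avg(x), self.max(x)], dim=1)))
        return self.down(x * attn)               # channelwise modulation, then downsampling

class SACM(nn.Module):
    """Spatial attention followed by a stride-2 transpose convolution: (C, H, W, D) -> (C/2, 2H, 2W, 2D)."""
    def __init__(self, channels: int):
        super().__init__()
        self.mix = nn.Conv3d(2, 1, kernel_size=1)
        self.up = nn.ConvTranspose3d(channels, channels // 2, kernel_size=2, stride=2)

    def forward(self, x):
        desc = torch.cat([x.amax(dim=1, keepdim=True), x.mean(dim=1, keepdim=True)], dim=1)
        attn = torch.sigmoid(self.mix(desc))     # (B, 1, H, W, D) spatial attention map
        return self.up(x * attn)                 # spatial modulation, then upsampling
```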
Transformer-convolution feature fusion module (TCFFM)
In the TCFFM block, the feature maps obtained from both the transformer and convolution pathways, denoted as $x_{\text{Tr}}$ and $x_{\text{C}}$, each belonging to the space $\mathbb{R}^{C \times H \times W \times D}$, are fused at each hierarchical level. Here, $H$, $W$, and $D$ represent the dimensions of the feature maps, and $C$ is the number of channels. Initially, channelwise average pooling is applied to $x_{\text{Tr}}$ and $x_{\text{C}}$ to extract a representative value for each channel of the feature maps. These values are transformed into weights using a sigmoid function, generating an attention mask that enhances significant channels and suppresses less relevant channels. The results are subsequently concatenated and passed through a downsampling convolution layer, followed by a local-volume transformer block, to perform the fusion and leverage the combined strengths of both pathways in the subsequent layers.
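A hedged sketch of the encoder-side fusion is given below. The kernel size of the downsampling convolution and the exact placement of the local-volume transformer block (here simply applied after the convolution) are assumptions made for illustration.

```python
import torch
from torch import nn

class TCFFM(nn.Module):
    """Fuse transformer and convolution feature maps with channel gating, then downsample (encoder side)."""
    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)                        # channelwise average pooling
        self.fuse = nn.Conv3d(2 * channels, 2 * channels, kernel_size=3, stride=2, padding=1)

    def forward(self, x_tr: torch.Tensor, x_c: torch.Tensor) -> torch.Tensor:
        # One representative value per channel, turned into a sigmoid gate for each branch.
        x_tr = x_tr * torch.sigmoid(self.pool(x_tr))
        x_c = x_c * torch.sigmoid(self.pool(x_c))
        fused = self.fuse(torch.cat([x_tr, x_c], dim=1))           # concatenate, encode, downsample
        return fused                                               # a local-volume transformer block follows

# Example: two (B, 96, 32, 32, 32) maps fuse to (B, 192, 16, 16, 16).
out = TCFFM(96)(torch.zeros(1, 96, 32, 32, 32), torch.zeros(1, 96, 32, 32, 32))
```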
Bottleneck

In the bottleneck, we reduce the dimensionality of the feature maps produced by the encoder and employ a series of four global 3D transformer blocks, similar to those used in the Vision Transformer (ViT). These blocks perform global attention over all the downsampled feature maps. They excel at aggregating information from the entire feature map, enabling an understanding of the global context and providing a comprehensive representation to the decoder.
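A minimal sketch of such a bottleneck follows, assuming each voxel of the downsampled volume is treated as one token and using PyTorch's standard transformer encoder layers as stand-ins for the ViT-style blocks; these choices are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class GlobalTransformerBottleneck(nn.Module):
    """Hypothetical sketch: flatten the downsampled volume into tokens and apply
    four global self-attention blocks over all tokens."""
    def __init__(self, channels: int, num_heads: int = 8, depth: int = 4):
        super().__init__()
        # channels must be divisible by num_heads
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=num_heads,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w, d = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W*D, C): one token per voxel
        tokens = self.blocks(tokens)            # global attention across all tokens
        return tokens.transpose(1, 2).reshape(b, c, h, w, d)
```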
Attention gate

Instead of using regular concatenation in the skip connections, as in U-Net, we employed attention gates (AGs) to enhance the model's ability to focus on target structures of varying shapes and sizes. Attention gates automatically learn to suppress irrelevant regions of an input image while highlighting salient features relevant to a specific task. Specifically, the output of the $l^e$-th TCFFM of the encoder, $X_{l}^{e}$, is transformed via a linear projection into a key matrix $K_{l}^{e}$ and a value matrix $V_{l}^{e}$. This transformation encodes the spatial and contextual information required by the attention mechanism. The output feature maps after the $l^d$-th upsampling layer of the TCFFM in the decoder, denoted $X_{l}^{d}$, serve as the query $Q_{l}^{d}$. We then apply one layer of the transformer block to $Q_{l}^{d}$, $K_{l}^{e}$, and $V_{l}^{e}$ in the decoder, computing self-attention as previously described for the transformer block.
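As a rough illustration of this gating step, the sketch below uses a single multi-head attention layer with the decoder features as queries and the encoder features as keys and values; flattening the voxels into tokens, the projection layout, and all names are assumptions made for the example.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Hypothetical sketch: decoder features act as queries, encoder features
    provide keys and values for one multi-head attention layer."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # channels must be divisible by num_heads
        self.to_kv = nn.Linear(channels, 2 * channels)  # linear projection of X_l^e
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x_dec: torch.Tensor, x_enc: torch.Tensor) -> torch.Tensor:
        b, c, h, w, d = x_dec.shape
        q = x_dec.flatten(2).transpose(1, 2)               # (B, N, C) queries from decoder
        kv = self.to_kv(x_enc.flatten(2).transpose(1, 2))  # (B, N, 2C)
        k, v = kv.chunk(2, dim=-1)                         # keys and values from encoder
        gated, _ = self.attn(q, k, v)                      # scaled dot-product attention
        return gated.transpose(1, 2).reshape(b, c, h, w, d)
```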
Supplementary Information.