Tumor Area Positivity (TAP) score of programmed death-ligand 1 (PD-L1): a novel visual estimation method for combined tumor cell and immune cell scoring
The discovery of immune checkpoints has led to a paradigm shift toward immunotherapy treatment in cancer. One such checkpoint is the programmed cell death protein 1 (PD-1)/programmed death-ligand 1 (PD-L1) axis, which is responsible for inhibiting the immune response of immune cells (IC) to foreign antigens. Tumor cells (TC) can also express PD-L1, leading to activation of the PD-1/PD-L1 pathway, which subsequently allows TC to evade the immune response and results in tumor growth. Increased PD-L1 expression in tissue from patients with cancer is positively correlated with clinical response to immunotherapy; this highlights the need for scoring methods that accurately quantify PD-L1 protein expression. Optimal scoring methods should be accurate, precise, and help simplify workflow for practicing pathologists. Currently, United States Food and Drug Administration (FDA)-approved PD-L1 immunohistochemistry (IHC) assays/algorithms include scoring methods that consider TC positivity and/or IC positivity (Table ). Combined Positive Score (CPS) is the only FDA-approved method that combines TC and IC; however, it is an approach based on cell counting, which is time consuming and not intuitive to practicing pathologists. In this study, we introduce the Tumor Area Positivity (TAP) score, a simple, visual-based method for scoring TC and IC together that addresses the limitations of a cell-counting approach with comparable efficacy and reproducibility. Institutional review board approval was obtained by the Roche Tissue Diagnostics Clinical Operation Department. The two reader precision studies used commercial samples. For the samples used in the comparison study, which were collected as part of a BeiGene study, consent was obtained in compliance with requirements. Each pathologist received training on the TAP scoring algorithm:

$$\mathrm{TAP}=\frac{\text{area of PD-L1 positive TC and IC}}{\text{tumor area}}\times 100\%$$

Pathologists were then required to pass a series of tests before participation in the studies (see the Pathologist training section below); a worked example of the TAP arithmetic follows the scoring rules below. Samples from gastric adenocarcinoma, gastroesophageal junction (GEJ) adenocarcinoma, and esophageal squamous cell carcinoma (ESCC) (including both resections and biopsies) were stained using the VENTANA PD-L1 (SP263) assay (Ventana Medical Systems, Inc., Tucson, AZ, USA). Between- and within-reader precision studies were performed for the TAP score among three internal (Roche Tissue Diagnostics) pathologists (internal study) and six pathologists from three external organizations (external study). After successful completion of the reader precision studies, the TAP score was compared retrospectively to CPS for concordance and time efficiency.

TAP scoring method description and approach

Identification of tumor area
To determine the TAP score, a hematoxylin and eosin-stained slide is first examined to identify the tumor area (the area occupied by all viable TC and the tumor-associated stroma containing tumor-associated IC) (Fig. ). If tumor nests are separated by non-neoplastic tissue, they are included as part of the tumor area as long as the tumor nests are bordered on both sides of a 10x field; the intervening non-neoplastic tissue is also included in the tumor area (abbreviated as the 10x field rule in the text below; Fig. ). Necrosis, crush, and cautery artifacts are excluded from the tumor area. For gastric and GEJ adenocarcinoma, the following must be considered: Pools of mucin and glandular luminal spaces in the presence or absence of viable TC are included as part of the tumor area.
Tumor nests within the lymphovascular spaces are included in the tumor area.

Tumor area determination in lymph nodes
For lymph nodes with multiple nests of tumor metastasis, apply the 10x field rule. In lymph nodes with focal or discrete tumor metastases, the tumor area includes the tumor nests and the areas occupied by the IC immediately adjacent to the leading edge of the metastatic tumor nests.

Determination of tumor-associated IC
Tumor-associated IC are intra- and peri-tumoral, including those present within the tumor proper, between tumor nests, and within any tumor-associated reactive stroma. In lymph nodes with focal or discrete tumor metastases, only IC immediately adjacent to the leading edge of the metastatic tumor nest are defined as tumor-associated IC.

Determination of TAP score
The TAP score is determined on the IHC slide by visually aggregating/estimating the area covered by PD-L1 positive TC and tumor-associated IC relative to the total tumor area. Both circumferential and partial/lateral membrane staining of TC at any intensity is regarded as positive PD-L1 staining, while cytoplasmic staining of TC is disregarded; membranous, cytoplasmic, and punctate staining of tumor-associated IC at any intensity is regarded as PD-L1 positive staining (Fig. ). For gastric and GEJ adenocarcinoma, staining of IC in the germinal centers of lymphoid aggregates is included in the TAP score if they are located within the tumor area. Intra-luminal macrophage staining is not included in the TAP score unless the macrophages completely fill the luminal space and are in direct contact with the TC. Staining of multi-nucleated giant cells, granulomas, and IC located within blood vessels and lymphatics is not included in the TAP score. Off-target staining (e.g., fibroblasts, endothelial cells, neuroendocrine cells, smooth muscle, and nerves) should not be confused with specific PD-L1 staining, and is not included in the TAP score.

Pathologist training
The training included review of an interpretation guide via Microsoft PowerPoint (Microsoft Corporation, Redmond, WA, USA) presentation, and review of a set of training glass slides using multi-headed microscopes in conjunction with the training pathologist. During the training session, PD-L1 biology, staining characteristics of TC and IC (Fig. ), and acceptability of system-level controls were reviewed, among other topics. For gastric/GEJ adenocarcinoma, the test and training sets were designed to train the pathologists to accurately score PD-L1 expression status around the 5% cutoff (Fig. ). The tests included a self-study set of 10 cases with consensus scores, a mini-test of 10 cases, and a final test of 60 cases. To pass the final test, the trainee pathologist had to achieve 85% agreement with reference scores on either an initial or a repeat test. The training on ESCC scoring was conducted using different training and test sets.
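To make the scoring arithmetic concrete, here is a minimal Python sketch (ours, not part of the published protocol) that applies the TAP formula and the study's cutoffs to hypothetical, visually estimated areas:

```python
def tap_score(pdl1_positive_area: float, tumor_area: float) -> float:
    """TAP score: percentage of the total tumor area covered by PD-L1
    positive TC and tumor-associated IC. Areas can be in any common
    unit, since only the ratio matters."""
    if tumor_area <= 0:
        raise ValueError("tumor area must be positive")
    return 100.0 * pdl1_positive_area / tumor_area

def pdl1_status(tap: float, cutoff: float = 5.0) -> str:
    """Dichotomize a TAP score at a cutoff (5% in the precision
    studies; a 1% cutoff was also evaluated in the comparison study)."""
    return "positive" if tap >= cutoff else "negative"

# Hypothetical example: stained TC and IC are estimated to cover
# 3.5 mm^2 of a 50 mm^2 tumor area -> TAP = 7%, positive at the 5%
# cutoff (and within the 5-9% "positive borderline" range defined below).
score = tap_score(3.5, 50.0)
print(f"TAP = {score:.0f}% -> {pdl1_status(score)}")
```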
Internal reader precision study
Three internal pathologists were trained and qualified for this study. This study evaluated: i) between-reader precision: across qualified readers individually evaluating the same set of randomized gastric or GEJ adenocarcinoma samples (N = 100, with an equal distribution of PD-L1 expression level for positive [n = 50] and negative [n = 50] samples, spanning the range of the TAP score); and ii) within-reader precision: within individual readers evaluating the same set of gastric or GEJ adenocarcinoma samples over two assessments, separated by a wash-out period of at least 2 weeks, and re-randomized and blinded prior to the second read. Between- and within-reader precision were assessed by evaluating the concordance of the PD-L1 expression level of samples among the three readers from their first round of reads and within individual readers from their first and second rounds of reads, respectively. In the between-reader precision analysis, there were three pair-wise comparisons for each sample (reader 1 vs. reader 2, reader 1 vs. reader 3, and reader 2 vs. reader 3). With N = 100 samples, there were a total of 300 pair-wise comparisons. In the within-reader precision analysis, with N = 100 samples, there were 100 comparisons between the two reading rounds for each reader. All samples were commercially obtained formalin-fixed paraffin-embedded specimens. A TAP score cutoff of 5% was used to classify PD-L1 expression in each sample as positive or negative. The sample set included 90% resection samples and 10% biopsy samples; 10% of the samples showed a borderline range of PD-L1 expression. A sample was considered negative borderline if the TAP score was 2–4%, and positive borderline if the TAP score was 5–9%. The average positive agreement (APA), average negative agreement (ANA), and overall percent agreement (OPA) between and within readers were then calculated, along with 95% confidence intervals (CIs). The acceptance criteria for between-reader precision were ≥ 85% ANA and APA. The acceptance criteria for within-reader precision were ≥ 90% OPA, and ≥ 85% ANA and APA. The assay was required to produce acceptable levels of non-specific staining on BenchMark ULTRA instruments (Ventana Medical Systems, Inc.) in at least 90% of samples.

External reader precision study
Three external organizations participated in an inter-laboratory reproducibility study using a cutoff of 5% TAP. At each site, two trained and qualified pathologists were selected to score the slides originating from the same sets of blocks. Specifically, 28 commercially obtained gastric or GEJ adenocarcinoma formalin-fixed paraffin-embedded specimens spanning the range of the TAP score were used in the external study. There was an equal distribution of PD-L1 expression level for positive (n = 14) and negative (n = 14) samples using the TAP score at the 5% cutoff. Ten percent biopsy samples and 10% borderline cases were included in the sample set. The 28 cases were stained on five non-consecutive days over a period of at least 20 days at three sites, generating a total of five sets of slides for evaluation by the two pathologists at each site. The APA, ANA, and OPA were calculated across the three sites.

Comparison of TAP and CPS
Gastric or GEJ adenocarcinoma and ESCC samples (n = 52) from a BGB-A317 trial carried out by BeiGene (Beijing, China) were used to retrospectively compare the TAP and CPS scoring algorithms for evaluation of PD-L1 expression. Of the 52 samples, n = 10 were resection samples and n = 42 were biopsies.
All samples were stained with the VENTANA PD-L1 (SP263) assay. The samples were distributed among eight internal pathologists and were scored using both methods. All eight pathologists were trained and qualified to evaluate PD-L1 expression using both the TAP and CPS scoring algorithms. The concordance of the TAP score at a 1% and 5% cutoff was assessed against a CPS score of 1 (equivalent to 1%), the FDA-approved cutoff for gastric or GEJ adenocarcinoma. The time spent on scoring for each method was also assessed.
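The agreement statistics used in both precision studies can be stated compactly. For paired reads with no reference standard, one common formulation (assumed here; the paper does not print its formulas) counts concordant positive pairs a, concordant negative pairs d, and discordant pairs b and c, and defines APA = 2a/(2a+b+c), ANA = 2d/(2d+b+c), and OPA = (a+d)/n. A minimal Python sketch:

```python
from typing import Iterable, Tuple

def agreement_stats(pairs: Iterable[Tuple[bool, bool]]) -> dict:
    """APA/ANA/OPA over paired positive/negative calls.

    Each element is one pair-wise comparison, e.g. (reader1_positive,
    reader2_positive) for between-reader precision, or (read1, read2)
    for within-reader precision."""
    a = b = c = d = 0
    for x, y in pairs:
        if x and y:
            a += 1          # both positive
        elif x and not y:
            b += 1          # discordant
        elif y and not x:
            c += 1          # discordant
        else:
            d += 1          # both negative
    n = a + b + c + d
    return {
        "APA": 2 * a / (2 * a + b + c),
        "ANA": 2 * d / (2 * d + b + c),
        "OPA": (a + d) / n,
    }

# Illustration at the internal-study scale: 300 pair-wise comparisons
# with two discordant pairs reproduces the reported between-reader
# fractions (APA 296/298, ANA 300/302, OPA 298/300).
pairs = [(True, True)] * 148 + [(False, False)] * 150 + [(True, False)] * 2
print(agreement_stats(pairs))
```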
Internal reader precision study
As shown in Table , for between-reader analyses (including borderline cases), the pre-defined acceptance criteria were met for APA (296/298 [99.3%]; 95% CI, 98.0–100.0), ANA (300/302 [99.3%]; 95% CI, 98.0–100.0), and OPA (298/300 [99.3%]; 95% CI, 98.0–100.0). For within-reader analyses (including borderline cases), the pre-defined acceptance criteria were met for APA (296/299 [99.0%]; 95% CI, 98.0–100.0), ANA (298/301 [99.0%]; 95% CI, 98.0–100.0), and OPA (297/300 [99.0%]; 95% CI, 98.0–100.0). The background acceptability rate (600/600 [100.0%]; 95% CI, 99.4–100.0) also met the pre-defined acceptance criteria.

External reader precision study
Table shows that site A achieved the lowest agreement rates for APA (88/109 [80.7%], 95% CI, 63.6–93.5), ANA (144/165 [87.3%], 95% CI, 78.0–95.7), and OPA (116/137 [84.7%], 95% CI, 73.2–94.9), while sites B and C produced identical results for APA (140/140 [100.0%], 95% CI, 97.3–100.0), ANA (140/140 [100.0%], 95% CI, 97.3–100.0), and OPA (140/140 [100.0%], 95% CI, 97.3–100.0). Overall, high agreement levels were demonstrated across the three sites (APA, 368/389 [94.6%], 95% CI, 90.8–98.0; ANA, 424/445 [95.3%], 95% CI, 91.5–98.5; OPA, 396/417 [95.0%], 95% CI, 91.2–98.3).

Correlation of TAP and CPS
The percentage agreement between TAP (1% cutoff) vs CPS (cutoff of 1) was 39/39 samples (100%; 95% CI, 91.0–100.0) for positive percent agreement (PPA), 11/13 samples (84.6%; 95% CI, 57.8–95.7) for negative percent agreement (NPA), and 50/52 samples (96.2%; 95% CI, 87.0–98.9) for OPA (Table ). For TAP (5% cutoff) vs CPS (cutoff of 1), the percentage agreement was 35/39 samples (89.7%; 95% CI, 76.4–95.9) for PPA, 13/13 samples (100%; 95% CI, 77.2–100.0) for NPA, and 48/52 samples (92.3%; 95% CI, 81.8–97.0) for OPA (Table ). The average time spent on scoring was 5 min for the TAP score and 30 min for the CPS scoring algorithm.
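As a reading aid for the concordance figures above, PPA/NPA/OPA follow directly from a 2×2 cross-tabulation of the two methods. A minimal sketch (function name is ours), using the reported TAP (1% cutoff) vs CPS ≥ 1 counts with CPS taken as the reference:

```python
def concordance(tp: int, fn: int, fp: int, tn: int) -> dict:
    """PPA/NPA/OPA of an index test against a reference method.

    tp: index+ / reference+    fn: index- / reference+
    fp: index+ / reference-    tn: index- / reference-
    """
    return {
        "PPA": tp / (tp + fn),
        "NPA": tn / (tn + fp),
        "OPA": (tp + tn) / (tp + fn + fp + tn),
    }

# TAP at the 1% cutoff vs CPS >= 1 (n = 52): all 39 CPS-positive
# samples were TAP-positive, and 11 of 13 CPS-negative samples were
# TAP-negative, giving PPA 100%, NPA 84.6%, OPA 96.2% as reported.
print(concordance(tp=39, fn=0, fp=2, tn=11))
```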
Understanding of immune checkpoint inhibitors has revolutionized the treatment options for cancer patients. Thus far, PD-L1 has been the focus of that recent paradigm shift. However, different scoring systems were introduced in rapid succession, which may have burdened practicing pathologists who had to consistently play catch-up. This study aimed to provide a simple, visual-based estimation scoring method that combines TC and IC to identify the intended patient population of interest. On-market FDA-approved PD-L1 scoring algorithms can be classified into TC- or IC-only scores, TC and IC scores applied in a sequential manner, or combined TC/IC scores (Table ). In general, TC-only scoring methods have been favorably adopted by the pathology community, whereas IC scoring or sequential TC/IC scoring has been perceived as challenging. CPS is the only FDA-approved method that combines TC and IC. It is a cell counting-based approach in which the number of PD-L1-stained cells (TC, lymphocytes, and macrophages) is divided by the total number of viable TC, then multiplied by 100. Cell counting can be time-consuming and is not in sync with pathology practice, which classically uses a Gestalt approach based on visual pattern recognition and estimation. Our study found that the average time spent on scoring was 5 min for the TAP score and 30 min for the CPS scoring algorithm, with one case of a large resection taking up to 1 h using CPS. Accordingly, pathologists must develop strategies to cope with CPS scoring during busy practice periods due to the time-consuming nature of the cell counting process. From communicating with practicing pathologists in the field, these strategies include piecemeal scoring approaches for large tumor resection specimens with heterogeneous staining patterns, eyeballing when applying 20x rules which provide estimated tumor cell numbers, and using a standard cellularity table for TC numbers. An added complexity of CPS scoring is assessment of the type of IC to be included in the count, which requires the pathologist to select only mononuclear IC. The TAP scoring method is inclusive of all types of IC; therefore, pathologists need not exhaust themselves under high magnification to confirm a cell type.
Increasingly, research has shown that granulocytes are part of the adaptive tumor immune response; we have also observed weak to moderate PD-L1 expression in neutrophils around TC (Supplementary Fig. ). This evidence led to the inclusion of granulocytes in the development of the TAP method. To simplify greatly, the TAP method is essentially "the percentage of relevant brown (positive cells) over blue (entire tumor area on the IHC slide)". In this study, we compared the percentage agreement between TAP (1% and 5% cutoffs) and CPS (cutoff of 1) in gastric/GEJ adenocarcinoma and ESCC samples using the VENTANA PD-L1 (SP263) assay, to investigate whether the two scoring methods were interchangeable, and if so, at what cutoff. The PPA, NPA, and OPA of the two comparisons were equal to or greater than 85%, with the TAP score at the 1% cutoff having better concordance with CPS 1 than the TAP score at 5%. This suggests that the two algorithms, when used at different cutoffs, could potentially identify the same population of patients. In theory, samples in which the tumor stroma does not comprise a large portion of the tumor area, such as mucosal biopsy specimens, have even greater potential for higher concordance between the two scoring methods (TAP and CPS). In fact, a study evaluated associations and potential correlations with clinical efficacy of the PD-L1 SP263 assay scored with the TAP algorithm (referred to as TIC [Tumor and Immune Cell]) at a 5% cutoff and the PD-L1 22C3 assay scored with the CPS algorithm at a cutoff of 1 in gastroesophageal adenocarcinoma. Both the SP263 assay (TAP scoring) and the 22C3 assay (CPS scoring) aided in the identification of patients with gastroesophageal adenocarcinoma likely to benefit from tislelizumab. A potential limitation of TAP scoring is in defining the tumor area in situations where the specimens have complicated histology, with various non-neoplastic cells present in between tumor cells. However, this becomes less problematic as a pathologist reviews more cases and gains more experience. The introduction of another PD-L1 scoring method (TAP) to an already crowded market could be perceived as a limitation. However, as we have demonstrated, this method can help reduce confusion by providing a viable path toward simplifying and standardizing pathology practice without compromising the accuracy of patient selection. The data in this study show that the TAP scoring method is as effective as the CPS method in detecting patients with positive PD-L1 expression, but substantially less time-consuming. In addition to being highly reproducible among different pathologists, it can potentially standardize the existing scoring methods that evaluate both TC and IC.

Additional file 1: Supplementary Fig. 1. Neutrophils with weak cytoplasmic staining.
Castleman disease complicated by rheumatoid arthritis and postoperative chylous leakage: A case report
Castleman disease (CD), also known as angiofollicular lymph node hyperplasia or giant lymph node hyperplasia, is a rare disorder characterized primarily by lymphadenopathy. In 2018, it was included in the first list of rare diseases by the Chinese National Health Commission and 4 other departments. According to reports, CD commonly affects the mediastinum, with relatively few cases involving axillary lymph nodes. Chyle is a milky substance that enters the lymphatic system directly from the intestines, containing high levels of proteins, fats, and white blood cells. Chylous leakage is common after head, neck, or thoracoabdominal surgeries, but it is extremely rare after axillary surgery, with an incidence of < 0.5%. Reports of chylous leakage following axillary lymph node biopsy in CD patients are even rarer. Moreover, the coexistence of CD with rheumatoid arthritis (RA) and postoperative chylous leakage remains rare. This case report aims to summarize clinical and therapeutic experiences from a case of CD complicated by RA and postoperative chylous leakage. A 60-year-old female patient presented on November 6, 2019, following the detection of bilateral axillary lymphadenopathy during a routine health checkup 3 weeks prior. Ultrasound examination showed multiple enlarged lymph nodes in both axillae, some measuring up to 45 mm × 15 mm on the left (Fig. A) and 30 mm × 12 mm on the right (Fig. B). Color Doppler revealed rich blood flow signals in the left axilla (arterial spectrum Vmax: 17 cm/s, resistive index: 0.78) and gate-like flow in the right axilla. The ultrasound images confirmed bilateral axillary lymphadenopathy (Fig. C) and suggested the possibility of malignancy. The patient did not report palpable masses in the axillae and was asymptomatic for fever, night sweats, pain, or swelling. The patient had a 6-year history of RA, well managed with corticosteroids and immunosuppressants, and a 10-year history of hypertension, controlled with nimodipine. Her family history was significant for colon cancer in an elder sister, but no other familial patterns of disease were noted. Physical examination was unremarkable except for slight fullness in the left axilla, without palpable lymphadenopathy. No enlarged lymph nodes were detected in the right axilla or bilateral supraclavicular regions. Other superficial lymph nodes were also not enlarged. Upon admission, laboratory investigations showed hemoglobin at 99.0 g/L and D-dimer at 1370 μg/L. Chest and abdominal computed tomography (CT) scans indicated multiple enlarged left axillary and left supraclavicular lymph nodes. Contrast-enhanced breast magnetic resonance imaging revealed enlarged lymph nodes in the left axilla, possibly inflammatory, including a lymph node measuring approximately 3.6 cm × 1.1 cm with a visible hilum; there were no significantly enlarged lymph nodes in the right axilla, but multiple enlarged lymph nodes suggestive of lymphoma could not be excluded. An excisional biopsy of the left axillary lymph node was performed on November 13, 2019, to establish a definitive diagnosis. Intraoperatively, a 4 cm × 3 cm × 1 cm gray-red, soft, encapsulated lymph node was completely excised (Fig. A). Histopathological examination (Figs. B–D, and A) demonstrated numerous enlarged lymphoid follicles with concentric (onion-skin) layering of lymphocytes and prominent vascular proliferation, consistent with hyaline vascular type CD.
Immunohistochemical staining for CD21, CD3, CD20, CD5, CD43, CyclinD1 (−), CD38, Bcl-2, Kappa (scattered +), Lambda (scattered +), and Ki67 (about 10%) supported the diagnosis (Fig. B–L). The postoperative diagnosis was confirmed as hyaline vascular type Castleman disease (HV-CD). Three days after surgery, following vigorous physical activity, the patient developed significant chylous leakage from the wound, producing up to 300 mL per day, without odor, redness, or fever. Laboratory analysis of the fluid showed a triglyceride level of 5.31 mmol/L, confirming the chylous nature of the leakage. The chylous leakage was managed with wound drainage, dressing changes, and oral antibiotics, resulting in a gradual reduction of wound secretion and complete healing within 2 weeks. The patient was advised to seek further oncological treatment but declined additional radiotherapy or chemotherapy. In December 2019, PET-CT results indicated increased metabolic activity in the left axillary region (standardized uptake value ~5.8), likely postoperative changes, and mildly increased activity in bilateral axillary lymph nodes without evidence of disseminated disease. The patient remained asymptomatic and continued regular follow-ups. An ultrasound in October 2020 revealed stable nodules in both axillae with no significant progression. The patient has been asymptomatic with no significant disease progression to date. CD is extremely rare, with an incidence of about 0.0025%. The pathogenesis of CD remains unclear. In the 1980s, interleukin-6 (IL-6) was identified as a potential key cytokine in CD pathogenesis. Subsequently, human herpesvirus-8, which encodes a viral homolog of IL-6, was detected in CD patients, suggesting that IL-6 and its homologs play crucial roles in the disease. IL-6 abnormalities are also implicated in anemia and RA. Studies show IL-6 correlates positively with hepcidin expression, leading to iron metabolism disturbances and anemia. The present case, like many others, also showed anemia, supporting this association. Additionally, IL-6 is highly expressed in RA patients, indicating a potential link between RA history and CD development. Given the central role of IL-6 in CD pathogenesis, anti-IL-6 therapies have become important treatment options. Tocilizumab and siltuximab, which target the IL-6 pathway, have shown significant efficacy in reducing inflammation and controlling symptoms in CD patients. This aligns with the current case, where anemia and IL-6 abnormalities were observed, further supporting the association between IL-6 and CD. However, despite these findings, the link between IL-6 and CD remains insufficiently evidenced, and further studies are necessary to clarify its role. CD is classified into hyaline vascular (HV-CD), plasma cell (PC-CD), and mixed types based on pathological features. HV-CD, the most common type, is characterized by increased lymphoid follicles and interfollicular vascular proliferation, accounting for about 90% of cases. CD is further classified into unicentric (UCD) and multicentric (MCD) forms based on the extent of lymph node involvement. MCD can be associated with immunocompromised states and human herpesvirus-8 infection. Clinically, UCD often presents as asymptomatic lymphadenopathy, while MCD exhibits systemic symptoms such as fever, night sweats, weight loss, anemia, and hepatosplenomegaly. Imaging findings are nonspecific, making pathologic biopsy the diagnostic gold standard.
Histopathological examination, supported by immunohistochemical markers such as CD21, CD3, and Bcl-2, is crucial in differentiating CD subtypes and guiding treatment strategies. In the diagnostic process, we considered malignant lymphoma and other inflammatory diseases; however, the pathological and immunohistochemical findings confirmed HV-CD. Effective treatments for CD are still under investigation. Surgical excision is considered the best treatment for UCD, often leading to clinical cure. MCD has a poorer prognosis and may require nonsurgical treatments like corticosteroids, cytotoxic drugs, and IL-6-targeted therapies, though results are not always satisfactory. Siltuximab, an anti-IL-6 monoclonal antibody, has shown significant efficacy in improving symptoms and quality of life in MCD patients, though some patients may not respond adequately. In this case, the patient underwent a biopsy for diagnostic confirmation, with PET-CT follow-up indicating residual disease but no further treatment pursued. The patient has remained stable for over 2 years. Given that the patient had unicentric HV-CD and remained stable, we opted against further treatment and chose regular follow-up observation instead. Chylous leakage after axillary surgery is rare, typically occurring within 1 to 4 days postoperatively. It may result from damage to abnormal branches of the thoracic duct draining the axilla. Diagnosis is often clinical, supported by typical chyle characteristics and elevated triglyceride levels in the leakage fluid. Current evidence and clinical experience suggest that low-to-moderate output chylous leakage can often be managed effectively with conservative measures. These include continuous wound drainage, dietary modifications to reduce chyle production (eg, a low-fat diet enriched with medium-chain triglycerides), and infection prophylaxis. Pharmacological therapies, such as somatostatin analogs, may be employed in cases of persistent leakage, while surgical intervention is generally reserved for high-output or refractory cases. In this patient, the chylous leakage was moderate, and no systemic complications, such as malnutrition or immune dysfunction, were observed. These factors supported the decision to adopt conservative management, which led to a successful resolution of the condition. Timely intervention with drainage and dietary adjustments effectively reduced leakage and prevented secondary complications. The management of chylous leakage following axillary lymph node biopsy in CD patients, though rare, requires a multidisciplinary approach. Prompt identification and conservative management, including wound drainage, dietary modifications, and infection prevention, are essential to ensure optimal patient outcomes. The patient's recovery without further complications and the absence of disease progression over 2 years highlight the effectiveness of this approach. However, long-term follow-up is crucial for monitoring potential recurrence or late complications of CD, particularly given the patient's history of RA. Regular imaging and laboratory evaluations, including monitoring IL-6 levels and inflammatory markers, could help detect disease recurrence or progression early. Additionally, close monitoring of RA disease activity and ensuring continued control with immunosuppressive therapy are vital to reduce the risk of systemic complications that may exacerbate CD.
A comprehensive follow-up plan combining clinical, laboratory, and imaging assessments should be tailored to the patient’s overall health status and disease history. The potential link between RA and CD via IL-6 suggests an avenue for further investigation. Understanding this relationship may improve diagnostic accuracy and lead to more targeted therapies. Future studies should focus on the molecular mechanisms underlying CD and its associations with autoimmune diseases to develop more effective management strategies. The complexity and challenges associated with diagnosing and managing CD, particularly when complicated by RA and postoperative chylous leakage, highlight the need for continued research and vigilant clinical management. Data curation: Wei Liu, Zhuoyan Tao, Rong Liang. Formal analysis: Wei Liu, Zhuoyan Tao, Rong Liang, Xinpeng Hu. Funding acquisition: Wei Liu. Writing – original draft: Wei Liu, Xinpeng Hu. Writing – review & editing: Wei Liu, Xinpeng Hu.
4D-Flow MRI and Vector Ultrasound in the In-Vitro Evaluation of Surgical Aortic Heart Valves – a Pilot Study
Degenerative diseases of the aortic valve are one of the leading causes of cardiac morbidity and mortality. From a very crude initial concept, the field of surgical aortic valves has saved or enhanced the quality of life of millions of patients over the last decades. Due to the high demand for functional and lasting replacement options, the aortic valve device market has turned into an ever-evolving field, trying to improve upon the existing bileaflet mechanical heart valves and xenogeneic bioprosthetic valves. In addition to these two established models, different options were developed over the years, ranging from trileaflet mechanical valves to tissue-engineered approaches. One common aim for iterations of established valve systems, and even for radical innovations, is to enhance hemodynamic performance. Many factors, such as pressure gradients, velocities, flow patterns, and thrombogenicity, are inherently responsible for adequate blood flow and lasting functionality of the aortic valve and the overall cardiovascular system. The visualization and quantification of blood flow characteristics distal to the aortic valve have been at the center of cardiovascular research for decades. In-vivo evaluation of patients has been performed using radiological modalities ranging from ultrasound (US) to magnetic resonance imaging (MRI). In basic research, different technologies were developed over time, with particle image velocimetry (PIV) being one of the most applied techniques to visualize fluid characteristics in mock circulation setups. In the past, these mock circulation setups mostly relied on acrylic vessels or silicone cast phantoms. The emergence of additive manufacturing opened new ways of creating accurate anatomical phantoms for integration in mock flow loops. Besides printing accuracy, an increasing range of printable materials allows for individualized design of the model's properties, to more closely match the behavior of the human aorta. While these models offer great accuracy, current materials and printing techniques often result in printed vessel walls that are opaque, leading to limited usability in PIV measurements. This technical limitation makes exploring alternative imaging modalities necessary. Technological advances in the field of radiological imaging offer new capturing techniques, such as 4D-Flow MRI, a type of three-dimensional, time-resolved phase-contrast MRI. This technology allows the visualization of disturbed flow patterns and the quantification of flow parameters, such as velocity, pressure drops, and wall shear stress (WSS). In clinical research, 4D-Flow MRI has been widely used in the analysis of congenital heart defects, ventricular flow, and portal veins. Besides 4D-Flow MRI, the computing power of modern sonographic imaging devices led to the introduction of vector flow Doppler imaging, which allows the visualization of dynamic flow patterns, as well as the calculation of WSS and energy loss. These imaging modalities give clinical radiology a broader toolbox to accurately examine patients. Furthermore, they can be used in translational and basic research for fast, non-invasive measurements. Therefore, the goal of this research project was the initial investigation of 4D-Flow MRI and Vector Ultrasound as novel imaging techniques for the in-vitro analysis of hemodynamics in anatomical models, specifically by looking at the hemodynamic performance of state-of-the-art surgical heart valves in a 3D-printed aortic arch.
Model Creation
The main part of the flow loop setup is represented by a 3D-printed flexible thoracic aorta including the ascending aorta, the aortic arch, and the descending aorta. The model creation workflow followed a previously published work. Briefly, an anonymized contrast-enhanced CT dataset of a patient who had an indication for surgical aortic valve replacement with a 25 mm prosthesis was segmented to extract the ascending aorta, aortic arch, aortic root, and supra-aortic vessels. Different datasets were measured retrospectively to select a patient sized for a 25 mm aortic valve. Exclusion criteria were any of the following in the region of interest: poor image quality (i.e., device-related artefacts), pathologic diameter change, calcifications outside the aortic root, and non-standard configuration of the supra-aortic vessels. After segmentation of the blood volume, the digital model was hollowed by adding a constant wall thickness of 2.5 mm external to the blood volume. All vessel ends were modified to a circular, uniform diameter for easy attachment to standardized connectors (Fig. A). The proximal end of the left ventricular outflow tract was prolonged to allow for adequate sealing, as well as placement of the heart valve prostheses according to the manufacturer's specifications. Afterwards, the digital model was transferred into the slicing software Modeling Studio (Keyence Corp., Osaka, JP), subsequently uploaded onto a 3D printer (Agilista 3200W, Keyence Corp.), and printed using a flexible printing material (AR-G1L, Shore 35A, elongation at break: 160%, Keyence Corp.). After the printing process, the aortic phantom was taken from the build plate and soaked in boiling water to remove the water-soluble support material. Subsequently, the model was placed in a heating cabinet to dry for 24 h at 50 °C.

Heart Valve Prostheses
To perform standardized comparative tests of different heart valve prostheses, a uniform prosthesis size of 25 mm (manufacturer's specification) was selected for all valves tested in this study. Included are five different valves for surgical implantation: two mechanical prosthetic valves (Masters Series 25, Abbott Laboratories, Chicago, USA; On-Xane-25, CryoLife Inc., Kennesaw, USA) and three bioprosthetic heart valves (Epic 25 mm, Abbott Laboratories; Magna Ease 25 mm, Edwards Lifesciences Inc., Irvine, USA; Perimount 25 mm, Edwards Lifesciences Inc.). Individual valve mounts were designed to follow the individual curvature of each valve's suture ring (Fig. B). Subsequently, valves were fixed to the mount using surgical sutures (Prolene 5-0, Ethicon Inc., Raritan, USA) and tested for paravalvular leakages. Each mount has a defined height to allow for supra- or intra-annular placement of the valves, according to the manufacturer's recommendations (Fig. C). The orientation of the mechanical valve leaflets was adjusted to match the manufacturer's recommendations. Bioprosthetic valves were stored in their original containers with storage solution up until testing.

Mock Circulation
To allow for testing of the valves in an MRI setting, an entirely MRI-compatible mock circulation setup was designed and constructed (Fig. ). The setup was divided into two parts, the external drive unit and the internal fluid circulation unit. The external drive unit consisted of a dedicated computer and a linear motor (PS01-48 × 240 HP, NTI AG, Spreitenbach, CH) with a corresponding driver (Series C1100, NTI AG).
The linear motor was connected to a piston, which in turn was connected air-tight via a pneumatic hose to the fluid circulation unit. The connecting point also represents the heart of the mock circulation, a self-developed pump chamber representing the left ventricle. To transfer the pneumatic force created by the piston to the test fluid, a rubber roll membrane with a defined volume of 80 ml was placed between the pneumatic and fluid chambers. The fluid chamber has a total volume of 100 ml, resulting in a theoretical peak ejection fraction of 80%. An ejection fraction above physiological levels was chosen to compensate for the rigid nature of the artificial ventricle. The chamber was connected to the valve mount via a straight rigid tube to allow any flow disturbances to subside before the fluid passes through the valve prostheses. The 3D-printed aortic arch was then fixed to the valve mount, which was placed in a plastic container. The container has five openings: the proximal fluid entrance, the distal descending aorta, and the supra-aortic vessels. After implantation of the valves in the aortic arch, the model was embedded in a hydrogel of 1% agar (Agarose, Sigma-Aldrich Corp., St. Louis, USA) to simulate the surrounding tissue and thereby reduce movement artefacts during MRI acquisition. Distal to the descending aorta and the supra-aortic vessels, a combination of compliance and resistance elements was placed to allow the approximation of the Windkessel effect and peripheral vascular resistance. The compliance elements consist of an airtight cylinder partially filled with water and air, with a pneumatic valve at the top to adjust the height of the water column. The resistance element is realized through a ball valve placed distal to the compliance element. In this way, realistic pressure conditions of 120/80 mmHg and a cardiac output of 4.6 l/min were achieved. Pressure was measured at the left ventricle, the compliance chamber, and the descending aorta prior to the MRI experiments. For all experiments, the heart rate was set at 55 bpm, while systolic and diastolic pressures were adjusted to reach 120/80 mmHg. An ECG trigger signal was created and connected to the MRI according to the manufacturer's specifications. The trigger signal allowed the prospective synchronization of the ventricle movement with the acquisition time window. To simulate the viscous behavior of blood, a blood-mimicking fluid (calculated viscosity 4.6 cP) consisting of 40% glycerin (Rotipuran® ≥ 99.5%, Carl Roth GmbH, Karlsruhe, GER) and 60% distilled water was used.
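To illustrate the principle behind the compliance and resistance elements, a two-element Windkessel model relates aortic pressure P, inflow Q, compliance C, and peripheral resistance R via C dP/dt = Q(t) - P/R. The Python sketch below is illustrative only: the study tuned its physical elements empirically, and the parameter values here are assumptions chosen to land near 120/80 mmHg at the stated 55 bpm and ~4.6 l/min.

```python
import math

# Two-element Windkessel: C * dP/dt = Q(t) - P / R
R = 1.35  # peripheral resistance [mmHg·s/ml] (assumed)
C = 1.8   # compliance [ml/mmHg] (assumed)
HR = 55 / 60.0           # heart rate [beats/s], as in the experiments
T = 1.0 / HR             # cardiac period [s]
SV = 4600.0 / 55.0       # stroke volume [ml] for ~4.6 l/min at 55 bpm

def inflow(t: float) -> float:
    """Half-sine systolic ejection over the first third of the cycle."""
    tc = t % T
    t_sys = T / 3.0
    if tc < t_sys:
        return SV * math.pi / (2.0 * t_sys) * math.sin(math.pi * tc / t_sys)
    return 0.0

# Forward-Euler integration over several beats to reach steady state
dt, P, t = 1e-4, 80.0, 0.0  # start near diastolic pressure [mmHg]
history = []
while t < 8 * T:
    P += dt * (inflow(t) - P / R) / C
    history.append(P)
    t += dt

last_beat = history[-int(T / dt):]
print(f"steady-state pressure ~ {min(last_beat):.0f}/{max(last_beat):.0f} mmHg")
```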
Radiological Imaging
Acquisition of the 4D-Flow MRI imaging was performed on a 1.5 T scanner (MAGNETOM Aera, Siemens Healthineers AG, Erlangen, GER) with an 18-channel body coil (Biomatrix Body 18, Siemens Healthineers AG) placed on top of the agar-filled plastic box. The acquisition protocol consisted of a non-contrast-enhanced MR angiography and the 4D-flow sequence. For 4D-flow, an isotropic dataset with 25 phases and a slice thickness of 1.0 mm (TE 2.300, TR 38.800, FA 7°, matrix size: 298 × 298 px) was acquired. Velocity encoding was set at 150 cm/s for all measurements. Evaluation and visualization of the 4D-Flow MRI results was conducted using a dedicated radiological analysis software (cvi42, CCI Inc., Calgary, CA). Within the software, the blood volume was separated from surrounding motion artefacts. Four measurement planes were placed perpendicular to the vessel's centerline, specifically proximal to the valve as a reference plane, 10 mm distal to the top of the valve, at the center of the ascending curvature, and at the distal end of the aortic arch (Fig. ). At each plane, velocity, tangential WSS, and the pressure drop with respect to the reference plane were measured. Calculation of WSS followed the publication by Stalder et al.; it describes an interpolation of local velocity vectors along the contour of the underlying measurement plane. The effective orifice area (EOA) was calculated using the continuity equation (Eq. 1), with the velocity time integrals in the left ventricular outflow tract (LVOT) and at the aortic valve (AV) derived from the underlying MRI dataset.

$$EOA=\frac{d_{LVOT}^{2}\cdot\frac{\pi}{4}\cdot VTI_{LVOT}}{VTI_{AV}} \quad (1)$$

Equation 1: Continuity equation to determine the EOA; d = diameter; VTI = velocity time integral.

Sonographic imaging was performed using a dedicated sonography device (Resona 9, Mindray Medical Int. Ltd., Shenzhen, CN) and the v-flow protocol, developed for carotid artery imaging. For image acquisition, a linear array transducer (L14-3WU, Mindray Medical Int. Ltd.) was placed onto the agar block in correspondence with the above-mentioned planes, placing the center of the transducer on the according plane. The acquisition window was increased to the biggest possible size (20 × 30 mm), while all other parameters were set to the most precise setting available (acquisition time: 2 s; acquisition quality: 7). Since the acquisition window was developed for application at the carotid bifurcation, measurements had to be split into two parts at the inner and outer curvature of the aorta to cover the entire cross-section, due to the smaller ROI of the acquisition window. Flow velocity, total WSS at five spots along the aortic wall, as well as the oscillatory shear index (OSI) were calculated from the measurements. The OSI was calculated as an expression of the magnitude and change in direction of local WSS, described by the following formula:

$$OSI=\frac{1}{2}\left(1.0-\frac{AWSSV}{AWSS}\right) \quad (2)$$

where AWSSV = magnitude of the time-averaged WSS vector, and AWSS = time-averaged WSS magnitude.
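A minimal sketch of both calculations (hypothetical input values; the function names are ours, not the analysis software's):

```python
import math

def effective_orifice_area(d_lvot_cm: float, vti_lvot_cm: float,
                           vti_av_cm: float) -> float:
    """Continuity-equation EOA (Eq. 1): cross-sectional LVOT area
    (pi/4 * d^2) times the ratio of velocity time integrals, in cm^2."""
    return (d_lvot_cm ** 2) * (math.pi / 4.0) * vti_lvot_cm / vti_av_cm

def oscillatory_shear_index(wss_vectors) -> float:
    """OSI (Eq. 2) from a time series of WSS vectors (2D or 3D tuples).

    AWSSV = |mean vector| (direction changes cancel out),
    AWSS  = mean |vector|; OSI ranges from 0 (unidirectional shear)
    to 0.5 (purely oscillatory shear)."""
    n = len(wss_vectors)
    dims = len(wss_vectors[0])
    mean_vec = [sum(v[i] for v in wss_vectors) / n for i in range(dims)]
    awssv = math.hypot(*mean_vec)
    awss = sum(math.hypot(*v) for v in wss_vectors) / n
    return 0.5 * (1.0 - awssv / awss)

# Hypothetical example: 2.1 cm LVOT diameter, VTI_LVOT 20 cm,
# VTI_AV 35 cm -> EOA ~ 1.98 cm^2
print(f"EOA = {effective_orifice_area(2.1, 20.0, 35.0):.2f} cm^2")
# WSS that flips direction half the time -> OSI = 0.5
print(f"OSI = {oscillatory_shear_index([(1.0, 0.0), (-1.0, 0.0)]):.2f}")
```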
The proximal end of the left ventricular outflow tract was prolonged to allow for adequate sealing, as well as placement of the heart valve prostheses according to manufacturer’s specifications. Afterwards, the digital model was transferred into the slicing software Modeling Studio (Keyence Corp., Osaka, JP), subsequently uploaded onto a 3D-printer (Agilista 3200W, Keyence Corp.) and printed using a flexible, printing material (AR-G1L, Shore 35A, elongation at break: 160%, Keyence Corp.). After the printing process, the aortic phantom was taken from the build plate and soaked in boiling water to remove the water-soluble support material. Subsequently, the model was placed in a heating cabinet to dry for 24 h at 50 °C. To perform standardized comparative tests of different heart valve prostheses, a uniform prosthesis size of 25 mm (manufacturer’s specification) was selected for all valves tested in this study. Included are five different valves for surgical implantation, with two mechanical prosthetic valves (Masters Series 25, Abbott Laboratories, Chicago, USA; On-Xane-25, CryoLife Inc., Kennesaw, USA) and three different bioprosthetic heart valves (Epic 25 mm, Abbott Laboratories; Magna Ease 25 mm, Edwards Lifesciences Inc., Irvine, USA; Perimount 25 mm, Edwards Lifesciences Inc.). Individual valve mounts were designed to follow the individual curvature of the valve’s suture rings (Fig. B). Subsequently, valves were fixed to the mount using surgical sutures (Prolene 5–0, Ethicon Inc., Raritan, USA) and tested for paravalvular leakages. Each mount has a defined height, to allow for supra or intra-annular placement of the valves, according to manufacturer’s recommendations (Fig. C). The orientation of the mechanical valve leaflets was adjusted to match manufacturer’s recommendations. Bioprosthetic valves were stored in their original container with storage solution up until testing. To allow for testing of the valves in an MRI setting, an entire MRI-compatible mock circulation setup was designed and constructed (Fig. ). The setup was divided into two parts, the external drive unit and the internal fluid circulation unit. The external drive unit consisted of a dedicated computer, linear motor (PS01- 48 × 240 HP, NTI AG, Spreitenbach, CH) with corresponding driver (Series C1100, NTI AG). The linear motor was connected to a piston, which in turn is connected air-tight via a pneumatic hose to the fluid circulation unit. The connecting point also represents the heart of the mock circulation with a self-developed pump chamber, representing the left ventricle. To transfer the pneumatic force created by the piston to the test fluid, a rubber roll membrane with a defined volume of 80 ml was placed between the pneumatic and fluid chambers. The fluid chamber has a total volume of 100 ml resulting in a theoretical peak ejection fraction of 80%. An ejection fraction above physiological levels was chosen to adjust for the rigid nature of the artificial ventricle The chamber was connected to the valve mount via a straight rigid tube to allow for any flow disturbances to subside before passing through the valve prostheses. The 3D-printed aortic arch was then fixed to the valve mount which was placed in a plastic container. The container has five openings, for the proximal fluid entrance, the distal descending aorta and the supra-aortic vessels. After implantation of the valves in the aortic arch, the model was embedded in a hydrogel of 1% agar (Agarose, Sigma-Aldrich Corp., St. 
The chamber was connected to the valve mount via a straight rigid tube to allow any flow disturbances to subside before passing through the valve prostheses. The 3D-printed aortic arch was then fixed to the valve mount, which was placed in a plastic container. The container has five openings: for the proximal fluid entrance, the distal descending aorta and the supra-aortic vessels. After implantation of the valves in the aortic arch, the model was embedded in a hydrogel of 1% agar (Agarose, Sigma-Aldrich Corp., St. Louis, USA) to simulate the surrounding tissue and thereby reduce movement artefacts during MRI acquisition. Distal to the descending aorta and the supra-aortic vessels, a combination of compliance and resistance elements was placed to approximate the Windkessel effect and peripheral vascular resistance. The compliance elements consist of an airtight cylinder partially filled with water and air, with a pneumatic valve at the top to adjust the height of the water column. The resistance element is realized through a ball valve placed distal to the compliance element. With these elements, realistic pressure conditions of 120/80 mmHg and a cardiac output of 4.6 l/min were achieved. Pressure was measured at the left ventricle, compliance chamber and descending aorta prior to the MRI experiments. For all experiments, the heart rate was set at 55 bpm, while systolic and diastolic pressure were adjusted to reach 120/80 mmHg. An ECG trigger signal was created and connected to the MRI according to the manufacturer's specifications. The trigger signal allowed the prospective synchronization of the ventricle movement with the acquisition time window. To simulate the viscous behavior of blood, a blood-mimicking fluid (calculated viscosity 4.6 cP) consisting of 40% glycerin (Rotipuran® ≥ 99.5%, Carl Roth GmbH, Karlsruhe, GER) and 60% distilled water was used . Acquisition of the 4D-flow MRI data was performed on a 1.5 T scanner (MAGNETOM Aera, Siemens Healthineers AG, Erlangen, GER) with an 18-channel body coil (Biomatrix Body 18, Siemens Healthineers AG) placed on top of the agar-filled plastic box. The acquisition protocol consisted of a non-contrast-enhanced MR angiography and the 4D-flow sequence. For 4D-flow, an isotropic dataset with 25 phases and a slice thickness of 1.0 mm (TE 2.300, TR 38.800, FA 7°, matrix size: 298 × 298 px) was acquired. Velocity encoding was set at 150 cm/s for all measurements . Evaluation and visualization of the 4D-flow MRI results were conducted using a dedicated radiological analysis software (cvi42, CCI Inc., Calgary, CA) . Within the software, the blood volume was separated from surrounding motion artefacts. Four measurement planes were placed perpendicular to the vessel's centerline: proximal to the valve as a reference plane, 10 mm distal to the top of the valve, at the center of the ascending curvature and at the distal end of the aortic arch (Fig. ). At each plane, velocity, tangential WSS and the pressure drop with respect to the reference plane were measured. Calculation of WSS followed the publication by Stalder et al., which describes an interpolation of local velocity vectors along the contour of the underlying measuring plane. The effective orifice area (EOA) was calculated using the continuity equation (Eq. 1) with the velocity time integrals in the left ventricular outflow tract (LVOT) and across the aortic valve (AV) derived from the underlying MRI dataset.

1 $$EOA=\frac{d_{LVOT}^{2}\cdot \frac{\pi }{4}\cdot VTI_{LVOT}}{VTI_{AV}}$$

Equation 1: Continuity equation to determine the EOA; d = diameter; VTI = velocity time integral.
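As a worked illustration of Eq. 1, the helper below computes the EOA from an LVOT diameter and the two velocity-time integrals; the numerical values are invented for demonstration and are not taken from the study.

```python
import math

def effective_orifice_area(d_lvot_cm: float, vti_lvot_cm: float, vti_av_cm: float) -> float:
    """Continuity-equation EOA (Eq. 1): LVOT cross-sectional area scaled by
    the ratio of the LVOT and transvalvular velocity-time integrals."""
    lvot_area_cm2 = d_lvot_cm ** 2 * math.pi / 4.0
    return lvot_area_cm2 * vti_lvot_cm / vti_av_cm

# Illustrative values only (not measured in this study):
print(f"EOA = {effective_orifice_area(2.1, 18.0, 28.0):.2f} cm^2")  # about 2.23 cm^2
```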
Sonographic imaging was performed using a dedicated sonography device (Resona 9, Mindray Medical Int. Ltd., Shenzhen, CN) and the v-flow protocol, developed for carotid artery imaging. For image acquisition, a linear array transducer (L14-3WU, Mindray Medical Int. Ltd.) was placed onto the agar block at the above-mentioned planes, with the center of the transducer on the corresponding plane. The acquisition window was increased to the largest possible size (20 × 30 mm), while all other parameters were set to the most precise setting available (acquisition time: 2 s; acquisition quality: 7). Since the acquisition window was developed for application at the carotid bifurcation, its region of interest is smaller than the aortic cross-section, and measurements therefore had to be split into two parts at the inner and outer curvature of the aorta to cover the entire cross-section. Flow velocity, total WSS at five spots along the aortic wall, as well as the oscillatory shear index (OSI) were calculated from the measurements. The OSI was calculated as an expression of the magnitude and change in direction of local WSS, described by the following formula:

2 $$OSI=\frac{1}{2}\cdot \left(1.0-\frac{AWSSV}{AWSS}\right)$$

where AWSSV = magnitude of the time-averaged WSS vector, and AWSS = time-averaged WSS magnitude .
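A compact numerical reading of Eq. 2 is given below: the OSI is computed from a time series of WSS vectors sampled over one cardiac cycle. The synthetic input is chosen so the result is easy to verify; it does not represent measured data.

```python
import numpy as np

def oscillatory_shear_index(wss_vectors: np.ndarray) -> float:
    """OSI (Eq. 2) from WSS vectors of shape (n_timesteps, 3): 0 for purely
    unidirectional WSS, approaching 0.5 for fully oscillatory WSS."""
    awssv = np.linalg.norm(wss_vectors.mean(axis=0))   # |time-averaged WSS vector|
    awss = np.linalg.norm(wss_vectors, axis=1).mean()  # time-averaged WSS magnitude
    return 0.5 * (1.0 - awssv / awss)

# Synthetic example: a WSS component that reverses direction over the cycle.
t = np.linspace(0.0, 1.0, 25)
wss = np.stack([np.sin(2 * np.pi * t), np.zeros_like(t), np.zeros_like(t)], axis=1)
print(f"OSI = {oscillatory_shear_index(wss):.2f}")  # 0.50
```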
MRI Image Analysis

Visualization of flow patterns and pathlines was achieved in the aortic arch, the brachiocephalic trunk and the left subclavian artery (Fig. ). Visualization in the left common carotid artery proved difficult due to the smaller diameter of the vessel and was not achieved for all datasets. For the Masters mechanical valve, pathline visualization revealed a central jet during peak systole that closely followed the outer curvature of the ascending aorta. This led to a decentralized flow pattern with lower velocities along the inner curvature. During peak systole, recirculation zones with the formation of sinus vortices at both sides of the proximal aortic root were visible. WSS analysis revealed a high local load on the outer curvature of the ascending aorta during peak systole, closely following the high velocity. Other parts of the aortic arch showed no increase in WSS during the systolic phase. The On-X mechanical valve showed a slightly less centralized jet during peak systole. This led to a more even distribution of flow velocity across the aortic diameter, while still showing a tendency towards higher flow velocities along the outer curvature. This even distribution could also be visualized in the WSS analysis, where a moderate load and distribution across the ascending aorta was observed (Fig. ). The examination of the porcine bioprosthetic valve Epic showed a high-velocity central jet hitting the outer curvature of the ascending aorta and partially reflecting onto the top of the inner curve. The central jet also showed a symmetric distribution with a tendency to tilt towards the outer curvature, resulting in an asymmetric distribution of systolic flow. WSS analysis revealed a high load on the outer curvature, with an additional high stress on the anterior ascending aorta, close to the trunk. The Perimount bioprosthetic valve showed a central jet with high symmetric velocity, reflecting from the outer curvature of the ascending aorta. Visualization of WSS was consistent with the other bioprosthetic valves, with a high WSS occurring on the anterior wall of the ascending aorta. Lastly, the strong central jet could also be observed in the latest generation of bovine bioprosthetic valves, the Magna Ease. Here, the jet also showed a central symmetric velocity distribution distal to the valve, followed by a tendency to adhere to the outer curvature, leading to an asymmetric flow distribution. Due to the sharp angulation of the jet, wall shear stress was increased on the outer curvature close to the aortic root (Fig. ). Similarly, wall shear stress was also increased on the anterior side of the ascending curvature.

MRI Quantitative Analysis

Cross-sectional visualizations of flow velocities and WSS for all valves can be seen in Fig. . Measured velocity values in the three ROI planes are shown in Fig. A. While all biological heart valves show a constant decrease in peak velocity between the planes, both mechanical heart valves cause an increase in peak velocity, reaching the highest value in the ascending aorta (Plane 2). In this study, the On-X mechanical valve reached the highest overall peak velocity of 265.6 cm/s in the ascending aorta, while the Epic bioprosthetic valve exhibited the slowest velocity in the ascending aorta of 140.5 cm/s (Fig. A). Analysis of the tangential WSS showed the highest WSS closer to the aortic bulbus (Fig. , measuring plane 1) with a steady drop towards the descending aorta. The overall highest WSS was observed for the Magna Ease biological valve at the aortic root, reaching 0.37 Pa (Fig. C). Peak pressure gradient measurement of the mechanical valves between the proximal inlet and the aortic root revealed a gradient of 5.86 mmHg for the On-X valve and 8.50 mmHg for the Masters valve. The biological valves reached a peak pressure gradient of 7.67 mmHg for the Epic, 11.24 mmHg for the Perimount and 11.91 mmHg for the Magna Ease (Fig. E). Additionally, the EOA was measured inside the respective valves, with the On-X (2.8 cm 2 ) and Masters (2.1 cm 2 ) having the largest EOA, followed by the biological heart valves Epic (2.0 cm 2 ), Perimount (1.4 cm 2 ) and Magna Ease (1.3 cm 2 ). In particular, the Perimount and Magna Ease valves showed artifacts around the valve mount, which led to some difficulties when setting the plane for velocity assessment in the LVOT.

Sonographic Image Analysis

Vector flow analysis of the surgical valves revealed an overall strong signal during the systolic phase, while the diastolic phase led to many visible artefacts (Video File in Supplement). In the aortic bulbus, the mechanical valves revealed a central jet, showcasing the disturbance created by the two semicircular leaflets (Fig. ). Both the Masters and On-X mechanical valves displayed large recirculation areas and the distinct formation of vortices close to the coronary arteries. In the ascending aorta, as well as the descending aorta, flow patterns exhibited uniform flow with no distinct recirculation areas. The Epic bioprosthetic valve showed a broader central jet during peak systole with a distinct recirculation area above the aortic annular plane. For the Perimount valve, a broader central jet could be observed during peak systole, leading to a smaller low-flow area at the aortic wall. This also markedly reduced the occurrence of turbulence and recirculation. A similar behavior could be observed in the aortic root proximal to the Magna Ease valve, with a large recirculating turbulence next to the central jet (Fig. ). Similarly to the mechanical valves, the flow pattern in the ascending and descending aorta revealed uniform flow with small recirculation areas for all biological valves.

Sonographic Quantitative Analysis

Analysis of peak flow velocity during systole revealed a behavior similar to the MRI analysis, with mechanical valves showing a lower flow velocity in the aortic bulb, an increase in the ascending aorta, followed by a decrease in the descending aorta (Fig. B).
Biological valves created the highest peak flow velocity in the aortic root with a steady decrease along the aortic arch. The highest overall velocity in the sonographic imaging was measured for the On-X valve at 263.6 cm/s. Biological valves displayed slightly lower peak velocities, with the Perimount valve reaching the highest value of 237.9 cm/s directly in the aortic root. WSS measurements along the aortic wall also revealed large differences between mechanical and biological valves. The Masters valve (5.07 Pa) and the On-X valve (12.83 Pa) exhibited much higher total WSS in the aortic root compared to the biological valves (Epic: 2.55 Pa; Perimount: 2.46 Pa; Magna Ease: 1.53 Pa, Fig. D). In the ascending aorta, the WSS dropped for the mechanical valves and increased for the Epic and Perimount valves, with all valves reaching similar wall shear stress values in the descending aorta. The OSI, as a measure of the change in direction and magnitude of WSS, is visualized in Fig. F. Mechanical valves revealed a higher initial OSI in the aortic root, with a drop in the ascending aorta. Biological valves showed a lower rate of change compared to the mechanical valves, with a slight drop of the OSI in the ascending aortic arch.
The introduction of additive manufacturing in the medical field enabled the creation of highly accurate anatomical models based on underlying radiological data. This study focused on the application of 3D-printing technology to create a flexible aortic arch for testing the hemodynamics caused by the implantation of different surgical aortic valves. So far, the hemodynamic evaluation of such valves has been limited to PIV measurements using pulse duplicators. The advancements in computing power seen in the last decades accelerated the use of computational fluid dynamics, as well as 4D-flow MRI, to further investigate hemodynamics in the aorta . 4D-flow MRI has proven to be a vital tool in clinical assessment with broad opportunities for further validation in an in-vitro setting . It allows a holistic examination of the cardiovascular region of interest, opening new possibilities in the diagnosis and prevention of, for example, aortic aneurysms. The larger region of interest is especially beneficial when comparing the technology to PIV, where the camera only allows a limited field of view. The measurement of WSS in the entire aortic arch is a clear benefit of 4D-flow MRI, with numerous applications in both basic research and clinical routine. The WSS values measured in our 3D model compare well with the WSS values measured in patients by Bürk et al., who examined WSS in healthy and dilated aortas . While additional comparative studies are required, this shows a good initial approximation of the WSS values created by the flow loop to patient-based data. The remaining slight discrepancy in WSS between our model and the values measured in patients can be explained by the mechanical properties of the 3D-printed aortic arch. Current 3D-printed flexible models lack the possibility to add fiber orientation and therefore cannot exactly mimic the native aorta's non-linear elastic behavior.
Another explanation for this mismatch could be the material-geometry coupling of aortic replicas described by Comunale et al., which confirms that not only material properties but also geometry have an impact on the hemodynamic parameters . Besides the quantification of WSS, the localization of areas of higher WSS is important to predict the risk of aortic aneurysm formation . An increase in both WSS and OSI has been associated with an upregulation of inflammatory markers . In particular, the localization of increased WSS on the anterior wall of the ascending aortic arch for all biological valves is a key finding of our study. This has been previously described by Farag et al. for patients undergoing transcatheter aortic valve replacement with a Sapien 3 transcatheter valve, where a large percentage of patients displayed an increase in WSS on the anterior wall compared to a control group . The increased WSS observed on the outer curvature of the ascending aortic arch is in accordance with previously described findings from in-vitro PIV studies . Another parameter measured via 4D-flow MRI was the pressure drop across the artificial heart valves. The pressure gradient helps in the evaluation of the overall performance of native and artificial valves and is a standard parameter in the sonographic assessment of patients. For the mechanical valves, Hatoum et al. measured pressure gradients for both the On-X and the SJM Masters valve, reaching 4.15 and 4.75 mmHg in their in-vitro setting, respectively . Lee et al. analyzed the performance of Magna Ease bioprosthetic valves in patients who underwent surgical aortic valve replacement, where the mean pressure gradient for the 25 mm valve reached 12.2 mmHg . Compared to these studies, the pressure gradients for the mechanical valves were somewhat lower in our study. While the pressure drop is a valuable metric for determining the performance of an artificial heart valve, comparison between in-vitro and in-vivo studies can prove challenging. A multitude of variables can have an impact on the measured pressure drop, ranging from the exact position of the measurement, the aortic diameter, the measurement technique and the prosthesis size to, in the case of in-vitro studies, fluid viscosity. Pressure drops are therefore most comparable within the same experimental setup, and a comparison to the aforementioned studies can only be seen as informative. The EOA of surgical valves is another factor influencing the transvalvular pressure and flow velocity. Different surgical valves of the same labeled size (e.g., 25 mm) can have highly varying EOA. Pibarot et al. determined the EOA of different surgical valve models and sizes for comparison, with the 25 mm Edwards Perimount having an EOA of 1.8 ± 0.4 cm 2 while the 25 mm On-X has an EOA of 2.4 ± 0.8 cm 2 . This corresponds to a difference in EOA of 33%, highlighting the importance of individualized prosthesis selection for every patient. The effect of the increased EOA can also be observed in our study, since the mechanical valves show a lower transvalvular pressure gradient compared to the biological valves. Vector sonography is a rather young technique with great potential to improve the treatment of cardiovascular patients. Its current clinical use case of quantifying the WSS in carotid arteries is a first step towards improving one of the most commonly used radiological modalities .
Its application in a benchtop setting offers great opportunities to analyze anatomical structures that are not easily accessible in a clinical setting. Compared to 4D-flow MRI, vector ultrasound allows a much closer analysis of small cardiovascular structures and flow phenomena, such as vortices at the aortic valve. In this study, motion artefacts were present, especially during the diastolic phase, which might be caused by the reflective nature of the 3D-printed material. During the systolic phase, no artifacts were visible, allowing a precise analysis of the flow conditions in the ROI. The observed vortices for both the mechanical and bioprosthetic valves closely match the previously described hemodynamics caused by the different designs . The mechanical valves show three distinct forward jets with small recirculation zones distal to the valvular plane, while the bioprosthetic valves display one larger central forward jet with counter-rotating recirculation areas surrounding it. The design improvements from the Perimount to the Magna Ease valve could be partially confirmed in the quantitative analysis. The Magna Ease, which is designed with a smaller sewing ring and therefore a larger EOA, has lower WSS in the aortic root, whereas the Perimount valve shows a slightly higher velocity at the aortic arch. The biggest difference between the parameters derived from MRI and ultrasound is the WSS, especially in the aortic root. Vector ultrasound presents consistently higher WSS values, reaching a tenfold higher value for the On-X mechanical valve. These WSS values are much closer to values derived from CFD analyses of the aortic arch . The difference could also be explained by the different measurement techniques employed by MRI and vector ultrasound: MRI uses an interpolation of velocity vectors along a circumferential contour, while vector ultrasound uses discrete points along the longitudinal axis of the aortic wall. While this study presents in-vitro results that are comparable to clinical data, there are still a few limitations to the setting. Firstly, the flexible material used for the anatomical aortic arch does not offer the same mechanical properties as a native human aorta. The fixed wall thickness and linear elastic behavior of the material are clear limitations. Furthermore, as discussed above, the geometry of the arch has an additional impact on the hemodynamic parameters. To minimize the effects of these three aspects, we decided to use the same arch design for all valves to properly compare them; nevertheless, this has to be taken into account when evaluating the collected data. Additionally, the presented model lacks coronary perfusion. Due to the mechanical properties of the printing material, the inclusion of coronary vessels would have led to an unnatural enlargement of the aortic root, as previously described by other research groups. Secondly, although vector ultrasound presents a promising technique to analyze hemodynamic effects in the cardiovascular system, the technique is still rather new and requires further improvements to become a staple in the clinical field. In particular, the limited depth of the ROI window represents a limitation when analyzing the aorta, since there is no possibility to examine the entire cross-section at once. Finally, this pilot study lacks a comparison to measurements in a real-life patient, which is a clear limitation.
Combining novel radiological imaging modalities with 3D-printed anatomical models offers great possibilities to further improve the in-vitro analysis of the hemodynamic effects of medical implants. This will be a valuable addition to a more patient-oriented medicine that can prevent patient-prosthesis mismatch and reduce the overall complication rate through the use of patient-specific anatomies in the mock circulatory loop. This work is a first pilot study and will lead to further research projects focusing on the analysis of other cardiovascular implants, as well as the impact of specific anatomical configurations on hemodynamics.
The electronic supplementary material is listed below.
Supplementary file 1: 4D-MRI visualization of flow pathlines in a 3D-printed aortic arch with a Magna Ease valve implanted (MP4 1459 KB)
Supplementary file 2: Vector ultrasound visualization of the aortic bulbus directly distal to a Magna Ease biological heart valve (MP4 12599 KB)
Supplementary file 3 (MP4 863 KB)
Supplementary file 4 (MP4 12619 KB)
Supplementary file 5 (JPG 904 KB)
Supplementary file 6 (PNG 90 KB)
Differences in bacterial community structure and metabolites between the root zone soil of the new high-fragrance tea variety Jinlong No. 4 and its grandparent Huangdan
Tea plant [ Camellia sinensis (L.) O. Kuntze] is a globally cultivated cash crop . The environmental conditions of the soil in which it grows have a critical impact on tea quality and yield . In recent years, with the advancement of agricultural science and technology and the rising consumer demand for tea quality, breeding and promoting new high-aroma varieties has become an important development direction for the tea industry . Huangdan (HD) is a parent for breeding many new high-aroma tea varieties and is also a commonly used control variety in regional trials of oolong tea varieties . Jinlong No. 4 (JL4) is a new high-aroma variety selected from the offspring of Jinguanyin, a descendant of HD. The oolong tea produced from it has a rich floral aroma, a mellow taste, and excellent quality. As a new high-aroma tea variety, JL4 has drawn extensive attention due to its unique aroma components and outstanding quality. However, compared with traditional varieties, the physical and chemical properties of the root zone soil of new varieties may differ, potentially affecting the growth and development of tea plants. The root zone soil, which includes the rhizosphere soil and the adjacent soil regions affected by plant roots to a certain extent, is a crucial domain in plant-soil interactions. The rhizosphere soil refers to the soil area around plant roots that is influenced by the roots . In this area, plant roots not only absorb the necessary water, nutrients, and other substances from the soil, but also release some of the metabolites produced by the plant itself into the soil, thereby modifying the soil microenvironment and contributing to the formation of a distinctive rhizosphere microbial community structure. Rhizosphere microorganisms are microorganisms that directly affect the growth and development of plant roots within the soil environment. Tea plant rhizosphere soil bacteria, acting as a pivotal force in nutrient cycling within the microenvironment, play a crucial role in enhancing soil fertility and facilitating the normal growth of tea plants. Their multifaceted functions encompass secreting auxins, engaging in biological nitrogen fixation, accelerating the decomposition of soil organic matter, and mineralizing nutrients, all of which contribute to the overall health and vitality of the tea ecosystem. Tea plant rhizosphere soil metabolites encompass a diverse array of organic compounds secreted by both microorganisms and plant roots. These metabolites not only provide the necessary energy for rhizosphere microorganisms, but also directly affect the quantity and population structure of rhizosphere microorganisms. Research has demonstrated that distinct tea varieties can selectively enrich or exclude specific microbial populations due to variations in their root exudates. This, in turn, can significantly impact the physical and chemical properties as well as the metabolic processes within the rhizosphere soil. Consequently, investigating the microbial and metabolite underpinnings that drive differences in root zone soil properties between the novel high-aroma tea variety JL4 and its ancestral variety HD holds significant importance. Previous studies on the rhizosphere soil of tea plants have mainly focused on the relationship between tea quality and the soil environment, the structure and function of microbial communities, the development of microbial resources, and soil improvement and fertilization strategies .
However, there has been no comprehensive study on the root zone soil of specific new high-aroma tea varieties, and research on the physical and chemical properties and the synergistic mechanism between microorganisms and metabolites still needs to be strengthened. In this study, high-throughput sequencing technology and derivatization-based GC-MS metabolomics were employed to comprehensively detect the metabolites in the root zone soil of two tea varieties and identify the key metabolites related to changes in soil physical and chemical properties. Based on the microbial community structure and metabolite composition data, statistical analysis methods were used to examine the correlation between microbial communities and metabolites and to reveal their synergistic effects on soil physical and chemical properties. The objective of this study was to investigate the microbial and metabolite underpinnings responsible for the disparities in the physicochemical properties of the root zone soil between the novel high-aroma tea variety JL4 and HD. Through a comparative analysis of the root zone soil characteristics of these two tea varieties, we sought to uncover the key factors influencing soil quality and tea growth. This research offers a fresh perspective and theoretical foundation for elucidating the distinctions in root zone soil environments between high-aroma tea varieties and conventional varieties. Furthermore, it provides scientific guidance for tailoring soil conditions to the specific needs of different tea varieties, thereby enhancing tea quality and yield, and fostering high-quality tea cultivation and soil health management. Additionally, this study aids in reducing the reliance on chemical fertilizers and pesticides, advancing sustainable tea production, and safeguarding the environment. Moreover, it contributes to a more profound understanding of the intricate tea root zone soil ecosystem, promotes deeper insights into the mechanisms of soil-plant-microbe interactions, enriches the theoretical knowledge of soil ecology and plant nutrition, and furthers the sustainable development of agriculture and ecological environmental protection.

Experimental design, soil sampling and preparation

The experimental site is located within the Tea Research Institute of Fujian Academy of Agricultural Sciences (Shekou Town, Fu'an City, Fujian Province, China; 119°57'E, 27°22'N). It is a research facility owned by this institution, and the researchers of this institution do not need additional site-access permits. Characterized by a subtropical marine monsoon climate, the site has an altitude of 91 meters, an average annual temperature ranging from 13.4 °C to 20.2 °C and an average annual precipitation ranging from 1250 mm to 2350 mm. The frost-free period lasts for 235 to 300 days. A total of six tea varieties, including the experimental variety JL4, the control variety HD, Shuixian and three other descendants of Jinguanyin, were planted in a randomized block arrangement with three plots in April 2016. Protective rows were established around these plants. The adopted planting method was single-row, double-plant strip planting, maintaining a plant spacing of 33 cm and a row spacing of 150 cm. All tea trees were cultivated and managed in accordance with conventional tea tree management practices. In April 2023, when the tea trees were 7 years old, five tea trees were randomly sampled from each treatment group.
Soil from the root zone, at a depth of 0–20 cm, was collected within a 10 cm diameter circle around each tree after meticulously removing the topsoil. The five collected samples were then mixed into a single composite sample, and impurities were removed. Each sample collected from a separate plot for each variety was considered one replicate, resulting in a total of three replicates per variety. Following this, the composite soil sample was divided into three equal parts. The first part underwent air-drying and sieving, preparing it for the determination of soil physicochemical properties. The second part was stored in a −80 °C ultra-low-temperature refrigerator, reserved for high-throughput MiSeq sequencing of the soil microbial community. The third part was also stored in a −80 °C refrigerator, intended for GC-MS derivatization and subsequent metabolite detection.

Analysis of physical and chemical properties

The soil pH was measured using an E20-FiveEasy pH meter (Mettler Toledo, Germany), while the soil electrical conductivity (EC) was determined with an electric conductometer. For these measurements, a soil-water suspension was prepared (2.5:1 mixture of deionized water and fresh soil for pH, and 5:1 for EC) and shaken for 30 min. Nitrate (NO 3 − -N) and ammonium (NH 4 + -N) were extracted by adding 5 g of fresh soil to 50 ml of a 2 M KCl solution. After shaking for 1 h and standing for another hour, the supernatant was filtered through glass fiber filters (Fisher G4, 1.2 μm pore size). The concentrations of NO 3 − -N and NH 4 + -N were then determined using a continuous-flow analytical system (San++ system; Skalar, Holland). Available phosphorus (AP) was extracted with a 0.5 M NaHCO 3 solution and measured by the Mo-Sb colorimetric method. Available potassium (AK) was extracted with 1 M ammonium acetate (NH 4 OAc) and measured by flame spectrophotometry. Soil organic matter (SOM) was measured using the K 2 Cr 2 O 7 -H 2 SO 4 oxidation method. The soil particle size was determined using a laser particle size analyzer (Bettersize 3000, Baite Dandong Instrument Co., Ltd., China) with a measurement range of 0.02–2000 μm. The particle size standard followed the international sediment particle size classification: sand particles >20 μm, silt particles 2–20 μm, and clay particles <2 μm.

Soil metabolite analysis

Fresh soil was collected, weighed accurately, promptly frozen in liquid nitrogen, and stored at −80 °C until use. Samples were freeze-dried and subsequently ground into powder at room temperature. A 0.5 g aliquot of the sample was weighed, 1 mL of methanol:isopropanol:water (3:3:2, v/v/v) extractant was added, and the mixture was vortexed for 3 min and sonicated for 20 min. The extracts were centrifuged at 12000 r/min at 4 °C for 3 min. The supernatant was carefully transferred into a sample vial, 0.020 mL of internal standard (10 μg/mL) was added, and the solution was evaporated under a nitrogen flow. The evaporated samples were transferred to a lyophilizer for freeze-drying, and the residue was used for derivatization. The derivatization method was as follows: the sample was mixed with 0.1 mL of a solution of methoxyamine hydrochloride in pyridine (0.015 g/mL), and the mixture was incubated at 37 °C for 2 h. Then 0.1 mL of BSTFA (with 1% TMCS) was added to the mixture, which was kept at 37 °C for 30 min after vortex-mixing.
A 0.2 mL aliquot of the derivatization solution was pipetted, diluted to 1 mL with n-hexane, filtered through a 0.22 μm organic-phase syringe filter, stored in a refrigerator at −20 °C, and analyzed within 24 hours. An Agilent 8890 gas chromatograph (Santa Clara, CA) coupled to a 5977B mass spectrometer with a DB-5MS column (30 m length × 0.25 mm i.d. × 0.25 μm film thickness, J&W Scientific, USA) was utilized for GC-MS analysis of the extract. Helium was employed as the carrier gas at a flow rate of 1.2 mL/min. Injections were made in the front inlet mode with a split ratio of 5:1, and the injection volume was 1 μL. The oven temperature was maintained at 40 °C for 1 min, then raised to 100 °C at 20 °C/min, raised to 300 °C at 15 °C/min, and held at 300 °C for 5 min. All samples were analyzed in scan mode. The ion source and transfer line temperatures were 230 °C and 280 °C, respectively. Data pretreatments including peak filtering, alignment, identification and normalization were conducted with Agilent MassHunter software. The standard database together with PubChem ( https://pubchem.ncbi.nlm.nih.gov/compound/ ), Chem960 ( https://www.chem960.com/cas/ ) and ClassyFire ( http://classyfire.wishartlab.com/#structure-query ) was utilized for structure identification. Variable Importance in Projection (VIP) and Fold Change (FC) values were calculated, and metabolites with VIP > 1.0 and FC > 1.2 or FC < 0.8 were selected as differential metabolites (a minimal sketch of this screen follows this subsection) . Orthogonal Partial Least Squares-Discriminant Analysis (OPLS-DA) was conducted with the MetaboAnalystR 1.0.1 package in R v.3.5.1 . Volcano plots, scatter plots and correlation chord plots were produced with the Pandas 0.23.4 package in Python 3.6.6 , the ggplot2 3.3.0 package in R v.3.5.1 , the ggplot2 3.4.0 package in R v.4.2.2 , and the stats 3.5.1 package in R v.3.5.1 , respectively. Identified metabolites were annotated using the KEGG Compound database ( http://www.kegg.jp/kegg/compound/ ), and annotated metabolites were then mapped to the KEGG Pathway database ( http://www.kegg.jp/kegg/pathway.html ).

Soil high-throughput sequencing

The total DNA in the soil was extracted using the CTAB method , and the purity and concentration of the extracted DNA were checked using 1% agarose gel electrophoresis. PCR primers 341F (5´-CCTAYGGGRBGCASCAG-3´) and 806R (5´-GACTACNNGGGTATCTAAT-3´) were used to amplify the V3-V4 region of the bacterial 16S rRNA gene. The PCR product was purified by 2% agarose gel electrophoresis and recovered using a gel recovery kit provided by Qiagen. The TruSeq® DNA PCR-Free Sample Preparation Kit was used for library construction, and the constructed library was subjected to quantitative quality inspection using a Qubit/Agilent Bioanalyzer 2100 system/Q-PCR, followed by sequencing on a NovaSeq 6000. Quality control, splicing, and chimera filtering were performed on the data obtained from the Illumina NovaSeq sequencing to obtain effective tags. The RDP classifier Bayesian algorithm (97% similarity level) was used for OTU (Operational Taxonomic Unit) clustering. Subsequently, species annotation was performed on the representative sequences of each OTU, and the phyloseq v.1.40.0 and vegan v.2.6.2 packages in R v.4.2.0 were used to calculate the Chao1, Shannon, Simpson, ACE, Good's coverage and PD_whole_tree indices. Venn diagram analysis was conducted using the stats 3.5.1 package in R v.4.2.0. Principal Component Analysis (PCA) was conducted using the stats 3.5.1 package in R v.3.5.1.
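As referenced above, the differential-metabolite screen combines an OPLS-DA VIP threshold with a fold-change window. The pandas sketch below applies that rule to a small hypothetical table; names and values are invented, and the rule is interpreted as VIP > 1.0 combined with FC > 1.2 or FC < 0.8.

```python
import pandas as pd

# Hypothetical metabolite table: one row per metabolite with its OPLS-DA VIP
# and its JL4/HD fold change (values are illustrative, not from the study).
metabolites = pd.DataFrame({
    "name": ["catechin", "D-mannitol 2", "stigmasterol 2"],
    "vip": [1.4, 1.1, 0.6],
    "fold_change": [1.8, 0.5, 0.9],
})

# Selection rule: VIP > 1.0 and (FC > 1.2 or FC < 0.8).
is_differential = (metabolites["vip"] > 1.0) & (
    (metabolites["fold_change"] > 1.2) | (metabolites["fold_change"] < 0.8)
)
print(metabolites[is_differential])
```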
LEfSe analysis (LEfSe v.1.1.2) was used to screen for differentially abundant bacteria in the root zone soil of the different varieties, with LDA > 3.5. BugBase analysis (BugBase v.0.1.0) was conducted for phenotypic prediction of the bacterial communities . FastSpar correlation analysis (FastSpar v1.0.0) was conducted to calculate the correlations between the top 100 bacterial genera, and correlations with | r | > 0.8 and abundance ≥ 0.005% were selected. Correlation analysis between soil metabolites and bacteria was conducted using the WGCNA 1.69 and corrplot 0.92 packages in R v.3.5.1 and R v.4.1.2. Redundancy analysis (RDA) and Pearson correlation analysis between soil physicochemical properties and bacterial communities were conducted using the vegan 2.5.6 package in R v.3.5.1 and the ComplexHeatmap package in R.
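The genus-level network screen described above keeps only strong associations among sufficiently abundant taxa. The sketch below mimics that thresholding (|r| > 0.8, mean abundance ≥ 0.005%) with plain Pearson correlations on a random compositional table; FastSpar itself estimates SparCC-type correlations that account for compositionality, so this is a simplified stand-in.

```python
import numpy as np
import pandas as pd

# Hypothetical relative-abundance table: rows = samples, columns = genera.
rng = np.random.default_rng(0)
abundance = pd.DataFrame(rng.dirichlet(np.ones(6), size=6),
                         columns=[f"genus_{i}" for i in range(6)])

# Keep genera above the abundance cutoff, then threshold pairwise correlations.
kept = abundance.columns[abundance.mean() >= 0.005 / 100]
corr = abundance[kept].corr()  # plain Pearson here; FastSpar would use SparCC
pairs = [(a, b, round(corr.loc[a, b], 2))
         for i, a in enumerate(corr.columns)
         for b in corr.columns[i + 1:]
         if abs(corr.loc[a, b]) > 0.8]
print(pairs)
```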
Physical and chemical properties of root zone soil

The physical and chemical properties of the root zone soils from JL4 and HD were analyzed, and the results are presented in . In comparison to HD, the JL4 root zone soil exhibited increases in clay (<2 μm) (7.00 ± 0.59%), silt (2–20 μm) (38.63 ± 0.81%), and NO 3 − -N (3.38 ± 1.32 mg · kg −1 ). Conversely, the concentrations of AP (28.91 ± 9.78 mg · kg −1 ), AK (57.67 ± 4.04 mg · kg −1 ), SOM (1.35 ± 0.03 g · kg −1 ) and NH 4 + -N (9.12 ± 1.06 mg · kg −1 ), as well as pH (4.44 ± 0.16) and EC (53.20 ± 12.5 μS/cm), were all lower in JL4 than in HD. Notably, the differences in the contents of AP and AK were statistically significant ( P < 0.05).
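The study does not state which test produced the reported significance, so, purely as an illustration, the sketch below runs a Welch t-test (no equal-variance assumption) on hypothetical replicate-level AP values with the same alpha of 0.05; the real per-plot measurements are not given in the text.

```python
import numpy as np
from scipy import stats

# Hypothetical AP values (mg/kg) for the three plot replicates per variety.
ap_jl4 = np.array([25.0, 28.0, 33.7])
ap_hd = np.array([55.2, 61.0, 58.4])

t_stat, p_value = stats.ttest_ind(ap_jl4, ap_hd, equal_var=False)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant: {p_value < 0.05}")
```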
Metabolites in the root zone soil of different tea varieties

Differential metabolites

A total of 296 metabolites, including amino acids, hydrocarbons, carbohydrates, ketones, lipids, alcohols, etc., were detected and identified among all soil samples . Of these compounds, the number of lipids was the greatest, accounting for 21.32% of all metabolites, followed by carbohydrates (14.21%), alcohols (10.66%), and acids (9.64%). VIP and FC values were calculated, and metabolites with VIP > 1.0 and FC > 1.2 or FC < 0.8 were selected. A total of 20 metabolites were significantly decreased, and 7 were significantly increased. These differential metabolites included alcohols, aldehydes, acids, sugars, hydrocarbons, ketones, heterocyclic compounds, lipids, esters, etc. Among them, sugars (8) were the most abundant, followed by alcohols (4) and heterocyclic compounds.

Correlation analysis of differential metabolites in root zone soil of different tea varieties

To further understand the mutual regulatory relationships among metabolites of the different varieties, correlation analysis (Pearson) was performed on the identified differential metabolites, and correlations with P < 0.05 and | r | > 0.8 are shown in . Among them, 2,3-dihydroxypropyl icosanoate, D-mannitol 2, glycerol 1-palmitate, and octadecanoic acid, 2,3-dihydroxypropyl ester were involved in the most correlations; moreover, they were positively correlated with each other. Hexadecanoic acid, 2-hydroxy-1-(hydroxymethyl)ethyl ester and 2,3-dihydroxypropyl 12-methyldecanoate were next, each being positively correlated with five other differential metabolites. Interestingly, epicatechin and catechin were positively correlated with the highest correlation coefficient. In addition, D-talose 1, arabinofuranose, D-allose 2, and stigmasterol 2 were positively correlated. However, (3β)-3-(acetyloxy)-cholest-5-en-24-one was negatively correlated with erythritol 1, and (2S,3R,4S,5S,6R)-2-(2,3-dihydroxypropargy)-6-(hydroxymethyl)tetrahydro-2H-pyran-3,4,5-triol was negatively correlated with N,N-dimethyl-carbamic acid. The carbohydrates D-talose 1, arabinofuranose and D-allose 2, and the alcohol stigmasterol 2 were all decreased, and they were positively correlated with each other.

KEGG functional annotation and enrichment analysis of differential metabolites

Annotation of the KEGG metabolites with significant differences between the tea varieties was conducted, and KEGG pathway enrichment analysis was performed. The results showed that the metabolic pathways of flavonoid biosynthesis, carbon fixation in photosynthetic organisms, and steroid biosynthesis were significantly enriched . The differential abundance score (DA Score) analysis showed that the expression trend of flavonoid biosynthesis metabolites was increased (DA Score = 1, P = 0.04).
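The DA Score reported above is commonly defined as the number of increased minus the number of decreased differential metabolites in a pathway, divided by the number of annotated differential metabolites in that pathway; this definition is assumed here, and the example counts are hypothetical.

```python
def da_score(n_up: int, n_down: int, n_annotated: int) -> float:
    """Differential abundance score: (up - down) / annotated; +1 means every
    annotated differential metabolite in the pathway is increased."""
    return (n_up - n_down) / n_annotated

# Hypothetical counts reproducing a DA Score of 1 for flavonoid biosynthesis.
print(da_score(n_up=2, n_down=0, n_annotated=2))  # 1.0
```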
Bacterial community structure and diversity in root zone soil of different tea varieties

The Venn plot of the root zone soil of the different tea varieties showed that 2919 OTUs were shared by HD and JL4. JL4 had fewer unique OTUs than HD: JL4 had 1141 unique OTUs and HD had 1603 unique OTUs. Representative OTU sequences were screened at a 97% similarity level for taxonomic analysis using the RDP classifier with the Bayesian algorithm. The results showed that, at the phylum level, the dominant bacterial groups were Cyanobacteria , Bacteroidota , Gemmatimonadetes , Myxococcota , Crenarchaeota , Actinobacteriota , Actinobacteria , Chloroflexi , Acidobacteriota , and Proteobacteria . Among them, Proteobacteria , Actinobacteriota , Actinobacteria , Bacteroidota , and Cyanobacteria had relatively high abundances in the HD root zone soil, while Crenarchaeota , Chloroflexi , and Acidobacteriota had relatively high abundances in the JL4 root zone soil. The dominant microbial communities at the class level were Bacteroidia , unidentified Gemmatimonadetes , Acidimicrobiia , Thermoleophilia , Nitrososphaeria , unidentified Actinobacteria , Gammaproteobacteria , Alphaproteobacteria , Ktedonobacteria , and Acidobacteriae . Among them, Acidobacteriae , Ktedonobacteria , and Nitrososphaeria had relatively high abundances in the JL4 root zone soil, while unidentified Actinobacteria , Gammaproteobacteria , Alphaproteobacteria , Thermoleophilia , and Bacteroidia had relatively high abundances in the HD root zone soil. The soil bacterial community richness and diversity results are presented in . Simpson's index of diversity (1 − D) of root zone soil bacteria was 0.99 for both varieties. The OTU, Shannon, Chao1, ACE, and PD_whole_tree indices of HD were higher than those of JL4. Among them, a significant difference was found in the Shannon index between the two varieties ( P < 0.05) (a minimal sketch of these indices follows this subsection). To explore the differences in the structure of the root zone soil bacterial communities between the tea varieties, PCA was conducted on the soil bacterial communities. The results showed that the contribution rates of PC1 and PC2 were 26.91% and 22.45%, respectively, accounting for a total of 49.36%. Replicates of each tea variety clustered together, while the two varieties could be well distinguished: JL4 was distributed in the upper left relative to HD, indicating a certain difference in the root zone soil bacterial community between JL4 and HD. LEfSe analysis was used to screen for differentially abundant bacteria in the root zone soil of the two varieties ( P < 0.05). The results (shown in ) indicated that the biomarkers in HD included the order Rhizobiales , family Burkholderiaceae , species Bacillus sp. NBRC_101253 , family Bacillaceae , genus Puia , genus Phenylobacterium , family Caulobacteraceae , family Rhizobiaceae , order Caulobacterales , phylum Firmicutes , class Bacilli , and order Bacillales . The biomarkers in JL4 were the species metagenome , species Rhodospirillales bacterium URHD0088 , genus Halomonas , species Candidatus Adlerbacteria bacterium GW2011 GWC1 50 9 , and family Halomonadaceae . These biomarkers played an important role in differentiating the community structure compositions of the root zone soils of the two tea varieties. Bacterial community phenotype prediction was performed using BugBase analysis. The abundance ratios and differences of bacteria with different phenotypes in the different samples are shown in . The relative abundances of Gram-negative bacteria and aerobic bacteria in JL4 were higher than those in HD, while the relative abundances of other phenotypes, such as anaerobic bacteria and bacteria containing mobile elements, were significantly lower in JL4 than in HD.
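For reference, the two most cited indices above can be computed directly from an OTU count vector: Shannon's H' (natural log, as in vegan's default) and Simpson's index of diversity expressed as 1 − D. The counts below are invented.

```python
import numpy as np

def shannon_simpson(otu_counts: np.ndarray) -> tuple[float, float]:
    """Shannon index H' and Simpson's index of diversity (1 - D) for one sample."""
    p = otu_counts[otu_counts > 0] / otu_counts.sum()
    shannon = float(-(p * np.log(p)).sum())
    simpson_1_minus_d = float(1.0 - (p ** 2).sum())
    return shannon, simpson_1_minus_d

# Toy community of five OTUs (hypothetical counts).
h, s = shannon_simpson(np.array([120, 80, 40, 10, 5]))
print(f"Shannon = {h:.2f}, Simpson (1 - D) = {s:.2f}")
```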
To explore the differences in the structure of the root zone soil bacterial communities between the tea varieties, PCA was conducted on the soil bacterial communities. The contribution rates of PC1 and PC2 were 26.91% and 22.45%, respectively, together accounting for 49.36% of the variation. Replicates of each variety clustered together, while the two varieties were well separated. JL4 was distributed in the upper left corner relative to HD, indicating a certain difference in the root zone soil bacterial community between JL4 and HD. LEfSe analysis was used to screen for differentially abundant bacteria in the root zone soil of the different varieties (P < 0.05). The results (shown in ) indicated that the biomarkers in HD included Order Rhizobiales, Family Burkholderiaceae, Species Bacillus sp_NBRC_101253, Family Bacillaceae, Genus Puia, Genus Phenylobacterium, Family Caulobacteraceae, Family Rhizobiaceae, Order Caulobacterales, Phylum Firmicutes, Class Bacilli, and Order Bacillales. The biomarkers in JL4 were Species metagenome, Species Rhodospirillales_bacterium_URHD0088, Genus Halomonas, Species Candidatus Adlerbacteria bacterium GW2011 GWC1 50 9, and Family Halomonadaceae. These biomarkers played an important role in differentiating the community structure compositions of the root zone soils of the two tea varieties. Bacterial community phenotype prediction was performed using BugBase. The abundance ratios and differences of bacteria with different phenotypes in the samples are shown in . The relative abundances of Gram-negative and aerobic bacteria were higher in JL4 than in HD, while the relative abundances of other phenotypes, such as anaerobic bacteria and bacteria containing mobile elements, were significantly lower in JL4 than in HD.

Correlation between bacterial community structure and physicochemical properties in root zone soil of different tea varieties

Redundancy analysis (RDA) between soil physicochemical properties and bacterial communities showed that the physicochemical indicators significantly correlated with the bacterial communities (VIF < 10) were AK, SOM, NO₃⁻-N, and pH. RDA1 and RDA2 accounted for 49.69% and 33.45%, respectively; altogether, they accounted for 83.14% of the total variation in the bacterial communities of the tea root zone soil. Pearson correlation analysis with the dominant bacterial phyla showed that SOM was significantly positively correlated with P. Actinobacteria and P. Actinobacteriota, and highly significantly positively correlated with P. Kapabacteria. pH was significantly positively correlated with P. Gracilibacteria, P. Kryptonia, and P. Nitrospirota, and highly significantly positively correlated with P. Gemmatimonadetes and P. Myxococcota. AP was highly significantly positively correlated with P. Kapabacteria. AK was significantly positively correlated with P. Firmicutes and P. Kapabacteria. EC was significantly positively correlated with P. Bacteroidota. NO₃⁻-N was significantly positively correlated with P. Parcubacteria, and NH₄⁺-N with P. Methylomirabilota. The physicochemical properties of the root zone soils of JL4 and HD are presented in . In comparison with HD, the JL4 root zone soil exhibited increases in clay (<2 μm; 7.00 ± 0.59%), silt (2–20 μm; 38.63 ± 0.81%), and NO₃⁻-N (3.38 ± 1.32 mg·kg⁻¹). Conversely, AP (28.91 ± 9.78 mg·kg⁻¹), AK (57.67 ± 4.04 mg·kg⁻¹), SOM (1.35 ± 0.03 g·kg⁻¹), NH₄⁺-N (9.12 ± 1.06 mg·kg⁻¹), pH (4.44 ± 0.16), and EC (53.20 ± 12.5 μS/cm) were all lower in JL4 than in HD. Notably, the differences in AP and AK were statistically significant (P < 0.05).
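Both the metabolite correlation screen (P < 0.05, |r| > 0.8) and the property-phylum correlations above follow the same pattern. A minimal sketch, assuming two pandas DataFrames with samples as rows and hypothetical column names (not the study's files):

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical inputs: rows are samples, aligned by order; columns are variables.
props = pd.read_csv("soil_properties.csv")   # e.g. pH, SOM, AP, AK, EC ...
phyla = pd.read_csv("phylum_abundance.csv")  # relative abundance per phylum

pairs = []
for prop in props.columns:
    for phylum in phyla.columns:
        r, p = pearsonr(props[prop], phyla[phylum])
        if p < 0.05 and abs(r) > 0.8:  # thresholds used in the text
            pairs.append((prop, phylum, round(r, 2), round(p, 3)))

print(pd.DataFrame(pairs, columns=["property", "phylum", "r", "P"]))
```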
As the genetically improved offspring of HD, JL4 retains the genetic blueprint of HD within its genetic makeup, yet it has undergone extensive gene recombination and optimization through the rigorous process of artificial selection and breeding. This inherent genetic distinction may subtly alter the composition and functionality of the microbial community residing in the root zone soil of JL4 and HD tea plants. In the root zone soil of both varieties, Actinobacteria, Chloroflexi, Acidobacteria, and Proteobacteria emerged as the predominant bacterial phyla, a finding that aligns with the established patterns observed in the rhizospheres of other tea cultivars . However, a notable disparity was observed in the Shannon index, with JL4 exhibiting a significantly lower value (8.48 ± 0.27) than HD (9.05 ± 0.14), suggesting a reduction in the alpha diversity of root zone soil bacteria in JL4. To mitigate this and potentially enhance bacterial diversity, tea cultivators can explore strategies such as the judicious application of organic fertilizer and other cultivation and management practices . Biomarkers for HD included F. Burkholderiaceae, F. Rhizobiaceae, and F. Bacillaceae, among others. In contrast, JL4 had biomarkers such as G. Halomonas, S. Candidatus Adlerbacteria bacterium GW2011 GWC1 50 9, and F. Halomonadaceae. Burkholderiaceae form nodules in legume roots, converting atmospheric N₂ to plant-available NH₃.
They can enhance plant growth activity, augment the microbial abundance in the vicinity of plant roots, and effectively mitigate the occurrence of soil-borne diseases, and certain strains possess the capability to hydrolyze phosphorus. Additionally, these bacteria are positively correlated with biomass growth, suggesting their potential as plant growth-promoting rhizobacteria . Microorganisms from the rhizobia families Burkholderiaceae and Rhizobiaceae are critical for legumes because they can form symbioses. The abundance of Gram-negative and aerobic bacteria in JL4 was conspicuously higher. Specifically, Halomonadaceae and Halomonas, both Gram-negative taxa, emerged as biomarkers in JL4. Although the salinity difference between the soils was not significant, other factors may explain why halotolerant bacteria emerged as cultivar markers. Root exudates that differ between tea varieties, such as specific organic acids or secondary metabolites, could supply nutrients or signaling molecules that promote the growth and prevalence of halotolerant bacteria; such substances may modify these bacteria's metabolic and competitive capabilities within the soil microbial community. Soil metabolites act as mediators of the interaction between tea plants and soil microorganisms and play a crucial role in altering soil physicochemical properties. The AP and AK of the JL4 root zone soil were significantly lower than those of HD, suggesting that soil AP and AK may be the main factors influencing the diversity of the root zone soil bacterial communities of these two tea varieties. The physicochemical indicators in which soils differ vary among tea varieties . The most numerous differential metabolites between the two varieties were sugars, followed by alcohols and heterocyclic compounds. Arabinofuranose, along with other polysaccharides, is closely associated with the decomposition and transformation of plant residues and other organic materials in the soil. An appropriate quantity of D-mannitol can improve soil structure and increase soil porosity and water-holding capacity, which in turn promotes root growth and nutrient absorption, thereby indirectly enhancing soil fertility. Erythritol can markedly decrease the dry weight of both the roots and aerial parts of tomato plants and can impede the germination of corn and tomato seeds; these effects may indirectly modify the structure and function of the surrounding soil microbial community . D-Pinitol, a prevalent sugar alcohol in legumes, shows upregulated biosynthesis in soybeans under soil drought: the induced transcription of the relevant genes prompts a significant accumulation of D-pinitol in various plant organs, endowing the plants with enhanced tolerance to stresses such as drought . Muco-inositol and related molecules bolster plant tolerance to salt stress by safeguarding cell structures from reactive oxidants and regulating intracellular water pressure, consequently influencing plant growth in saline soils. Compared with HD, the expression trend of flavonoid biosynthesis metabolites in JL4 was increased (DA Score = 1, P = 0.04). This pathway is part of the phenylpropanoid synthesis pathway. Moreover, flavonoids are widely distributed in plants and have diverse biological functions and roles in plant-environment interactions.
They protect plants from UV radiation and are important in sexual reproduction. Flavonoids possess medicinal properties such as antiviral, antimutagenic, antilipoperoxidant, radioprotective, anti-complementary, anti-inflammatory, anti-tumor, and antioxidant activities. Flavonoids, especially flavan-3-ols such as (−)-epigallocatechin, (−)-epicatechin, gallocatechin, and catechin, together with their gallate esters, are prominent metabolites because of their high content; at approximately 20%, the flavan-3-ol content of tea is substantial. The health effects of tea related to flavan-3-ols rest on their antioxidative, anticancerogenic, antiallergenic, anti-inflammatory, and vasodilatory properties . There are complex interactions among microbial communities, metabolites, and the physicochemical properties of soil. In our investigation, SOM, NO₃⁻-N, pH, and AK had a substantial influence on the bacterial community structure of the root zone soil, which is consistent with the results of Kong et al. . Moreover, AK showed a significant positive correlation with bacterial phyla such as Firmicutes and Kapabacteria, while AP had a notable positive correlation with Kapabacteria. However, Kong et al. reported that Firmicutes was significantly positively correlated with root zone soil AP content; the inconsistency with our results may be due to the different tea varieties studied. Although this study has made certain progress in uncovering the differences in microbial communities and metabolites in the root zone soil between JL4 and HD, it still has some limitations. For example, it concentrated primarily on the microbial communities and metabolites of the root zone soil and did not fully consider the physiological and ecological characteristics of the aboveground parts of tea plants and their interaction with the root zone soil. Future research can combine the physiological and ecological processes of tea plant growth and development to further explore the interaction mechanisms among microorganisms, plants, and soil. In addition, field experiments can be carried out to confirm the reproducibility and practicability of the laboratory results, providing a scientific basis for tea cultivation management and tea quality improvement. At the same time, with the continuous development of high-throughput sequencing and metabolomics technologies, future studies are expected to further reveal the complexity and diversity of microbial communities and metabolites in the root zone soil of tea plants. Based on NovaSeq 6000 high-throughput sequencing, the bacterial diversity and community structure of the HD and JL4 root zone soils were identified, and the differences in root zone soil metabolites were identified by GC-MS-derived metabolomics. The analysis of soil physicochemical properties showed that, compared with HD, the AP (28.91 ± 9.78 mg·kg⁻¹) and AK (57.67 ± 4.04 mg·kg⁻¹) of JL4 were significantly reduced (P < 0.05). The 16S rDNA results showed that the dominant bacterial phyla included Proteobacteria, Acidobacteriota, and Chloroflexi. The Shannon index of JL4 was significantly lower than that of HD. BugBase phenotype prediction showed that the abundances of Gram-negative and aerobic bacteria in JL4 were higher than those in HD, while other phenotypes, such as anaerobic bacteria, were significantly lower than in HD. LEfSe analysis showed that the biomarkers in HD were P. Firmicutes, O. Rhizobiales, O.
Caulobacterales, etc. The biomarkers in JL4 were S. Rhodospirillales_bacterium_URHD0088, F. Halomonadaceae, etc. RDA and correlation analysis indicated that AK, SOM, NO₃⁻-N, and pH had a significant impact on the structure of the root zone soil bacterial communities. AK was significantly positively correlated with P. Firmicutes and P. Kapabacteria, and AP was highly significantly positively correlated with P. Kapabacteria, while AP was significantly negatively correlated with P. unidentified Archaea. GC-MS-derived metabolomics showed that, among the differential metabolites, sugars (8) were the most numerous, followed by alcohols (4) and heterocyclic compounds (4). In JL4, D-mannitol 2 and scyllo-inositol decreased, while epicatechin, catechin, and D-pinitol increased. KEGG pathway enrichment analysis showed that the flavonoid biosynthesis pathway was highly enriched. Together, the bacteria, metabolites, and physicochemical factors described above shaped the root zone microecology of the different tea varieties, providing a theoretical basis for improving the root zone microenvironment of tea plants, data support for the rational planting and distribution of tea varieties, and a reference for tea breeding. S1 Data. Alpha diversity, differential metabolites, KEGG DA score, LEfSe, metabolites (all), OTUs, PCA components, RDA results (VIF and envfit). (ZIP)
Can a multisensory teaching approach impart the necessary knowledge, skills, and confidence in final year medical students to manage epistaxis?
Epistaxis is a common condition that up to 60% of the population will experience. A needs-assessment survey conducted in 2000 at the University of Alberta medical school revealed that 95% of the 100 students surveyed did not have the confidence to technically manage epistaxis. This points to a significant need for effective teaching of this essential medical skill in the Canadian education curriculum. It has been suggested that medical skills are effectively taught through multisensory approaches based on Fleming's VARK (visual, auditory, read/write, kinesthetic) model, which proposes that some learners have a preferential sensory channel through which they best receive and integrate information [ - ]. In addition, the teaching of technical skills has seen increasing use of simulation-based learning [ - ]. Utilizing cadaver simulators provides a risk-free and anatomically high-fidelity environment for learners to perform the technical maneuvers to control epistaxis. The majority of our learners also belong to the “Millennial” generation, or Generation Y. This generation is technology savvy, resourceful, and able to multitask. The purpose of our study was to evaluate the efficacy of a multisensory teaching approach (consisting of a PODcast, VODcast, written notes, and expert-guided practice on cadaver simulators) in imparting the necessary knowledge, skills, and confidence to technically manage epistaxis in a cohort of fourth-year medical students. Appropriate learning objectives were created, and teaching and assessment methods were matched. The course content was prepared by an otolaryngologist and was based on current literature and practice guidelines. The content validity of the teaching materials and session was ensured through a standardized checklist peer-review process carried out by two other otolaryngologists. Institutional health research ethics board (University of Alberta Health Research Ethics Board) approval was obtained. A focus group of fifteen medical students ensured that the intervention was at an appropriate level of understanding. The learning objectives were as follows. At the end of the teaching session, the learner will be able to:

1. Formulate a differential diagnosis for epistaxis and identify risk factors.
2. Prescribe appropriate medical management of epistaxis.
3. Determine when to refer the patient to an otolaryngologist.
4. Perform an examination using a nasal speculum and suction while adhering to universal precautions.
5. Perform silver nitrate cautery of the anterior nasal cavity.
6. Perform anterior nasal packing with Merocel© nasal packs.
7. Perform anterior nasal packing with Vaseline gauze.

An online Wiki hosted the learning objectives, pre-session teaching materials, and schedule. By hosting our teaching materials on the internet (Wiki, POD/VODcasts), our students were able to access them at a place and time convenient for them . As part of the pre-session teaching materials, students listened to a 10-minute audio PODcast (iTunes and MedEdPortal) covering learning objectives 1 to 3. They also viewed a 15-minute VODcast highlighting learning objectives 4 to 7 and a 2-minute VODcast of anterior nasal packing with Vaseline gauze on a clear plastic model . Supplementary written notes were also provided covering all learning objectives. Students were informed that an individual readiness assurance test (IRAT) would be administered at the beginning of the classroom session, ensuring they had acquired the requisite knowledge for the cadaver simulator lab.
All fourth-year students participated in the epistaxis teaching session. This session was part of the otolaryngology half day offered four times per year, with an average of 36 students per session. Students were informed at the start that the study would have no impact on their assessment for academic promotion and that they could withdraw from the study at any time. Participation in the teaching session was mandatory; participation in the study was not. No personally identifiable information was gathered, and students willing to participate signed a consent form. Students completed a 7-minute multiple-choice IRAT assessing the knowledge they had acquired for all 7 learning objectives. Over the next 10 minutes, a facilitator discussed the answers to the IRAT with the learners while they completed a pre-cadaver session Confidence Level Questionnaire (CLQ). The CLQ assessed the student's confidence in performing the technical learning objectives 4 through 7. The CLQ was constructed using a five-point Likert scale ranging from the lowest level of confidence, where the individual would not attempt the procedure, to the highest level of confidence, where the individual would feel comfortable teaching it to another learner [Additional file : Epistaxis Questionnaire]. Each increment in the scale represented an increasing level of independent practice in the medical learner, which is more intuitive and applicable than an arbitrary Likert scale with no attached definition. This questionnaire was reviewed with otolaryngologists at our institution for content validity, and a focus group of medical students was interviewed for understandability. Cronbach's alpha was calculated for internal reliability. Construct validity was determined by comparing the results of the CLQ administered to thirteen experienced practitioners (otolaryngology residents and otolaryngologists) with those of thirteen randomly selected students who had not yet practiced on the cadavers. Twenty-eight fourth-year students were randomly selected prior to the cadaver session to perform the four core technical skills while being assessed by two independent observers with the Objective Structured Assessment of Technical Skill (OSATS) [Additional file : Epistaxis OSATS]. The fourth skill (Vaseline gauze packing) was divided into two components due to its increased complexity. A '1' was assigned if the learner performed the skill satisfactorily at the level of a general practitioner, and a '0' was assigned if this standard was not met. A binary scale was used to simplify the assessment for the observers and to improve overall inter-rater reliability. A global assessment of overall performance using a Likert scale of 1 through 5 was completed at the end. This component of the instrument was adapted from Martin et al. . In Doyle et al.'s study , the instrument demonstrated excellent internal reliability (Cronbach's alpha 0.91) and good validity. As this was a new instrument adapted to assess the achievement of technical skills for the management of epistaxis, inter-rater reliability was determined. Two other board-certified otolaryngologists verified the content validity of the instrument. OSATS were performed on only a limited number of students due to limitations of student availability in an increasingly constricted curriculum and a lack of trained observers. The cadaver lab consisted of an instructor-led demonstration of the technical skills followed by practice in pairs by the students. Two to three otolaryngology residents and up to three otolaryngologists provided feedback. At the end of the session, all participants were asked to complete the post-cadaver lab CLQ and a qualitative feedback form. The previously selected twenty-eight participants had a post-cadaver OSATS administered again by the same two independent observers. We compared the pre- and post-cadaver lab CLQ scores. An a priori sample size calculation with a Bonferroni correction was done to address the multiple comparisons being made. For a predetermined power of 0.8, p < 0.01 (4 independent comparisons), a medium effect size (Cohen's d = 0.50), and an 80% response rate, 120 participants were required per group. Furthermore, we calculated the percentage of students that achieved a confidence level score of 3 or above on all sections at the end of the session (i.e., will attempt the procedure with attending backup but no active involvement). This is the level of competence expected of residents, which coincides with the next training period we are preparing our medical students for. We then compared the pre- and post-cadaver lab OSATS scores. We also determined the percentage of students that achieved all 1's and at least 3 to 5 on overall technical performance. An a priori sample size calculation was also done for the OSATS: for a predetermined power of 0.8, p < 0.05, and a large effect size (Cohen's d = 0.80), 26 participants were required per group.
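The reported sample sizes can be reproduced with a standard power analysis. The sketch below uses statsmodels and assumes an independent-samples t-test formulation, which matches the reported figures; it is an illustration, not the authors' original calculation.

```python
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# CLQ: medium effect (d = 0.50), alpha 0.01 after Bonferroni correction
# for 4 comparisons, power 0.8.
n_clq = analysis.solve_power(effect_size=0.50, alpha=0.01, power=0.8)
# Adjust for the expected 80% response rate.
print(math.ceil(n_clq / 0.8))   # ~120 participants per group

# OSATS: large effect (d = 0.80), alpha 0.05, power 0.8.
n_osats = analysis.solve_power(effect_size=0.80, alpha=0.05, power=0.8)
print(math.ceil(n_osats))       # ~26 participants per group
```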
A total of 147 students participated in the teaching sessions from August 2011 to February 2012. One hundred and thirty-four students provided informed consent and completed the pre- and post-CLQs and the IRAT. Eighty-two of the 134 students received a score of 80% or higher on the IRAT, indicating an adequate grasp of the knowledge provided by the PODcast, VODcast, and written notes [Figure – IRAT Scores]. The internal reliability of the CLQ was calculated using Cronbach's alpha (coefficient of reliability). Both the pre-session and post-session CLQs had high measures of internal reliability, with alpha values of 0.85 and 0.88, respectively [SPSS 19]. Construct validity (the ability of the questionnaire to measure confidence level) was assessed by comparing the pre-session questionnaire responses of 13 randomly selected students [random.org] with the responses of a cohort of 13 otolaryngology residents and staff otolaryngologists. On all four questions, the absolute confidence scores of the experienced group were consistently higher than the pre-cadaver teaching CLQ scores of the medical students. The Mann–Whitney U test of independent samples (non-parametric) showed a statistically significant difference (p < 0.01) between the groups for all four questions. Similarly, the OSATS instrument was also validated in this study. Inter-rater reliability on each skill was calculated using Cohen's kappa, with values ranging from 0.48 to 0.85 for the pre-session OSATS and 0.65 to 1.00 for the post-session OSATS. Cronbach's alpha was calculated for inter-rater reliability on overall performance and was found to be 0.80 pre and 0.56 post [SPSS 19].
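Both reliability statistics reported above are straightforward to compute. A minimal sketch with made-up ratings (not the study data): cohen_kappa_score comes from scikit-learn, and Cronbach's alpha is implemented directly from its definition.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical CLQ responses: 10 students x 4 questions (1-5 Likert).
clq = np.array([[3, 3, 2, 2], [4, 4, 3, 3], [5, 4, 4, 3], [2, 2, 2, 1],
                [4, 5, 4, 4], [3, 3, 3, 2], [5, 5, 4, 4], [4, 3, 3, 3],
                [2, 3, 2, 2], [4, 4, 4, 3]])
print(f"Cronbach's alpha: {cronbach_alpha(clq):.2f}")

# Hypothetical binary OSATS ratings from two observers for one skill.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]
print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")
```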
Ninety-eight percent of students achieved a score of 3 to 5 on each category (i.e., will attempt the procedure with attending backup but no active involvement). At baseline, students appeared more confident with basic procedures such as nasal cavity examination and silver nitrate cautery, and showed a decreasing trend in confidence with more complex maneuvers such as Vaseline-coated gauze packing. Following the cadaver lab, students showed a clear increase in confidence for all 4 learning objectives. Using a paired two-tailed t-test (p < 0.01), a statistically significant difference was found for each of the four questions between the pre- and post-session responses, with consistently higher scores after the cadaver teaching session [Figure – CLQ Scores]. Twenty-eight students were randomly selected to be assessed with a pre- and post-teaching session OSATS. Average scores between the two observers on each of the five sections were tallied for each participant on the pre- and post-OSATS [Figure – Pre and Post OSATS Scores]. As with the pre-CLQ instrument, more students at baseline performed satisfactorily on the less complex procedures than on the more advanced ones. Overall scores were 2.75 (±0.67) on the pre-OSATS and 4.00 (±0.67) on the post-OSATS. On the post-OSATS, 94% of students received a score of 1 on each category and 3 to 5 on overall performance. The McNemar change test was used to compare the pre- to the post-OSATS scores. A statistically significant difference was found for questions 1, 2, 4, and 5 (p < 0.05), and no difference was found for question 3 (p = 0.25).
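For paired binary outcomes such as the pre/post OSATS ratings, the McNemar test evaluates the discordant pairs. A minimal sketch with a hypothetical 2x2 table (statsmodels provides the test; the counts below are invented):

```python
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired pre/post results for one skill:
# rows = pre (fail, pass), columns = post (fail, pass).
table = [[1, 15],   # failed pre: 1 still failed post, 15 now passed
         [2, 10]]   # passed pre: 2 failed post, 10 still passed

result = mcnemar(table, exact=True)  # exact binomial test for small samples
print(f"statistic={result.statistic}, p-value={result.pvalue:.3f}")
```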
The qualitative feedback received from the students was positive overall. Many felt that they benefited most from the cadaver simulator lab with expert feedback, as well as from the ability to access the pre-session media at their convenience. The most common suggestion for improvement was a better instructor-to-student ratio in the cadaver simulator lab. The last decade has seen a paradigm shift in medical education from the traditional 'sage on stage' towards a 'learner-centered' model. There has been increasing adoption of self-directed, problem-based learning curricula and adaptation to varying learning styles. Cadaver simulators provide a risk-free learning environment where the technical training of the medical student reigns supreme. In contrast, in the clinical environment students often have to compete with senior trainees and physicians for these procedures. While some interventions have shown improvements in measurable learning outcomes , there is still considerable debate and evolution in the field. Many of the models in medical education are rooted in theories of learning styles, and Fleming's VARK model is one that is widely used . This study applies that concept by providing multiple modalities of instruction (and thereby multiple opportunities) for the learner to grasp a concept and learn a skill. In the surgical education literature, Kopta describes the acquisition of new skills as occurring in three phases: cognitive, integrative, and autonomous . In the cognitive phase, the learner intellectualizes the process and plans the necessary steps. In the integrative phase, the learner initiates the appropriate motor behaviour with feedback or knowledge of the results. Finally, in the autonomous phase, motor tasks are performed smoothly with little cognitive input. The pre-session teaching materials (PODcast, VODcast, and notes) in this study aim at providing an environment for the cognitive phase, while the cadaver simulators with expert guidance provide the integrative phase. This sets the stage for students' continued learning in the clinical environment as they approach the autonomous phase of residency. Given that we measured both skill and confidence, we can apply Dreyfus' model of skill acquisition to our study . Dreyfus states that there are five levels of skill expertise: novice, advanced beginner, competent, proficient, and expert. Uniformly, most of our learners started out as novices, lacking both skill and confidence. After the teaching intervention, we demonstrated that skill improved with a commensurate increase in confidence. However, we also noted that some learners had disproportionately higher levels of confidence compared with their actual skill level. According to Dreyfus, these are the advanced beginners who feel confident enough to be in independent practice but perhaps do not have the skill level to back it up. Fortunately, this is only an introductory course to prepare students for residency. Furthermore, an important competency to develop in residency is the skill of self-assessment, through which our students will learn to align their self-assessments with objective external evaluations. The results of the study can be better interpreted in terms of their educational significance. Simpson's adaptation of Bloom's taxonomy of learning for the psychomotor domain places a learner at a guided-response stage, where the learner is in the early stages of learning a complex skill that includes imitation and trial and error. The intended level of achievement for the participants is not mastery of the skill, but rather advancement from their current level of skill by an amount that will change their practice during residency. We also demonstrated the utility of internet-based resources such as PODcasting and VODcasting that can be easily used on portable media devices. By priming the students' knowledge with the media, we were able to dedicate a proportionately greater amount of time to active learning exercises. Although not the primary objective of this study, we were able to validate the CLQ, which looks at self-assessment of confidence in performing a technical skill. This study does have some limitations. We did not test the students' knowledge prior to the intervention, so we cannot be certain that there were no other sources of knowledge apart from our intervention. However, it is important to note that this is the only formal training on this topic within the medical school curriculum. We also recognize that there could be some observer bias created by the pre- and post-CLQs and OSATS. As students on the CLQ and observers on the OSATS both had knowledge of the assessment's timing relative to course delivery, it is conceivable that both parties, in the interest of wanting to see an improvement in confidence and skill, gave better scores on the post- compared with the pre-cadaver lab assessments. This observer bias for the OSATS could perhaps have been circumvented by blinding the observers; logistically, this would not have been feasible due to a constricted curriculum and a lack of trained observers. Furthermore, our study did not have a matched control group to ensure that there were no other confounding factors that could have partially explained our findings. We did not recruit a separate cohort of students to serve as a control group, as we felt it would not be ethical to withhold innovative teaching materials or methods from one cohort of students. A crossover design may have circumvented this problem, but again this was not logistically feasible due to a constricted curriculum and a lack of trained observers. In an ideal setting we would have administered the first set of CLQs and OSATS before the PODcast and VODcast, but this was prevented by time constraints.
However, one can view the pre-CLQ and pre-OSATS results as the level of technical achievement gained from the PODcast, VODcast, and notes alone. Clearly, the students required the cadaver lab with expert guidance to achieve acceptable levels of competence. One could argue that the pre-session teaching materials were superfluous, but in reality they primed the students for the cadaver lab. Last, given that multiple teaching methods were utilized in a synchronous fashion, it is difficult to differentiate which one had the most significant impact on the outcomes. However, the goal of our study was not to compare or ascertain which component of our teaching intervention had the most impact, but rather to determine whether a multisensory approach influenced our outcomes. Overall, our study demonstrates that our multisensory approach imparts the necessary knowledge, skill, and confidence to manage epistaxis in the lab. We would like to direct our future research efforts toward long-term retention, whether there are improvements in patient care, and whether a similar model can be adapted to teaching other procedural skills . The authors declare that they have no competing interests. GK was primarily involved in experimental design, statistical analysis, and drafting of the manuscript. VB was involved in experimental design and drafting of the manuscript. CC was responsible for data collection. DC and KA were involved in experimental design, critical review, and drafting of the manuscript. All authors read and approved the final manuscript. Additional file 1: Epistaxis Questionnaire. Additional file 2: Epistaxis OSATS.
Publication authorship: A new approach to the bibliometric study of scientific work and beyond
Bibliometric studies use publication data to describe the segmentation of research and to look at the development of the scientific frontier. The seminal works of the 1960s and 1970s built networks of publications (vertices) connected by co-citation or bibliographic coupling (edges). Starting in the 1980s , scholars turn to the social dimension of scientific research by looking at networks of authors (vertices) connected by author co-citation or co-authorship (edges). Several refinements have been made to both publication and social networks in recent years. For example, co-citation proximity analysis posits that citations appearing closer together (e. g., within a paragraph) in a publication are more similar than those further apart (e. g., one in the introduction and one in the discussion). This idea balances out the shortcoming that all citations contribute equally to the establishment of a relation between two (co-)cited publications. Another example is author bibliographic-coupling , which estimates a relation between authors based on the overlap of the bibliographies found in their complete oeuvres. This approach effectively expands the intellectual structure of scientific research from single publications to entire life works, which arguably paints a more realistic picture of the scientific frontier. Bibliometric studies name either publications (e. g., academic articles, research grants, scientific patents) or individuals (i. e., authors) as the vertices of a network. The edges of the network, in turn, are either citations or authorship. The combination of vertices and edges then accounts for a number of different bibliometric networks, whether they are co-citation, bibliographic coupling, author co-citation, author bibliographic-coupling, or co-authorship networks. Notably missing from the combination of vertices and edges is the idea that publications may be connected by authorship. I consequently call this combination publication authorship . It clearly denominates vertices as publications and edges between them as the authorship of these. In the following, I discuss the theoretical foundation of publication authorship, including the most prevalent differences from traditional approaches in bibliometric studies. Empirical data from management, physics, and medicine then illustrate publication authorship in contrast to bibliographic coupling. More specifically, I draw on academic articles published between 2010 and 2019 in the top-10 journals in accounting, astronomy, and gastroenterology. As a first step in the research, descriptive statistics for both publication authorship and bibliographic coupling provide an overview of the development of the literature in these three academic areas. I then apply a standard clustering algorithm and test its goodness-of-fit using the cosine similarity between article abstracts. These empirical illustrations and the respective statistical analysis show that publication authorship yields a significantly better segmentation of research than bibliographic coupling in all three academic areas, which consequently points to a more fine-grained picture of the scientific frontier. Finally, I point out similarities and differences in the findings for each of the three academic areas, discuss co-word analysis and Latent Dirichlet Allocation as two alternative approaches, and conclude with implications for the theory and practice of bibliometric studies.
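The goodness-of-fit test mentioned above relies on the cosine similarity between article abstracts. A minimal sketch of that building block, using scikit-learn TF-IDF vectors; the abstracts are invented, and this is illustrative rather than the paper's exact pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "We study earnings management in audit firms.",           # hypothetical
    "Audit quality and earnings management in accounting.",   # hypothetical
    "Spectroscopy of distant quasars and galaxy formation.",  # hypothetical
]

# Represent each abstract as a TF-IDF vector, then compare all pairs.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
sim = cosine_similarity(tfidf)

# Abstracts on the same topic score higher than unrelated ones.
print(sim.round(2))
```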
A brief introduction to co-citation, bibliographic coupling, author co-citation, author bibliographic-coupling, and co-authorship sets the stage for an elaboration of publication authorship. provides an overview of these altogether six bibliometric networks. Publications appear as rectangles and authors as circles. Citations are directed either from publications (co-citation and author co-citation) or to publications (bibliographic coupling and author bibliographic-coupling), whereas authorship is undirected (i. e., individuals (co-)author publications and publications are (co-)authored by individuals).

The intellectual structure of scientific work

Up until the mid-1960s, direct citation and keyword analysis are the dominant methods of inquiry into the structure of academic work and the development of the scientific frontier. The concepts of co-citation and bibliographic coupling are first and foremost critical responses to these methods used in early bibliometric studies. Following in the footsteps of de Solla-Price , Small introduces co-citation as a measure of scientific similarity in 1973. He argues that the frequency with which two publications are cited together by other publications (i. e., co-citation) is a better measure than direct citation, which is limited by the need for an explicit reference from one publication to another. Co-citation then identifies the intellectual connections between publications based on their citation patterns. It singles out seminal works in a given academic area using their citation count and tracks the development of intellectual ideas over time by looking at the evolution of clusters in co-citation networks . Already ten years earlier, in 1963, Kessler introduces the concept of bibliographic coupling , which measures the similarity between two scientific publications based on the references they have in common. Bibliographic coupling effectively replaces earlier approaches (e. g., keyword analysis) to understanding the development of academic areas. It is similar to co-citation in that it identifies structural properties of a given scientific field. However, where co-citation is more sensitive to the overall structure of a field, bibliographic coupling focuses on specific clusters of related publications. Co-citation maps the intellectual structure of an academic area and points to its research frontier, while bibliographic coupling relies on the similarity of publications to interpret core and peripheral works in a discipline. Co-citation and bibliographic coupling both define publications as the vertices in a network. The edges in co-citation connect two publications (A and B) which are jointly cited by one or more other publications (X). They may be weighted by the number of publications which jointly cite the two. Conversely, bibliographic coupling connects two publications (A and B) which share common references to one or more other publications (X). The edges may be weighted by the number of references two publications have in common. At the center of attention of both these bibliometric networks are the themes and topics of clusters of publications that make up schools of thought and push the scientific frontier. Both co-citation and bibliographic coupling have been widely used in various fields of research such as biology, chemistry, physics, medicine, psychology, and sociology, as well as computer, information, and management science. For example, Small et al.
identify 71 emerging topics across all of science by combining direct citations and co-citations in publications from 2007 to 2010 . They conclude that three non-exclusive forces drive research: scientific discovery, technological innovation, and exogenous events. On a side note, nearly all emerging topics contain highly cited papers, but only about 10 percent of highly cited papers are part of emerging topics. Jarneving complements bibliographic coupling with a complete-link cluster analysis similar to previous work on co-citation clusters . He tests this combination on a large multidisciplinary set of more than 600000 publications and 17 million references to estimate an optimal level of clustering that preserves core documents essential to the mapping of academic areas. His conclusion reveals but three large clusters of core documents. In a last example of research, Boyack and Klavans show which citation approach represents the intellectual structure of scientific work most accurately . Their compelling comparison between (co-)citation and bibliographic coupling finds that the latter slightly outperforms the former with more coherent clusters to represent the scientific frontier.

The social structure of scientific work

Beginning in the 1980s, bibliometric studies turn to the social dimension of scientific work. Author co-citation , author bibliographic-coupling , and co-authorship are similar to co-citation and bibliographic coupling in that the main interest of any analysis is still the structure of scientific work. The key difference is that author co-citation, author bibliographic-coupling, and co-authorship all focus on the social structure as opposed to the intellectual structure. In author co-citation, two authors relate to each other if their works are frequently cited together by other authors. In author bibliographic-coupling, two authors relate to each other if they frequently cite the same publications in their reference lists. In addition, the concept of co-authorship allows for the study of collaborative relationships between authors of publications. Tracing these social structures provides insights into research communities and collaborations within and across scientific disciplines. Author co-citation, author bibliographic-coupling, and co-authorship define authors as the vertices in a network. The edges in author co-citation connect two authors (1 and 2) who are jointly cited by one or more publications (X). They may be weighted by the number of publications which jointly cite the two. Conversely, the edges in author bibliographic-coupling connect two authors (1 and 2) who jointly cite one or more publications (X). The edges may be weighted by the number of publications two authors jointly cite. Finally, co-authorship connects two authors (1 and 2) who collaborate on one or more publications (X). The edges may be weighted by the number of publications two authors have in common. Clusters of authors stand in for schools of thought. Sometimes they are further grouped by affiliation or place to see which university or country is pushing the scientific frontier. Instead of a focus on the themes and topics of clusters of publications, the center of attention shifts to clusters of scientific collaboration among authors. Similar to co-citation and bibliographic coupling, bibliometric studies of the social structure of scientific work span various academic disciplines. For example, White and McCain study the social structure of information science .
They submit the top 120 authors most frequently cited in twelve key journals from 1972 through 1995 to author co-citation analysis. Their findings yield automatic classifications relevant to the history of the field, including the most canonical authors. In a combination of co-authorship and bibliographic coupling, Biscaro and Giupponi examine citation counts of academic articles . Their study based on 5585 publications from a variety of academic disciplines offers a number of findings, among which are: authors who collaborate with more authors tend to get more citations, and articles that use references from different strands of the literature tend to get more citations. As a last example of research, Schubert and Glänzel take a look at country-by-country co-authorship to find that location, culture, and language determine clusters of mutually strong preferences in geopolitical areas such as Central Europe, Scandinavia, or the Far East . The United States, unsurprisingly, enjoys universal co-authorship preference. More comprehensive reviews of the theory and practice of bibliometric studies are found in Borgman and Furner , Mingers and Leydesdorff , and Donthu et al. .

Combining the intellectual and social structure of scientific work

Publication authorship takes inspiration from the above discussed approaches to the analysis of scientific work. On the one hand, it defines publications as the vertices of a network, similar to co-citation and bibliographic coupling. On the other hand, it takes authors as the basis of a definition of edges as authorship, similar to co-authorship. The edges in publication authorship then connect two publications (A and B) which are authored by one or more individuals (0). They may be weighted by the number of authors two publications have in common. Publication authorship keeps the focus on the themes and topics of publications to describe the segmentation of research and the development of the scientific frontier. At the same time, it accounts for the social dimension of scientific work, with clusters of publications emerging from the collaboration among authors. Publication authorship may appear as simply another combination of vertices and edges that fills a void in the roster of approaches to the analysis of scientific work. However, it firmly rests with the theoretical argument of a communicative constitution of social systems . The theory suggests that any form of documentation or record (e. g., academic publications, corporate reports, meeting minutes) is a condensate of the participation of individuals in communication . In turn, individuals who participate in communication are common sources of information that connect communication events and episodes . Publication authorship follows exactly this line of argument. Scholars participate in academic discourse by authoring publications which, in turn, cluster to reflect the segmentation of research and the development of the scientific frontier . Common to all the approaches in bibliometric studies is the idea that the relations among publications or authors present similarities in the underlying scientific work, which allows for the analysis of clusters of tightly coupled and central vertices (i. e., schools of thought and the scientific frontier). In particular, publication authorship assumes that two publications are similar to the extent that one or more scholars (co-)author them.
Since authors frequently specialize in a narrow field of research (e. g., behavioral economics or adolescent oncology), their publications are likely to present a narrow field of research, too (e. g., a behavioral economist is unlikely to work on transaction-costs issues, and an adolescent oncologist rarely contributes to research on childhood obesity). Publication authorship is therefore more exclusive than co-citation and bibliographic coupling because the number of authors who collaborate on two publications is almost always smaller than the number of joint citations or common references. (None of the 27444 publications used in the empirical analysis of this paper had more authors than joint citations or common references.) At the same time, it is more inclusive than author co-citation and co-authorship because it includes both single-authored and co-authored publications. Co-citation, bibliographic coupling, author co-citation, author bibliographic-coupling, co-authorship, and publication authorship all yield unique insights into scientific work. In the light of the similarities and differences among these and other approaches in bibliometric studies , publication authorship is closest to bibliographic coupling, not least because it defines vertices as publications and, therefore, focuses on the themes and topics of these. The following empirical illustrations pit publication authorship against bibliographic coupling to highlight differences in the segmentation of research and a consequently more detailed scientific frontier.
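Constructing a publication authorship network is straightforward once vertices and edges are defined as above. A minimal sketch with networkx and hypothetical records; author-name disambiguation, an important practical step, is ignored here.

```python
from itertools import combinations
import networkx as nx

# Hypothetical records: publication id -> set of author names.
papers = {
    "A": {"Miller", "Chen"},
    "B": {"Chen", "Okafor"},
    "C": {"Smith"},
    "D": {"Chen", "Miller", "Smith"},
}

G = nx.Graph()
G.add_nodes_from(papers)  # vertices are publications

# Edge between two publications, weighted by the number of shared authors.
for (p1, a1), (p2, a2) in combinations(papers.items(), 2):
    shared = len(a1 & a2)
    if shared:
        G.add_edge(p1, p2, weight=shared)

print(G.edges(data=True))
# e.g. A-B share Chen (weight 1), A-D share Miller and Chen (weight 2), ...
```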
Finally, co-authorship connects two authors (1 and 2) who collaborate on one or more publications (X). The edges may be weighted by the number of publications two authors have in common. Clusters of authors stand in for schools of thought. Sometimes they are further grouped by affiliation or place to see which university or country is pushing the scientific frontier. Instead of a focus on the themes and topics of clusters of publications, the center of attention shifts to clusters of scientific collaboration among authors.

Similar to co-citation and bibliographic coupling, bibliometric studies of the social structure of scientific work span various academic disciplines. For example, White and McCain study the social structure of information science. They submit the top 120 authors most frequently cited in twelve key journals from 1972 through 1995 to author co-citation analysis. Their findings yield automatic classifications relevant to the history of the field including the most canonical authors. In a combination of co-authorship and bibliographic coupling, Biscaro and Giupponi examine citation counts of academic articles. Their study based on 5585 publications from a variety of academic disciplines offers a number of findings, among them: authors who collaborate with more authors tend to get more citations, and articles that use references from different strands of the literature tend to get more citations. As a last example of research, Schubert and Glänzel take a look at country-by-country co-authorship to find that location, culture, and language determine clusters of mutually strong preferences in geopolitical areas such as Central Europe, Scandinavia, or the Far East. The United States, unsurprisingly, enjoys universal co-authorship preference. More comprehensive reviews of the theory and practice of bibliometric studies are found in Borgman and Furner, Mingers and Leydesdorff, and Donthu et al.

Publication authorship takes inspiration from the approaches to the analysis of scientific work discussed above. On the one hand, it defines publications as the vertices of a network similar to co-citation and bibliographic coupling. On the other hand, it takes authors as the basis of a definition of edges as authorship similar to co-authorship. The edges in publication authorship then connect two publications (A and B) which are authored by one or more individuals (0). They may be weighted by the number of authors two publications have in common. Publication authorship keeps the focus on the themes and topics of publications to describe the segmentation of research and the development of the scientific frontier. At the same time, it accounts for the social dimension of scientific work with clusters of publications emerging from the collaboration among authors.

Publication authorship may appear as simply another combination of vertices and edges that fills a void in the roster of approaches to the analysis of scientific work. However, it firmly rests on the theoretical argument of a communicative constitution of social systems. The theory suggests that any form of documentation or record (e. g., academic publications, corporate reports, meeting minutes) is a condensate of the participation of individuals in communication. In turn, individuals who participate in communication are common sources of information that connect communication events and episodes. Publication authorship follows exactly this line of argument.
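As a concrete illustration of these edge definitions, the following minimal R sketch derives both bibliographic-coupling and publication-authorship weights from toy incidence matrices. It is not taken from the paper’s published source code; the matrices and all variable names are illustrative assumptions.

    ## Minimal sketch (not the paper's published code): edge weights from
    ## toy incidence matrices. C[i, j] = 1 if publication i cites reference j;
    ## M[i, k] = 1 if author k (co-)authored publication i.
    C <- matrix(c(1, 1, 0,
                  1, 0, 1,
                  1, 1, 0),
                nrow = 3, byrow = TRUE,
                dimnames = list(paste0("pub", 1:3), paste0("ref", 1:3)))
    M <- matrix(c(1, 0, 0,
                  1, 1, 0,
                  0, 1, 1),
                nrow = 3, byrow = TRUE,
                dimnames = list(paste0("pub", 1:3), paste0("author", 1:3)))

    coupling   <- C %*% t(C)  # number of references two publications share
    authorship <- M %*% t(M)  # number of authors two publications share
    diag(coupling) <- diag(authorship) <- 0  # no self-edges

    library(igraph)
    g_pa <- graph_from_adjacency_matrix(authorship, mode = "undirected",
                                        weighted = TRUE)

In this toy example, pub1 and pub2 are connected both by bibliographic coupling (one shared reference) and by publication authorship (one shared author), whereas pub1 and pub3 share two references but no author, which illustrates how the two networks can segment the same publications differently.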
Scholars participate in academic discourse by authoring publications which, in turn, cluster to reflect the segmentation of research and the development of the scientific frontier.

Common to all the approaches in bibliometric studies is the idea that the relations among publications or authors present similarities in the underlying scientific work, which allows for the analysis of clusters of tightly coupled and central vertices (i. e., schools of thought and the scientific frontier). In particular, publication authorship assumes that two publications are similar to the extent that one or more scholars (co-)author them. Since authors frequently specialize in a narrow field of research (e. g., behavioral economics or adolescent oncology), their publications are likely to present a narrow field of research, too (e. g., a behavioral economist is unlikely to work on transaction-costs issues and an adolescent oncologist rarely contributes to research on childhood obesity). Publication authorship is therefore more exclusive than co-citation and bibliographic coupling because the number of authors who collaborate on two publications is almost always smaller than the number of joint citations or common references. (None of the 27444 publications used in the empirical analysis of this paper had more authors than joint citations or common references.) At the same time, it is more inclusive than author co-citation and co-authorship because it includes both single-authored and co-authored publications.

Co-citation, bibliographic coupling, author co-citation, author bibliographic-coupling, co-authorship, and publication authorship all yield unique insights into scientific work. In the light of the similarities and differences among these and other approaches in bibliometric studies, publication authorship is closest to bibliographic coupling, not least because it defines vertices as publications and, therefore, focuses on the themes and topics of these. The following empirical illustrations pit publication authorship against bibliographic coupling to highlight differences in the segmentation of research and a consequently more detailed scientific frontier.

Three data sets of academic articles in accounting, astronomy, and gastroenterology provide the empirical basis for the illustrations of publication authorship. The choice of academic disciplines is motivated by the idea of picking examples that are independent of each other, which is a safe assumption for scientific work in management, physics, and medicine. Indeed, there are no cross-references among the three data sets and each one exhibits its own unique features such as, for example, a smaller average number of authors in accounting than in astronomy or gastroenterology, a larger dispersion of the number of authors in astronomy than in gastroenterology, and a larger average number of references in accounting than in the other two disciplines. These and other idiosyncrasies of each discipline are, of course, reflected in the analysis below. A smaller average number of authors on publications in accounting immediately translates to a lower density in the respective bibliometric networks, and so on. The point of the empirical illustrations, however, is to compare bibliographic coupling to publication authorship across different academic disciplines, and not to compare disciplines to each other. Thus, I can safely report that data sets of academic articles in marketing, political science, and cancer research yield similar illustrations.
The data sets are compiled and downloaded from Elsevier’s abstract and citation database Scopus. They comprise academic articles published in the ten years between 2010 and 2019 in one of the top-10 journals for accounting, astronomy, and gastroenterology (see Table 17 in the for an overview of journals). The journals are ranked according to their respective CiteScore in 2019. The data sets may be replicated following a step-by-step research protocol available on GitHub. The R source code for the following illustrations of publication authorship can be found in the same location. Altogether, there are 5333 articles in accounting, 10817 articles in astronomy, and 11293 articles in gastroenterology. As usual with publication data, the data sets require considerable cleaning before further analysis. This involves the removal of double entries (e. g., pre-prints), non-article publications (e. g., editorials, notes, letters, book reviews, errata), articles without an abstract or without references, and articles with anonymous authors. For later text mining, abstracts are stripped of punctuation, stop words, and numbers, multiple white-space characters are collapsed into one, and copyright notices are removed.

I compute networks for both bibliographic coupling and publication authorship in accounting, astronomy, and gastroenterology. Vertices represent academic articles. They are connected by edges either because they share one or more references in case of bibliographic coupling or because they have one or more authors in common in case of publication authorship. Therefore, the number of vertices is the same for both types of networks while the number of edges differs from one to the other (cf. ). The difference in the number of edges among the networks already highlights the idiosyncrasies of each academic discipline. For example, the high average number of references in accounting leads to an edge count in bibliographic coupling more than 20 times higher than in publication authorship, where the average number of authors is low. Conversely, the low average number of references in gastroenterology puts the number of edges for bibliographic coupling and publication authorship almost on par. Derivative measures such as network density (i. e., the ratio of the number of edges to the number of possible edges) differ accordingly.

Interestingly, network-level measures such as transitivity and assortativity do not follow the decreasing differences in the number of edges and network density from bibliographic coupling to publication authorship. Transitivity quantifies the probability that the adjacent vertices of a vertex are connected. In other words, it points out the probability that three articles form a triangle either because they share common references or authors. Transitivity reveals that the segmentation of research in bibliographic coupling is less dense than in publication authorship in accounting, that the inverse is true in astronomy, and that the coefficients are similar in gastroenterology. Assortativity quantifies the probability that a vertex connects to other vertices that are similar in one way or another. I use the degree of a vertex (i. e., the number of connections a vertex has to other vertices) to quantify the probability that an article with many common references or authors connects to other articles with many common references or authors. Assortativity shows an increase from bibliographic coupling to publication authorship in accounting and a decrease in astronomy and gastroenterology.
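For readers who want to reproduce these network-level measures, a minimal sketch using the R igraph package follows; the network object g, standing in for either a bibliographic-coupling or a publication-authorship network of articles, is an assumption of the sketch.

    ## Minimal sketch, assuming g is an igraph network of articles
    ## (bibliographic coupling or publication authorship).
    library(igraph)
    edge_density(g)          # ratio of actual to possible edges
    transitivity(g)          # probability that two neighbors of a vertex connect
    assortativity_degree(g)  # do high-degree articles connect to high-degree articles?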
These differences first and foremost highlight that academic areas are idiosyncratic in the way they conduct research. A low number of large research segments is most often associated with looser connections among articles, whereas a high number of small research segments commonly calls for denser connections among articles.

With a description of the data in place, I further investigate the differences between bibliographic coupling and publication authorship. I first compute clusters of articles, then estimate their goodness-of-fit to the data using a measure of cosine similarity, and finally discuss the segmentation of research and the development of the respective scientific frontier. These steps follow common practice in bibliometric studies (e. g., ).

Clustering

Transitivity and assortativity offer bird’s-eye views of the clustering of networks. In order to compute clusters of vertices for bibliographic coupling and publication authorship across all three academic areas, I use a fast-greedy algorithm widely employed in network analysis. The algorithm takes edge weights as an indicator of the strength of bibliographic coupling or publication authorship. I use the cosine similarity between a set of references or authors from publication A and a set of references or authors from publication B:

cosine similarity = |A ∩ B| / √(|A| × |B|)   (1)

The weight of the respective edge between two vertices is therefore the number of references or authors the two publications A and B have in common, normalized by the square root of the product of the number of references or authors from the two publications A and B. Clusters delimit subsets of articles that share similar theoretical insight or empirical evidence based on common references or common authors. They may be thought of as schools of thought or theoretical paradigms. (A code sketch of the clustering and goodness-of-fit steps follows at the end of the next subsection.)

Consider, for example, bibliographic coupling in accounting. Seven clusters describe the majority of research in the ten years from 2010 to 2019. Four of them share common topics such as banks, information, investors, liquidity, and stock. In contrast, cluster 4 leans towards references to entrepreneurship, innovation, and knowledge. To some extent, these topics adhere to different theoretical paradigms, ranging from economics to law and social science.

Bibliographic coupling is beset by a number of troubles that publication authorship hopes to remedy. Among these troubles is the misconception that common references provide a unanimous argument. While it is true that a majority of articles cites references to back up an argument, the same references may well be used to undermine it. Bibliographic coupling is therefore ill-equipped to account for the quality of the argument by weighting common references. Publication authorship addresses this shortcoming based on the notion that authors themselves stand in for a school of thought. Authors are more likely to work together because they complement each other in their theoretical ideas, methodological approaches, or empirical interests. Conversely, scholars of opposing schools of thought are unlikely to publish together. There are famous and rare exceptions to this, of course. For example, the academic debate between Habermas and Luhmann eventually led to a joint book publication that carefully elaborated on the commonalities and differences between Habermas’ theory of communicative action and Luhmann’s social systems theory.
However, most debates take place as an exchange of arguments in the form of alternating publications or lectures between scholars (e. g., Bohr and Einstein on quantum theory or Hawking and Penrose on time-reversal invariance). Bibliographic coupling draws these academic debates together because the respective articles share common references, whereas publication authorship separates the fields of research based on the authors’ opposing schools of thought (i. e., disjoint authorship).

The number of clusters from bibliographic coupling to publication authorship jumps from seven to 278 clusters in accounting, still shows a steep increase from 26 to 138 clusters in astronomy, but slightly decreases from 70 to 62 clusters in gastroenterology. In general, bibliographic coupling yields larger clusters that are more inclusive of opposing research, whereas publication authorship produces a more fine-grained picture of schools of thought, theoretical arguments, or fields of interest. shows the distribution of clusters for bibliographic coupling and publication authorship in accounting, astronomy, and gastroenterology. Opposite to the number of articles in each cluster (gray bars) stands the cumulative percentage of cluster sizes (solid black line) and the 80-percent cut-off (dashed black line). While this cut-off is arbitrary, it puts the focus on a limited number of clusters to tell a story about the segmentation of research and the development of the scientific frontier.

Goodness-of-fit

Next, I look for evidence of how well clusters fit the bibliometric data. Given that two articles are assumed to be similar in their content based on common references or authors, I compute an additional similarity measure based on article abstracts. Following the above formula for the cosine similarity between two attribute vectors of either references or authors, I compute the cosine similarity (i. e., edge weights) between attribute vectors of abstract terms of two articles (i. e., vertices). I then use the mean intra-cluster cosine similarity to compare the goodness-of-fit of clusters for bibliographic coupling and publication authorship in accounting, astronomy, and gastroenterology. shows boxplots for the mean intra-cluster cosine similarities for bibliographic coupling and publication authorship in all three academic areas. In addition, I run a Mann-Whitney U test on the one-tailed alternative hypothesis that the means in publication authorship are greater than the means in bibliographic coupling. This alternative is true for all three academic areas. additionally shows the corresponding non-parametric measure p, which can take on values between 0 and 1. The extreme values represent entirely separate distributions of means, whereas a p-value of 0.5 indicates a complete overlap. Accounting shows a difference in mean intra-cluster cosine similarities from bibliographic coupling to publication authorship at a p-value of 0.77. Although not as large a difference, publication authorship in astronomy also yields higher means at a p-value of 0.64. Finally, gastroenterology shows a difference between bibliographic coupling and publication authorship at a p-value of 0.75 despite a decrease in the number of clusters from one to the other. The results clearly show that the goodness-of-fit of clusters in publication authorship to the content of the articles in question is better than in bibliographic coupling.
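As referenced above, here is a minimal R sketch of the clustering and goodness-of-fit steps. It is not the paper’s published code: the inputs authors (a list of author-name vectors per article), abstract_terms (a list of abstract-term vectors per article), and intra_sim_bc (the analogous similarities for bibliographic coupling) are assumptions, and computing the bounded measure p as the common-language effect size U/(n1 × n2) is my reading of the description above.

    ## Minimal sketch (not the paper's published code). Assumed inputs:
    ## authors[[i]]        character vector of article i's authors
    ## abstract_terms[[i]] character vector of article i's abstract terms
    library(igraph)

    cosine_sim <- function(a, b)
      length(intersect(a, b)) / sqrt(length(a) * length(b))

    ## Publication-authorship edges with cosine weights (Eq. 1).
    pairs <- t(combn(seq_along(authors), 2))  # all article pairs; fine for a toy set
    w <- apply(pairs, 1, function(p) cosine_sim(authors[[p[1]]], authors[[p[2]]]))
    g <- graph_from_edgelist(pairs[w > 0, , drop = FALSE], directed = FALSE)
    E(g)$weight <- w[w > 0]

    ## Fast-greedy community detection; the weight attribute is used by default.
    cl <- cluster_fast_greedy(g)

    ## Mean intra-cluster cosine similarity of abstract terms per cluster.
    intra_sim <- sapply(communities(cl), function(m) {
      if (length(m) < 2) return(NA_real_)
      p <- combn(m, 2)
      mean(mapply(function(i, j) cosine_sim(abstract_terms[[i]], abstract_terms[[j]]),
                  p[1, ], p[2, ]))
    })

    ## One-tailed Mann-Whitney U test against the bibliographic-coupling
    ## similarities (intra_sim_bc, assumed), plus an effect size in [0, 1].
    test <- wilcox.test(intra_sim, intra_sim_bc, alternative = "greater")
    p_effect <- unname(test$statistic) /
      (sum(!is.na(intra_sim)) * sum(!is.na(intra_sim_bc)))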
Research segmentation

I already established that bibliographic coupling is broader in the segmentation of research than publication authorship. The question now is: what additional insights does a more detailed picture yield? Again, I draw on networks to provide an answer for the segmentation of research and the development of the scientific frontier in accounting, astronomy, and gastroenterology. The large number of articles and of the bibliographic-coupling or publication-authorship edges connecting them is prohibitive for any practical visualization. Therefore, I first collapse articles into the clusters already obtained with the help of the algorithm presented above. I then collapse bibliographic coupling or publication authorship between articles into respective relations between clusters and take the mean inter-cluster cosine similarity to weight these relations. Finally, I remove isolated clusters to focus the attention on the central component of each research field. (A code sketch of these collapsing steps follows at the end of this section.)

shows six networks for bibliographic coupling and publication authorship in accounting, astronomy, and gastroenterology. The size of the vertices indicates the (normalized) number of articles in each cluster, ranging from a minimum of two articles up to the biggest cluster with 2966 articles for bibliographic coupling in astronomy. The color of the vertices marks the mean age (in years) of articles in a cluster on a gray scale from the youngest cluster in light gray to the oldest cluster in dark gray. In like manner, the color and width of the edges indicate the mean cosine similarity between clusters on a gray scale from the least similar relation in light gray to the most similar relation in dark gray. I use Kamada and Kawai’s layout algorithm, which is among the most commonly used algorithms to position vertices and edges. To describe the segmentation of research, I compute the term frequency-inverse document frequency (tf-idf) for article abstracts within each cluster for bibliographic coupling and publication authorship in accounting, astronomy, and gastroenterology to highlight the most prominent themes and topics. In addition to the visualization of the six networks, I report the number of articles, the mean and standard deviation of their age (in years), as well as the degree, betweenness, and closeness centrality for each cluster. A full glossary of the respective technical terminology is found in the. Degree is the simplest measure of connectivity. It counts the number of edges a vertex has to other vertices. Betweenness and closeness centrality are frequently used measures in bibliographic studies where they signal interdisciplinarity and multidisciplinarity, respectively. That is to say, the larger the number of shortest paths that go through a vertex (i. e., the more times a cluster sits in between others), the more that cluster may be considered to be interdisciplinary, and the smaller the average length of shortest paths from a vertex to all other vertices (i. e., the closer a cluster is to others), the more that cluster may be considered to be multidisciplinary.

Accounting

Bibliographic coupling in accounting yields six connected clusters. Already the three largest clusters (1, 2, and 3) combine more than 80 percent of all articles and broadly outline distinct research with only one shared tf-idf term (i. e., information; cf. ). Cluster 5 also shares some common terms with the three largest clusters but is considerably smaller and younger, which may indicate a push of the scientific frontier.
Cluster 4 sets itself apart with unique tf-idf terms such as research, universities, technology, innovation, and entrepreneurship. Nonetheless, bibliographic coupling paints a rather coarse picture for accounting. Publication authorship, in turn, promises more detail with the segmentation of research into 41 connected clusters. While there is considerable overlap in tf-idf terms among the top-ten clusters (e. g., cluster 2 shares seven terms with cluster 8 and five terms with clusters 3 and 7), some clusters exhibit exclusive terms that delineate unique lines of research (see Tables and for network measures and tf-idf terms). For example, cluster 2 centers on international financial reporting standards (ifrs), cluster 6 looks into high-frequency trading systems (hfts), and cluster 10 brings together venture capital (vc) and initial public offerings (ipo). Each of these three clusters marks a differentiation of research in accounting and thus a push of the scientific boundary.

Astronomy

Bibliographic coupling in astronomy shows a segmentation of research which is largely made up of four clusters (1, 2, 3, 4). These four clusters are closely connected to each other at the center of the network. They share tf-idf terms that any layperson would guess are descriptive of research in astronomy (e. g., galaxy, mass, star; ). With an average overlap of 7.5 tf-idf terms among them (most notably, clusters 2 and 3 share all top-ten terms, albeit in different order), the four largest clusters are too generic to constitute particular fields of interest in astronomy. Some smaller clusters are more unique in their contributions to the research field. For example, cluster 5 exhibits a large body of research on solar flares and cluster 9 features numerous studies on the formation of stars and other stellar objects. In the end, bibliographic coupling makes astronomy appear as if it were a field of research where perhaps only some newer or renewed interests (e. g., the smaller and younger cluster 9 opposite the older and larger cluster 4) are bound to push the scientific boundary.

Publication authorship splits research in astronomy into 56 connected clusters. The ten largest clusters make up almost 80 percent of all publications. This more fine-grained picture is immediately reflected in the 49 unique top-10 tf-idf terms that describe the clusters, whereas bibliographic coupling only shows 34 unique terms (Tables and ). A combination of tf-idf terms such as black, hole, and kev (kiloelectron volts) in cluster 9 then points to the latest research findings based on data from NASA’s Nuclear Spectroscopic Telescope Array. In contrast, bibliographic coupling buries this research mainly in its largest cluster 1. A similar observation can be made for research on the formation of galaxies found in cluster 3. Next to generic tf-idf terms such as galaxy, mass, and star, the additional term redshift specifically contributes to our understanding of an ever-expanding universe where light from distant stellar objects shifts towards longer wavelengths and, therefore, moves into the red end of the electromagnetic spectrum. Again, bibliographic coupling puts this research in its two largest clusters 1 and 2. Other unique lines of inquiry can be made out, too (e. g., cluster 10 on the role of solar winds in the sun’s heliosheath), but ultimately require the expert interpretation of astronomers.

Gastroenterology

Bibliographic coupling in gastroenterology presents as a dense network of 26 connected clusters.
The ten largest clusters make up a little more than 80% of all articles. The periphery is negligible with no more than 21 articles found in the seven smallest clusters. Gastroenterology is dominated by Latin terminology and medical abbreviations foreign to laypersons. Examples of research foci in gastroenterology include Crohn’s disease (cluster 1), liver cirrhosis (cluster 2), and colorectal cancer (crc; cluster 4).

Publication authorship in gastroenterology expands the number of connected clusters from 26 to 37. This more detailed picture is best exemplified with cancer research in gastroenterology. Bibliographic coupling groups gastric and colorectal (crc) cancer into clusters 4 and 6. In contrast, publication authorship clearly shows the four most common types of gastrointestinal cancers. First and second, gastric cancer (clusters 1 and 8) and colorectal cancer (cluster 6) are immediately visible as distinct fields of interest. Moreover, colorectal cancer often coincides with inflammatory bowel disease (ibd) and eosinophilic esophagitis (eoe), both of which are large parts of cluster 5. Liver cancer (clusters 1, 3, and 8) and pancreatic cancer (cluster 9) mark the third and fourth most common types of cancer. The distribution of the most common types of gastrointestinal cancer across clusters finds explanation in additional tf-idf terms that relate to common practice in treatment or diagnosis. For example, cluster 1 highlights endoscopic submucosal dissection (esd) as the preferential treatment of gastric or liver cancer in patients. In contrast, cluster 8 puts forward diagnostic research on the expression and risk of early gastric cancer. The development of the scientific frontier is not immediately apparent for publication authorship, although the more detailed picture allows even a layperson to make out clear distinctions within research sub-fields such as the focus on the treatment of gastric or liver cancer in clusters 1 and 3 as opposed to the diagnosis of these types of cancer in cluster 8. Further interpretations that may shed light on the latest developments in the research field call for the expertise of gastroenterologists.
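As referenced earlier in this section, the following minimal R sketch shows the collapsing of an article network into a cluster-level network and the Kamada-Kawai layout. It is not the paper’s published code; g (an article network with cosine edge weights) and cl (its fast-greedy clustering) are assumed inputs.

    ## Minimal sketch (not the paper's published code). Assumed inputs:
    ## g  igraph article network with cosine edge weights
    ## cl its clustering, e.g. from cluster_fast_greedy(g)
    library(igraph)

    cg <- contract(g, membership(cl), vertex.attr.comb = "ignore")
    cg <- simplify(cg, edge.attr.comb = list(weight = "mean"))  # mean inter-cluster similarity
    V(cg)$n_articles <- as.numeric(sizes(cl))                   # articles per cluster
    cg <- delete_vertices(cg, which(degree(cg) == 0))           # drop isolated clusters

    plot(cg, layout = layout_with_kk(cg),
         vertex.size = 5 + 20 * V(cg)$n_articles / max(V(cg)$n_articles),
         edge.width  = 1 + 4 * E(cg)$weight / max(E(cg)$weight))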
Publication authorship proves a point in displaying a more detailed picture of research than bibliographic coupling. Of course, it is only one methodological approach among many others used in bibliometric studies. Alternatives to bibliometric networks based on (co-)citation or author(ship) include the co-occurrences of words in the title or abstract of articles and topic modeling algorithms such as, for example, Latent Dirichlet Allocation (LDA). While bibliometric networks do not immediately compare to these alternative approaches, I discuss some findings from running a cluster analysis of word co-occurrences in abstracts as well as an LDA for the research field of accounting.
Co-word analysis

Co-word analysis looks at the intellectual organization of research based on the co-occurrences of article keywords. Its strength is a simple setup of words as vertices and edges as their co-occurrences, commonly weighted by an equivalency index similar to the measure of term frequency-inverse document frequency (tf-idf) discussed above. However, a first trouble with co-word analysis is that not all articles in scientific databases come with keywords, mostly because some journals do not require authors to supply keywords for their articles. This trouble shows most prominently when approximately one third of all articles in accounting and more than 40 percent of all articles in gastroenterology have no associated keywords. It is somewhat less of a concern in astronomy where only five percent of all articles are missing keywords. In order to have the same baseline number of articles as the above studies in bibliographic coupling and publication authorship, I use words in article abstracts instead of keywords to compute word co-occurrences.

I run the same network statistics and cluster analysis in the co-word analysis of accounting, astronomy, and gastroenterology in order to highlight similarities and differences to bibliographic coupling and publication authorship. Most notably, the number of vertices and edges increases dramatically in co-word analysis now that words instead of articles are the starting point. Density, transitivity, and assortativity hover around the same values, though they defy any immediate comparison among the disparate networks. The number of clusters steadily increases from accounting to astronomy to gastroenterology. At first sight, it appears as if co-word analysis provides a segmentation of research somewhat opposite to publication authorship where the number of clusters decreases. However, the distribution of clusters reveals that all three academic areas feature one huge cluster of words that are most common to all articles. Disregarding this pool cluster, we observe a more even distribution of words among clusters. While these word clusters describe research fields in great detail, a second and major drawback of co-word analysis is that words are exclusive to clusters. A fairly common word in accounting such as stocks, for example, then necessarily appears in only a single cluster. This calls into question the meaningfulness of clusters in the first place. Bibliographic coupling and publication authorship both provide a first layer of connectivity among articles that in later analysis allows for terms to appear in multiple, overlapping clusters, which is much better suited to describe the segmentation of research and the development of the scientific frontier.

Latent Dirichlet Allocation

An alternative to the exclusive clusters of co-word analysis is Latent Dirichlet Allocation (LDA). LDA is a generative probabilistic model based on the idea that each document (e. g., an article abstract) in a corpus is a random mix of latent topics, and each topic is in turn characterized by a probability distribution over words. For example, the terms stocks and economy are both likely to make up a topic that describes the impact of stocks on a country’s economy (e. g., Novo Nordisk’s market value has now exceeded the size of the entire Danish economy), whereas they are perhaps less likely to appear in a topic that outlines the connection between the initial public stock offering (IPO) and the economy (e.
g., California frequently has a large budget surplus due to income taxes of IPO sales). While LDA yields topics similar to the clusters of bibliographic coupling and publication authorship, it shares few commonalities with the network analysis of vertices and edges. Its biggest drawback is that the number of topics needs to be fixed a priori, though there are several ways to determine the optimal number of topics by now. Another weak spot is that it requires significant computational power. Indeed, computing the optimal number of topics in 25 iterations of LDA in accounting failed due to issues of memory allocation on the ten cores of an Apple Silicon M1 Max with 32 GB RAM. The following computations were instead carried out on 64 Intel Xeon high-performance cores with 364 GB RAM. Running time was around 35 minutes for accounting, 74 minutes for astronomy, and 116 minutes for gastroenterology. In contrast, the entire computations in bibliographic coupling and publication authorship run in less than three minutes on Apple Silicon for all three academic areas combined.

shows normalized values for a number of topics ranging from 10 to 250 in accounting, astronomy, and gastroenterology. With a look for either a minimal or a maximal value, the optimal number of topics falls somewhere between 70 and 140 in accounting, between 100 and 140 in astronomy, and between 100 and 160 topics in gastroenterology. Already the number of topics at the lower end of the range for each academic area is larger than the number of connected clusters in bibliographic coupling and publication authorship, which suggests a greater detail of research segmentation. However, LDA offers limited information on the organization of research beside the document-topic probability γ and the topic-word probability β. On the one hand, γ indicates the probability with which a topic represents a document; on the other hand, β indicates the probability with which a word is common to a topic. Taken together, shows the top-ten topics in decreasing order of their mean γ alongside the respective top-ten terms in decreasing order of their β scores. While some topics in LDA compare favorably to clusters in bibliographic coupling (e. g., topic 63 and cluster 4 on knowledge and innovation) and publication authorship (e. g., topic 54 and cluster 8 on the role of analysts in firm earnings or topic 40 and cluster 9 on the quality of audits), others certainly require their own interpretation (e. g., topic 22 on accounting practices and accountability). Unfortunately, additional information on centrality, size, or age of topics similar to clusters is not readily available in LDA. The segmentation of research by topics in LDA is perhaps similar to the one by cluster, though the development of the scientific frontier is not easy to spot, not least because of the missing information on the organization of research areas.
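For orientation, a minimal R sketch of such an LDA run with the topicmodels package follows; dtm (a document-term matrix of abstracts) and the choice of k = 100 are illustrative assumptions, not the paper’s exact setup.

    ## Minimal sketch (not the paper's exact setup), assuming dtm is a
    ## document-term matrix of abstracts, e.g. built with the tm package.
    library(topicmodels)

    fit <- LDA(dtm, k = 100, control = list(seed = 42))  # k must be fixed a priori

    post  <- posterior(fit)
    gamma <- post$topics  # document-topic probabilities (one row per abstract)
    beta  <- post$terms   # topic-word probabilities (one row per topic)

    ## Top-ten terms per topic, in decreasing order of beta.
    top_terms <- apply(beta, 1, function(p) names(sort(p, decreasing = TRUE))[1:10])
    ## Topics in decreasing order of their mean gamma across documents.
    topic_rank <- order(colMeans(gamma), decreasing = TRUE)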
Indeed, computing the optimal number of topics in 25 iterations of LDA in accounting failed due to issues of memory allocation on the ten cores of an Apple Silicon M1 Max with 32 GB RAM. The following computations were instead carried out on 64 Intel Xeon high-performance cores with 364 GB RAM. Running time was around 35 minutes for accounting, 74 minutes for astronomy, and 116 minutes for gastroenterology. In contrast, the entire computations in bibliographic coupling and publication authorship run in less than three minutes on Apple Silicon for all three academic areas combined. shows normalized values for a number of topics ranging from 10 to 250 in accounting, astronomy, and gastroenterology. With a look for either a minimal or a maximal value, the optimal number of topics falls somewhere between 70 and 140 in accounting, between 100 and 140 in astronomy, and between 100 and 160 topics in gastroenterology. Already the number of topics at the lower end of the range for each academic area is larger than the number of connected clusters in bibliographic coupling and publication authorship, which suggests a greater detail of research segmentation. However, LDA offers limited information on the organization of research beside the document-topic probability γ and the topic-word probability β . On the one hand, γ indicates the probability with which a topic represents a document; on the other hand, β indicates the probability with which a word is common to a topic. Taken together, shows the top-ten topics in decreasing order of their mean γ alongside the respective top-ten terms in decreasing order of their β scores. While some topics in LDA compare favorably to clusters in bibliographic coupling (e. g., topic 63 and cluster 4 on knowledge and innovation) and publication authorship (e. g., topic 54 and cluster 8 on the role of analysts in firm earnings or topic 40 and cluster 9 on the quality of audits), others certainly require their own interpretation (e. g., topic 22 on accounting practices and accountability). Unfortunately, additional information on centrality, size, or age of topics similar to clusters is not readily available in LDA. The segmentation of research by topics in LDA is perhaps similar to the one by cluster, though the development of the scientific frontier is not easy to spot, not least because of the missing information on the organization of research areas. Bibliometric studies are common practice in all academic disciplines. They assess the history of a research field, point out the state of the art, and identify the development of the scientific frontier. Bibliometric studies are transparent, reproducible, and scalable, making them a cost-effective way of analyzing large volumes of academic articles. In the end, they highlight idiosyncrasies of scientific work that are insightful to both laypersons and experts. From classic approaches of mapping research publications by (co-)citation and bibliographic coupling to centering on collaboration among scholars by author co-citation, author bibliographic coupling, and co-authorship, the methodology of bibliometric studies has gotten more and more technically refined. Still, there are some limitations. For example, (co-)citation analysis and bibliographic coupling do not capture the reasoning behind citations. Whether articles are cited to make or break an argument is therefore unknown.
Publication authorship does away with this limitation by accounting for both the social dimension of authorship and the intellectual dimension of scientific work. Analyzing the content of academic articles, of course, is the prime domain of natural language processing. The findings of bibliometric studies may thus be further interpreted using measures such as term frequency-inverse document frequency (tf-idf) to highlight scientific concepts that are most descriptive for academic areas (a minimal tf-idf sketch follows at the end of this passage). Together with measures on the level of vertices and edges (e. g., degree, betweenness, closeness, size, age) and on the level of the bibliometric network in question (e. g., density, assortativity, transitivity), the segmentation of research becomes not only more interpretable but also comparable across the space and time of scientific work. Of course, bibliometric studies are far from the only means of inquiry into the segmentation of research and the development of the scientific frontier. Approaches used in natural language processing such as, for example, the analysis of word co-occurrences and Latent Dirichlet Allocation (LDA) are particularly suited to capture the intellectual dimension of scientific work without necessarily inheriting the limitations of (co-)citation analysis and other bibliometric approaches. However, they are computationally costly to begin with and their findings are often harder to interpret without the backdrop of additional measures from the realm of bibliometric studies. The key differences between publication authorship and approaches in natural language processing such as LDA are what make bibliometric studies attractive in the first place. Publication authorship is transparent in both its definition of what vertices and edges are and its analysis of the respective bibliometric networks. It is easily reproducible not only across the space of multiple disciplines but also across the time of a single discipline, which allows for a comparison of different academic areas and an interpretation of the development of the scientific frontier. Last, publication authorship scales well from small fields of research to large volumes of academic articles. In contrast, LDA as a generative probabilistic model is somewhat opaque, not least because it requires the specification of the number of topics and a number of training parameters to begin with. Its findings are also more difficult to interpret without additional measures derived from the structure of scientific work. And it is computationally intense, which makes it a costly alternative to bibliometric studies. Consider that publication authorship clearly identifies themes and topics in accounting despite the lower number of clusters. For example, one cluster shows a large but rather peripheral body of work on international financial reporting standards, whereas another cluster that comprises a slightly smaller number of academic articles on high-frequency trading systems sits in the center of adjacent work in accounting. LDA is more generic, despite the fact that its higher number of clusters suggests more detail. For example, it shows a cluster about banking and credit, a cluster about innovation and research, and a cluster about trading and insider information. None of these clusters are immediately identifiable as larger or smaller, central or peripheral, older or younger.
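As announced above, here is a minimal tf-idf sketch in R; the abstracts tibble, its columns (area, text), and the dplyr/tidytext choice are assumptions for illustration rather than the tooling actually used.

    # A minimal tf-idf sketch; `abstracts` (columns: area, text) is hypothetical.
    library(dplyr)
    library(tidytext)

    words <- abstracts |>
      unnest_tokens(word, text) |>   # one row per word occurrence
      count(area, word, sort = TRUE) # word counts per academic area

    words |>
      bind_tf_idf(word, area, n) |>  # adds tf, idf, and tf_idf columns
      group_by(area) |>
      slice_max(tf_idf, n = 10)      # the ten most descriptive terms per area

Treating each academic area as the "document" makes tf-idf surface the concepts that distinguish one area from the others, which is the interpretive backdrop described above.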
Admittedly, the latest developments in artificial intelligence promise to remedy some of the shortcomings of natural language processing noted above (e. g., ChatGPT-4 suggests that Habermas and Luhmann are intellectual rivals despite the fact that they published together; at the same time, it cannot correctly identify the DOIs of either work). Unfortunately, artificial intelligence with hundreds of billions of parameters or more operates largely as a black box. Perhaps there is still room for bibliometric studies carefully rooted in theory then. Publication authorship, I argue, offers a more fine-grained picture of academic research that provides explanatory power beyond simple refinement. The illustrations of bibliographic coupling versus publication authorship in accounting, astronomy, and gastroenterology ultimately confirm significant benefits to bibliometric studies of scientific work. Moreover, the idea of connecting publications by authorship immediately extends to, for example, organization studies. Following the now popular notion that communication constitutes organization , we may conceive of corporate documents such as meeting minutes, project reports, or product presentations as communication episodes . The authorship of these episodes, in turn, provides the proverbial glue among the said documents. Documents and authorship are therefore conceived as the vertices and the edges that map out an organization as a network of communication episodes. A respective cluster analysis commonly shows the functions of an organization (e. g., accounting, engineering, marketing) similar to the sub-fields of an academic discipline . Indeed, an academic discipline may well be thought of as an organization of the scientific work conducted within the disciplinary boundaries. My hope then is that publication authorship provides another useful approach in the toolbox of bibliometric studies and beyond. S1 Appendix Top-10 journals in accounting, astronomy, and gastroenterology. (PDF) S2 Appendix Glossary. (PDF)
The Requirements for Setting Up a Dedicated Structure for Adolescents and Young Adults with Cancer—A Systematic Review
fe44fd02-23ce-4497-b245-ec30da46a293
11854605
Internal Medicine[mh]
Adolescents and young adults (AYAs) have been recognized as a distinct population in oncology with specific but often unmet needs. The National Institutes of Health and the National Cancer Institute define the AYA population as individuals aged 15 to 39 years . However, different age ranges may be used depending on local and national contexts, with upper age limits set at 24, 25, or 39 years . Due to this broad age range, AYA patients present with a heterogeneous spectrum of tumor types, characterized by distinct biological features and malignancies commonly observed in both the pediatric and adult populations. Pediatric types of cancer include, for example, acute lymphoblastic leukemia (ALL) or rhabdomyosarcoma, whereas adult cancer types include, for example, breast and colorectal cancer . Despite sharing a diagnosis, the biology may differ in the AYA population. In ALL, for example, cytogenetic abnormalities that are favorable in children, such as high hyperdiploidy or ETV6-RUNX1, are less frequent in the AYA population . Conversely, AYA patients more frequently show less-favorable factors such as BCR-ABL1 or the intrachromosomal amplification of chromosome 21 (iAMP21) . The same is true for adult types of cancer. In breast cancer, for example, women aged under 40 years are more often diagnosed with larger, poorly differentiated, and endocrine receptor-negative tumors and, further, have more frequent nodal involvement than women over 40 . In addition to the broad range of tumor types, the prevalence and incidence of cancer are higher in the AYA population than in children. As demonstrated by Coccia et al., approximately 11,000 children were newly diagnosed with cancer in 2015, accounting for 0.65% of all cancer cases in the US, whereas around 70,000 AYAs were newly diagnosed, based on SEER data from 1995 to 2015, representing 4.72% of all cancer diagnoses . For 2022, Hughes et al. estimated that, worldwide, 1,300,196 AYA patients were newly diagnosed with cancer, with 377,621 cancer-related deaths . The age-standardized mortality was especially high in low-income countries . Despite these peculiarities of the AYA population, international treatment protocols and clinical trials have predominantly focused on pediatric or adult patients; these protocols and trials have been crucial for improving survival and reducing treatment-related toxicity in those age groups. AYA patients are frequently under-represented and less often enrolled in clinical trials . In pediatric trials, for example, individuals over the age of 18 years are often excluded . Furthermore, a substantial number of AYA patients are treated in general hospitals rather than specialized cancer centers that possess the concentrated expertise required for their treatment . Although overall survival rates for AYA patients have improved in recent decades, progress for many tumor types has lagged behind other age groups. The lower inclusion rate in clinical trials, the high variety and uniqueness of different tumor types, and the fact that many AYA cancer patients are not referred to or treated in specialized oncological centers may partly explain this lower survival rate . In addition to biological and medical aspects, AYAs are at a critical stage in their lives, where psychosocial pressure can be immense. They face many psychosocial challenges, such as working on their education, achieving financial independence, forming relationships, and in some cases managing family responsibilities.
A cancer diagnosis can profoundly disrupt these developmental milestones, leading to a marked decline in quality of life . Additionally, AYAs with cancer are more vulnerable to psychological stress than their peers, and their risk of suffering from psychological problems is increased . Compared with long-term survivors of childhood cancer, they also experience more post-traumatic stress disorder, anxiety, depression, and fatigue . Even though AYA cancer patients face these immense challenges, recognition among healthcare providers of the unique psychosocial challenges and needs of this age group remains limited . The establishment of dedicated AYA units has thus gained increasing importance, aiming to better address both physical and psychosocial aspects of care and to improve the overall care experience for this population. Haines et al., for example, recently published a practical guidance to further develop and implement AYA units in the US . Current models of care in pediatric and adult oncology settings, such as family-centered approaches in pediatric settings, are not tailored to meet the specific needs of AYA patients . This group has been described as being in a “No Man’s Land”, receiving care that often fails to address their unique needs . AYA units are intended to be specialized, designed to deliver age-appropriate care, comprehensive support, and resources that cover both the clinical and psychosocial needs of AYA patients. These units aim to create an environment that goes beyond pure cancer treatment to address the broader psychological and social challenges faced by AYA patients. Despite the increasing adoption of AYA units, there remains a critical need for a systematic evaluation of their structure, implementation, and effectiveness. To address this gap, we conducted a systematic literature review to assess and evaluate the available evidence for recommendations to establish new AYA units dedicated to oncological care. We conducted a systematic literature search in PubMed in December 2023, following PRISMA guidelines . The search was restricted to studies published between 1 January 2000, and 1 January 2024. A search update was performed in November 2024 and included publications up to October 2024. The search strategy focused on three key concepts: “tumors/oncological diseases”, “adolescents and young adults”, and “set up/models of care/delivery of health” . Inclusion criteria were defined using the PICO framework. The population of interest comprised AYA patients, defined as individuals aged 15–39 years at diagnosis and treated in dedicated AYA units for any cancer diagnosis. Studies that specifically addressed AYA patients but used alternative age ranges were also included. We did not include comparators in the PICO framework, as they were not relevant to the questions of this systematic review and we assumed they would not be reported in most eligible studies. The intervention was defined as the structure and set up of AYA units or programs, encompassing logistical, infrastructural, and personnel-related components. The primary outcomes were the age definitions used for AYA patients (the age definition for this group is an often-discussed point in the oncology community and differs between countries) and the recommendations for establishing new AYA units.
Relevant parameters for the latter included logistical aspects (e.g., AYA unit located within pediatric or adult hospitals, primary medical lead), infrastructural elements (e.g., facilities such as recreation rooms), and personnel considerations (e.g., specialized training, availability of social workers). Based on the extracted data, we planned to analyze potential indicators of the medical, financial, or political benefits of establishing AYA units as secondary outcomes if possible. Four authors (C.V., M.O., N.B., L.R.v.R.) independently screened all titles, abstracts, and full texts. Each publication was reviewed by two authors, with discrepancies resolved by a fifth reviewer (K.S.). Data were extracted from eligible studies into a standardized data sheet, including details such as first author, publication year, study design, patient characteristics, set up recommendations, and quality indicators, where available. Each study’s quality, relevance, and reliability were assessed by two authors. We used the Joanna Briggs Institute’s critical appraisal tools ( https://jbi.global/critical-appraisal-tools , accessed on 8 July 2024), appropriate to each study type (e.g., appraisal tool for textual evidence narrative). Although the tool does not categorize studies by quality, we established a grading system: studies scoring five or six points (out of six) were classified as “Quality 1” (high quality), those with three or four points as “Quality 2” (medium quality), and those scoring one or two points as “Quality 3” (low quality). Outcomes were planned to be described narratively based on the extracted data. This systematic review was registered with PROSPERO ( https://www.crd.york.ac.uk/prospero/ ; CRD42024505963, accessed on 4 February 2024). The systematic literature search identified 2564 records, with an additional 92 records included from reference screening of the identified reviews. Following the title and abstract screening, 94 records were assessed for eligibility. Ultimately, 7 original articles were included in the final analysis, while 87 records were excluded—primarily due to outcomes other than recommendations for setting up an AYA unit (n = 56) or being review articles (n = 23) . All studies received six points in the quality assessment and were classified as Quality 1. 3.1. Study Characteristics, Age Definition, First Steps, Multidisciplinary Teams, and Models of Care All the included studies were of a descriptive nature, mainly being narratives or expert opinions . There were different age definitions of AYA patients used in the studies. The lower age cutoff ranged from 13 to 16 years and the upper limit from 24 to 39 years . The reasons for choosing the age ranges were not mentioned . The early involvement of political and medical stakeholders and leaders was identified as a critical early step in five studies . Additionally, the concept of an AYA advocate or champion was highlighted in four studies . These advocates/champions can, for example, conduct a multidisciplinary analysis of the current state of AYA care, initiate the planning of future services, advocate for resources, promote evidence-based clinical care, educate clinical personnel, and raise awareness . The importance of having a clear financial situation from the very beginning was highlighted in three studies, also to prevent dissatisfaction in the involved personnel . 
Collaboration between adult and pediatric oncology teams from the very beginning, already at the stage of the initial planning of the AYA unit, was universally recommended to ensure comprehensive care for AYA patients . All studies stressed the importance of establishing a multidisciplinary team (MDT) to address the complex and diverse needs of AYA oncology patients . Across studies, key MDT members included lead clinicians, nurses, pediatric and adult oncologists, psychosocial support teams, and allied health professionals, such as dietitians, social workers, and physiotherapists . While no specific model of care was universally recommended, the studies emphasized the need to adapt and evolve care models based on local circumstances, reflecting the variability in available resources and patient demographics . Haines et al., for example, described a “consultation-based” model allowing AYA-focused services to be available across disease groups and settings . The service spanned inpatient and outpatient settings in pediatric and adult oncology. Osborn et al. describe the situation of five centers across eight Australian jurisdictions, where the models differ depending on the jurisdiction: multiple hospital-based lead sites, a collaborative network partner model, a single statewide service (mobile team) working across adult and pediatric sectors and across regions, and a single lead site with a statewide responsibility . 3.2. Clinical Trial Inclusion, Logistical Recommendations, and Further Aspects The inclusion of AYA patients in clinical trials emerged as a central theme, with six studies stressing the importance of increasing participation due to the historical under-representation of AYAs in clinical research . Carr et al. proposed an “informed opt-out” consent model to enhance trial enrolment, while both Magni et al. and Haines et al. highlighted the need for trial protocols specifically tailored to AYA patients . Five studies advocated for the development of AYA communities or networks to facilitate the creation of treatment guidelines and educational programs . Creating an AYA-friendly environment was mentioned in six studies .
Facilitators included the establishment of national AYA networks and communities, having financial and political support, an analysis of the current situation, the engagement of key stakeholders including healthcare professionals, allied health staff, and also patient and caregiver advocacy . The development of AYA oncology as a recognized specialty was suggested as a long-term solution to some of the barriers to care .
The results of this systematic review reveal the diversity of approaches to establishing AYA oncology units while identifying common essential components for successful implementation. A key finding across all the studies was the early involvement of relevant stakeholders in the planning process, which was universally recognized as crucial to the successful establishment of AYA units. Through this approach, members from the pediatric and adult oncology teams feel a sense of belonging right from the start, can actively contribute, and do not feel left out. Having an AYA champion from the start, as mentioned by several included studies, might facilitate these efforts . Several studies highlighted the significant barriers posed by the lack of collaboration between pediatric and adult oncology teams and the challenge of shifting established institutional practices. Reed et al. even described it as a ‘turf war’ between medical and pediatric oncologists, arguing over who is better suited to treat AYA patients . Given the broad spectrum of tumor types in AYA patients, which are normally managed by either pediatric or adult oncology specialists, strong collaboration between these teams is essential to ensure optimal care for AYA patients. For example, studies have shown that AYA patients with acute lymphoblastic leukemia or sarcomas, which are more often seen in pediatric patients, achieve better survival outcomes when treated with pediatric protocols . Furthermore, a well-established referral and transition pathway will facilitate the collaboration between the different pediatric and adult specialties and will help to increase the number of AYA patients treated in a specialized ward . Another important factor in setting up an AYA unit is strengthening the AYA community at the local, national, and international levels.
This strategy can increase acceptance and support for these efforts from both political and medical sectors. Moreover, it may lead to improved educational opportunities for the medical team and the development of more comprehensive guidelines, thus resulting in better care. Patterson et al. concluded that established national systems and coordination seem to lead to higher patient satisfaction, as well as age-appropriate information, support services for AYA patients, and specialist services . Over the past few decades, significant progress has been made and European collaboration groups have been created, as highlighted in an official position paper on AYA treatment in 2021 by both the adult European Society for Medical Oncology (ESMO) and the pediatric International Society of Pediatric Oncology (SIOP Europe) . An interesting and consistent finding across the studies was the variation in age definitions for AYA patients, reflecting the long and ongoing debate over the most appropriate age range for this population. A consensus on a standardized age range could help to guide and standardize clinical care and research. The 2021 ESMO/SIOPE position paper on AYA care proposed an age range of 15–39 years to harmonize the internationally varying definitions . However, a broader age range has the disadvantage of encompassing very different psychosocial needs and cancer biologies. Psychological needs differ widely between cancer patients aged 20 years and 39 years, for example in terms of maturity, partnership and sexual life, and worries about the future. The same is true for social aspects such as education and career or economic independence. Regarding biological aspects, epithelial cancers are more commonly seen in those aged 25 years and older . The recommended settings and models of care for AYA units also varied significantly. Some studies advocated for consultation-based models integrated within both pediatric and adult oncology services, while others suggested the establishment of dedicated lead sites serving regional or national populations . Both Osborn et al. and Haines et al. emphasized that the appropriate model of care should be determined by local circumstances and the existing structures and resources available. Currently, there is no evidence to indicate which model of care is most favorable for AYA patients . A consistent theme across all the studies was the importance of establishing a multidisciplinary team to address the complex needs of AYA patients. In particular, the need for psychosocial support and appropriate screening is repeatedly mentioned, because the psychological burden of these diseases is high . Additionally, creating age-appropriate care environments that foster social interaction and promote psychosocial well-being was emphasized. The focus on flexible designs and dedicated recreational areas reflects an understanding that AYA patients face not only medical challenges but also significant quality-of-life concerns during and after treatment . Similar findings have been reported for AYA patients with chronic diseases: the psychological significance of AYAs with chronic illnesses being surrounded by peers of a similar age and having a role model within their age group has been demonstrated . Another critical point discussed was the under-representation of AYA patients in clinical trials. Several studies emphasized the need to prioritize AYA trial inclusion when designing new AYA units.
The “informed opt-out” consent model proposed by Carr et al., in which patients are automatically enrolled in trials unless they actively decline, could help increase participation. This approach could mitigate the current under-representation of AYA patients in oncology trials, which has contributed to stagnation in survival outcomes compared to other age groups. Addressing the limited number of trials open to AYA patients, often due to restrictive age criteria, is another critical step . To tackle this problem, the Working Group on Fostering Age-Inclusive Research (FAIR) was launched in 2017 by the ACCELERATE Forum ( https://www.accelerate-platform.org/fair-trials , accessed on 4 October 2024). The FAIR aims to raise awareness about the problem of the upper age limit of 18 years in pediatric cancer trials and to promote change. This age limit is arbitrary and without medical justification. A similar effort is also underway in the US with the “Children’s Oncology Group Adolescent and Young Adult Responsible Investigator Network” . Fertility counseling was consistently identified as an important component of AYA care. The potential long-term impact of not offering fertility preservation is significant for this patient group, and multiple studies emphasized the importance of providing fertility counseling to all at-risk patients, which should be standard of care . The limitations of this systematic review are linked to the data provided in the studies. Only seven studies were included, as reviews could not be included for methodological reasons. The design, the included populations, and the outcomes of these studies were heterogeneous. They were mainly retrospective analyses/narratives and expert opinions, conducted in a descriptive form, and no prospective data were collected. The strengths include the comprehensive approach of screening titles, abstracts, and full texts by two independent reviewers, and the detailed quality assessment of the included studies, ensuring reliability and relevance. Despite these findings, there are still significant gaps in evidence-based guidelines for AYA oncology care. Only seven studies met the inclusion criteria for this systematic review, indicating a scarcity of data in this field. While the included studies provide valuable recommendations and experience for the structure and organization of AYA units, data on the long-term outcomes and effectiveness of these programs remain limited. Future research should focus on developing standardized metrics to evaluate the impact of AYA care models on patient survival, quality of life, and psychosocial outcomes. In a workshop carried out in 2011 by the Canadian National Task Force on Adolescent and Young Adult Oncology, supported by the Canadian Partnership Against Cancer and the C17 network, relevant categories of outcomes for AYAs with cancer and respective metrics to assess them were defined . The defined categories range from epidemiology, screening and prevention, access and place of care, and psychosocial health and quality of life to survivorship and economic aspects. Economic metrics to assess the cost/benefit ratio of care are crucial not only to improve the care for AYA patients but also to justify the financial support and funding needed to establish dedicated AYA structures . Ferrari et al. evaluated eight different metrics in a single center in Italy . They considered these metrics important so that AYA projects are accepted as standard of care by medical and political stakeholders .
The prospective, longitudinal, observational BRIGHTLIGHT cohort was launched in the UK in 2012 and collected data to evaluate the benefits of AYA services there . Aspects considered in this cohort included quality of life, satisfaction with care, clinical processes and outcomes, patients’ experience of cancer care, social and educational milestones, and the costs of care. Today, BRIGHTLIGHT has expanded beyond this initial cohort study and includes a broad range of studies and projects related to cancer care in the AYA population . Charities can be additional sources of information about care structures and the AYA patients’ perspective. Teenage Cancer Trust in the UK, for example, published in 2012 a blueprint of care for teenagers and young adults with cancer . Additional charities and organizations are “Young lives vs. Cancer”, “CanTeen”, and “Teen Cancer America”, some of which also provide information about desired care structures. A good overview of these structures is given by Ferrari et al., who emphasize the importance of these organizations . In conclusion, while notable progress has been made in understanding the unique needs of AYA oncology patients, much work remains in formalizing the structure, funding, and collaboration required for the successful establishment of AYA oncology units. Further research is essential to evaluate the long-term outcomes and effectiveness of these programs.
Importance of medical home domains on emergency visits using a cross-sectional national survey of US children
a3d231ca-9365-4aa6-ac19-c0902801917e
11535676
Patient-Centered Care[mh]
Children in the USA have approximately 34 million emergency department (ED) visits per year with one study estimating the median cost per visit in 2016 to be around $1300. Visits to the ED by children have been steady for many years, with recent upticks in visits for mental health-related concerns. While there will always be necessary ED visits among children, effective ambulatory care has been shown to reduce ED visits and costs in children. Over the last decade in the USA, efforts to improve the quality of ambulatory care for children have focused on delivering care through a patient-centred medical home (PCMH) after it was introduced as a goal in Healthy People 2010. PCMH is a set of guidelines and processes for the delivery of ambulatory care. The American Academy of Pediatrics specifies seven qualities essential to care from a PCMH: accessible, family-centred, continuous, comprehensive, coordinated, compassionate and culturally effective care. In the USA, a practice can be accredited as PCMH by organisations like the National Committee for Quality Assurance, which will evaluate and ensure compliance with the PCMH criteria. Several practices within the USA are certified as PCMHs; however, many are not. PCMH evolved from a simple desire to better coordinate care for children in the 1960s, with a focus on children with chronic conditions or special needs, to the recommended way to provide care for all children in general paediatric practices today. In the early 2000s, most children received appropriate care in one or more domains, but few received all the domains of PCMH care. Studies have confirmed that effective ambulatory care via PCMHs is also associated with reduced ED visits among all children as well as those with chronic conditions and special medical needs (children with special healthcare needs (CSHCN)). One recent paper from this team showed that gaps in care coordination (CC) are associated with increased ED visits among all children, not just those with chronic conditions or special medical needs. There have also been studies on the association of each specific domain of PCMH on ED visits, such as coordination of care and family-centred care. The main concern with the PCMH is that its uptake among children has stalled after some early successes. In fact, a US national survey of children in 2016 reported that 48.6% of children had access to a PCMH and 46.6% had access in 2021, indicating no recent improvement. In contrast, adverse social determinants of health (SDoH), such as lower socioeconomic status and household poverty, are associated with increased ED visits in children. Studies have shown that the different domains of the PCMH may affect ED use differently depending on the patient’s SDoH. Further, PCMH has failed to reach all children equitably. In 2020–2021, 55.6% of white children reported receiving care from a PCMH compared with 34.7% of Hispanic and 37.1% of black children. One recent paper from this team showed that children with adverse SDoH, particularly those with multiple adverse SDoH, are more likely to experience gaps in CC. Hence, understanding which domains of the PCMH are most important in populations experiencing adverse SDoH is essential to finding ways to increase the uptake of PCMH and eventually reduce ED visits. However, there have been no studies that rank the relative importance of PCMH among children overall or among those with SDoH with the specific outcome of ED visits. 
Two other factors that are known to affect ED use among children are CSHCN status and age of the child. Understanding how these factors interact with PCMH and SDoH is critical when studying ED visits. By including several domains of the PCMH, multiple adverse SDoH and other attributes (age, gender and CSHCN), one can determine their relative importance on ED visits. This approach helps disentangle their interrelationships and provides guidance for providers and policymakers on how to prioritise domains of PCMH based on the social circumstances, age and CSHCN status, to provide ambulatory care that best fits the patients’ needs and reduces unnecessary ED visits. Hence, the primary objective of this study is to understand the relative importance of each PCMH component among different populations with adverse SDoH on the outcome of ED visits. Secondarily, we look at the inter-relationships among the domains of PCMH, adverse SDoH, CSHCN and age categories (infants, young children and teenagers) to determine the relative strengths of their associations with ED visits among children. We use a machine learning technique to understand the relative importance of these different elements after considering their association with each other and to the outcome of ED visits. This study uses the National Survey of Children’s Health (NSCH), an annual survey of parents and caregivers of children in the USA from birth to 17 years of age. These data are available for public use, and this project was deemed exempt by the Weill Cornell Medicine Institutional Review Board. Data source and population The survey is managed by the Maternal and Child Health Bureau which is a division of Health and Human Services, and it conducts this survey with the US Census Bureau. The survey is administered annually in all 50 states and DC in English and Spanish. Data collection typically starts in June, and the complete dataset is released in October of the following year. NSCH also releases pooled data that combines responses from multiple years to enhance the sample sizes and reliability of the estimates. Our study used the pooled data from 2 years: 2018 and 2019. The NSCH has a complex design as it oversamples households with children between birth and 5 years old and, separately, children with special needs. The surveys are completed by an adult, usually a parent, who can respond by mail or online and who has knowledge of the child’s health. There are age-specific surveys with targeted questions for children 0–5 years, 6–11 years and 12–17 years. More details of the survey are published elsewhere. Overall response rates were around 40% and hot-deck imputation techniques were employed for sex, race and ethnicity variables in the survey. All children are screened using five criteria to classify them as CSHCN or not. Screening criteria include regular use of medications; routine need for medical, mental health or educational services; use of specialised services (speech, occupational or physical therapy); inability to do things like their peers; or receiving treatment for an emotional, behavioural or developmental problem. CSHCN are defined as having at least one of the above criteria that is expected to last at least 12 months. Children who do not meet any of the criteria are classified as non-CSHCN. 
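As a minimal sketch of this screener logic in R (the language named later in the methods), the derivation might look as follows; the data frame nsch and all column names are hypothetical placeholders, not actual NSCH field names.

    # Hypothetical sketch: a child is classified as CSHCN if at least one of the
    # five screener criteria applies; per the screener, each qualifying need must
    # stem from a condition expected to last at least 12 months (simplified here
    # to a single duration flag). Column names are illustrative, not NSCH fields.
    library(dplyr)

    nsch <- nsch |>
      mutate(
        cshcn = as.integer(
          (med_use | service_need | specialized_therapy |
             functional_limit | emotional_treatment) & lasts_12_months
        )
      )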
Patient-centred medical home The PCMH components were operationalised in the survey by classifying parent/caregiver responses into five domains: CC, having a personal doctor or nurse, having a usual source of care, family-centred care and ease of getting referrals. details the questions and the definitions as well as the responses used to define each domain. Social determinants of health We used the WHO conceptual model and the Healthy People 2030 implementation guidelines to select adverse SDoH. SDoH includes five distinct domains: (1) social and community context, (2) economic stability, (3) education access and quality, (4) healthcare access and quality and (5) neighbourhood and built environment. describes the categories and the specific questions used to operationalise them. Adverse SDoH often cluster together within the same individual. For example, an individual with low education may also have low income and live in a poor neighbourhood. As such, we generated a variable that reflected the number of SDoH for each individual and calculated a score ranging from 0 to 5. We further classified the score to indicate none, 1, 2 and 3 or more adverse SDoH. The rationale behind this SDoH score is to capture the cumulative burden of adverse SDoH for each child. The primary outcome measure was ED visits dichotomised from the NSCH survey question: ‘During the past 12 months, how many times did this child visit a hospital emergency room?’ Those with one or more visits were coded as 1 and those with none as 0. Statistical analysis We reported the unweighted number and the weighted percentages for each domain after removing missing values. After a visual analysis of the importance of PCMH domains by SDoH populations, we used a split-improvement variable importance measure based on random forests (RFs) to obtain the importance of each explanatory variable. RFs are a modern ensemble method which combines many estimated decision trees into an overall model for the endpoint given the explanatory variables. This method has several advantages. First, RFs are a flexible machine learning method which can detect nonlinear variable effects and interactions rather than merely linear effects, as in a generalised linear model such as linear or logistic regression. Second, the aggregate of trees which defines RFs results in estimates having lower variance compared with estimates based on just one tree. Third, RFs can take into account observation weights such as the sampling weights in survey response data. The variable importance measure we used is based on ‘Gini impurity’, which directly measures the degree of improvement in prediction attributed to each explanatory variable. The Gini impurity is a well-established measure for decision trees. We used a modern variable importance measure which updates the Gini impurity to appropriately compare continuous and categorical variables. Further, there are well-developed computational packages which support RFs and associated variable importance measures; we used the R package ‘ranger’ for the analyses. We report the importance rank that reflects the importance of each variable after adjustment for all others in the model and the corrected Gini impurity score that shows how beneficial it is to split on a variable. The higher the value of the score, the more beneficial it is. If the score is negative, it indicates that the variable is detrimental to split on.
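A minimal sketch of this model with ranger, the package named above, might look as follows; the data frame and all variable names (including the weight column) are hypothetical stand-ins for the NSCH fields, not the authors' actual code.

    # Minimal sketch of the weighted random forest and the corrected Gini
    # importance; all variable names are hypothetical stand-ins.
    library(ranger)

    nsch$ed_visit    <- factor(as.integer(nsch$ed_visits_12m >= 1))  # 1+ visits vs none
    nsch$sdoh_burden <- cut(nsch$sdoh_count, c(-Inf, 0, 1, 2, Inf),
                            labels = c("0", "1", "2", "3+"))         # cumulative burden

    fit <- ranger(
      ed_visit ~ cc_gap + personal_doctor + usual_source + family_centered +
        referral_problems + age_group + sex + cshcn + sdoh_burden,
      data         = nsch,
      num.trees    = 1000,
      importance   = "impurity_corrected",  # bias-corrected Gini impurity measure
      case.weights = nsch$survey_weight     # NSCH sampling weights
    )

    # Rank variables; a negative score means splitting on the variable is detrimental.
    sort(fit$variable.importance, decreasing = TRUE)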
Candidate variables for stratified analysis The main objective was to rank the importance of the five domains of PCMH overall and stratified by SDoH and demographics. However, to determine the ‘best’ candidates for stratification, we first ran a model that ranked the five domains of PCMH, the five SDoH variables and three potential confounders (age, gender of the child and CSHCN status) on the endpoint of ED visits for the total population. Next, we selected the highest-ranked covariates and stratified the population by these variables. Third, we built separate models for each of the strata selected in the prior step where we ranked the importance of PCMH domains. Finally, we ran additional models using the measures of overall burden of SDoH. The National Survey of Children’s Health 2018–2019 data can be accessed at the data resource centre http://www.childhealthdata.org . All analyses were performed using SAS V.9.4 (Cary, NC) or R V.4.2.2.
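Continuing the hypothetical sketch above, the stratified step can be expressed as refitting the model within each selected stratum; the stratum variable and model formula remain illustrative assumptions.

    # Refit the importance model within each stratum chosen from the overall
    # ranking (age group shown; CSHCN status, education, and race/ethnicity
    # strata would be handled the same way). Names remain hypothetical.
    strata <- split(nsch, nsch$age_group)

    ranked <- lapply(strata, function(d) {
      fit <- ranger(
        ed_visit ~ cc_gap + personal_doctor + usual_source +
          family_centered + referral_problems,
        data         = d,
        num.trees    = 1000,
        importance   = "impurity_corrected",
        case.weights = d$survey_weight
      )
      sort(fit$variable.importance, decreasing = TRUE)
    })
    ranked   # one PCMH-domain ranking per stratum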
Sample characteristics There were 59 993 children in the survey, and about 33% were in each age group (infants, from birth to 5 years; young children, 6–11 years; and teenagers, 12–17 years). Half (51%) were male, and 19% were classified as CSHCN. Overall, between 3% and 28% experienced some gap in the PCMH domains, with 15% experiencing gaps in the domain of CC, 11% experiencing poor family-centred care, 3.4% having problems with referrals, 28.2% having no personal doctor or nurse and 24% having no usual source of care ( ). In terms of SDoH, 37% met criteria for adverse social and community context (being Hispanic or black), 68.1% for poor economic stability, 26.1% for lower education access and quality, 26.9% for inadequate healthcare access and quality and 13% for poor neighbourhood and built environment. The proportion of children experiencing gaps in PCMH components was almost always higher among children with any adverse SDoH compared with those without any SDoH. Among children with poor neighbourhood and built environment, 7% reported problems with referrals compared with 3% of those who did not experience this adversity ( ). Further, the additional burden of SDoH was significantly associated with more gaps in care in all domains of PCMH. For example, 40% of children with three or more adverse SDoH had no usual source of care compared with 12% for children with no adverse SDoH. Most strikingly, of children with inadequate healthcare access and quality, 24% reported a gap in CC compared with 12% of those who had adequate access. Candidate variables for stratification We chose variables for stratification based on their importance ranking in the overall model on all children. As such, age category, which was ranked as the most important predictor (rank, 1; Gini, 131.2), CSHCN status (rank, 5; Gini, 56.6), lower education access and quality (highest education is high school or less (rank, 4; Gini, 58.4)) and adverse social and community context (being black or Hispanic (rank, 2; Gini, 50.7)) were chosen for subgroup analysis ( ). Relative importance ranking of PCMH domains shows a heatmap of the relative importance of the five PCMH domains overall and for specific subgroups. It shows that problems with referrals and gaps in CC are among the two most important domains of PCMH associated with ED visits in children overall, with some slight variations ( , ). In age-stratified analysis, among children aged birth to 5 years, poor family-centred care and problems with referrals were the two most important predictors of ED visits. Among the subgroup of CSHCN, problems with referrals and having no usual sources for sick care were the most important predictors. Among older children and non-CSHCN, the results were consistent with the overall population ( , ). Among those with adverse education access and quality, gaps in CC and problems with referrals were the two most important domains of the PCMH to predict ED visits ( , ). Among those with adverse social and community contexts, problems with referrals and gaps in CC were also the most important domains of the PCMH to predict ED visits ( , ). Results by burden of SDoH Finally, the model that included the burden of SDoH identified this variable as the single most important predictor of ED visits (rank, 1; Gini, 83.5) ( ). Patient and public involvement Patients were not involved in the aims of this study or the analysis of this data.
This study uses data from a US nationally representative survey and a novel methodological approach to highlight the most important aspects of the PCMH that predict ED visits among children. It shows that problems with referrals and gaps in CC are the two most important elements of the PCMH for ED use among children, after adjusting for age, CSHCN status and SDoH. Our study findings suggest that reducing gaps in CC and referrals may lower ED use, especially among children aged 6 to 17 years. We also observed that family-centred care needs should be prioritised for the youngest children (birth–5 years) and a usual place of care for children with special needs. This study builds on prior work by this team that showed how adverse SDoH can increase gaps in CC and how these gaps can be associated with more ED use in all children, not just those with special healthcare needs. This study includes the five PCMH domains, five adverse social determinants (including a burden of SDoH) and other important factors such as the age of the child and CSHCN status and uses a supervised classification algorithm to determine their relative importance for ED use. Our study adds to a growing area of research that looks at the individual components of the PCMH to better inform how this model of care can be improved for all children. The advantages of the PCMH in reducing ED visits are clearly documented ; however, the recent inability to increase the proportion of children getting PCMH care has given impetus for this type of research. Consistent with prior work, our study emphasises the importance of reducing gaps in CC and problems with referrals for minority populations (black and Hispanic) and for those children living in households with lower socioeconomic status (less than high school). This is particularly interesting as the adoption of the PCMH has been much slower among communities of colour, and such prioritisation can help policymakers reach these communities better. Our findings are supported by two studies that used earlier versions of the NSCH survey. One study sought to understand the contribution of each PCMH domain to ED visits using multivariable logistic regression analysis. It showed that those receiving CC when they needed it, those with higher education and white children had lower odds of ED visits; these results are consistent with the findings from this study. However, our study uses an RF method of ranking the relative importance of each domain and uses more recent survey results. 
The second study uses the NSCH to describe ethnic disparities in access to the different components of the PCMH. Our study adds to this by highlighting the importance of gaps in CC and referrals for these communities to improve care and lower ED visits. Finally, our study finds that the burden of SDoH is the single most important predictor of ED visits, confirming the need for keeping SDoH top of mind when designing interventions to improve care for children. It shows that the interplay of the PCMH, a set of enabling components to improve care, and SDoH, a set of predisposing characteristics that can detract from care, must be disentangled to address the current inequities in children’s care. Further research with other data sources and analytic methods is necessary to make the PCMH more adaptable to fit the prioritised needs of different, particularly disadvantaged, communities. Limitations This study has some limitations. First, the survey is cross-sectional, and we cannot discern the temporal relationship between ambulatory care and ED visits. Second, unmeasured confounding may exist. Third, the survey did not ask about the reasons for the ED visits, so we are unable to determine whether these visits could have been prevented with improved ambulatory care. Fourth, since this survey is based on parent or caregiver reports, the answers to questions about the PCMH may not align with how the practices see themselves or what PCMH certification they may have received. Fifth, we also acknowledge that these relationships may have changed during the COVID-19 pandemic. Finally, statistical comparisons are not possible in importance analysis with RF in surveys as standard errors in this setting have not been developed. Conclusion This study underscores the important role that addressing problems with CC and referrals may play in reducing ED visits for children, especially those with adverse SDoH. Future studies should identify the mechanisms by which better CC and improved referral processes may reduce ED visits. Strategies to expand the reach of the PCMH should consider prioritising these two domains, especially in geographic regions that serve a greater proportion of underserved populations. 
Genetic factors in the pathogenesis of cardio-oncology
9484591f-4a3b-4812-a2ae-cc1b648bf731
11301970
Internal Medicine[mh]
In recent years, with advancements in tumour diagnosis and treatment, especially precision medicine guided by multiomics approaches, molecular targeted therapy, immunotherapy, and other treatments, the survival period of cancer patients has continuously increased. Many types of tumours gradually develop into chronic disease-like patterns after treatment . With the increase in cancer patient survival, an array of adverse effects associated with anticancer treatments is becoming more pronounced. Among these, cardiac toxicity is a significant concern and is emerging as a leading cause of mortality in cancer patients. For example, in breast cancer patients older than 66 years, cardiovascular disease (CVD) (15.9%) has surpassed breast cancer-related events (15.1%) as the primary cause of death . A meticulous evaluation of cardiovascular risk factors is imperative prior to the initiation of anticancer treatments for the prevention and early detection of cancer therapy-related cardiovascular toxicity (CTR-CVT). A comprehensive assessment, followed by the appropriate initiation of risk-reduction strategies, can significantly reduce the risk of developing cardiovascular complications. Various risk factors have been identified, including previous cardiotoxic therapy, previous cardiovascular disease, lifestyle risk factors, hypertension, and diabetes (Fig. ). The use of these risk factors allows for the stratification of patients, identifying those at high risk of CTR-CVT . However, these parameters currently still have many limitations. It remains unclear why some high-risk patients do not experience cardiac toxicity after receiving cardiotoxic drugs, whereas some low-risk patients still do. This difference may be due to the differential susceptibility of patients to cardiac injury, which may depend on the patient’s genotype. Moreover, this screening method has low specificity. The mechanisms of cardiac injury caused by chemotherapy, targeted therapy, and immunotherapy differ; hence, these parameters do not effectively identify populations at high risk of CTR-CVT. Therefore, more specific indicators are needed to stratify patients. Genetic studies provide a novel approach to identifying individuals susceptible to CTR-CVT. Genetic variations can affect cardiac susceptibility to drugs through various mechanisms. Some genetic alterations affect the transport of antitumour drugs in the body , and others influence drug metabolism . Certain genetic variations can induce cardiac injury through the generation of reactive oxygen species (ROS) , and others can affect the immune system, leading to immune-related cardiac damage . Genetic screening based on patient genotypes therefore provides a more specific method for identifying patients at risk of cardiac injury. Recent studies have identified genetic changes associated with cardiac toxicity induced by anthracycline-based chemotherapy drugs and have begun to explore gene variations related to cardiac toxicity induced by targeted therapy and immunotherapy. We conducted this review to summarize the advancements in this field and to assist oncologists and cardiologists in gaining a comprehensive understanding of it, thereby enabling the implementation of preventive and intervention measures to prevent and treat CTR-CVT. 
In this study, we review the relationships between genetic variations and CTR-CVT, elucidating the associations between genetic variations and chemotherapy-related cardiac injury. We also review the relationships between genetic variations and targeted therapy-related cardiac injury, between genetic variations and immune-related cardiac injury, and between other types of genetic alterations and cardiac injury. We hope to provide several valuable insights for the prediction, early diagnosis, and management of CTR-CVT. Gene variants associated with cardiac injury induced by chemotherapy Gene variants related to drug transport Variations in drug transport genes are among the factors contributing to treatment-related cardiac toxicity. Adenosine triphosphate-binding cassette (ABC) transporter proteins play active roles in transporting multiple drugs, including anthracyclines, across cellular membranes . In humans, multiple ABC genes encode transmembrane proteins involved in the transport of a wide range of drug substrates. Within the myocardium, ABC transporters facilitate the export of various chemotherapeutic agents from cardiac cells. Notably, at least 8 different variants in 5 different ABC genes, including ABCC1 , ABCC2 , ABCC5 , ABCB1 and ABCB4 , have been identified in association with anthracycline-induced cardiomyopathy (AIC) . In many instances, variants in these ABC genes can lead to defects in drug export, resulting in the accumulation of anthracycline within cardiomyocytes and increasing the risk of cardiac dysfunction and AIC. Conversely, a genetic variant in ABCB1 (rs1045642) appears to confer cardioprotective effects . Given that this gene encodes an efflux transporter, a plausible explanation for its protective effect is that this single nucleotide polymorphism (SNP) increases drug clearance within cardiomyocytes. Genetic variations within the solute carrier ( SLC ) transporter gene family also exert a protective effect on AIC. The SLC superfamily genes encode transporter proteins that play crucial roles in facilitating the absorption and transportation of various molecules such as amino acids, ions, metals, and fatty acids across cellular membranes. Anthracycline drugs are well-known substrates of SLC transporters, which facilitate their excretion and renal clearance. The identified genetic variants, including rs4982753 in the SLC22A17 gene, rs4149178 in the SLC22A7 gene, rs487784 in the SLC28A3 gene, rs7853758 in the SLC28A3 gene and rs9514091 in the SLC10A2 gene, are associated with potential protective effects on AIC . Gene variants related to drug metabolism GSTM1 Tumour patients often present with metabolic disorders such as disrupted fatty acid metabolism and glycolysis, and most antitumour drugs can induce or exacerbate metabolic disturbances. In a rat model of anthracycline-induced heart failure, the occurrence of heart failure was mainly associated with metabolic disturbances, including disturbances in fatty acid metabolism, glycolysis, the tricarboxylic acid cycle, glycerophospholipid metabolism, and glutathione metabolism . These metabolic disturbances affect myocardial energy metabolism, oxidative stress, and myocardial contraction. The metabolic pathways of taurine in the heart and skeletal muscles are affected by myocardial toxicity induced by tyrosine kinase inhibitors, leading to a significant decrease in taurine abundance. Taurine has been shown to regulate oxidative stress, protein stability, and stress responses . 
These studies indicate that metabolic disturbances play a crucial role in the occurrence of drug-induced cardiac injury, with metabolism-related genes serving as major regulatory factors. Several metabolism-related genes have been confirmed to be associated with cardiac injury. UDP-glucuronosyltransferases (UGTs) catalyse the glucuronidation of endogenous and exogenous compounds, increasing their water solubility to facilitate elimination . UGT1A6 encodes the UGT family 1 member A6, which converts lipophilic anthracene derivatives into water-soluble and excretable metabolites . Therefore, the UGT1A6 protein plays a crucial role in the clearance of anthracene derivatives. The UGT1A6 rs17863783 variant is associated with AIC . Glutathione S-transferases (GSTs) are a crucial group of phase II metabolic enzymes involved in biotransformation in the human body. GSTs are expressed in nearly all cells and tissues, and their main function is to catalyse the reaction between various electrophilic carcinogens and glutathione, increasing their water solubility for excretion and thereby exerting detoxification effects . GSTM1 encodes glutathione S-transferase M1, which catalyses the detoxification of many carcinogens and drugs, including anthracene derivatives . The GSTM1 protein also scavenges free radicals, reducing the oxidative damage caused by toxic compounds such as anthracene derivatives. Therefore, any genetic variation affecting GSTM1 enzyme expression levels and/or function increases the risk of anthracene-induced cardiotoxicity. The association between GSTM1 gene deletion ( GSTM1 null genotype) and anthracycline-related cardiomyopathy was explored in cancer patients. A gene analysis was conducted for 75 patients with clinically confirmed cardiomyopathy and 92 matched control individuals without cardiomyopathy . These results suggested a significant association between a GSTM1 gene deletion and cardiomyopathy occurrence. After adjusting for factors such as sex, age at cancer diagnosis, chest radiation therapy, and anthracycline dosage, the conditional logistic regression analysis still revealed a significant relationship between a GSTM1 gene deletion and the cardiomyopathy risk. Researchers further examined peripheral blood GSTM1 gene expression in 20 cardiomyopathy patients and 20 control individuals . Concurrently, the expression of the GSTM1 gene was assessed in human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) from patients (3 with cardiomyopathy and 3 without cardiomyopathy). The results indicated that GSTM1 expression in the peripheral blood was significantly lower in cardiomyopathy patients than in control individuals (mean relative expression 0.67 ± 0.57 vs. 1.33 ± 1.33, p = 0.049). Additionally, GSTM1 expression levels were significantly reduced in hiPSC-CMs derived from cardiomyopathy patients ( p = 0.007). This study confirmed the close association between GSTM1 gene deletions and anthracycline-related cardiomyopathy. CBR3 The CBR3 gene encodes another drug-metabolizing enzyme, carbonyl reductase 3, which catalyses the reduction of anthracyclines to cardiotoxic alcohol metabolites . Polymorphisms of the CBR3 gene can influence the synthesis of this metabolite, exerting a regulatory effect on AIC. The V244M polymorphism in the CBR3 gene generates two protein isoforms, CBR3 V244 (G allele) and CBR3 M244 (A allele), with distinct catalytic rates. The V244 variant promotes doxorubicinol formation at a rate 2.6 times faster than the M244 variant . Blanco et al. 
conducted a comparative analysis of data from 170 tumour patients with concomitant cardiomyopathy and 317 tumour patients without cardiomyopathy . The results revealed that when patients were exposed to low to moderate doses (1–250 mg/m²) of anthracyclines, patients with the CBR3:GG genotype presented a significantly increased risk of AIC compared with patients with the CBR3:GA/AA genotype (Odds Ratio (OR) = 3.30, p = 0.006). Another study involving 1191 breast cancer patients and an analysis of 618,863 SNPs revealed an association between a SNP (Val244Met; rs1056892) in CBR3 and a decreased left ventricular ejection fraction induced by anthracyclines . These studies suggest that CBR3 plays a significant role in AIC. This information is important for a deeper understanding of the mechanisms underlying AIC and provides new directions for future treatments of cardiac toxicity. Gene variants related to antioxidation Hyaluronic acid (HA) is a long-chain polysaccharide synthesized by hyaluronic acid synthase (HAS). It is an important component of the extracellular matrix (ECM). HA is widely distributed in the human body and has various physiological functions. One of the important functions of HA is its antioxidant activity. It can specifically interact with CD44 receptors on myocardial cells, stimulating cell proliferation, maintaining the integrity of myocardial cells during ROS damage, and preventing the activation of death receptors, thereby preserving cardiomyocyte survival and function . Wang et al. employed a matched case‒control design to analyse SNPs in 2100 genes related to cardiovascular diseases. They identified a common SNP (rs2232228) in HAS3 that was closely associated with anthracycline dose-dependent cardiac injury. When exposed to low doses (< 250 mg/m²) of anthracyclines, patients with the rs2232228 GG/AA/GA genotypes had low rates of cardiomyopathy. However, when individuals were exposed to high doses (> 250 mg/m²) of anthracyclines, a significant change in the incidence of cardiomyopathy was not observed in individuals with the rs2232228 GG genotype, but this risk increased significantly in patients with the AA and GA genotypes. The risk of cardiomyopathy was highest in patients with the AA genotype, as this risk was 8.9-fold higher in these patients than in patients with the GG genotype. A genotype‒phenotype analysis revealed reduced HAS3 mRNA expression in cardiac samples from patients with the HAS3 rs2232228 AA genotype. Anthracyclines inflict myocardial damage by prompting apoptosis in cardiomyocytes. Following myocardial injury, the ECM serves as a structural framework for the alignment of myocytes, fibroblasts, endothelial cells, and blood vessels. HA, a constituent of the ECM, has been observed to accumulate in the damaged myocardium of rats following myocardial infarction. Taken together, these data suggest that HA plays a significant role in AIC and indicate that lower cardiac HAS3 mRNA expression (AA genotype) may lead to decreased synthesis of the antioxidant HA, thereby increasing the risk of cardiomyopathy for individuals with the AA genotype. Top2b-mediated DNA damage Top2b encodes topoisomerase-IIβ, which is expressed in quiescent cells, including adult cardiomyocytes. Specific knockout of the Top2b gene in cardiomyocytes reduces defective mitochondrial biogenesis and the generation of ROS . Furthermore, cardiomyocyte-specific deletion of Top2b protects mice from progressive heart failure induced by doxorubicin . 
These findings indicate that Top2b plays a significant role in drug-related cardiac toxicity. Top2b is also involved in regulating cardiac injury via another mechanism. The RARG gene encodes a retinoic acid (RA) receptor belonging to the nuclear hormone receptor family. This RA receptor is a ligand-dependent transcriptional regulatory factor that binds to retinoic acid response elements in the promoter regions of target genes to regulate their expression. The RARG gene is highly expressed in the heart, and Top2b is one of its target genes . In a genome-wide association study involving paediatric cancer patients receiving anthracycline therapy, Aminkeng et al. identified a nonsynonymous variant (rs2229774) in the coding region of RARG . This variant induces the expression of Top2b , resulting in a 4.7-fold increased risk of AIC. Further investigations revealed that individuals carrying RARG rs2229774 are highly susceptible to AIC, and their hiPSC-CMs exhibit increased sensitivity to the cardiotoxic effects of anthracycline drugs . Gene variants associated with sarcomere dysfunction Genetic variations impacting the architecture of sarcomeres, the fundamental contractile units of cardiomyocytes, may also play a role in the onset of cardiotoxicity following cancer treatments. The CELF protein family comprises a set of splicing regulatory factors that govern developmental processes and tissue-specific splicing events, thereby modulating alternative gene splicing and influencing the cardiac structure . TNNT2 is a classic target of the CELF family, and this gene encodes cardiac troponin T (cTnT). CELF activity can promote the generation of distinct cTnT variants, and the concurrent presence of multiple cTnT variants leads to the dysregulated contraction of myocardial sarcomeres, thereby diminishing myocardial contractility and precipitating cardiac injury. An analysis of the CELF4 sequence indicated that the G allele of rs1786814 possesses a potential splice donor site and that the A allele lacks this splice site . The GG genotype of rs1786814 is correlated with the coexistence of more than one alternatively spliced TNNT2 isoform, suggesting that AIC may occur through CELF protein-mediated aberrant TNNT2 splicing . A genome-wide association study targeting paediatric tumour patients confirmed that the SNP rs1786814, located in the CELF4 gene, is associated with AIC. Patients with the AA genotype have a low incidence of cardiomyopathy . However, when the dose of anthracyclines exceeds 300 mg/m², patients with the rs1786814 GG genotype have a 10.2-fold increased risk of cardiomyopathy compared with patients with the GA/AA genotypes. Another gene implicated in anticancer treatment-related cardiac injury due to its impact on myocardial structure is TTN . Truncating variants in TTN (TTNtvs) are one of the most important causes of AIC in both paediatric and adult cancer patients . The TTN gene encodes titin, the primary sarcomeric scaffold protein regulating cardiac contraction . Its integrity is vital for the sarcomere’s proper function. TTNtvs can lead to the production of incomplete titin proteins. These variants are found in about 15 to 20% of patients with dilated cardiomyopathy (DCM), compared with a mere 1% in the general population . Consequently, TTNtvs have emerged as the most prevalent known cause of DCM. 
Moreover, recent studies have uncovered a high prevalence of TTNtvs in cardiomyopathy resulting from diverse triggers such as alcohol and pregnancy , suggesting that individuals with TTNtvs are particularly vulnerable to developing cardiomyopathy in response to a variety of insults. Gene variants associated with cardiac injury induced by targeted therapy Cardiac injury caused by anticancer chemotherapy drugs typically presents as type I toxicity, often resulting from a myocardial cell microstructural disruption leading to irreversible damage through apoptosis . In contrast, type II cardiac toxicity, characterized by reversible damage, often occurs without a concurrent myocardial cell microstructural disruption . This form of injury is commonly associated with targeted anticancer therapies, with anti-HER-2 treatment being a notable example. Therefore, genetic alterations related to cardiac toxicity induced by anti-HER-2 drugs may differ from those associated with chemotherapy agents. In addition to its expression in breast tumour cells, HER2 is also expressed in cardiac myocytes . The HER-2 pathway stabilizes the tissue fibre structure through a series of signalling cascades, thereby inhibiting the apoptosis of cardiac myocytes. This pathway can promote cell survival by reducing ROS levels. However, HER-2-targeted therapies disrupt the HER-2 pathway by binding to HER-2, leading to the accumulation of excessive ROS and damage to cardiac myocytes . Under normal circumstances, coronary artery microvascular endothelial cells and the endocardium release neuregulin-1, which induces the signalling pathway mediated by the HER-2/HER-4 heterodimer. This pathway protects the heart through various mechanisms, including maintaining the myocardial fibre structure; promoting cardiac myocyte survival, growth, and proliferation; balancing β-adrenergic effects; maintaining calcium homeostasis; improving angiogenesis; and stimulating stem cell differentiation into cardiomyocytes . Therefore, the disruption of this signalling pathway by anti-HER-2 therapy may impair myocardial function and lead to heart failure. Trastuzumab can also induce cardiomyocyte damage by downregulating the antiapoptotic protein Bcl-xL and upregulating the proapoptotic protein Bcl-xS, leading to a loss of mitochondrial membrane integrity, disruption of electron transport, free radical generation, reduced adenosine triphosphate (ATP) production, and subsequent damage to cardiomyocytes . Additionally, trastuzumab can affect downstream signalling pathways of HER-2, including the phosphatidylinositol 3-kinase (PI3K)-protein kinase B and extracellular signal-regulated kinase-mitogen-activated protein kinase (MAPK) pathways, thereby influencing mitochondrial function and causing damage to or even the death of cardiac myocytes . Somatic and germline mutations in the HER2 gene that affect the transmembrane domain of the HER-2 protein have been identified, including germline mutations in codon 655 . The Ile655Val SNP is closely associated with the incidence of breast cancer and the response to trastuzumab . A study involving 61 patients with HER-2-positive advanced breast cancer treated with trastuzumab included 36 patients with Ile/Ile (59%), 21 patients with Ile/Val (34.4%), and 4 patients with Val/Val (6.6%). After treatment, 5 patients (8.2%) experienced a decrease in the left ventricular ejection fraction of ≥ 20%, all of whom had the Ile/Val genotype . These findings suggest that the Ile655Val genotype is associated with cardiotoxicity. 
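Many of the genotype associations summarized in this review, including the CBR3 and HER2 findings above, are reported as odds ratios from case-control comparisons. As a purely illustrative R sketch, with hypothetical counts that are not taken from any of the cited studies, such an odds ratio and an exact p-value can be obtained from a 2 × 2 genotype-by-outcome table:

# Hypothetical 2 x 2 table: rows are cardiotoxicity status, columns genotype.
tab <- matrix(c(40, 130,    # cases:    risk genotype, other genotypes
                35, 282),   # controls: risk genotype, other genotypes
              nrow = 2, byrow = TRUE,
              dimnames = list(status   = c("case", "control"),
                              genotype = c("risk", "other")))

fisher.test(tab)            # conditional OR estimate with exact p-value
(40 / 130) / (35 / 282)     # crude odds ratio by hand, here about 2.5

In the cited studies, (conditional) logistic regression was additionally used to adjust such estimates for covariates such as age, sex, and cumulative anthracycline dose.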
The 1170 Pro/Ala SNP in HER2 is also associated with cardiac toxicity. Two studies that included a total of 346 patients reported a significant association between the HER2 1170 Pro/Ala SNP and anticancer treatment-related myocardial injury. These studies revealed that the presence of this SNP is a protective factor against anticancer treatment-related cardiac damage. Stanton et al. demonstrated that the C/C genotype (Pro/Pro) was independently associated with anticancer treatment-related cardiac injury (OR = 2.60, p = 0.046) compared with the C/G (Pro/Ala) and G/G (Ala/Ala) variants. Similarly, Boekhout et al. reported that the homozygous variant genotype G/G (Ala/Ala) was associated with a lower likelihood of cardiac events (OR = 0.09, p = 0.003). A genome-wide association study (GWAS) conducted in a Japanese population compared 11 patients with cardiac toxicity to 257 patients without cardiac toxicity . The researchers identified the top 100 SNPs with the smallest p values. Subsequently, they performed validation using a verification cohort consisting of 14 patients with cardiac toxicity and 199 control individuals. This study identified five loci (rs9316695 on chromosome 13q14.3, rs28415722 on chromosome 15q26.3, rs7406710 on chromosome 17q25.3, rs11932853 on chromosome 4q25, and rs8032978 on chromosome 15q26.3) that may be associated with trastuzumab-induced cardiac toxicity. The researchers developed a risk prediction model based on these five SNPs to predict the risk of trastuzumab-induced cardiac toxicity. The results showed that patients with a risk score ≥ 5 had a significantly greater incidence of trastuzumab-induced cardiac toxicity than did those with a risk score ≤ 4 (42.5% vs. 1.8%, p = 7.82 × 10⁻¹⁵, relative risk = 40.0). In another retrospective study, CTR-CVT was observed in 19 (7.8%) of 243 patients treated with trastuzumab . They identified a total of 239,360 genetic variants in 9 of the 19 patients with CTR-CVT. The strongest association with CTR-CVT was found for a locus on chromosome 6q12 (rs139944387). Antibody-drug conjugates (ADCs) targeting HER-2 constitute another class of anti-HER-2 drugs . No studies have yet explored the relationship between genetic alterations and cardiac toxicity induced by ADCs. Therefore, research is needed to identify specific gene alterations that are associated with cardiotoxicity caused by ADCs. Gene variants associated with cardiac injury induced by immunotherapy In recent years, immune checkpoint inhibitors (ICIs) have revolutionized cancer treatment, significantly improving the prognosis of cancer patients . PD-1 inhibitors, PD-L1 inhibitors, and CTLA-4 inhibitors are commonly used immune checkpoint inhibitors in clinical practice. However, while they offer clinical benefits, they also lead to immune-related adverse events. Among these adverse events, immune-related myocarditis is particularly notable: although its incidence is only 0.06–3.8% , the mortality rate can reach 39.7–66.0% , with a higher risk of death in patients receiving combination therapy with ICIs (44% vs. 66%) . The clinical findings of ICI-mediated cardiovascular disease (ICI-CVD) suggest a potential mechanistic role of immune checkpoint signalling in the development of cardiac pathologies. The function of immune checkpoints has been extensively studied in certain cardiovascular diseases . 
For example, several immune checkpoints are involved in the development of atherosclerosis . Additionally, blocking coinhibitory checkpoints has been found to exacerbate atherosclerosis in cancer patients. Building on these mechanistic insights and clinical observations, modulating immune checkpoints has emerged as a potential therapeutic strategy for the treatment of atherosclerotic cardiovascular disease . The precise mechanisms underlying ICI-related cardiotoxicity remain incompletely understood, but several potential pathways have been suggested. Notably, the current evidence on immune checkpoints in heart failure primarily stems from preclinical research or from observational studies on human samples. Consequently, the available data lay the groundwork for future experimental and clinical studies. One proposed mechanism is that ICI therapy disrupts immune tolerance within the body: CTLA-4 inhibitors engage CTLA-4 and prevent it from competitively binding CD80/CD86, and PD-1 inhibitors and PD-L1 inhibitors disturb peripheral immune tolerance by blocking the interaction between PD-1 and its ligand PD-L1. In preclinical studies, CTLA-4, PD-1, and PD-L1 were shown to help protect the heart muscle from immune-related damage. Conversely, animal models lacking immune checkpoint function exhibit increased levels of cardiac myosin-specific autoimmune CD4 + and CD8 + T cells . Furthermore, myocardial biopsy samples from patients with immune-related myocarditis revealed the presence of cardiac myosin-specific CD8 + cytotoxic T cells . These findings indicate that ICI therapy disrupts immune tolerance, facilitating T-cell activation, which can lead to cardiac damage. Furthermore, autopsies of patients with ICI-related myocarditis have revealed abundant T-cell infiltration in the myocardium, skeletal muscle, and tumour tissue . High levels of clonal expansion were observed in infiltrating lymphocytes through T-cell receptor sequencing . Additionally, muscle-specific antigens were detected in tumour tissue, suggesting that shared antigens between the myocardium and tumour tissue may contribute to the mechanisms of ICI-related cardiotoxicity . Dysregulated lipid metabolism and macrophage conversion to a proinflammatory phenotype have also been proposed as mechanisms underlying the development of immune-related myocarditis. Various risk indicators, including genetic markers, are being explored to facilitate the early identification and diagnosis of ICI-CVD, thereby reducing the mortality associated with such adverse events. A prominent genetic susceptibility factor linked to the occurrence of ICI-CVD involves variations in either coinhibitory or costimulatory immune checkpoints. Preclinical studies have shown that genetic deletion of the gene encoding PD1 ( Pdcd1 ) results in acute myocarditis in mice, accompanied by the detection of autoantibodies against cardiac troponin I in peripheral blood, suggesting an autoimmune response against the myocardium . Research using single-cell RNA sequencing has also demonstrated that the expression of Pdcd1 is upregulated in regulatory T cells within the hearts of mice experiencing heart failure due to pressure overload . Blocking PD1 in these mice led to a decline in heart function and an increase in cardiac inflammation. Furthermore, genetic deletion of Pdcd1 in mice has been linked to the development of dilated cardiomyopathy . 
CTLA4 has been suggested as a susceptibility gene for DCM, given that patients with DCM are more likely to have a genetic variant in CTLA4 than are healthy individuals . Similarly, genetic deletion of Ctla4 induced fatal immune myocarditis in mice . In mice with pressure-overload-induced heart failure after transverse aortic constriction (TAC) surgery, the expression levels of both Ctla4 and Pdcd1 are increased in cardiac immune cells . Furthermore, in Pdcd1 -deficient mice, Ctla4 knockout leads to immune myocarditis in approximately half of the mice. This finding is consistent with the increased cardiotoxicity observed when CTLA-4 inhibitors are combined with PD-1 inhibitors . In addition to PD1, PDL1, and CTLA4, costimulatory factors for T-cell activation, such as CD28 and B7, also play crucial roles in ICI-CVD. Mice with a CD28 or B7 deficiency have lower cardiac inflammation, hypertrophy, fibrosis and dysfunction after TAC surgery than wild-type mice . Similarly, CD28/B7 blockade with CTLA4 immunoglobulin (CTLA4Ig) treatment attenuated TAC-induced cardiac hypertrophy and dysfunction and prevented the development of heart failure in mice with pressure-overload-induced cardiac hypertrophy . 4-1BB is another costimulatory protein expressed on the surface of various immune cells that becomes activated upon binding to its ligand, 4-1BBL. Genetic deletion of the gene encoding 4-1BBL has been shown to mitigate the injury associated with ischaemia and reperfusion in mice . PDL1 has also been linked to the development of ICI-CVD. In a study using human heart tissue samples, the expression of PDL1 was more prominent and frequent in patients with a history of myocardial infarction than in healthy controls . Moreover, a significant negative correlation was observed between the PDL1 expression level and the left ventricular ejection fraction. A preclinical study revealed that PDL1 is expressed in heart failure models and that serum levels of PDL1 are associated with disease severity . In addition to Pdcd1 , PDCD1LG1 , and CTLA4/Ctla4 , other genetic alterations associated with the development of immune-related myocarditis have also been identified. Luo et al. conducted an integrated analysis of single-cell RNA sequencing and bulk sequencing data and reported that the S100A protein family, which includes S100A8, S100A9, S100A11, and S100A12, was significantly upregulated in patients with ICI-related myocarditis. The S100 proteins, encoded by the S100A genes on chromosome 1q21, belong to a family of calcium-binding proteins. Studies have shown a significant increase in the expression of the S100 protein family in tumour tissues, suggesting potential roles in the immune response and pathogenesis of certain diseases, including ICI-related myocarditis . In summary, research on the associations between genetic variants and the risk of immune-related myocarditis is still in the exploratory stage. However, some preliminary findings have identified genetic variants that may be associated with immune-related cardiac toxicity, indicating a promising direction for further investigation. 
Gene variants related to drug transport Variations in drug transport genes are among the factors contributing to treatment-related cardiac toxicity. Adenosine triphosphate-binding cassette (ABC) transporter proteins play active roles in transporting multiple drugs, including anthracyclines, across cellular membranes . In humans, multiple ABC genes encode transmembrane proteins involved in the transport of a wide range of drug substrates. Within the myocardium, ABC transporters facilitate the export of various chemotherapeutic agents from cardiac cells. Notably, at least 8 different variants in 5 different ABC genes, including ABCC1 , ABCC2 , ABCC5 , ABCB1 and ABCB4 , have been identified in association with anthracycline-induced cardiomyopathy (AIC) . In many instances, variants in these ABC genes can lead to defects in drug export, resulting in the accumulation of anthracycline within cardiomyocytes and increasing the risk of cardiac dysfunction and AIC. Conversely, a genetic variant in ABCB1 (rs1045642) appears to confer cardioprotective effects . Given that this gene encodes an efflux transporter, a plausible explanation for its protective effect is that the single nucleotide polymorphisms (SNP) increases drug clearance within cardiomyocytes. Genetic variations within the soluble carrier ( SLC ) transporter gene family also exert a protective effect on AIC. The SLC superfamily genes encode transporter proteins that play crucial roles in facilitating the absorption and transportation of various molecules such as amino acids, ions, metals, and fatty acids across cellular membranes. Anthracycline drugs are well-known substrates of SLC transporters, which facilitate their excretion and renal clearance. The identified genetic variants, including rs4982753 in the SLC22A17 gene, rs4149178 in the SLC22A7 gene, rs487784 in the SLC28A3 gene, rs7853758 in the SLC28A3 gene and rs9514091 in the SLC10A2 gene, are associated with potential protective effects on AIC . Gene variants related to drug metabolism GSTM1 Tumour patients often present with metabolic disorders such as disrupted fatty acid metabolism and glycolysis, and most antitumour drugs can induce or exacerbate metabolic disturbances. In a rat model of anthracycline-induced heart failure, the occurrence of heart failure was mainly associated with metabolic disturbances, including disturbances in fatty acid metabolism, glycolysis, the tricarboxylic acid cycle, glycerophospholipid metabolism, and glutathione metabolism . These metabolic disturbances affect myocardial energy metabolism, oxidative stress, and myocardial contraction. The metabolic pathways of taurine in the heart and skeletal muscles are affected by myocardial toxicity induced by tyrosine kinase inhibitors, leading to a significant decrease in taurine abundance. Taurine has shown to regulate oxidative stress, protein stability, and stress responses . These studies indicate that metabolic disturbances play a crucial role in the occurrence of drug-induced cardiac injury, with metabolism-related genes serving as major regulatory factors. Several metabolism-related genes have been confirmed to be associated with cardiac injury. UDP-glucuronosyltransferases (UGTs) catalyse the glucuronidation of endogenous and exogenous compounds, increasing their water solubility to facilitate elimination . UGT1A6 encodes the UGT family 1 member A6, which converts lipophilic anthracene derivatives into water-soluble and excretable metabolites . 
Therefore, the UGT1A6 protein plays a crucial role in the clearance of anthracene derivatives. The UGT1A6 rs17863783 variant is associated with AIC . Glutathione S-transferases (GSTs) are a crucial group of phase II metabolic enzymes involved in biotransformation in the human body. GSTs are expressed in nearly all cells and tissues, and their main function is to catalyse the reaction between various electrophilic carcinogens and glutathione, increasing their water solubility for excretion and thereby exerting detoxification effects . GSTM1 encodes glutathione S-transferase M1, which catalyses the detoxification of many carcinogens and drugs, including anthracene derivatives . The GSTM1 protein also scavenges free radicals, reducing the oxidative damage caused by toxic compounds such as anthracene derivatives. Therefore, any genetic variation affecting GSTM1 enzyme expression levels and/or function increases the risk of anthracene-induced cardiotoxicity. The association between GSTM1 gene deletion ( GSTM1 null genotype) and anthracycline-related cardiomyopathy was explored in cancer patients. A gene analysis was conducted for 75 patients with clinically confirmed cardiomyopathy and 92 matched control individuals without cardiomyopathy . These results suggested a significant association between a GSTM1 gene deletion and cardiomyopathy occurrence. After adjusting for factors such as sex, age at cancer diagnosis, chest radiation therapy, and anthracycline dosage, the conditional logistic regression analysis still revealed a significant relationship between a GSTM1 gene deletion and the cardiomyopathy risk. Researchers further examined peripheral blood GSTM1 gene expression in 20 cardiomyopathy patients and 20 control individuals . Concurrently, the expression of the GSTM1 gene was assessed in human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) from patients (3 with cardiomyopathy and 3 without cardiomyopathy). The results indicated that GSTM1 expression in the peripheral blood was significantly lower in cardiomyopathy patients than in control individuals (mean relative expression 0.67 ± 0.57 vs. 1.33 ± 1.33, p = 0.049). Additionally, GSTM1 expression levels were significantly reduced in hiPSC-CMs derived from cardiomyopathy patients ( p = 0.007). This study confirmed the close association between GSTM1 gene deletions and anthracycline-related cardiomyopathy. CRB3 The CRB3 gene regulates another drug-metabolizing enzyme, carbonyl reductase 3, which catalyses the reduction of anthracyclines to cardiotoxic alcohol metabolites . Polymorphisms of the CBR3 gene can influence the synthesis of this metabolite, exerting a regulatory effect on AIC. The V244M polymorphism in the CBR3 gene generates two protein isoforms, CBR3 V244 (G allele) and CBR3 M244 (A allele), with distinct catalytic rates. The V244 variant promotes doxorubicinol formation at a rate 2.6 times faster than the M244 variant . Blanco et al. conducted a comparative analysis of data from 170 tumour patients with concomitant cardiomyopathy and 317 tumour patients without cardiomyopathy . The results revealed that when patients were exposed to low to moderate doses (1-250 mg/m 2 ) of anthracyclines, patients with the CBR3 :GG genotype presented a significantly increased risk of AIC compared with patients with the CBR3:GA/AA genotype (Odds Ratio (OR) = 3.30, p = 0.006). 
Another study involving 1191 breast cancer patients and an analysis of 618,863 SNPs revealed an association between a SNP (Val244Met; rs1056892) in CBR3 and a decreased left ventricular ejection fraction induced by anthracyclines . These studies suggest that CBR3 plays a significant role in the AIC. This information is important for a deeper understanding of the mechanisms underlying AIC and provides new directions for future treatments of cardiac toxicity. Gene variants related to antioxidation Hyaluronic acid (HA) is a long-chain polysaccharide synthesized by hyaluronic acid synthase (HAS). It is an important component of the extracellular matrix (ECM). HA is widely distributed in the human body and has various physiological functions. One of the important functions of HA is its antioxidant activity. It can specifically interact with CD44 receptors on myocardial cells, stimulating cell proliferation, maintaining the integrity of myocardial cells during ROS damage, and preventing the activation of death receptors, thereby preserving cardiomyocyte survival and function . Wang et al. . employed a matched case‒control design to analyse SNPs in 2100 genes related to cardiovascular diseases. They identified a common SNP (rs2232228) in HAS3 that was closely associated with anthracycline dose-dependent cardiac injury. When exposed to low doses (< 250 mg/m2) of anthracyclines, patients with the rs2232228 GG/AA/GA genotype had lower rates of cardiomyopathy. However, when individuals were exposed to high doses (> 250 mg/m2) of anthracyclines, a significant change in the incidence of cardiomyopathy was not observed in individuals with the rs2232228 GG genotype, but this risk increased significantly in patients with the AA and GA genotypes. The risk of cardiomyopathy was highest in patients with the AA genotype, as this risk was 8.9-fold higher in these patients than in patients with the GG genotype. A genotype‒phenotype analysis revealed reduced HAS3 mRNA expression in cardiac samples from patients with the HAS3 rs2232228 AA genotype. Anthracycline inflict myocardial damage by prompting apoptosis in cardiomyocytes. Following myocardial injury, ECM serves as a structural framework for the alignment of myocytes, fibroblasts, endothelial cells, and blood vessels. HA, a constituent of the ECM, has been observed to accumulate in the damaged myocardium of rats following myocardial infarction. Taken together, these data suggest that HA plays a significant role in AIC, and simultaneously indicate that lower cardiac HAS3 mRNA expression (AA genotype) may lead to a decreased synthesis of the antioxidant HA, thereby increasing the risk of cardiomyopathy for individuals with the AA genotype. Top2b -mediated DNA damage Top2b encodes topoisomerase-IIβ, which is expressed in quiescent cells, including adult cardiomyocytes. Specific knockout of the Top2b gene in cardiomyocytes reduces defective mitochondrial biogenesis and the generation of ROS . Furthermore, cardiomyocyte-specific deletion of Top2b protects mice from progressive heart failure induced by doxorubicin . These findings indicate that Top2b plays a significant role in drug-related cardiac toxicity. Top2b is also involved in regulating of cardiac injury via another mechanism. The RARG gene encodes a retinoic acid (RA) receptor belonging to the nuclear hormone receptor family. 
This RA receptor is a ligand-dependent transcriptional regulatory factor that binds to retinoic acid response elements in the promoter regions of target genes to regulate their expression. The RARG gene is highly expressed in the heart, and Top2b i s one of its target genes . In a genome-wide association study involving paediatric cancer patients receiving anthracycline therapy, Aminkeng et al. identified a nonsynonymous variant (rs2229774) in the coding region of RARG . This variant induces the expression of Top2b , resulting in a 4.7-fold increased risk of AIC. Further investigations revealed that individuals carrying RARG rs2229774 are highly susceptible to AIC, and their hiPSC-CMs exhibit increased sensitivity to the cardiotoxic effects of anthracycline drugs . Gene variants associated with sarcomere dysfunction Genetic variations impacting the architecture of sarcomeres, the fundamental contractile units of cardiomyocytes, may also play a role in the onset of cardiotoxicity following cancer treatments. The CELF protein family comprises a set of splicing regulatory factors that govern developmental processes and tissue-specific splicing events, thereby modulating alternative gene splicing and influencing the cardiac structure . TNNT 2 is a classic target of the CELF family, and this gene encodes cardiac troponin T (cTnT). CELF activity can promote the generation of distinct cTnT variants, and the concurrent presence of multiple cTnT variants leads to the dysregulated contraction of myocardial sarcomeres, thereby diminishing myocardial contractility and precipitating cardiac injury. An analysis of the CELF4 sequence indicated that the G allele of rs1786814 possesses a potential splice donor site and that the A allele lacks this splice site . The GG genotype of rs1786814 is correlated with the coexistence of more than one TNNT2 alternatively spliced isoforms, suggesting that AIC may occur through CELF protein-mediated aberrant TNNT2 splicing . A genome-wide association study targeting paediatric tumour patients confirmed that the SNP rs1786814 located in the CELF4 gene is associated with AIC. Patients with the AA genotype have a low incidence of cardiomyopathy . However, when the dose of anthracyclines exceeds 300 mg/m 2 , patients with the rs1786814 GG genotype have a 10.2-fold increased risk of cardiomyopathy compared with patients with the GA/AA genotypes. Another gene implicated in anticancer treatment-related cardiac injury due to its impact on myocardial structure is gene TTN . Truncating variants in TTN (TTNtvs) are one of the most important causes of AIC in both paediatric and adult cancer patients . The TTN gene encodes titin, the primary sarcomeric scaffold protein regulating cardiac contraction . Its integrity is vital for the sarcomere’s proper function. TTNtvs can lead to the production of incomplete titin proteins. These mutations are found in about 15 to 20% of patients with dilated cardiomyopathy (DCM), compared to a mere 1% in the general population . Consequently, TTNtvs have emerged as the most prevalent known cause of DCM. Moreover, recent studies have uncovered a high prevalence of TTNtvs in cardiomyopathy resulting from diverse triggers such as alcohol and pregnancy , suggesting that individuals with TTNtvs are particularly vulnerable to developing cardiomyopathy in response to a variety of insults. Variations in drug transport genes are among the factors contributing to treatment-related cardiac toxicity. 
Adenosine triphosphate-binding cassette (ABC) transporter proteins play active roles in transporting multiple drugs, including anthracyclines, across cellular membranes . In humans, multiple ABC genes encode transmembrane proteins involved in the transport of a wide range of drug substrates. Within the myocardium, ABC transporters facilitate the export of various chemotherapeutic agents from cardiac cells. Notably, at least 8 different variants in 5 different ABC genes, including ABCC1 , ABCC2 , ABCC5 , ABCB1 and ABCB4 , have been identified in association with anthracycline-induced cardiomyopathy (AIC) . In many instances, variants in these ABC genes can lead to defects in drug export, resulting in the accumulation of anthracycline within cardiomyocytes and increasing the risk of cardiac dysfunction and AIC. Conversely, a genetic variant in ABCB1 (rs1045642) appears to confer cardioprotective effects . Given that this gene encodes an efflux transporter, a plausible explanation for its protective effect is that the single nucleotide polymorphisms (SNP) increases drug clearance within cardiomyocytes. Genetic variations within the soluble carrier ( SLC ) transporter gene family also exert a protective effect on AIC. The SLC superfamily genes encode transporter proteins that play crucial roles in facilitating the absorption and transportation of various molecules such as amino acids, ions, metals, and fatty acids across cellular membranes. Anthracycline drugs are well-known substrates of SLC transporters, which facilitate their excretion and renal clearance. The identified genetic variants, including rs4982753 in the SLC22A17 gene, rs4149178 in the SLC22A7 gene, rs487784 in the SLC28A3 gene, rs7853758 in the SLC28A3 gene and rs9514091 in the SLC10A2 gene, are associated with potential protective effects on AIC . GSTM1 Tumour patients often present with metabolic disorders such as disrupted fatty acid metabolism and glycolysis, and most antitumour drugs can induce or exacerbate metabolic disturbances. In a rat model of anthracycline-induced heart failure, the occurrence of heart failure was mainly associated with metabolic disturbances, including disturbances in fatty acid metabolism, glycolysis, the tricarboxylic acid cycle, glycerophospholipid metabolism, and glutathione metabolism . These metabolic disturbances affect myocardial energy metabolism, oxidative stress, and myocardial contraction. The metabolic pathways of taurine in the heart and skeletal muscles are affected by myocardial toxicity induced by tyrosine kinase inhibitors, leading to a significant decrease in taurine abundance. Taurine has shown to regulate oxidative stress, protein stability, and stress responses . These studies indicate that metabolic disturbances play a crucial role in the occurrence of drug-induced cardiac injury, with metabolism-related genes serving as major regulatory factors. Several metabolism-related genes have been confirmed to be associated with cardiac injury. UDP-glucuronosyltransferases (UGTs) catalyse the glucuronidation of endogenous and exogenous compounds, increasing their water solubility to facilitate elimination . UGT1A6 encodes the UGT family 1 member A6, which converts lipophilic anthracene derivatives into water-soluble and excretable metabolites . Therefore, the UGT1A6 protein plays a crucial role in the clearance of anthracene derivatives. The UGT1A6 rs17863783 variant is associated with AIC . 
GSTM1

Glutathione S-transferases (GSTs) are a crucial group of phase II metabolic enzymes involved in biotransformation in the human body. GSTs are expressed in nearly all cells and tissues, and their main function is to catalyse the reaction between various electrophilic carcinogens and glutathione, increasing their water solubility for excretion and thereby exerting detoxification effects. GSTM1 encodes glutathione S-transferase M1, which catalyses the detoxification of many carcinogens and drugs, including anthracene derivatives. The GSTM1 protein also scavenges free radicals, reducing the oxidative damage caused by toxic compounds such as anthracene derivatives. Therefore, any genetic variation affecting GSTM1 enzyme expression levels and/or function may increase the risk of anthracycline-induced cardiotoxicity. The association between GSTM1 gene deletion (the GSTM1 null genotype) and anthracycline-related cardiomyopathy has been explored in cancer patients. A gene analysis conducted for 75 patients with clinically confirmed cardiomyopathy and 92 matched control individuals without cardiomyopathy suggested a significant association between GSTM1 deletion and cardiomyopathy occurrence. After adjusting for factors such as sex, age at cancer diagnosis, chest radiation therapy, and anthracycline dosage, the conditional logistic regression analysis still revealed a significant relationship between GSTM1 deletion and cardiomyopathy risk. The researchers further examined peripheral blood GSTM1 gene expression in 20 cardiomyopathy patients and 20 control individuals. Concurrently, GSTM1 expression was assessed in human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) from patients (3 with cardiomyopathy and 3 without cardiomyopathy). The results indicated that GSTM1 expression in the peripheral blood was significantly lower in cardiomyopathy patients than in control individuals (mean relative expression 0.67 ± 0.57 vs. 1.33 ± 1.33, p = 0.049). Additionally, GSTM1 expression levels were significantly reduced in hiPSC-CMs derived from cardiomyopathy patients (p = 0.007). This study confirmed the close association between GSTM1 gene deletions and anthracycline-related cardiomyopathy.

CBR3

The CBR3 gene encodes another drug-metabolizing enzyme, carbonyl reductase 3, which catalyses the reduction of anthracyclines to cardiotoxic alcohol metabolites. Polymorphisms of the CBR3 gene can influence the synthesis of these metabolites, exerting a regulatory effect on AIC. The V244M polymorphism in the CBR3 gene generates two protein isoforms, CBR3 V244 (G allele) and CBR3 M244 (A allele), with distinct catalytic rates; the V244 variant promotes doxorubicinol formation at a rate 2.6 times faster than the M244 variant. Blanco et al. conducted a comparative analysis of data from 170 tumour patients with concomitant cardiomyopathy and 317 tumour patients without cardiomyopathy. The results revealed that, when patients were exposed to low to moderate doses (1–250 mg/m²) of anthracyclines, patients with the CBR3 GG genotype presented a significantly increased risk of AIC compared with patients with the CBR3 GA/AA genotypes (odds ratio (OR) = 3.30, p = 0.006). Another study involving 1191 breast cancer patients and an analysis of 618,863 SNPs revealed an association between a SNP in CBR3 (Val244Met; rs1056892) and an anthracycline-induced decrease in the left ventricular ejection fraction. These studies suggest that CBR3 plays a significant role in AIC.
This information is important for a deeper understanding of the mechanisms underlying AIC and provides new directions for future treatments of cardiac toxicity.
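The GSTM1 and CBR3 associations above all reduce to genotype-by-outcome contingency tables. As a minimal sketch of that computation, assuming purely hypothetical genotype counts (the cited studies' raw tables are not reproduced here), the odds ratio and an exact p-value can be obtained as follows:

```python
# Illustrative only: a 2x2 genotype-by-outcome association test of the kind
# used in the GSTM1 and CBR3 studies above. The counts are hypothetical and
# are NOT the data of the cited papers; they only show the computation.
from scipy.stats import fisher_exact

#          cases  controls
table = [[40, 35],      # hypothetical carriers of the risk genotype (e.g., CBR3 GG)
         [130, 282]]    # hypothetical carriers of the other genotypes (GA/AA)

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3g}")
```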
HAS3

Hyaluronic acid (HA) is a long-chain polysaccharide synthesized by hyaluronic acid synthase (HAS) and is an important component of the extracellular matrix (ECM). HA is widely distributed in the human body and has various physiological functions, one of the most important being its antioxidant activity. It can specifically interact with CD44 receptors on myocardial cells, stimulating cell proliferation, maintaining the integrity of myocardial cells during ROS damage, and preventing the activation of death receptors, thereby preserving cardiomyocyte survival and function. Wang et al. employed a matched case-control design to analyse SNPs in 2100 genes related to cardiovascular diseases. They identified a common SNP (rs2232228) in HAS3 that was closely associated with anthracycline dose-dependent cardiac injury. When exposed to low doses (< 250 mg/m²) of anthracyclines, patients with the rs2232228 GG, AA and GA genotypes all had low rates of cardiomyopathy. At high anthracycline doses (> 250 mg/m²), however, no significant change in the incidence of cardiomyopathy was observed in individuals with the GG genotype, whereas the risk increased significantly in patients with the AA and GA genotypes. The risk of cardiomyopathy was highest in patients with the AA genotype, being 8.9-fold higher than in patients with the GG genotype.
A genotype-phenotype analysis revealed reduced HAS3 mRNA expression in cardiac samples from patients with the HAS3 rs2232228 AA genotype. Anthracyclines inflict myocardial damage by prompting apoptosis in cardiomyocytes, and following myocardial injury, the ECM serves as a structural framework for the alignment of myocytes, fibroblasts, endothelial cells, and blood vessels. HA, a constituent of the ECM, has been observed to accumulate in the damaged myocardium of rats following myocardial infarction. Taken together, these data suggest that HA plays a significant role in AIC and indicate that lower cardiac HAS3 mRNA expression (AA genotype) may lead to decreased synthesis of the antioxidant HA, thereby increasing the risk of cardiomyopathy in individuals with the AA genotype.

Top2b-mediated DNA damage

Top2b encodes topoisomerase-IIβ, which is expressed in quiescent cells, including adult cardiomyocytes. Specific knockout of the Top2b gene in cardiomyocytes reduces defective mitochondrial biogenesis and the generation of ROS. Furthermore, cardiomyocyte-specific deletion of Top2b protects mice from progressive heart failure induced by doxorubicin. These findings indicate that Top2b plays a significant role in drug-related cardiac toxicity. Top2b is also involved in the regulation of cardiac injury via another mechanism: as described above, RARG encodes a retinoic acid receptor for which Top2b is a target gene, and the RARG rs2229774 variant induces Top2b expression, conferring a markedly (4.7-fold) increased risk of AIC.
Cardiac injury caused by anticancer chemotherapy drugs typically presents as type I toxicity, often resulting from myocardial cell microstructural disruption that leads to irreversible damage through apoptosis. In contrast, type II cardiac toxicity, characterized by reversible damage, often occurs without concurrent microstructural disruption of myocardial cells. This form of injury is commonly associated with targeted anticancer therapies, with anti-HER-2 treatment being a notable example. Therefore, genetic alterations related to cardiac toxicity induced by anti-HER-2 drugs may differ from those associated with chemotherapy agents. In addition to its expression in breast tumour cells, HER-2 is also expressed in cardiac myocytes. The HER-2 pathway stabilizes the tissue fibre structure through a series of signalling cascades, thereby inhibiting the apoptosis of cardiac myocytes, and it can promote cell survival by reducing ROS levels. HER-2-targeted therapies, however, disrupt this pathway by binding to HER-2, leading to the accumulation of excessive ROS and damage to cardiac myocytes. Under normal circumstances, coronary artery microvascular endothelial cells and the endocardium release neuregulin-1, which induces the signalling pathway mediated by the HER-2/HER-4 heterodimer. This pathway protects the heart through various mechanisms, including maintaining the myocardial fibre structure; promoting cardiac myocyte survival, growth, and proliferation; balancing β-adrenergic effects; maintaining calcium homeostasis; improving angiogenesis; and stimulating stem cell differentiation into cardiomyocytes. Therefore, the disruption of this signalling pathway by anti-HER-2 therapy may impair myocardial function and lead to heart failure. Trastuzumab can also induce cardiomyocyte damage by downregulating the antiapoptotic protein Bcl-xL and upregulating the proapoptotic protein Bcl-xS, leading to a loss of mitochondrial membrane integrity, disruption of electron transport, free radical generation, reduced adenosine triphosphate (ATP) production, and subsequent damage to cardiomyocytes.
Additionally, trastuzumab can affect downstream signalling pathways of HER-2, including the phosphatidylinositol 3-kinase (PI3K)-protein kinase B and extracellular signal-regulated kinase-mitogen-activated protein kinase (MAPK) pathways, thereby influencing mitochondrial function and causing damage to, or even the death of, cardiac myocytes. Somatic and germline mutations in the HER2 gene that affect the transmembrane domain of the HER-2 protein have been identified, including germline mutations in codon 655. The Ile655Val SNP is closely associated with the incidence of breast cancer and the response to trastuzumab. A study involving 61 patients with HER-2-positive advanced breast cancer treated with trastuzumab included 36 patients with Ile/Ile (59%), 21 patients with Ile/Val (34.4%), and 4 patients with Val/Val (6.6%). After treatment, 5 patients (8.2%) experienced a decrease in the left ventricular ejection fraction of ≥ 20%, all of whom had the Ile/Val genotype. These findings suggest that the Ile655Val variant is associated with cardiotoxicity. The Pro1170Ala SNP in HER2 has also been linked to cardiac toxicity risk. Two studies that included a total of 346 patients reported a significant association between the HER2 Pro1170Ala SNP and anticancer treatment-related myocardial injury, with the Ala allele appearing protective against anticancer treatment-related cardiac damage. Stanton et al. demonstrated that the C/C genotype (Pro/Pro) was independently associated with anticancer treatment-related cardiac injury (OR = 2.60, p = 0.046) compared with carriage of the C/G (Pro/Ala) and G/G (Ala/Ala) variants. Similarly, Boekhout et al. reported that the homozygous G/G (Ala/Ala) genotype was associated with a lower likelihood of cardiac events (OR = 0.09, p = 0.003). A genome-wide association study (GWAS) conducted in a Japanese population compared 11 patients with cardiac toxicity to 257 patients without cardiac toxicity. The researchers identified the top 100 SNPs with the smallest p values and then performed validation in a verification cohort consisting of 14 patients with cardiac toxicity and 199 control individuals. This study identified five loci (rs9316695 on chromosome 13q14.3, rs28415722 on chromosome 15q26.3, rs7406710 on chromosome 17q25.3, rs11932853 on chromosome 4q25, and rs8032978 on chromosome 15q26.3) that may be associated with trastuzumab-induced cardiac toxicity. The researchers developed a risk prediction model based on these five SNPs to predict the risk of trastuzumab-induced cardiac toxicity; a schematic version of this kind of additive score is sketched below. The results showed that patients with a risk score ≥ 5 had a significantly greater incidence of trastuzumab-induced cardiac toxicity than did those with a risk score ≤ 4 (42.5% vs. 1.8%, p = 7.82 × 10⁻¹⁵, relative risk = 40.0). In another retrospective study, CTR-CVT was observed in 19 (7.8%) of 243 patients treated with trastuzumab. A total of 239,360 genetic variants were identified in 9 of the 19 patients with CTR-CVT, and the strongest association with CTR-CVT was found for a locus on chromosome 6q12 (rs139944387). Antibody-drug conjugates (ADCs) targeting HER-2 constitute another class of anti-HER-2 drugs. No studies have yet explored the relationship between genetic alterations and cardiac toxicity induced by ADCs; research is therefore needed to identify specific gene alterations associated with ADC-induced cardiotoxicity.
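As referenced above, the following is a minimal sketch of an additive five-SNP risk score of the kind described for trastuzumab cardiotoxicity. The rs identifiers are those reported in the GWAS; the per-SNP weights, the genotype encoding (risk-allele counts of 0/1/2), and the function names are hypothetical assumptions, since the original model's coefficients are not reproduced in the text.

```python
# Illustrative only: an additive SNP risk score with a >= 5 decision
# threshold, as in the trastuzumab cardiotoxicity study above. Weights and
# genotype encoding are hypothetical; only the SNP IDs and the threshold
# come from the text.
RISK_SNP_WEIGHTS = {
    "rs9316695": 1, "rs28415722": 1, "rs7406710": 1,
    "rs11932853": 1, "rs8032978": 1,
}

def risk_score(risk_allele_counts: dict) -> int:
    """Weighted sum of risk-allele counts over the five loci."""
    return sum(w * risk_allele_counts.get(snp, 0)
               for snp, w in RISK_SNP_WEIGHTS.items())

# Hypothetical patient genotype: number of risk alleles at each locus.
patient = {"rs9316695": 2, "rs28415722": 1, "rs7406710": 0,
           "rs11932853": 1, "rs8032978": 1}

score = risk_score(patient)
print("high risk" if score >= 5 else "lower risk")  # threshold from the study
```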
In recent years, immune checkpoint inhibitors (ICIs) have revolutionized cancer treatment, significantly improving the prognosis of cancer patients. PD-1 inhibitors, PD-L1 inhibitors, and CTLA-4 inhibitors are the ICIs most commonly used in clinical practice. However, while they offer clinical benefits, they also lead to immune-related adverse events. Among these adverse events, immune-related myocarditis is particularly notable: although its incidence is only 0.06–3.8%, the mortality rate can reach 39.7–66.0%, with a higher risk of death in patients receiving combination ICI therapy than in those receiving monotherapy (66% vs. 44%). The clinical findings of ICI-mediated cardiovascular disease (ICI-CVD) suggest a potential mechanistic role of immune checkpoint signalling in the development of cardiac pathologies. The function of immune checkpoints has been studied extensively in certain cardiovascular diseases; for example, several immune checkpoints are involved in the development of atherosclerosis, and blocking coinhibitory checkpoints has been found to exacerbate atherosclerosis in cancer patients. Building on these mechanistic and clinical insights, modulating immune checkpoints has emerged as a potential therapeutic strategy for the treatment of atherosclerotic cardiovascular disease. The precise mechanisms underlying ICI-related cardiotoxicity remain incompletely understood, but several potential pathways have been suggested. Notably, the current evidence on immune checkpoints in heart failure stems primarily from preclinical research or from observational studies on human samples; the available data therefore lay the groundwork for future experimental and clinical studies. One proposed mechanism is that ICI therapy disrupts immune tolerance within the body: CTLA-4 inhibitors block the competitive binding of CTLA-4 to CD80/CD86, and PD-1 and PD-L1 inhibitors disturb peripheral immune tolerance by blocking the interaction between PD-1 and its ligand PD-L1. In preclinical studies, CTLA-4, PD-1, and PD-L1 were shown to help protect the heart muscle from immune-related damage. Conversely, animal models lacking immune checkpoint function exhibit increased levels of cardiac myosin-specific autoimmune CD4+ and CD8+ T cells. Furthermore, myocardial biopsy samples from patients with immune-related myocarditis revealed the presence of cardiac myosin-specific CD8+ cytotoxic T cells. These findings indicate that ICI therapy disrupts immune tolerance, facilitating T-cell activation that can lead to cardiac damage. Autopsies of patients with ICI-related myocarditis have also revealed abundant T-cell infiltration in the myocardium, skeletal muscle, and tumour tissue, and T-cell receptor sequencing showed high levels of clonal expansion in the infiltrating lymphocytes. Additionally, muscle-specific antigens were detected in tumour tissue, suggesting that antigens shared between the myocardium and tumour tissue may contribute to the mechanisms of ICI-related cardiotoxicity. Dysregulated lipid metabolism and macrophage conversion to a proinflammatory phenotype have also been proposed as mechanisms underlying the development of immune-related myocarditis.
Various risk indicators, including genetic markers, are being explored to facilitate the early identification and diagnosis of ICI-CVD, thereby reducing the mortality associated with such adverse events. A prominent genetic susceptibility factor linked to the occurrence of ICI-CVD involves variations in coinhibitory or costimulatory immune checkpoints. Preclinical studies have shown that genetic deletion of the gene encoding PD-1 (Pdcd1) results in acute myocarditis in mice, accompanied by the detection of autoantibodies against cardiac troponin I in peripheral blood, suggesting an autoimmune response against the myocardium. Single-cell RNA sequencing has also demonstrated that Pdcd1 expression is upregulated in regulatory T cells within the hearts of mice with heart failure due to pressure overload, and blocking PD-1 in these mice led to a decline in heart function and an increase in cardiac inflammation. Furthermore, genetic deletion of Pdcd1 in mice has been linked to the development of dilated cardiomyopathy. CTLA4 has been suggested as a susceptibility gene for DCM, given that patients with DCM are more likely to carry a genetic variant in CTLA4 than healthy individuals. Similarly, genetic deletion of Ctla4 induces fatal immune myocarditis in mice. In mice with pressure-overload-induced heart failure after transverse aortic constriction (TAC) surgery, the expression of both Ctla4 and Pdcd1 in cardiac immune cells is increased, and in Pdcd1-deficient mice, Ctla4 knockout leads to immune myocarditis in approximately half of the animals. This finding is consistent with the increased cardiotoxicity observed when CTLA-4 inhibitors are combined with PD-1 inhibitors. In addition to PD1, PDL1, and CTLA4, costimulatory factors for T-cell activation, such as CD28 and B7, also play crucial roles in ICI-CVD. Mice with a CD28 or B7 deficiency show lower cardiac inflammation, hypertrophy, fibrosis and dysfunction after TAC surgery than wild-type mice, and CD28/B7 blockade with CTLA4 immunoglobulin treatment likewise attenuates TAC-induced cardiac hypertrophy and dysfunction and prevents the development of heart failure in mice with pressure-overload-induced cardiac hypertrophy. 4-1BB is another costimulatory protein expressed on the surface of various immune cells that becomes activated upon binding to its ligand, 4-1BBL; genetic deletion of the gene encoding 4-1BBL has been shown to mitigate ischaemia-reperfusion injury in mice. PDL1 has also been linked to the development of ICI-CVD. In a study using human heart tissue samples, PDL1 expression was more prominent and frequent in patients with a history of myocardial infarction than in healthy controls, and a significant negative correlation was observed between the PDL1 expression level and the left ventricular ejection fraction. A preclinical study revealed that PDL1 is expressed in heart failure models and that serum levels of PD-L1 are associated with disease severity.
In addition to Pdcd1, PDCD1LG1, and CTLA4/Ctla4, other genetic alterations associated with the development of immune-related myocarditis have been identified. Luo et al. conducted an integrated analysis of single-cell RNA sequencing and bulk sequencing data and reported that the S100A protein family, which includes S100A8, S100A9, S100A11, and S100A12, was significantly upregulated in patients with ICI-related myocarditis. The S100 proteins, encoded by the S100A gene cluster on chromosome 1, belong to a family of calcium-binding proteins. Studies have shown a significant increase in the expression of the S100 protein family in tumour tissues, suggesting potential roles in the immune response and in the pathogenesis of certain diseases, including ICI-related myocarditis. In summary, research on the associations between genetic variants and the risk of immune-related myocarditis is still at an exploratory stage, but preliminary findings have identified genetic variants that may be associated with immune-related cardiac toxicity, indicating a promising direction for further investigation.

In addition to gene variants, epigenetic changes can influence gene expression, thereby affecting cell differentiation, development, and disease risk. In recent years, several studies have suggested that circulating free DNA (cfDNA) methylation can serve as a predictive biomarker of tissue injury, including drug-induced cardiotoxicity. After damage, the heart can release DNA into the peripheral blood in the form of cfDNA, theoretically allowing the diagnosis of cardiac injury through cfDNA methylation detection. Recently, Israeli researchers published a human cell methylation atlas of the characteristic methylation markers of different tissues, including the heart. The atlas provides an essential resource for studying disease-associated genetic variants and potential tissue-specific biomarkers for use in liquid biopsies, raising the possibility of diagnosing cardiac injury through cfDNA methylation. Several studies have revealed a notable increase in the levels of heart-derived cfDNA methylation markers in patients with myocardial infarction. Ren et al. compared plasma cfDNA methylation levels between myocardial infarction patients and healthy individuals, identified six heart-specific hypermethylation patterns, and reported that the methylation concentration was correlated with disease severity. Another study reported a significant elevation in the concentration of heart-derived cfDNA in patients with acute myocardial infarction. Furthermore, in patients with sepsis, markedly increased levels of heart-derived cfDNA were detected and were correlated with a significantly increased risk of cardiac death. In preclinical research, a congestive heart failure model was used to analyse N6-methyladenosine (6mA) methylation patterns in the mitochondrial DNA (mtDNA) of cardiomyocytes. An increase in mtDNA 6mA levels was observed in cardiomyocytes from failing hearts. Upregulating the expression of the methyltransferase METTL4 elevated mtDNA 6mA levels, leading to spontaneous mitochondrial dysfunction and the onset of heart failure; conversely, reducing mtDNA 6mA levels by cardiomyocyte-specific knockout of Mettl4 alleviated heart failure.
These findings suggest a close association between mtDNA 6mA and cardiac dysfunction. With regard to chemotherapy-induced cardiac injury, one study compared the methylation profiles of peripheral blood mononuclear cells (PBMCs) between 9 patients with an abnormal left ventricular ejection fraction (LVEF) and 10 patients with a normal LVEF. The authors identified 14,883 differentially methylated CpGs, at baseline and after the first cycle of chemotherapy (doxorubicin), that were significantly associated with LVEF status. In patients with an abnormal LVEF, regions with significant differential methylation were found in the promoters and gene bodies of SLFN12, IRF6, and RNF39. These results suggest that the DNA methylation profile of PBMCs may be able to predict the risk of chemotherapy-induced cardiac toxicity. Together, these findings indicate that epigenetic changes are linked to cardiac dysfunction triggered by a range of factors, cancer treatment among them. Nevertheless, the study of epigenetic modifications in the context of cardio-oncology is a relatively unexplored area that requires more in-depth research. Future studies should examine the potential connections between other epigenetic modifications and CTR-CVT, and it will be important to investigate the utility of epigenetic modifications in the field of cardio-oncology, for example by using methylation patterns as diagnostic markers or prognostic indicators of cardiac injury.

Cardiovascular disease and cancer are the two major causes of morbidity and mortality worldwide, together accounting for at least 70% of medically attributable deaths. Cancer patients often have multiple comorbidities that can profoundly influence their cancer care and clinical outcomes. As cancer patient survival rates increase owing to the development of effective cancer therapies, cancer therapy-induced cardiovascular toxicity has increasingly become a significant threat to cancer patients. The risk factors associated with cardiotoxicity related to tumour treatment have not been fully identified, and effective evaluations and predictive models for cardiovascular risk are lacking. Genetic studies of both human populations and animal models have elucidated the mechanisms of cancer therapy-induced cardiotoxicity, providing opportunities to optimize patient care during cancer treatment. The European Society of Cardiology (ESC) guidelines classify genetic variants as risk factors for CTR-CVT, specifically identifying seven genetic abnormalities that are known to drive CTR-CVT; however, these variants are limited in number and are associated primarily with AIC. This review summarizes the epidemiological data and pathogenic mechanisms of anticancer drug-related cardiac injury, particularly the relationships between genetic alterations and CTR-CVT, from four perspectives: chemotherapy-related cardiac injury (Table ), targeted therapy-related cardiac injury (Table ), immunotherapy-related cardiac injury (Table ) and the relationship between epigenetics and cardiotoxicity (Fig. ). This study provides a more comprehensive summary of the genetic variants associated with CTR-CVT, which could serve as a supplement to the ESC guidelines. This approach enables oncologists and cardiologists to gain a more thorough understanding of the genetic alterations related to CTR-CVT, facilitating assessment of the risk of treatment-related cardiovascular toxicity before therapy is initiated.
These observations also suggest that stratifying patients according to genetic features can enable the early identification of individuals susceptible to cardiac injury. The detection of both common and rare monogenic variants that affect the risk of CTR-CVT in patients undergoing cardiotoxic cancer therapies can significantly refine cardiovascular risk stratification for these individuals. There is a burgeoning focus on constructing risk prediction models that combine clinical data and genetic information, with the goal of facilitating personalized treatment approaches for cancer patients. The identification of further genetic risk factors, in addition to clinical risk factors, will support the establishment of better prediction score models and improve their predictive value for CTR-CVT. This approach enables personalized cancer treatment based on known genetic factors and thus reduces the incidence of drug-induced cardiac injury. Genetic susceptibility not only elevates the risk of cardiotoxicity but also detrimentally affects clinical outcomes. Personalizing cancer treatment plans and cardiovascular monitoring strategies based on individual genetic profiles before, during, and after cancer therapy may improve clinical outcomes, a strategy that requires additional research to validate its efficacy. In summary, genetic predisposition plays an essential role in both the development and the clinical outcomes of cardiovascular toxicity following cancer therapies. Nevertheless, numerous challenges remain, and significant additional work is required to refine the understanding and management of this area. Randomized controlled trial (RCT) evidence in this field is limited; therefore, an increased number of well-designed RCTs that can yield more credible evidence is a pressing need. Furthermore, long-term follow-up of patients is necessary to gain a comprehensive understanding of cardiotoxicity, given that cardiovascular toxicity is typically a chronic process. Current research has also been limited in terms of the populations involved: many trials have not adequately represented diverse groups such as women and elderly patients, and accumulating data suggest that the effects of anticancer therapeutics may differ between groups. Studies to date have focused primarily on the cardiovascular toxicity of traditional chemotherapy drugs such as anthracyclines, whereas investigations into newer treatment modalities, such as ICI-related cardiotoxicity, are relatively limited; additional studies are needed to explore this emerging area of concern. Cardio-oncology is an integrative discipline, and cardio-oncology providers with knowledge spanning cardiology, oncology, and haematology are scarce; courses and programs focused on cardio-oncology care networks and cardio-oncology services are needed to meet the increasing clinical demand. Finally, although some studies have established risk factor-based stratification for cardiotoxicity associated with anticancer treatment, the existing tools are not entirely satisfactory. In the future, leveraging multiomics, big data, and artificial intelligence tools may be necessary to develop and validate more sensitive and specific stratification tools.
Differential contribution of excitatory and inhibitory neurons in shaping neurovascular coupling in different epileptic neural states
Epileptic events represent pathological alterations in neural networks that involve sporadic and recurrent episodes of excessive brain activity. To delineate an epileptogenic zone, perfusion-based imaging methods such as single-photon emission computed tomography (SPECT), positron emission tomography (PET), and functional magnetic resonance imaging (fMRI) have been widely utilized owing to their noninvasiveness and their access to a wide range of brain regions. Abnormal perfusion or metabolic changes, such as ictal hyperperfusion and interictal or postictal hypoperfusion, in specific brain areas are generally used to characterize potential biomarkers of epileptogenic foci. Spatial brain mapping of differences in perfusion signals between different epileptic states, e.g. interictal and ictal states, has also been used. Thus, understanding the neurovascular coupling (NVC) that underlies blood flow changes in different epileptic states is fundamental. Ictal states are well known for neuronal hyperactivity in both excitatory and inhibitory neurons. Animal studies of NVC in epilepsy have shown that ictal events evoke drastic increases in vessel diameter and cerebral blood flow (CBF) owing to the high metabolic demand caused by intense neuronal activity. On the other hand, interictal or preictal states preceding ictal onset are characterized by GABAergic interneuronal activity, which is considered to play a crucial role in shaping the transition to ictal states. Considering that GABAergic interneuronal activity is known to be an important contributor to the regulation of blood flow, the distinct inhibitory neuronal activities that occur during interictal or preictal states can potentially result in CBF changes. Thus, neurovascular activity occurring in epileptogenic foci in the period adjacent to upcoming ictal events can potentially indicate epileptic conditions that are modulated by inhibitory interneuronal activity. However, how inhibitory neuronal activities are related to blood flow changes in the absence of ongoing seizures, such as during interictal hypoperfusion, is not yet fully understood. Moreover, the relative contributions of excitatory and inhibitory neuronal activity to vascular responses in different epileptic states remain unclear. We propose that a comprehensive examination of excitatory and inhibitory neurons as well as vascular activity is needed to thoroughly elucidate the NVC underlying different epileptic states. We therefore sought to investigate whether vascular responses caused by different epileptic states can be used as biomarkers of pathological states in excitatory and inhibitory neurons in epileptogenic foci. In this study, we conducted real-time in vivo measurements of CBF, cortical vessel diameter, and excitatory and inhibitory neuronal activity. We used a 4-aminopyridine (4-AP) seizure model, which is known to reliably induce stereotypical focal seizures with sufficient interictal and ictal intervals; additionally, the 4-AP model allowed us to fully explore the dynamics of neurovascular events in a territorially well-defined seizure focus. Local field potential (LFP) recordings were also performed simultaneously to verify seizure events and to investigate neural correlates related to neuronal and vascular dynamics.
Animals

All experimental procedures were approved by the Sungkyunkwan University Institutional Animal Care and Use Committee and were conducted in accordance with the Guide for the Care and Use of Laboratory Animals of the Animal Protection Law & the Laboratory Animal Act set by the Korea Animal and Plant Quarantine Agency and the Korea Ministry of Food and Drug Safety. We used adult male C57BL/6 mice (n = 20; Orient Bio, South Korea), male Thy1-GCaMP6f mice (n = 10, C57BL/6J-TgGP5.17DKim/J, stock no. 025939, Jackson Laboratory, USA) and adult male C57BL/6 mice (n = 10; Orient Bio) with viral expression of GCaMP6f (AAV9-mDlx-GCaMP6f-Fishell-2, plasmid #83899, Addgene, USA). All mice were maintained under a 12-h dark/light cycle at 24–25°C and 50–60% humidity. Experiments were carried out on 10- to 14-week-old mice.

Animal surgery for in vivo experiments

Surgical and experimental procedures were performed as shown in . Mice were initially anesthetized with 2.5% isoflurane in an induction chamber, and anesthesia was maintained with 1.2% isoflurane after each mouse was transferred to a stereotaxic frame (Kopf Instruments, USA). Body temperature was maintained at approximately 37°C using a temperature-controlled heating pad (DC temperature control system, FHC, USA). After an incision was made in the skin over the right hemisphere, a 2-mm-diameter circular craniotomy was carefully performed over the somatosensory cortex (0.5–2.5 mm posterior and 1–3 mm lateral to bregma) using a dental drill (Ram Products, Microtorque II, USA). The dura mater remained intact. The exposed cortex was covered with a glass coverslip (4 × 4 mm, Deckglaser, Germany), but a small space was left on the lateral side to allow for insertion of a microelectrode and a glass pipette (Supplementary Figure 1(a,b)). A metal holding frame was then glued to the skull to (1) minimize head motion during imaging and (2) adjust the tilt of the head so that the brain surface inside the imaging window was perpendicular to the microscope objective axis. Anesthesia was then switched to urethane (1.25 g/kg, i.p.) for the remaining experiments. Urethane anesthesia has been extensively used for NVC studies; it is known to preserve excitatory and inhibitory synaptic transmission and autoregulation of CBF. Throughout the experiments, we continuously monitored the physiological parameters of the mice (heart rate: 520–580 bpm, SpO2: 98–99%, respiratory rate: 165–180 r/min) to ensure that stable physiological conditions were maintained under urethane anesthesia.

Virus injection

For two-photon imaging of GCaMP6f in inhibitory neurons, we intracortically injected AAV9-mDlx-GCaMP6f-Fishell-2 (0.92 × 10 GC/ml) into the somatosensory cortex 3–4 weeks prior to the surgical procedures described above. C57BL/6 mice were anesthetized via inhalation of 2.5% isoflurane in an induction chamber and were maintained under anesthesia with 1.5% isoflurane during the injection. Two small holes (0.25 mm diameter, −1 to −2 mm posterior and 1 mm lateral to the bregma) were made in the right hemisphere, avoiding the large pial vessels. The tip of a beveled (40° angle) glass micropipette (outer diameter (OD) of 15–20 μm) was inserted into layer 2/3 of the somatosensory cortex with a micromanipulator (Eppendorf, Germany). The virus solution (diluted 1:2 in saline, 800 nl) was injected using a syringe pump (80 nl/min, Harvard Apparatus, USA). The holes were then covered with dental resin (OA2, Dentkist Inc., South Korea) and the skin was sutured.
Electrophysiological recording and seizure model establishment

A tungsten microelectrode (300–500 kΩ, FHC, USA) was used for LFP recordings, and a glass micropipette was prepared for induction of seizure events via intracortical injection of 4-AP (15 mM in sterile saline, Sigma, USA) mixed with Alexa 594 (10 μM, Thermo Fisher, USA). A glass pipette with a tip diameter of 20–30 μm was made from a glass capillary tube (OD: 1.0 mm, inner diameter (ID): 0.50 mm, borosilicate glass, Sutter Instrument, USA) using a micropipette puller (P-1000, Sutter Instrument). While viewing the cortex through a microscope objective lens, a microelectrode and a glass micropipette filled with the 4-AP solution were carefully inserted into the cortex (25° angle) to a depth of ∼350 μm beneath the pial surface. The distance between the microelectrode tip and the glass pipette was up to 500 μm in all experiments. The raw electrophysiological data were amplified and acquired at 40,000 Hz using an Omniplex recording system (Plexon, USA). The LFP signals were then obtained by downsampling to 1,000 Hz and filtering with 0.5-Hz high-pass and 200-Hz low-pass filters. After pre-injection baseline data were acquired for 10 min, the 4-AP solution was slowly injected (80 nl/min) using an infusion pump (Pump 11 Pico Plus Elite, Harvard Apparatus). Mixing the 4-AP with Alexa 594 enabled visualization of the glass pipette during insertion and of the diffusion area during infusion (500 nl, 80 nl/min; Supplementary Figure 1(c)). The 4-AP injection reliably induced recurrent spontaneous seizures that occurred repeatedly at intervals of several tens of seconds to several minutes and could be verified by LFP recording (Supplementary Figure 1(d)).

Laser Doppler flowmetry

CBF changes were measured using a laser Doppler flowmetry (LDF) probe (wavelength: 780 nm; probe diameter: 450 μm; Perimed, PeriFlux System 5000, Sweden). The LDF probe was placed on the cortical surface, avoiding the large pial vessels, and separated from the LFP recording microelectrode by ∼200 μm. The LDF signals were sampled at 1 kHz and were digitally acquired using the Plexon system, which allowed simultaneous measurement of the LDF signals and the LFPs. To assess preictal CBF changes, the measured LDF signals were normalized to the averaged CBF level of the 5- or 10-min period before the 4-AP was injected.

In vivo two-photon imaging

Two-photon imaging was conducted to measure changes in either cortical vascular diameter or neural activity. For two-photon vessel imaging, fluorescein isothiocyanate (FITC)-labeled dextran (MW = 70 kDa, FD-70S, Sigma) was injected (5%, 1.5 μl/g, through the retro-orbital sinus) to visualize the cortical vasculature. We chose an area of the somatosensory cortex in which at least two penetrating arterioles and venules were observed. Calcium imaging of neural activity was performed in mice expressing GCaMP6f in excitatory or inhibitory neurons in the somatosensory cortex. Images were obtained using a two-photon laser scanning microscopy system (TCS SP8 MP, Leica, Germany) equipped with a broadly tunable Ti:sapphire laser (680–1080 nm, 80 MHz, 140-fs pulse width, Chameleon Vision II, Coherent, USA). A 10× objective lens (Leica, HCX APO L, NA = 0.30) was used with a 920-nm tuned laser to excite the fluorescent signals. Bandpass filters at 520/50 nm and 585/40 nm were used to collect green (FITC or GCaMP6f) and red (Alexa 594 mixed with the 4-AP solution) fluorescence, respectively, at a pixel resolution of 1.73 μm.
Focusing at a depth of 250–300 μm (layer 2/3), images were acquired at 2 Hz for surface vascular imaging and at 5 Hz or 10 Hz for calcium imaging. The imaging area included the tip of the 4-AP glass pipette but not the microelectrode tip, because direct exposure of metal microelectrodes to focused Ti:sapphire laser light creates photovoltaic artifacts. The electrode tip was therefore positioned rostrally or caudally near the border of the imaged area to avoid artifacts (Supplementary Figure 1(b) and (c)).

Data analysis

All data were analyzed using Fiji (ImageJ, USA), custom-written code in MATLAB (Mathworks, USA) and Chronux ( http://chronux.org/ ); illustrative code sketches of the main analysis steps are provided at the end of this section. To ensure that consecutive ictal events were treated as isolated episodes, seizure events with intervals of less than 40 s were excluded from further analysis. The neural recordings were bandpass filtered at 0.5–150 Hz using a third-order Butterworth filter. Seizure onset was defined as the time point at which LFP amplitudes increased to more than two standard deviations (SD) above the preictal baseline, and seizure offset was defined as the time point at which the signal returned to within 2 SD of the preictal baseline. Seizure onsets and offsets were then confirmed by visual inspection and adjusted manually. A time course of the power spectral density (PSD) was calculated by applying a multitaper transformation (sliding window: 1 s, bin: 100 ms). The PSDs were summated in five distinct frequency ranges: 1–4 Hz (δ-band), 4–7 Hz (θ-band), 7–13 Hz (α-band), 13–30 Hz (β-band), and 30–100 Hz (γ-band). The detailed methods for postprocessing of the imaging data are described in the Supplementary Material. The preictal CBF and diameter changes measured during the 10 s prior to each seizure onset were normalized by the averaged pre-injection baseline. The ictal CBF and diameter changes were calculated by normalizing the ictal CBF levels and diameters by the average preictal CBF level and diameter. The arteriole or venule changes from different segments in each field of view (FOV) were averaged for each seizure. The GCaMP6f signals over time were calculated as ΔF/F = (F − F0)/F0, where F0 and F represent the baseline fluorescence (the averaged fluorescence of either the 10-min pre-injection period or the preictal period) and the fluorescence over time, respectively. To quantify oscillating activity during the preictal periods, we counted peaks only when they appeared more than 400 ms after the previous one or were more than 0.2-fold higher than the basal preictal level. To analyze neuronal synchrony, the correlation coefficients of the ΔF/F signals were calculated (1-s window with a 1-s step) for all pairs of neuron regions of interest (neuron ROIs) for each seizure.

Study design and statistics

The study design and reporting followed the ARRIVE (Animal Research: Reporting In Vivo Experiments) guidelines. The sample sizes were determined to detect over 30% differences between mean values (coefficient of variance = 0.2–0.5, power = 80%, α = 0.05). We conducted normality tests with the Shapiro-Wilk test for all data sets (IBM SPSS Statistics 19, USA). Depending on the results of the normality test, we used either an independent t-test or the Mann-Whitney U test to examine differences between two independent samples, and either a paired t-test or the Wilcoxon signed-rank test when comparing two dependent samples.
Likewise, according to the normality test results, we calculated either Pearson's or Spearman's correlation coefficient to examine linear relationships between two variables, and linear regression models were fitted using the ordinary least squares method. ***, **, and * indicate p < 0.001, 0.01 and 0.05, respectively. The data throughout the paper are displayed as mean ± SD or as the median with 25th–75th percentiles. The numbers of trials and animals used for the data analyses are also described in the figure legends.
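As referenced in "Data analysis", the following is a minimal Python sketch of the LFP preprocessing and event-detection steps, written for illustration only. The filter order, band limits, 2-SD onset rule, and frequency bands come from the Methods; the authors' actual analyses used MATLAB and Chronux, so the scipy calls below (including a Welch PSD standing in for the Chronux multitaper estimate) are substitutions, not the original code.

```python
# Illustrative sketch, not the authors' code: LFP bandpass filtering,
# 2-SD seizure-onset detection, and band-limited power summation.
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 1000  # Hz, LFP sampling rate after downsampling

def bandpass_lfp(lfp, low=0.5, high=150.0, order=3):
    """Third-order Butterworth bandpass (0.5-150 Hz), zero-phase."""
    b, a = butter(order, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, lfp)

def detect_onset(lfp, baseline, k=2.0):
    """Index of the first sample whose deviation from the preictal baseline
    mean exceeds k baseline standard deviations (the 2-SD rule), or None."""
    mu, sd = baseline.mean(), baseline.std()
    idx = np.flatnonzero(np.abs(lfp - mu) > k * sd)
    return int(idx[0]) if idx.size else None

BANDS = {"delta": (1, 4), "theta": (4, 7), "alpha": (7, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def band_powers(segment):
    """Summed PSD per frequency band for a short analysis window."""
    f, pxx = welch(segment, fs=FS, nperseg=min(len(segment), FS))  # 1-s windows
    return {name: pxx[(f >= lo) & (f < hi)].sum()
            for name, (lo, hi) in BANDS.items()}
```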
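Similarly, the ΔF/F, preictal peak-counting, and sliding-window synchrony computations can be sketched as follows. The 400-ms and 0.2-fold peak criteria and the 1-s correlation window and step come from the Methods; the array layout (neurons × frames) and the function names are assumptions made for illustration.

```python
# Illustrative sketch, not the authors' code: calcium-imaging quantities
# defined in "Data analysis".
import numpy as np

def dff(trace, f0):
    """dF/F = (F - F0) / F0, with F0 the averaged baseline fluorescence."""
    return (trace - f0) / f0

def count_preictal_peaks(sig, fs, min_gap_s=0.4, min_rel=0.2, basal=1.0):
    """Count local maxima that occur >= min_gap_s after the previous kept
    peak or exceed the basal preictal level by more than min_rel-fold."""
    peak_idx = [i for i in range(1, len(sig) - 1)
                if sig[i] > sig[i - 1] and sig[i] > sig[i + 1]]
    kept, last = 0, -np.inf
    for i in peak_idx:
        if (i - last) / fs >= min_gap_s or sig[i] > basal * (1 + min_rel):
            kept += 1
            last = i
    return kept

def sliding_synchrony(dff_mat, fs, win_s=1.0, step_s=1.0):
    """Mean pairwise correlation of neuron dF/F traces in sliding windows."""
    win, step = int(win_s * fs), int(step_s * fs)
    out = []
    for s in range(0, dff_mat.shape[1] - win + 1, step):
        r = np.corrcoef(dff_mat[:, s:s + win])   # neurons x neurons
        iu = np.triu_indices_from(r, k=1)        # upper triangle = all pairs
        out.append(np.nanmean(r[iu]))
    return np.array(out)
```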
Images were obtained using a two-photon laser scanning microscopy system (TCS SP8 MP, Leica, Germany) equipped with a broadly tunable Ti:sapphire laser (680–1080 nm, 80 MHz, 140 fs pulse width, Chameleon Vision II, Coherent, USA). A 10× objective lens (Leica, HCX APO L, NA = 0.30) was used with the laser tuned to 920 nm to excite the fluorescent signals. Bandpass filters at 520/50 nm and 585/40 nm were used to collect green (FITC or GCaMP6f) and red (Alexa 594 mixed with the 4-AP solution) fluorescence, respectively, at a pixel resolution of 1.73 μm. With the focal plane at a depth of 250–300 μm (layer 2/3), images were acquired at 2 Hz for surface vascular imaging and at 5 Hz or 10 Hz for calcium imaging. The imaging area included the tip of the 4-AP glass pipette but not the microelectrode tip, because direct exposure of metal microelectrodes to focused Ti:sapphire laser light creates photovoltaic artifacts. Thus, the electrode tip was positioned rostrally or caudally near the border of the imaged area to avoid artifacts (Supplementary Figure 1(b) and (c)).
All data were analyzed using Fiji (ImageJ, USA), custom-written code in MATLAB (MathWorks, USA) and Chronux (http://chronux.org/). To ensure that consecutive ictal events were treated as isolated episodes, seizure events with intervals of less than 40 s were excluded from further analysis. The neural recordings were 0.5–150-Hz bandpass filtered using a third-order Butterworth filter. Seizure onset was defined as the time point at which LFP amplitudes increased two standard deviations (SD) above the preictal baseline. The offset was defined as the time point at which the signal returned to within 2 SD of the preictal baseline. Seizure onset and offset were then confirmed by visual inspection and adjusted manually. A time course of the power spectral density (PSD) was calculated by applying a multitaper transformation (sliding window: 1 s, bin: 100 ms). The PSDs were summed over five distinct frequency ranges: 1–4 Hz (δ-band), 4–7 Hz (θ-band), 7–13 Hz (α-band), 13–30 Hz (β-band), and 30–100 Hz (γ-band). The detailed methods for postprocessing of the imaging data are described in the Supplementary Material. The preictal CBF and diameter changes measured during the 10 s prior to each seizure onset were normalized by the averaged pre-injection baseline. The ictal CBF and diameter changes were calculated by normalizing the ictal CBF levels and diameters by the average preictal CBF level and diameter. The arteriole or venule changes from different segments in each field of view (FOV) were averaged for each seizure. The GCaMP6f signals over time were calculated as ΔF/F = (F − F0)/F0, where F0 and F represent the baseline fluorescence (the averaged fluorescence of either the 10-min pre-injection or the preictal period) and the fluorescence over time, respectively. To quantify the oscillating activity during the preictal periods, we counted peaks only when they appeared more than 400 ms after the previous one or were more than 0.2-fold higher than the basal preictal level. To analyze neuronal synchrony, the correlation coefficients of the ΔF/F signals were calculated (1 s window with a 1 s step) for all pairs of the neuron regions of interest (neuron ROIs) for each seizure.
The study design and reporting followed the ARRIVE (Animal Research: Reporting In Vivo Experiments) guidelines. The sample sizes were determined to detect over 30% differences between mean values (coefficient of variation = 0.2–0.5, power = 80%, α = 0.05).
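To make the seizure-detection and band-power steps concrete, here is a minimal MATLAB sketch under stated assumptions: lfp and preictal_idx are illustrative names, the Chronux taper settings are assumed rather than taken from the paper, and in practice the detected onsets and offsets would still be confirmed visually as described above.

```matlab
% Illustrative sketch (not the authors' code) of seizure detection and
% band-limited PSD. 'lfp' is the 1-kHz LFP trace and 'preictal_idx' indexes
% a preictal baseline window; both names, and the taper settings, are assumed.
fs = 1000;
[b, a] = butter(3, [0.5 150]/(fs/2), 'bandpass');  % third-order Butterworth
x = filtfilt(b, a, lfp);                           % 0.5-150 Hz, zero-phase

% Onset: first excursion beyond 2 SD of the preictal baseline; offset:
% return to within 2 SD (both adjusted by visual inspection in the study).
dev    = abs(x - mean(x(preictal_idx)));
thr    = 2 * std(x(preictal_idx));
onset  = find(dev > thr, 1, 'first');
offset = onset + find(dev(onset:end) < thr, 1, 'first') - 1;

% Time-resolved multitaper PSD (1-s window, 100-ms step) with Chronux.
params = struct('Fs', fs, 'fpass', [1 100], 'tapers', [3 5]);
[S, t, f] = mtspecgramc(x(:), [1 0.1], params);

% Sum the PSD within the five bands used in the study.
bands = [1 4; 4 7; 7 13; 13 30; 30 100];           % delta to gamma (Hz)
bandPower = zeros(numel(t), size(bands, 1));
for k = 1:size(bands, 1)
    bandPower(:, k) = sum(S(:, f >= bands(k,1) & f < bands(k,2)), 2);
end
```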
We conducted normality tests with the Shapiro-Wilk test for all data sets (IBM SPSS Statistics 19, USA). Depending on the results of the normality test, we used either an independent t-test or the Mann-Whitney U test to examine differences between two independent samples. We used either a paired t-test or the Wilcoxon signed-rank test when comparing two dependent samples. Likewise, according to the normality test results, we calculated either Pearson's or Spearman's correlation coefficient to examine a linear relationship between two variables, and linear regression models were fitted using the ordinary least squares method. ***, ** and * indicate p < 0.001, p < 0.01 and p < 0.05, respectively. The data throughout the paper are displayed as mean ± SD or as the median with 25th–75th percentiles. The numbers of trials and animals used for the data analyses are also described in the figure legends.
Ictal increases in CBF and vessel diameter are correlated with their preceding preictal CBF and diameter, which are associated with γ-band LFP power
As shown in , we first investigated real-time CBF and vessel diameter changes by LDF recording and by two-photon imaging with concurrent recording of LFP signals. For vessel imaging, FITC-labeled 70-kDa dextran was retro-orbitally injected into wild-type C57BL/6 mice to visualize the cortical vasculature. Prior to 4-AP injection, the basal CBF level and vessel diameter were measured for approximately 10 min. Within several minutes following the 4-AP injections, recurrent spontaneous seizures were generated, and CBF changes were tightly linked to different epileptic states . In accordance with other reports using the 4-AP model, we used the term “ictal” in the context of seizure activity and “interictal” to indicate the periods between two consecutive ictal events. The “preictal” period, as part of the interictal period, was also separately designated as the 30-s time period immediately preceding each seizure onset ( and Supplementary Figure 1(d)). For further analysis, we focused on two epileptic states, the preictal and ictal states, and examined how the two states are related from the perspective of NVC. Compared to the CBF levels before injection, the CBF levels during interictal periods, including preictal states, were lower, while those during ictal states were higher . In addition, termination of the recurrent seizures, which generally persisted for approximately 60–90 min, was associated with a reduction in CBF level (Supplementary Figure 2(a) and (b)), and LFP amplitudes during recurrent seizures were negatively correlated with postictal CBF reductions (Supplementary Figure 2(c)). No epileptiform activity was observed after saline injection alone (sham control), and there were no apparent changes in LFP, CBF or heart rate over time (Supplementary Figure 3(a)). We also confirmed that the anesthesia conditions were stable in sham control and 4-AP-injected mice throughout the experiments (Supplementary Figure 3(b)). In line with the CBF changes, the arteriole diameters were lower during interictal and preictal states than before injection and were increased during ictal events . To estimate preictal CBF and diameter changes, each CBF and diameter value during the 10 s prior to seizure onset was normalized by its respective 10-min-averaged pre-injection level ( , preictal change).
To estimate ictal CBF and diameter changes, the average CBF and diameter during each seizure event were normalized by the respective preceding preictal values ( , ictal change). Overall, the preictal CBF level fell by 27.98 ± 16.18%, and the ictal CBF increased by 69.00 ± 52.47% . The ictal CBF responses were comparable to those shown in other reports. The preictal arteriole diameter decreased by 22.71 ± 13.14% and the ictal diameter increased by 18.47 ± 11.76% . Venule diameter changes were used as references to confirm that the observed changes in arteriole diameter were not attributable to motion artifacts or focal plane drifts (Supplementary Figure 4(a) and (b)). Interestingly, the preictal CBF level was highly negatively correlated with the subsequent ictal CBF increase ( ; Spearman's r = −0.644, p < 0.001, R² = 0.572), showing that a larger decrease during the preictal period was associated with a larger increase during the ictal period. Likewise, the preictal arteriole diameter, which was variable over time (Supplementary Figure 4(c)), was negatively correlated with subsequent ictal dilation ( ; Spearman's r = −0.848, p < 0.001, R² = 0.746). In other words, greater reductions in CBF and arteriole diameter during the preictal state were associated with a higher CBF response and vasodilation during the subsequent ictal state, respectively. To characterize the neural correlates of the preictal vascular changes, we then compared them with concurrently measured LFP signals. The preictal vascular changes were most highly correlated with the power of the γ-band among the different neural bands of preictal LFP signals, and a lower γ power was correlated with a larger reduction in the preictal vessel diameter. When the seizure strength was estimated by summing the absolute LFP power during each ictal event, a stronger seizure with a higher ictal dilation (Supplementary Figure 5, right; Spearman's r = 0.518, ***p < 0.001, R² = 0.242) was associated with a larger decrease in arteriole diameter in the preictal period (Supplementary Figure 5, left; Spearman's r = −0.377, **p = 0.007, R² = 0.145). Collectively, these data reveal that the potential neural origin of the preictal vascular change is related to the γ-band of LFP signals, and that the preictal level affects the strength of the following seizure.
Preictal excitatory and inhibitory neuronal activity during recurrent seizures
We then sought to examine the excitatory and inhibitory neuronal activity levels underlying the preictal vascular changes and LFP γ power by in vivo two-photon calcium imaging. For this, we used transgenic Thy1-GCaMP6f (GP5.17) mice and C57BL/6 mice with viral expression of GCaMP6f under the mDlx promoter (AAV9-mDlx-GCaMP6f-Fishell-2). Immunostaining verified GCaMP expression in neurons in the somatosensory cortex ( and Supplementary Figure 6). In Thy1-GCaMP6f mice, which are known to express GCaMP6f in cortical pyramidal neurons, GCaMP6f+ cells were well overlaid with NeuN+ signals (Supplementary Figure 6(a)). Under the mDlx promoter, the proportions of PV+ and SOM+ neurons among GCaMP6f+ cells were 33.57 ± 9.37% and 17.09 ± 3.08%, respectively (Supplementary Figure 6(b–d)), in accordance with previous reports. Additionally, GCaMP6f+ cells accounted for 92.48 ± 9.80% and 95.01 ± 4.13% of PV+ and SOM+ cells, respectively (Supplementary Figure 6(e)). These results indicate that viral expression under the mDlx promoter is specific to and effective in GABAergic inhibitory neurons.
Two-photon GCaMP6f imaging was performed separately in excitatory and inhibitory neurons to avoid overlapping signals between them, especially in the neuropil. Spontaneous GCaMP6f signals in excitatory neurons were apparently weaker in both somata and neuropil during the preictal state than during the pre-injection period and increased following seizure onset . On the other hand, those in inhibitory neurons showed heterogeneous changes during the preictal period, as indicated by the basal fluorescence levels, and increased following seizure onset . The averaged excitatory signals (fluorescence levels) within 200–700 μm of the injection focus were decreased during interictal and preictal states . Interestingly, the inhibitory signals not only were generally decreased during interictal and preictal states but also showed distinct oscillations ( and Supplementary Movie 1). In line with the previous methods for quantification of CBF and vessel diameter changes, the preictal signal changes were normalized by the averages during the 10-min pre-injection period, and the ictal changes were normalized by those of the preceding preictal period . The preictal level (ΔF/F) was decreased by 0.34 ± 0.12 and 0.25 ± 0.12 in excitatory and inhibitory neurons, while the ictal change (ΔF/F) was increased by 0.87 ± 0.30 and 0.18 ± 0.09 in excitatory and inhibitory neurons, respectively . Seizure duration and LFP amplitudes in seizure events were similar across the different experiments (Supplementary Figure 7). Lower preictal activity (i.e. larger reductions) in excitatory and inhibitory neurons was followed by less and more seizure-evoked activity, respectively ( ; excitatory: Pearson's r = 0.655, **p = 0.008, R² = 0.428; inhibitory: Pearson's r = −0.670, **p = 0.002, R² = 0.448). Moreover, the negative preictal–ictal relationship observed for inhibitory neuronal activity was consistent with the negative preictal–ictal relationship observed for the vascular activity (CBF; ; Spearman's r = −0.644***; arteriole diameter; ; Spearman's r = −0.848***). When the preictal oscillating activity levels were quantified as shown in , the amplitudes (ΔF/F) were 0.09 ± 0.08 and 0.31 ± 0.18 in excitatory and inhibitory neurons, respectively, while the frequencies were 0.46 ± 0.26 Hz and 0.81 ± 0.30 Hz, indicating that inhibitory neurons exhibit more apparent oscillating activity than excitatory neurons during the preictal period. From these results, we suppose that excitatory and inhibitory neurons contribute differently to the preictal state, which may affect subsequent seizure events. Additionally, ictal oscillating activity, which was relatively higher in excitatory neurons than in inhibitory neurons, was much lower than preictal inhibitory activity, indicating that oscillating activity is more specific to inhibitory neurons in the preictal state.
The level of preictal basal neuronal activity, which is characterized by coherent oscillating inhibitory neuronal activity, is related to the preictal γ-band LFP power
We further explored neuronal activity at the single-soma level. Based on seizure-evoked GCaMP6f intensity changes, a map of cell soma regions (neuron ROIs) was created for each seizure trial as described in Supplementary Figure 8 and in the Supplementary Material. Calcium transients were then extracted from neuron ROIs. To examine whether the oscillating activity resulted from neuronal synchrony, we calculated the correlation coefficients of the calcium transients between all neuronal pairs.
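For concreteness, the following is a minimal MATLAB sketch of the ΔF/F, peak-counting and pairwise-synchrony computations described in the Methods. F, preinj_frames, preictal_frames, basal_preictal_level and fr are illustrative names, and reading "0.2-fold higher" as 1.2 times the basal level is an interpretive assumption.

```matlab
% Illustrative sketch (not the authors' code). 'F' is an [nFrames x nROIs]
% matrix of raw GCaMP6f traces at frame rate fr (5 or 10 Hz); the index
% vectors and 'basal_preictal_level' are assumed names.
f0  = mean(F(preinj_frames, :), 1);          % per-ROI baseline fluorescence F0
dff = (F - f0) ./ f0;                        % dF/F = (F - F0)/F0

% Preictal peak counting: peaks must be >400 ms apart and exceed the basal
% preictal level ('0.2-fold higher' is read here as 1.2x that level).
trace = mean(dff(preictal_frames, :), 2);    % population-averaged preictal trace
[pks, locs] = findpeaks(trace, ...
    'MinPeakDistance', round(0.4 * fr), ...
    'MinPeakHeight', 1.2 * basal_preictal_level);

% Pairwise synchrony in 1-s windows with a 1-s step: mean correlation over
% all unique ROI pairs per window.
win  = round(fr);                            % 1-s window in frames
nWin = floor(size(dff, 1) / win);
sync = zeros(nWin, 1);
for w = 1:nWin
    seg  = dff((w-1)*win + (1:win), :);      % one window, all ROIs
    R    = corrcoef(seg);                    % ROI x ROI correlation matrix
    pair = triu(true(size(R)), 1);           % mask of unique pairs
    sync(w) = mean(R(pair), 'omitnan');
end
```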
During the preictal period, oscillating activity and neuronal synchrony were apparent in inhibitory soma activity. The averaged correlation values over time (shown as yellow lines) matched the oscillating activity of inhibitory, but not excitatory, neuronal calcium signals . The preictal inhibitory soma activity exhibited higher neuronal synchrony than excitatory activity ( , preictal), and its oscillation was also significantly larger and more frequent than that of excitatory activity , suggesting that inhibitory neurons may play more active roles than excitatory neurons in shaping the preictal neural state. During the preictal period, the synchronized activity had a strong positive correlation with oscillating activity ( ; excitatory: Spearman's r = 0.785, ***p < 0.001, R² = 0.915; inhibitory: Spearman's r = 0.828, ***p < 0.001, R² = 0.603). On the other hand, the correlation of excitatory neuronal activity became stronger than that of inhibitory activity during the ictal periods ( , ictal). Ictal oscillating activity, which was relatively weak , was more highly correlated with the correlation of excitatory neuronal activity than with that of inhibitory neuronal activity ( ; excitatory: Spearman's r = 0.776, **p = 0.001, R² = 0.626; inhibitory: Pearson's r = 0.558, *p = 0.013, R² = 0.311). Overall, these findings indicate that oscillating activity is mostly apparent in inhibitory neurons in the preictal state and that oscillating activity is related to neuronal synchrony, which contrasts between excitatory and inhibitory activity in the two epileptic states. During the preictal state, the power of the oscillating inhibitory activity exhibited a positive linear relationship with the basal inhibitory activity level ( ; excitatory: Pearson's r = −0.068, n.s., R² = 0.005; inhibitory: Pearson's r = 0.507, *p = 0.027, R² = 0.258), which was previously shown to correlate with the magnitude of the subsequent ictal response . In other words, more synchronized and oscillating inhibitory activity was linked to a smaller reduction in the basal inhibitory activity level. However, this relationship was not observed for excitatory activity during preictal states. Furthermore, preictal basal excitatory and inhibitory activity levels were also correlated with concurrently measured γ-band LFP power ( ; excitatory: Pearson's r = −0.611, *p = 0.01, R² = 0.374; inhibitory: Pearson's r = 0.495, *p = 0.03, R² = 0.245; the correlation coefficients for other neural bands are shown in Supplementary Tables 1 and 2). Higher γ power was correlated with a greater reduction in the preictal excitatory activity level but a smaller reduction in the inhibitory activity level. The relationship observed for the inhibitory activity level was consistent with the preictal vascular activity, since a higher γ power was associated with a smaller reduction in the preictal arteriole diameter . Overall, we suppose that the preictal vascular activity, together with γ-band LFP power, may reflect the relative levels of excitatory and inhibitory neuronal activity during the preictal period and may indicate the magnitude of the following ictal response. In particular, inhibitory neurons, whose coherent oscillating activity characterizes the preictal neural state, may play a major role in setting the preictal inhibitory activity level.
On the other hand, excitatory activity shows higher neuronal synchrony than inhibitory activity during the ictal state. Given these results, we suggest that excitatory and inhibitory neurons may contribute differently to shaping two different epileptic neural states, i.e. the preictal and ictal states, and may also affect vascular signals.
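The preictal–ictal relationships above rest on the normality-dependent choice between Pearson's and Spearman's correlation plus OLS regression described in the statistics paragraph. A minimal MATLAB sketch of that selection follows; x and y are assumed per-seizure vectors (e.g. preictal and ictal changes), and, because MATLAB has no built-in Shapiro-Wilk test (SPSS was used in the study), the Lilliefors test stands in as the normality check.

```matlab
% Illustrative sketch (not the authors' code): normality-dependent choice
% of correlation coefficient, then an OLS fit. 'x' and 'y' are assumed
% per-seizure vectors; lillietest is a stand-in for the Shapiro-Wilk test.
isNormal = ~lillietest(x(:)) && ~lillietest(y(:));   % h = 0: normality not rejected
if isNormal
    [r, p] = corr(x(:), y(:), 'Type', 'Pearson');
else
    [r, p] = corr(x(:), y(:), 'Type', 'Spearman');
end
coef = polyfit(x(:), y(:), 1);               % OLS line: y = coef(1)*x + coef(2)
yhat = polyval(coef, x(:));
R2   = 1 - sum((y(:) - yhat).^2) / sum((y(:) - mean(y(:))).^2);
fprintf('r = %.3f, p = %.3g, R^2 = %.3f\n', r, p, R2);
```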
In this study, we undertook a detailed investigation of neuronal and vascular activity during recurrent seizures. The main findings of this study were the characterization of vascular, excitatory, and inhibitory activity in two different epileptic states and elucidation of the relationships among these types of activity, as summarized in .
Moreover, our results suggest that the preictal levels of vascular and neuronal activity may represent the severity of the upcoming ictal event. We believe that this work not only improves understanding of epileptogenesis but also presents a new strategy for prediction of the pathological severity of epilepsy from the perspective of NVC. Altogether, our findings may provide insight into the neuronal basis of the perfusion signals that are often utilized in diagnosing and treating epilepsy patients.
Preictal neuronal activity in terms of excitation-inhibition (E/I) balance
Our data revealed that the basal levels of excitatory and inhibitory activity in preictal states were lower than those in pre-injection periods. Since a balance between excitatory and inhibitory synaptic transmission is important for the maintenance of E/I balance, altered preictal neuronal activity may indicate altered synaptic function and E/I imbalance. In the current study, higher ictal activity occurred when the preictal reduction was smaller in excitatory neurons and larger in inhibitory neurons. This preictal condition may indicate greater E/I imbalance, considering that an E/I-imbalanced state is often assumed to generate seizures. In other words, the differential activity between excitatory and inhibitory neurons may reflect the degree of E/I imbalance. Moreover, E/I balance is precisely modulated by inhibitory neural circuits and is closely related to gamma oscillations. Previous reports have also revealed that GABAergic inhibitory interneurons play a key role in the generation of gamma oscillations. Similarly, our results showed that weaker γ-band LFP power was correlated with a greater reduction in the basal inhibitory activity level, while the opposite trend was observed for the excitatory activity level. Importantly, only inhibitory neurons exhibited oscillating and synchronized activity during this period. The degree of oscillating inhibitory activity was correlated with the basal inhibitory activity level. Overall, we suggest that inhibitory neuronal activity plays an important role in shaping preictal neuronal states, which may affect the degree of E/I imbalance, and that gamma oscillation strength may reflect the degree of neuronal alteration.
NVC in different epileptic states
Our results also showed that CBF and arteriole diameters were generally reduced during preictal states. Such reductions may result from decreased basal activity in both excitatory and inhibitory neurons. However, the degrees of the reductions were variable and were correlated with γ-band LFP power in our study, consistent with other reports showing that gamma oscillations are highly related to resting blood flow changes. Greater reductions in vascular activity were associated with lower γ power, which could have been linked to smaller decreases in excitatory activity and larger decreases in inhibitory activity along with less-synchronized activity in inhibitory neurons. Considering that this neuronal condition may result in a greater E/I imbalance, we suppose that the E/I imbalance can be linked to reduced vascular activity during preictal states. Other studies have also revealed that E/I balance is highly related to hemodynamic signals and that E/I imbalance induced by impairment of GABAergic interneuronal activity is accompanied by reduced gamma oscillations and vascular responses. Moreover, GABAergic inhibitory neurons themselves are known to play crucial roles in the regulation of cortical vessel tone and blood flow.
Collectively, these findings suggest that GABAergic interneuronal activity may account for the relationship between the degree of E/I imbalance and vascular responses during preictal states. In addition, gamma oscillation strength may reflect this relationship. On the other hand, during ictal states, neuronal excitation and synchrony in excitatory neurons were much higher than those in inhibitory neurons in the current study. This may indicate that mainly excitatory neurons contribute to E/I imbalance during ictal states, resulting in neuronal network hyperexcitability. Stronger seizures were accompanied by greater vascular responses, consistent with the findings in other reports. Increased excitation, which can cause E/I imbalance, may mainly drive vascular responses during ictal states, since activation of NMDA receptors in cortical excitatory neurons releases COX-2 products, resulting in increased blood flow. Importantly, a recent study has demonstrated that reduced GABA-mediated inhibition can cause ictal propagation via excitatory synaptic pathways and that inhibitory barrages received by pyramidal neurons before they are recruited to ictal events are crucial for opposition of ictal activity. These findings may explain why the neuronal activity in preictal states, presumed to be driven by GABAergic interneuronal activity, was correlated with that in ictal states, during which neuronal synchrony was higher in excitatory neurons than in inhibitory neurons. Further studies should modulate excitatory and inhibitory neurons in different epileptic states and confirm their roles in the regulation of blood flow in terms of E/I imbalance by precisely measuring their synaptic activity.
Specific subtype of GABAergic inhibitory interneurons as a potential candidate that may account for preictal changes
Among different subtypes of GABAergic interneurons, PV-expressing interneurons are known as important regulators of cortical network excitability. PV neurons can modulate epileptiform activities both in vitro and in vivo, and the recruitment of PV interneurons that precedes the transition to seizure onset has been suggested to play a crucial role in local seizure restraint. In addition, PV interneuronal activity is known to be essential for the generation of gamma oscillations. Seizure severity is reduced in mouse epilepsy models when antiepileptic drugs augment interictal γ power in relation to PV interneuronal activity. Consistent with these findings, our data showed that a weaker ictal response was preceded by higher oscillating inhibitory activity, basal inhibitory activity and γ power during the preictal period. Therefore, we suggest that PV interneurons may largely contribute to the preictal neuronal alterations related to gamma oscillations. Moreover, PV interneuronal activity is crucial to the maintenance of E/I balance. However, it is unclear how PV interneuronal activity can directly affect preictal vascular activity, since its exact role in the modulation of hemodynamics is controversial. Considering the abundance of PV interneurons and their connections to other cells in cortical microcircuits, these neurons may indirectly affect vascular activity by inhibiting or disinhibiting other neurons that have vasoactive properties.
Consequently, the roles of different subtypes of inhibitory interneurons in shaping neural and vascular changes during preictal states, as well as their subsequent effects on the following ictal events, should be further investigated.
Clinical relevance
Epilepsy patients frequently show declines in basal perfusion levels despite the absence of ongoing seizures. Recent studies using animal epilepsy models have described alterations or dysfunction in vascular activity that may account for the abnormal perfusion signals. Our results suggest that the blood flow changes may also be attributable to alterations in excitatory and inhibitory neuronal activity. Furthermore, since preictal neuronal and vascular activity levels were found to be correlated with the following ictal magnitudes in seizure foci in this study, our observations may provide useful insights for estimation of the pathological severity of epilepsy from the perspective of NVC. Additionally, since the experiments in this study were conducted within a precisely defined seizure focus, our findings may help better localize epileptogenic foci in clinical data acquired during interictal periods, which constitute most of the lives of epilepsy patients.
Limitations
The current study had several limitations that should be addressed in future research. First, the acute pharmacologic seizures that we induced do not perfectly reproduce the chronic focal seizures of human epilepsy. Although 4-AP-induced seizures in anesthetized mice are known to mimic many of the characteristics of spontaneous ictal activity in human focal epilepsy, our findings may be specific to the 4-AP model used. It is also likely that a direct pharmacological effect of 4-AP contributes in part to basal vascular activity, since 4-AP can block voltage-gated potassium channels expressed on vascular smooth muscle cells. However, if the preictal vascular changes we observed were merely due to a vasoconstricting effect of 4-AP, greater reductions would have been followed by smaller vascular changes induced by the following ictal event. Given that we observed the opposite effect, we believe that drug-induced vasoconstriction did not confound our observations regarding the preictal-to-ictal neurovascular relationship. Further study is required to confirm our findings in other epilepsy models, including chronic epilepsy. Additionally, only male mice were used in this study. Sex-dependent effects on excitatory and inhibitory neuronal activity could be an interesting topic for investigation, since sex hormones and neurosteroids can affect neuronal excitability and GABA-mediated inhibition, thus causing differences in seizure susceptibility. Another interesting point for further investigation is the use of anesthetics. We carried out experiments under anesthesia to avoid the stress of a series of prolonged tonic-clonic seizures in awake mice and because the fundamental characteristics of the general neural activity patterns of cortical seizures are consistent across anesthesia and wakefulness. The urethane anesthesia used in this study is known to preserve excitatory and inhibitory synaptic transmission and to provide a stable state of anesthesia for assessment of NVC. However, our findings could differ from the situation in the awake state due to the different neurovascular properties of anesthesia and wakefulness suggested by other reports. Thus, neuronal and vascular dynamics should be further investigated in awake mice.
To our knowledge, this is the first comprehensive characterization of neuronal and vascular activity, including CBF, vascular diameter, and excitatory and inhibitory neuronal activity, in seizure foci. Based on our results, we suggest that excitatory and inhibitory neurons play different roles in shaping different epileptic states and the associated hemodynamic changes. Our findings provide useful information regarding perfusion changes that are associated with pathological brain states induced by focal epilepsy. Furthermore, they may be applicable to pathological states in other brain diseases because epileptic seizures are phenomena shared by many neurological disorders.
Supplemental material (sj-pdf-1-jcb-10.1177_0271678X20934071) for "Differential contribution of excitatory and inhibitory neurons in shaping neurovascular coupling in different epileptic neural states" by Hyun-Kyoung Lim, Nayeon You, Sungjun Bae, Bok-Man Kang, Young-Min Shon, Seong-Gi Kim and Minah Suh, Journal of Cerebral Blood Flow & Metabolism, is available online.
The contribution of family medicine to community-orientated health services in Mali: A short report
ce7c0e10-d358-4c43-909b-81131d5a6233
8517720
Family Medicine[mh]
The World Health Organization (WHO) recommends different strategies to strengthen what it calls 'integrated people-centred health services'. One strategy is to re-orientate the health system towards strong primary healthcare that includes family medicine. This approach has not received appropriate attention in sub-Saharan Africa. Furthermore, the role of family medicine and its impact on health services and communities has not been adequately documented. The role of family medicine is especially important in low-resource contexts that carry a heavy burden of disease. Because of the lack of resources, healthcare in many African countries relies primarily on donor-driven and vertical disease-orientated programmes. The role of family medicine is poorly understood and the discipline is rarely recognised as a medical speciality. Most African countries are silent on the role of family medicine in their health systems, and priority is given to hospital-centred services, a phenomenon amplified in rural areas. There is, however, an emerging interest in developing family medicine as a key component of district health services in the sub-Saharan African context. Postgraduate training in family medicine is progressing and several countries have established training programmes. In addition, there have been attempts to define the importance of family medicine, as seen for example with the consensus statement on family medicine during the African Regional WONCA Conference in 2009. This short report seeks to add to this by documenting how what is now termed 'family and community medicine', combining both family physician training and a community health approach, can promote community-orientated health services in Mali.
Interviews were conducted with physicians, partners and beneficiaries of two international development projects, namely DECLIC (Projet d'appui à la formation des professionnels de la santé au Mali), a support project for the training of health professionals, and CLEFS (Communautés locales d'enseignement pour les femmes et les filles en santé), a local education initiative for healthy women and girls (projects financed by Global Affairs Canada, Canada's Department of Foreign Affairs, Trade and Development; in effect since 2012). These projects strengthen primary healthcare through the creation and adaptation of a new family and community medicine postgraduate programme that emphasises field training, immersion in local communities and interdisciplinary collaboration. By linking the training of health professionals with the needs of the population, these projects bring advanced services to communities through local university-affiliated health clinics.
Mali's health system is highly decentralised compared with those of neighbouring countries and is built on a pyramidal structure, with the first level being the Community Health Centre (CSCom). The CSCom is a health institution dating from the end of the 1990s with a mission to provide public health services in a specific socio-sanitary area. The CSCom is managed by community stakeholders through an association known as an Association de santé communautaire (ASACO). Community Health Centres promote community participation in the management of individual and community health issues. Community Health Centres are the centrepiece of Malian national health policies, playing an even more extensive role in rural areas, given the difficulty of accessing health services there.
Mali has also established University Community Health Centres (CSCom-U), adding university accreditation to CSComs to adequately train health professionals and reach rural and remote areas. These centres are accredited by the combined Faculty of Medicine and Odontostomatology (FMOS; Bamako University of Sciences, Techniques and Technologies) and perform clinical training, education and research for family and community medicine students, midwives, nurses and laboratory technicians, thereby underlining the importance of interdisciplinary teamwork focused on primary care. It is an innovative model in French-speaking West Africa, but one that has been proven successful elsewhere in the world. As one interviewee put it, 'family and community medicine did not exist before in Mali. This is the innovation brought in by these projects' (Doctor and professor, male, 2018). A new family and community medicine postgraduate programme was created in 2012 with an explicit objective to decentralise training: students are deployed across the distributed training platform, close to the population, for three of the four years of their training. Family physicians can now be trained directly in rural and remote areas through the CSCom-U network, better responding to the needs of communities and involving them in the planning and organisation of the different health services. 'The university is moving out of the big cities to the outlying places,' one professor mentioned (Professor, male, 2018). 'It's appropriate because it's their [the trained health professionals] future place of work' (Professor, male, 2018). This helps respond to the challenge of reaching those living in remote areas: 'In our community, there is a real problem with regard to the care of people in remote areas. Quality care is essentially in the big cities, but we know that the majority of the population is elsewhere.' (Doctor, male, 2018) As a result, family and community medicine physicians are better able to respond to the needs of the population, whilst contributing to better training of front-line professionals and bringing knowledge to patients and users. This community-orientated approach to family medicine is key to the appropriation and acceptance of health services by their users: 'Those who respond the most are the communities, underlined one professor, because they are in need. We are always in a community approach, negotiating with them […] and even make them plan the activities that they want to be carried out.' (Doctor, male, 2018) One woman said, for example, that before she came to the CSCom-U she knew that newborns needed to be vaccinated, but that now she understands 'why it is important', thanks to the availability and accessibility of an interdisciplinary team.
Continuous quality improvement and the training of health professionals
Improving the quality of family medicine services and training is key to building strong and resilient community-orientated health services. Quality assessment evaluation (the traditional quality evaluation of services) has been replaced with continuous quality improvement, a more dynamic approach. This includes community-orientated teaching methods and continuous improvement of services through community collaboration and applied research projects.
Most of these research projects, which are conducted during the fourth year of the programme, use an action research methodology, involving various community stakeholders in project development, data collection, result analysis and the implementation of corrective measures, with the objective of strengthening social cohesion around primary care and family medicine. For example: ‘Amadou’s [name changed] action research [on the reception of beneficiaries in the clinic’s reception area] […] allowed the staff to express their concerns freely […] and it also allowed the patients to ask questions […]. It also shortened the waiting time, which was a big concern […]. The reception improved thanks to the action plan.’ (Doctor, male, 2018) The projects also enabled educational institutions to become more open to pedagogical reforms. This includes the development of programmes based on community needs, the training of students in close contact with the population served, field teaching and a focus on the student’s capacity and openness to conversation with the population: ‘What DECLIC has done, a physician said, is to improve the quality of care for the population. The quality of life of the population is bound to improve’ (Doctor, male, 2018). The influence of the new family and community medicine programme also demonstrates that early exposure of health science students to primary healthcare and the concept of social responsibility is possible and productive, as it promotes learner involvement with vulnerable and/or remote populations. A midwife student who was interviewed stressed, for example, that: ‘What we have never done, we don’t know how to do; we have only been given theory. […] So, when we went to the training sites, it was easier for us, it was like we were already there because we had already practised the interventions.’ (Midwife student, second year, female, 2018)
Being distributed throughout the region, thanks to the CSComs network, family and community medicine physicians are now essential elements for improving primary health care through community-orientated health services. This has laid a solid foundation for significant change in the Malian health system, with already noticeable advances in the health of the Malian population, specifically women, girls and children.
A narrative review of wastewater surveillance: pathogens of concern, applications, detection methods, and challenges
d6f2ef8a-221d-4dcc-bf6b-68aa2be9a3f8
11319304
Microbiology[mh]
Introduction Recent decades have seen a rise in both the emergence and reemergence of pathogens, which has led to significant and deadly outbreaks. Authorities such as the global scientific community, the National Institutes of Health (NIH), USAID, and the World Health Organization (WHO) are aware of the substantial worldwide impact these outbreaks have and of the importance of developing predictive and preventive systems. Since 1970, over 1,500 new pathogens have been identified, with about 40 deemed emerging infectious diseases. Regular mass screening in clinical settings poses difficulties, and those who are asymptomatic or exhibit mild symptoms frequently go undetected. The increase in the global population is likely to escalate these challenges and the risk of infectious diseases, highlighting the need for a surveillance method that is comprehensive, provides real-time results, can monitor multiple diseases, including rare ones, and is both scalable and cost-effective. Wastewater surveillance has historically served to monitor water-borne or fecal-orally transmitted pathogens by collecting samples from sewage systems, offering a sensitive way to observe changes in and varieties of pathogens within communities. Over the past three decades, studies have consistently shown the accuracy of wastewater testing in representing disease at the population level. Chemical and biological markers in wastewater could even act as an early alert system for disease outbreaks, potentially improving current surveillance systems for infections. The origins of wastewater surveillance can be traced to John Snow’s seminal work on London’s cholera outbreak in 1854, where he identified contaminated water as a primary source. In the 1940s in the United States, wastewater was pivotal for tracking and managing polio outbreaks; poliovirus detection in wastewater is still considered highly sensitive today and has become common practice in many parts of the world. The advantage of sampling wastewater lies in its high pathogen content compared to other environmental samples. It also captures pathogens from individuals who are asymptomatic or pre-symptomatic, unlike clinical tests, making it a potent early indicator and prompt intervention tool for infectious diseases. Moreover, recent interest has emerged in using wastewater examination for antimicrobial resistance (AMR) surveillance, with studies revealing seasonal distributions of AMR, worldwide gene abundance, and correlations between AMR found in wastewater and in clinical contexts. Despite various reviews discussing the significance of wastewater surveillance, there is a gap in the literature: no single review collectively highlights pathogens of concern, applications of wastewater surveillance, available technologies, and the challenges of detecting pathogens in wastewater. Thus, this narrative review focuses on wastewater surveillance for infectious diseases, aiming to consolidate these issues. In preparing this narrative review, a methodical approach was taken, drawing on a selection of prominent medical databases to ensure a comprehensive exploration of the literature. The databases harnessed for this review included PubMed, Scopus, ScienceDirect, The Cochrane Library, and Google Scholar. Only published studies were included. Non-peer-reviewed articles such as short communications and research letters were excluded.
The methodology entailed a systematic and structured search using a set of predetermined search terms central to the theme of wastewater surveillance and its role in public health. These terms included “wastewater surveillance,” “pathogens,” “detection methods,” “public health,” and “epidemiology,” among others. The search was refined to capture articles that shed light on the methodologies for pathogen detection in wastewater, the challenges encountered in the surveillance process, and the implications for public health policy and disease prevention. Wastewater surveillance: monitoring key pathogens of concern Human pathogens, causing infection and even death, remain a leading threat to global public health. Currently, approximately 538 species of pathogenic bacteria, 208 viruses, 57 species of parasitic protozoa, and some fungi and helminths are known to infect humans. Numerous pathogen species found in wastewater pose a serious threat to human health. The different types of pathogens and their associated diseases are listed in . The pathway for effective wastewater surveillance is explained in . Most pathogens in wastewater are shed by humans, although some might originate from other sources such as animals. Some of these pathogens are discussed in detail below. 2.1 Gastrointestinal pathogens Campylobacter spp. are a major cause of diarrhea and human gastroenteritis worldwide. The genus comprises 17 species and 6 subspecies, of which Campylobacter jejuni and Campylobacter coli account for 80–85% and 10–15% of total infections, respectively (Leblanc et al., 2011); they are also the main species widely detected and isolated from wastewater. C. jejuni was first isolated from the feces of patients with gastrointestinal disease in the 1970s. Subsequently, many studies have demonstrated C. jejuni to be a major cause of human infections transmitted by the fecal-oral route through contaminated food and water. Salmonella is another important enteropathogenic bacterium, causing approximately 94 million infections and 155,000 deaths annually worldwide. Salmonella enterica serovar Typhi and Salmonella enterica serovar Paratyphi are the main causes of typhoid fever and paratyphoid fever, respectively. Both are gram-negative, human-restricted, host-specific pathogens. Transmission can occur from person to person, through contaminated food or water, or by contact with an acutely or chronically infected person. To evaluate water quality and the likelihood of contracting waterborne infections, a study was carried out in Nigeria that examined several sources of drinking water. Water samples were taken both from areas with a high number of reported waterborne cases and from areas with a low number of cases. Most samples contained Vibrio cholerae, Salmonella typhi, and Shigella dysenteriae, and it was hypothesized that discharge of polluted water during the intense rainy season had contaminated drinking water sources. Enterohaemorrhagic and enteroinvasive Escherichia coli are pathogenic and cause illness in mammals, including humans. Shiga toxin-producing E. coli (STEC) O157:H7 causes diarrhea, haemorrhagic colitis and haemolytic uraemic syndrome, which can lead to serious long-term complications, and it is often employed as a model for studying pathogenic bacteria in wastewater. Using PCR, high amounts of an E. coli O157:H7 gene were detected in sewage sludge (1,819,700 gene copies/100 mL). A notable feature of STEC E. coli O157:H7 is that an inoculum as small as 10 cells may trigger disease.
In 2000, an outbreak in Walkerton, Ontario, in the Great Lakes area, was linked to E. coli O157:H7, resulting in 2,300 cases of illness. In 2011 in Germany, a STEC E. coli (strain O104:H4) was the causative agent of severe cases of acute and bloody diarrhea due to the consumption of uncooked sprouts that had been irrigated with contaminated water. The protozoan parasites Cryptosporidium and Giardia are also important enteric pathogens of public health concern and major waterborne pathogens. Cryptosporidium is the second most important cause of moderate to severe diarrhea and mortality in children under 5 years of age in developing countries. The largest cryptosporidiosis outbreak occurred in 1993 in the United States; it affected over 400,000 individuals and was caused by drinking water that had become contaminated with wastewater. Giardiasis is the most common enteric protozoan parasitic infection worldwide, with an estimated 280 million people infected annually. Both parasites are prevalent in wastewater, with concentrations as high as 60,000 Cryptosporidium oocysts and 100,000 Giardia cysts. Among viruses, adenoviruses are a leading cause of clinical disease, including gastroenteritis, conjunctivitis, respiratory illness, haemorrhagic cystitis, and systemic infections. Adenoviral infections account for 2 to 10% of diarrhea cases. They are commonly detected in raw wastewater and have been cited as among the most abundant human viruses in wastewater. Adenoviruses have also been detected in the excreta of infected persons, including both feces and urine. In both low- to middle-income and high-income countries, norovirus is considered the second main cause of viral acute gastroenteritis after rotavirus. Globally, norovirus is responsible for nearly 20% of all acute gastroenteritis cases, with 677 million cases per year and over 213,000 deaths. Studies have linked the levels of enteric viruses such as norovirus, Hepatitis E and Hepatitis A virus in wastewater with the incidence of clinical cases. Hence, wastewater surveillance can provide an early warning of outbreaks involving enteric viruses. 2.2 Respiratory pathogens The emergence in late 2019 of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which causes viral pneumonia, has heightened the focus on wastewater as a surveillance tool for early detection of disease in the community. Wastewater surveillance for SARS-CoV-2 is ongoing at more than 2,000 sites in 55 countries, and detection of SARS-CoV-2 in sewage is widely reported across the literature. Although SARS-CoV-2 typically causes respiratory symptoms and is shed into wastewater in nasal, buccal, esophageal, and respiratory discharges, it can also result in gastrointestinal symptoms and/or viral shedding in feces. In a meta-analysis of COVID-19 studies, findings revealed that 17.6% of COVID-19 patients had gastrointestinal symptoms and 48.1% had SARS-CoV-2 RNA detected in their feces. Thus, monitoring SARS-CoV-2 RNA in wastewater is becoming widely used to track changes in COVID-19 case numbers in communities. Among other respiratory pathogens, 13 respiratory viruses were detected from different wastewater treatment plants in Queensland, Australia.
Out of these 13 viruses, Bocavirus (BoV), Parechovirus (PeV), Rhinovirus A (RhV A) and Rhinovirus B (RhV B) were detected in all wastewater samples. The studies reported here show that wastewater surveillance of respiratory viruses is a promising tool for community disease surveillance.
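Concentration figures such as those cited above are often carried into quantitative microbial risk assessment (QMRA). The minimal sketch below shows the standard exponential dose-response calculation; note that the per-litre interpretation of the oocyst figure, the ingestion volume, and the dose-response parameter r are illustrative assumptions, not values taken from this review.

```python
import math

def exponential_dose_response(conc_per_litre: float,
                              ingested_volume_litres: float,
                              r: float) -> float:
    """Exponential QMRA dose-response: P(infection) = 1 - exp(-r * dose)."""
    dose = conc_per_litre * ingested_volume_litres  # expected organisms ingested
    return 1.0 - math.exp(-r * dose)

# All inputs are illustrative assumptions: the 60,000 oocysts figure cited above
# is treated here as per litre, ingestion is 1 mL, and r = 0.004 is an assumed
# dose-response parameter of the order reported for Cryptosporidium.
risk = exponential_dose_response(60_000, 0.001, 0.004)
print(f"Per-exposure infection risk: {risk:.1%}")  # ~21.3%
```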
Application of wastewater surveillance 3.1 Understanding outbreaks and public health through wastewater studies The nationwide detection of poliovirus in United States sewers in the late 1930s, the presence of non-polio enteroviruses in children in the Philippines, and recent traces in New York and London have highlighted the need for swift governmental action against potential outbreaks. Keegan et al. evaluated the detection of SARS-CoV-2, Mpox virus and PMMoV in community wastewater in the United States. A study in Hong Kong by Zheng reported that wastewater surveillance can even capture spatiotemporal SARS-CoV-2 infection dynamics. Wolken et al., in Houston, demonstrated the role of wastewater surveillance in the detection of SARS-CoV-2 and influenza outbreaks.
Similarly, evidence of SARS-CoV-2 in Australian wastewater was presented by Ahmed et al., shedding light on community prevalence and aiding public health measures. Hasan et al. and Vo et al. completed further wastewater studies in the UAE, discovering early indications of SARS-CoV-2 variants prior to clinical case identification. Kirby et al. detected Omicron mutation markers in United States sewage, underscoring the predictive capability of wastewater-based epidemiology. In South Africa, a study by Yousif et al. demonstrated the utility of wastewater genomics for monitoring the evolution and spread of endemic viruses. An investigation in Sweden by Hellmér et al., using qPCR, found substantial amounts of Norovirus GII and Hepatitis A virus, indicating upcoming outbreaks. This technique allows the number of affected individuals to be estimated from the viral load in sewage (a back-calculation sketched at the end of this section). Countries with documented clinical cases and community spread, such as Spain and the United States, have detected the Mpox virus in wastewater samples. In Nepal, Salmonella typhi bacteriophages were detected in surface waters, which was reported as a scalable approach to environmental surveillance. Rechenburg and Kistemann found that Campylobacter contamination in German rivers increased infection risks, while Liu et al. reported typhoid-causing bacteria in the wastewater of India and Bangladesh. Diemart and Yan’s study exposed undiscovered S. enterica outbreaks linked to wastewater strains via genetic analysis. Barrett et al. isolated Vibrio cholerae O1 from Louisiana sewage, and Zohra et al. identified toxigenic strains in Pakistan’s water, presenting continual infection threats unrelated to seasonal patterns. Razzolini et al. disclosed a high frequency of Cryptosporidium and Giardia in Brazilian chlorine-treated wastewater, leading to gastrointestinal disease transmission through poor hygiene. Additionally, Amoah et al. observed multiple parasites in South African wastewater, with particular concern for worm-infested community water sources, as evidenced by a Monte Carlo study. These comprehensive wastewater surveillance studies aid in formulating public health policies and establishing outbreak responses, demonstrating their value in epidemiological research. 3.2 Antimicrobial resistance detection in wastewater One of the major factors affecting the re-emergence of infectious diseases is antimicrobial resistance. According to the United Nations, around 700,000 people die yearly of infections associated with antimicrobial-resistant microorganisms. Wastewater is one of the primary routes for resistant pathogens and antimicrobials to enter the environment. Mao et al. studied the prevalence of antibiotic resistance genes reported in wastewater treatment plants. Similarly, a diverse range of antibiotic resistance genes has been studied in 10 large-scale membrane bioreactors for municipal wastewater treatment. The effect of seasonality on antibiotic resistance genes (ARGs) in wastewater is another underexplored area, though a strong seasonal presence of ARGs has been reported, with higher levels observed in autumn and winter, coinciding with increased antibiotic prescribing in those months. Higher levels of resistance have been found in wastewater with higher antibiotic concentrations (e.g., hospital discharge vs. municipal wastewater). Understanding the relationship between antibiotic concentrations and resistance further could inform where to target mitigation measures more effectively.
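ARG measurements of the kind discussed above are frequently expressed as relative abundance, i.e., resistance-gene copies normalized to a housekeeping marker such as the 16S rRNA gene, so that samples of different biomass (hospital versus municipal, for instance) can be compared. A minimal sketch; the qPCR values and the blaTEM target are invented for illustration:

```python
def relative_abundance(arg_copies: float, marker_copies: float) -> float:
    """ARG copies normalized to a housekeeping marker (here, 16S rRNA genes)."""
    return arg_copies / marker_copies

# Hypothetical qPCR results (gene copies per mL); blaTEM is just an example target.
samples = {
    "hospital_effluent":  {"blaTEM": 5.0e5, "16S": 2.0e8},
    "municipal_influent": {"blaTEM": 8.0e4, "16S": 1.5e8},
}
for name, counts in samples.items():
    ra = relative_abundance(counts["blaTEM"], counts["16S"])
    print(f"{name}: blaTEM/16S = {ra:.2e}")
```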
3.3 Markers of pharmacological intervention The levels of commonly used pharmaceuticals in wastewater have been assessed in numerous studies as a metric of disease prevalence. Analyses of metformin (a medication frequently used to treat type 2 diabetes) found in wastewater have been used to assess the prevalence of type 2 diabetes. Measurements of pharmaceutical concentrations in wastewater have been used alongside non-wastewater indicators, such as survey data, socio-economic or demographic data, or environmental data, to identify correlations. Elevated levels of isoprostanes detected in wastewater were suggested to be an indicator of increased community anxiety during the COVID-19 pandemic. The use of these pharmaceutical biomarkers needs further validation, and extensive research is required to determine how the data may be used to improve public health measures.
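The estimation of affected individuals from sewage viral load mentioned in the outbreak studies above typically rests on a simple mass-balance back-calculation: the total number of copies passing the sampling point per day divided by the copies shed per infected person per day. The sketch below uses purely illustrative inputs; real shedding rates vary over orders of magnitude, which is why such estimates carry wide uncertainty.

```python
def estimate_infected(conc_copies_per_litre: float,
                      daily_flow_litres: float,
                      shedding_copies_per_person_per_day: float) -> float:
    """Mass-balance back-calculation: total daily load / per-person daily shedding."""
    total_daily_load = conc_copies_per_litre * daily_flow_litres
    return total_daily_load / shedding_copies_per_person_per_day

# Illustrative inputs only: 1e4 copies/L measured in influent, a plant flow of
# 2e7 L/day, and an assumed shedding rate of 1e9 copies per infected person per day.
n_infected = estimate_infected(1e4, 2e7, 1e9)
print(f"Estimated infected individuals in the catchment: ~{n_infected:.0f}")  # ~200
```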
Sample collection methods 4.1 Moore swab The Moore swab was first proposed by Brendan Moore to trace S. paratyphi B in sewage-contaminated water in a small town in England. In this method, a cotton gauze swab tied with string is submerged in water, trapping pathogens as water passes through the swab. After 2–4 days in the water, the swabs are sent to the laboratory in sterile jars and processed further. This method has been utilized throughout the world to detect several pathogens such as human norovirus, poliovirus, E. coli, V. cholerae and, now, SARS-CoV-2 as well. Liu et al. conducted a study in which the Moore swab method was used for wastewater surveillance of COVID-19 at the institutional level; among the 219 swab samples tested, 28 (12.8%) were found positive for SARS-CoV-2. Sbodio et al. detected E. coli O157:H7 and S. enterica using the Moore swab methodology in large-volume field samples of irrigation water. Similarly, McEgan et al. detected Salmonella spp. from larger volumes of water by the Moore swab method. In Farnham, United Kingdom, Hobbs reported a case of typhoid in a 7-year-old child who had been exposed to a sewage-contaminated river, and the use of Moore swabs to trace the carrier. Greenberg et al. and Shearer et al. described the detection of a single carrier in the isolated town of Portola, CA via the use of Moore swabs in sewers; that carrier had been responsible for cases of typhoid occurring intermittently over 5 years.
4.2 Grab method In this method, raw sewage is collected from a sampling point either at a single point in time or at specified points in time to form a composite sample. Many wastewater treatment plants use automated equipment to take samples at regular intervals during a 24-h period or during peak periods of domestic wastewater flow. The larger the volume of wastewater analyzed, the higher the theoretical sensitivity to detect pathogen circulation in the source population. However, volumes greater than 1 L can be difficult to handle in the laboratory and can be replaced by multiple parallel regular samples. Sampling is preferred to trapping because it is a more quantitative method that allows an estimation of the detection sensitivity of the system. In addition, long-term experience indicates that programs using concentrated sampling detect polioviruses and non-polio enteroviruses more frequently than those using trap sampling.
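Composite sampling as described above is often flow-proportional: each grab is weighted by the flow at the time it was taken, so the composite reflects the daily load rather than a simple average of concentrations. A minimal sketch with invented flow and concentration values:

```python
def flow_weighted_concentration(concs: list[float], flows: list[float]) -> float:
    """Flow-weighted mean concentration: sum(C_i * Q_i) / sum(Q_i)."""
    if len(concs) != len(flows):
        raise ValueError("need one flow reading per grab sample")
    total_load = sum(c * q for c, q in zip(concs, flows))
    return total_load / sum(flows)

# Hypothetical day of four grabs: concentrations in gene copies/L,
# flows in L/s at the moment each grab was taken.
concentrations = [8.0e3, 2.5e4, 1.2e4, 6.0e3]
flows = [300.0, 900.0, 700.0, 250.0]
composite = flow_weighted_concentration(concentrations, flows)
print(f"Composite concentration: {composite:.3g} copies/L")
```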
Methods available for detection of pathogens in wastewater 5.1 Culture-based method The utilization of culture-based approaches to capture antibiotic-resistant bacteria (ARB) is beneficial for various reasons, such as verifying viability, testing for virulence, profiling phenotypic and genotypic multi-drug resistance (MDR), and producing data that may be utilized for risk assessment related to human health. However, much of the media used to isolate opportunistic pathogens is not effective on environmental samples because it was created for clinical use. Certain bacteria found in wastewater originate from feces and can survive in surface water, while other populations of these bacteria are autochthonous and found in aquatic habitats. Acinetobacter spp., Aeromonas spp., and Pseudomonas spp. have been found to be important opportunistic pathogens that can grow in wastewater and natural aquatic environments. These pathogens can also acquire genes that confer multiple antibiotic resistance, making them potentially useful targets for culture-based monitoring. The drawback of the culture-based approach is that some organisms may be inactivated (dead) or unable to grow on the chosen media (for bacteria) or cell culture (for viruses); molecular approaches can detect quantities 1 to 10,000 times greater than those of culture methods. 5.2 Polymerase chain reaction The identification of pathogens in wastewater can be accomplished by culture-based approaches; however, the process can take many days or weeks. Without the requirement for cultivation, alternative molecular techniques like PCR have proven successful in identifying bacterial, viral, and protozoan pathogens in sewage. PCR is the most common molecular technique for detecting small amounts of a specific nucleic acid and is widely used for the detection of pathogens. It enables the detection of a single pathogenic strain by targeting specific DNA sequences, making it possible to identify and detect even low amounts of the target DNA sequence. It is thus widely used in the diagnosis of human pathogens. Fan et al. reported a PCR assay achieving the simultaneous detection of various human pathogens in a single tube, with detection sensitivities between 10 and 10² CFU/100 mL in seawater. Omar et al. identified commensal and pathogenic E. coli from medical and environmental water sources by using a multiplex PCR technique. Owing to its high specificity, PCR has also been adopted for the detection of enteroviruses and Hepatitis A virus (HAV) in the environment. Quantitative real-time PCR (qPCR), another PCR variant, allows for the measurement of DNA targets by tracking amplified products throughout the cycling process, as evidenced by rising fluorescence. This approach decreases the potential for cross-contamination; offers excellent sensitivity, specificity and a faster rate of detection; and eliminates the requirement for post-PCR analysis. Shannon et al. detected E. coli, Klebsiella pneumoniae, Clostridium perfringens and Enterococcus faecalis in wastewater by application of qPCR. With a lower quantification limit of 2.5 oocysts/sample, qPCR techniques have also been devised for the detection and identification of Cryptosporidium spp. in river water. qPCR had a sensitivity of 0.45 cysts per reaction for the detection of G. lamblia and Giardia ardeae in wastewater samples. For the detection of RNA viruses, quantitative reverse-transcriptase PCR (qRT-PCR) was developed to provide a quantitative estimate of pathogen concentration in water. Limitations of PCR include the inability to discriminate viable from non-viable cells, both of which contain DNA; the low concentration of several pathogens in water, such as Cryptosporidium, Giardia and viruses; and the lack of data to indicate the real infectious risk to a population.
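qPCR quantification as described above rests on a standard curve: Cq values from serial dilutions of a known standard are fit against log10 copy number, unknowns are interpolated from the fit, and amplification efficiency is derived from the slope (E = 10^(-1/slope) - 1). A minimal sketch; the dilution series and Cq values are invented for illustration:

```python
def fit_standard_curve(log10_copies: list[float], cqs: list[float]) -> tuple[float, float]:
    """Least-squares fit of Cq = slope * log10(copies) + intercept."""
    n = len(log10_copies)
    mean_x = sum(log10_copies) / n
    mean_y = sum(cqs) / n
    sxx = sum((x - mean_x) ** 2 for x in log10_copies)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(log10_copies, cqs))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical 10-fold dilution series of a plasmid standard.
log10_copies = [6.0, 5.0, 4.0, 3.0, 2.0]
cqs = [18.1, 21.4, 24.8, 28.2, 31.5]
slope, intercept = fit_standard_curve(log10_copies, cqs)
efficiency = 10 ** (-1 / slope) - 1  # ~100% when the slope is near -3.32
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")

# Interpolate an unknown sample from its Cq value.
cq_sample = 26.0
copies = 10 ** ((cq_sample - intercept) / slope)
print(f"Estimated target in unknown: {copies:.3g} copies/reaction")
```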
5.3 DNA microarray One of the most innovative molecular biology-based techniques, DNA microarray technology enables researchers to run several environmental samples simultaneously in large-scale, data-intensive investigations. It is widely utilized to monitor gene expression under different cell growth conditions, detect specific mutations in DNA sequences and characterize microorganisms in environmental samples. The microarray itself is a glass or silicon chip carrying many nucleic acid probes over a surface area of several square centimeters. After coupling with the probes, DNA, complementary DNA (cDNA), and RNA in the sample are identified by fluorescence or an electric signal. DNA microarrays allow the hybridization-based detection of numerous targets in a single experiment; as a result, they offer a quick and accurate diagnostic approach for analyzing several clinical or environmental samples. Wilson et al. identified 18 pathogenic bacteria, eukaryotes, and viruses by using species-specific primer sets to amplify multiple regions unique to individual pathogens on the microarray. Inoue et al. studied the occurrence of 941 pathogenic bacterial species in groundwater and were able to differentiate between human and animal sources. Leski et al. developed a high-density re-sequencing microarray capable of detecting 84 different types of pathogens spanning bacteria, protozoa, and viruses, including Bacillus anthracis, Ebola virus and Francisella tularensis, with a detection limit of 10⁴ to 10⁶ copies per test for most of the pathogens and high specificity. This technology is helpful because most known bacteria found in samples can be detected without the need for culturing, and the sensitivity of this approach allows for the detection of species with lower abundances (detection limit of 0.01% of microbial communities). However, the accuracy of microarray data, the complexity of probe design, and the clinical relevance of early results have been criticized. The main drawbacks of microarray technology are that a single microarray experiment can be very expensive, many probe designs are based on low-specificity sequences, and the most widely used microarray platforms use only one set of manufacturer-designed probes, which leaves little control over the pool of transcripts that are analyzed. Microarrays are also highly sensitive to changes in hybridization temperature, to the purity and degradation rate of the genetic material, and to the amplification process; combined, these factors can affect gene expression estimates.
5.4 Fluorescent in situ hybridization Fluorescent in situ hybridization (FISH) is a cytogenetic method used to locate nucleic acids in cells or sample matrices. In molecular ecology, fluorescently labeled nucleic acid probes can be used to identify genes on chromosomes or to label ribosomal RNA in various bacterial or archaeal taxa by hybridizing only with highly similar nucleic acids. FISH can also be used to count specific microbial populations. Santiago et al. detected Salmonella spp. in wastewater reused for irrigation by using FISH as a molecular tool. Amann and Fuchs isolated members of the family Enterobacteriaceae and E. coli in drinking water systems, freshwater and river water with this tool. In addition, emerging human pathogens in water, wastewater and sludge, as well as cellular survival and infection mechanisms, have all been investigated with FISH. Because it is less sensitive to inhibitory substances than PCR, FISH is better suited to complex matrices. However, the fact that only a limited number of phylogenetically distinct targets can be detected simultaneously is a major drawback of FISH. 5.5 Loop-mediated isothermal amplification Loop-mediated isothermal amplification (LAMP) is a method for isothermal nucleic acid amplification. LAMP has been used to identify and quantify pathogenic bacteria, with benefits in terms of sensitivity, specificity, and speed. With a detection limit of 10 copies or less of template per reaction, the LAMP approach has also proven to be 10–100 times more sensitive than PCR detection. Lu et al. utilized a LAMP-based method for rapid identification of Legionella spp. from environmental water sources. Koizumi et al. used the loop-mediated isothermal amplification method for rapid, simple, and sensitive detection of Leptospira spp. in urine samples. This method can directly detect pathogenic microorganisms in wastewater, avoiding the tedious steps of culture and nucleic acid extraction. However, the major drawback of LAMP is that designing specific primers is more difficult than for PCR, because LAMP requires 4–6 primers whereas PCR requires only two. 5.6 Pyrosequencing Pyrosequencing is a DNA sequencing technique that facilitates microbial genome sequencing to identify bacterial species, discriminate pathogenic strains, and detect genetic mutations that confer resistance to anti-microbial agents. Hong et al. analyzed bacterial biofilm communities in water meters of a drinking water distribution system by pyrosequencing. A study conducted by Ibekwe et al. identified most of the potentially pathogenic bacterial sequences in a mixed urban watershed as belonging to three major phyla, namely Proteobacteria, Bacteroidetes, and Firmicutes, as revealed by pyrosequencing. The advantages of pyrosequencing for microbiology applications include rapid and reliable high-throughput screening and accurate identification of microbes and microbial genome mutations. The pyrosequencing instrument can also analyze the complete genetic diversity of anti-microbial drug resistance, including SNP typing, point mutations, insertions, and deletions, as well as quantification of multiple gene copies that may occur in some anti-microbial resistance patterns. However, the amount of DNA present in wastewater samples can limit the sensitivity of this tool: it requires DNA templates at the picomole level, and much lower amounts of DNA can hamper the output. This technology is also limited by cost, the complexity of analysis, the need for ever greater computing power, and the efficiency of data generation.
5.7 Digital PCR To identify enteric virus contamination in water and wastewater, PCR and its variants, such as quantitative PCR (qPCR), real-time RT-PCR, RT-qPCR, nested PCR, and digital PCR (dPCR), have been implemented. qPCR can also detect multiplexed viral targets. Digital PCR has proven efficient for wastewater surveillance owing to its increased robustness against the PCR inhibitors commonly encountered in more difficult sample types. Heijnen et al. showed that digital PCR can be utilized to detect and quantify SARS-CoV-2 mutations in raw sewage samples from the cities of Amsterdam and Utrecht in the Netherlands. With its sensitivity and precision in quantification, dPCR was quickly identified as a suitable choice for monitoring SARS-CoV-2 in wastewater. For quantifying human-associated fecal markers in water, dPCR has been found to display superior precision and reproducibility compared with qPCR. With dPCR, sample analysis cost and processing time are higher than with qPCR; nevertheless, for the quantification of pathogens, dPCR can be a viable alternative when enhanced analytical performance (i.e., accuracy and sensitivity) is essential.
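Unlike qPCR, dPCR quantification is partition-based and needs no standard curve: the fraction of negative partitions yields the mean copies per partition through the Poisson distribution. A minimal sketch; the partition counts and the 0.85 nL droplet volume are illustrative assumptions, not values from any cited study:

```python
import math

def dpcr_concentration(positive_partitions: int,
                       total_partitions: int,
                       partition_volume_ul: float) -> float:
    """Poisson correction: lambda = -ln(1 - p); concentration = lambda / volume."""
    p = positive_partitions / total_partitions
    lam = -math.log(1.0 - p)          # mean copies per partition
    return lam / partition_volume_ul  # copies per microlitre of reaction

# Illustrative droplet run: 4,500 positives out of 18,000 droplets of 0.85 nL each.
conc = dpcr_concentration(4_500, 18_000, 0.85e-3)
print(f"Target concentration: {conc:.0f} copies/µL")  # ~338 copies/µL
```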
5.8 Whole genome sequencing Profiling bacterial diversity and potential pathogens in wastewater has been a widely used application of sequencing, a robust analytical tool. For surveillance and outbreak investigations, the state of the art is shifting toward whole genome sequencing (WGS) as a replacement for conventional molecular techniques. WGS of the complete pathogen genome has the potential to transform outbreak analysis by making it possible to distinguish even closely related bacterial lineages. As demonstrated by Christoph et al., numerous SARS-CoV-2 genotypes were found through sequencing of viral concentrates and RNA recovered directly from wastewater. Fumian et al. identified Norovirus GII genotypes through genome sequencing from a wastewater treatment plant in Rio de Janeiro, Brazil. Mahfouz et al. analyzed whole genome sequences of the indicator species E. coli from the inflow and outflow of a sewage treatment plant, revealing that nearly all isolates were multi-drug resistant and many were potentially pathogenic. Recently, Mbanga et al. reported the genomics of an antibiotic-resistant Klebsiella grimontii novel sequence type, ST350, isolated from a wastewater source in South Africa. Recent improvements in sequencing technologies and analysis tools have rapidly increased the output and analysis speed of WGS and reduced its overall costs. Nevertheless, genomic surveillance remains challenging due to low target concentrations, a complex microbial and chemical background, and the lack of robust experimental procedures for nucleic acid recovery. 5.9 MALDI-TOF Matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) is a rapid and accurate method for identifying bacterial and fungal isolates in the laboratory. The identification of microorganisms is based on the protein fingerprint unique to each microorganism. V. cholerae non-O1 isolates from wastewater were identified by MALDI-TOF MS by Eddabra et al., and V. alginolyticus isolated from Perna perna mussels was efficiently identified by MALDI-TOF MS by Bronzato et al. Numerous studies have proven the use of MALDI-TOF MS on bacterial and fungal isolates. Croxatto et al. reported that numerous studies have attempted direct testing of urine using MALDI-TOF MS; the method could be used with up to 94% accuracy, but only if the bacterial count is at least 10⁵/mL. Nachtigall et al. found that MALDI-TOF was 80% concordant with RT-PCR in identifying SARS-CoV-2 from nasal mucus secretions, and Rybicka et al. found that MALDI-TOF outperformed RT-PCR in detecting SARS-CoV-2. Gerbersdorf et al. have shown that dextran, gellan and xanthan from anaerobic microbial aggregates can be differentially demonstrated by MALDI-TOF MS in different wastewaters; the exopolysaccharides in biofilms are important in microbial adhesion and aggregation. Picó et al. found that MALDI-TOF can be adapted for rapid detection and characterization of proteins in wastewater. However, MALDI-TOF MS has relatively low resolving power compared with other high-resolution mass spectrometers, and the accuracy of identification depends on the quality of the reference database.
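Wastewater sequencing data of the kind used in the SARS-CoV-2 genotype studies above are often summarized as the frequency of variant-defining (“signature”) mutations among the reads covering each site. The sketch below simply averages per-site alternative-allele frequencies into a rough variant proportion; the mutation names and read counts are invented for illustration, and production pipelines use considerably more sophisticated deconvolution methods.

```python
def variant_proportion(site_counts: dict[str, tuple[int, int]]) -> float:
    """Mean alternative-allele frequency across a variant's signature sites.

    site_counts maps mutation name -> (reads carrying the mutation,
    total reads covering that site).
    """
    freqs = [alt / total for alt, total in site_counts.values() if total > 0]
    return sum(freqs) / len(freqs)

# Hypothetical read counts at three signature sites of a variant of interest.
signature_sites = {
    "S:N501Y": (120, 400),
    "S:E484K": (95, 380),
    "ORF1a:T3255I": (130, 410),
}
print(f"Estimated variant proportion: {variant_proportion(signature_sites):.1%}")  # ~28.9%
```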
PCR technique, due to its high specificity, was also adopted to detection of enteroviruses and Hepatitis A virus (HAV) in environment. Quantitative real-time PCR (qPCR), another PCR variant, allows for the measurement of DNA targets by tracking amplified products throughout cycle as evidenced by rising fluorescence . This approach decreases the potential of cross-contamination, offers excellent sensitivity and specificity, a faster rate of detection, and eliminates the requirement for post-PCR analysis . Shannon et al. , detected E. coli , Klebsiella pneumoniae , Clostridium perfringens and Enterococcus faecalis through wastewater by application of qPCR. With a lower quantification limit of 2.5 oocysts/sample, qPCR techniques have also been devised for the detection and identification of Cryptosporidium spp. in river water . qPCR had a sensitivity of 0.45 cysts per reaction for the detection of G. lamblia and Giardia ardeae in wastewater samples . For detection of RNA viruses, quantitative reverse-transcriptase (qRT)-PCR was developed to provide quantitative estimation of the pathogen concentration in water . Limitations of PCR includes the inability to discriminate between viable from non-viable cells that both contain DNA, the low concentration of several pathogens in water such as Cryptosporidium , Giardia and viruses, and the lack of data to indicate the real infectious risk to a population . DNA microarray One of the most innovative molecular biology-based techniques, DNA microarray technology enables researchers to run several environmental samples simultaneously in large-scale, data-intensive investigations . It is widely utilized to monitor gene expression under different cell growth conditions, detecting specific mutations in DNA sequences and characterizing microorganisms in environmental samples. It is a unique glass or silicon chip that has a DNA microarray that covers a surface area of several square centimeters with many nucleic acid probes. After being coupled with the probes, DNA, complementary DNA (cDNA), and RNA in the sample are identified by fluorescence or electric signal . DNA microarrays allow the hybridization-based detection of numerous targets in a single experiment. As a result, it is a quick and accurate diagnostic approach for analyzing several clinical or environmental samples . Wilson et al. , identified 18 pathogenic bacteria, eukaryotes, and viruses by using species-specific primer sets to amplify multiple regions unique toward individual pathogen in the microarray. Inoue and et al. studied the occurrence of 941 pathogenic bacterial species in groundwater and were able to differentiate between human and animal sources. Leski et al. , developed a high-density re-sequencing microarray that has the capability of detecting 84 different types of pathogens ranging from bacteria, protozoa, and viruses, including Bacillus anthracis , Ebola virus and Francisella tularensis with detection limit of 104 to 106 copies per test for most of the pathogens exhibiting high specificity. This technology is helpful as most known bacteria found in samples can be detected without the need for culturing, and the sensitivity of this approach allows for the detection of species with lower abundances (detection limit of 0.01% of microbial communities) . However, accuracy of the microarray data, complex probe design work, and clinical relevance of the early results have been criticized . 
The main drawbacks of microarray technology are that a single experiment can be very expensive, that many probe designs are based on low-specificity sequences, and that most widely used platforms offer only one set of manufacturer-designed probes, leaving little control over the pool of transcripts analyzed. Microarrays are also highly sensitive to changes in the hybridization temperature, to the purity and degradation rate of the genetic material, and to the amplification process; combined, these factors can affect gene expression estimates.
Fluorescent in situ hybridization
FISH is a cytogenetic method used to locate nucleic acids in cells or sample matrices. In molecular ecology, fluorescently labeled nucleic acid probes can be used to identify genes on chromosomes or to label ribosomal RNA in bacteria or archaea of various taxa by hybridizing only with highly similar nucleic acids. It is possible to use FISH to count specific microbial populations. Santiago et al. detected Salmonella spp. in wastewater reused for irrigation by using FISH as a molecular tool. Amann and Fuchs detected members of the family Enterobacteriaceae and E. coli in drinking water systems, freshwater and river water with this tool. In addition, emerging human pathogens in water, wastewater, and sludge, as well as cellular survival and infection mechanisms, have all been investigated with FISH. Because it is less sensitive to inhibitory substances than PCR, FISH is better suited for complex matrices. However, the fact that only a limited number of phylogenetically distinct targets can be detected simultaneously is a major drawback of FISH.
Loop-mediated isothermal amplification
LAMP is a method for isothermal nucleic acid amplification. Currently, LAMP is used to identify and quantify pathogenic bacteria, with benefits in terms of sensitivity, specificity, and speed. With a detection limit of 10 copies or less of template per reaction, the LAMP approach has also proven to be 10–100 times more sensitive than PCR detection. Lu et al. utilized a LAMP-based method for rapid identification of Legionella spp. from environmental water sources. Koizumi et al. used the loop-mediated isothermal amplification method for rapid, simple, and sensitive detection of Leptospira spp. in urine samples. This method can directly detect pathogenic microorganisms in wastewater, avoiding the tedious steps of culture and nucleic acid extraction. However, the major drawback of LAMP is that designing specific primers is more difficult than for PCR, because LAMP requires 4–6 primers and PCR only two.
Pyrosequencing
Pyrosequencing is a DNA sequencing technique that facilitates microbial genome sequencing to identify bacterial species, discriminate pathogenic strains, and detect genetic mutations that confer resistance to anti-microbial agents. Hong et al. analyzed bacterial biofilm communities in water meters of a drinking water distribution system by the pyrosequencing technique. A study conducted by Ibekwe et al. identified most of the potential pathogenic bacterial sequences in a mixed urban watershed, as revealed by pyrosequencing, as belonging to three major phyla: Proteobacteria, Bacteroidetes, and Firmicutes. The advantages of pyrosequencing for microbiology applications include rapid and reliable high-throughput screening and accurate identification of microbes and microbial genome mutations.
The pyrosequencing instrument can also analyze the complete genetic diversity of anti-microbial drug resistance, including SNP typing, point mutations, insertions, and deletions, as well as quantification of multiple gene copies that may occur in some anti-microbial resistance patterns. However, the DNA present in wastewater samples can limit the sensitivity of this tool, as it requires DNA templates at the picomole level, and a much lower amount of DNA can hamper the output. This technology is also limited by the cost, the complexity of analysis, the need for increasing availability of massive computing power, and the efficiency of data generation.
Digital PCR
To identify enteric virus contamination in water and wastewater, PCR and its variants, such as quantitative PCR (qPCR), real-time RT-PCR, RT-qPCR, nested PCR, and digital PCR (dPCR), have been implemented. qPCR can additionally detect multiplexed viral targets. Digital PCR (dPCR) has proven to be efficient for wastewater surveillance, owing to its increased robustness against PCR inhibitors commonly encountered in more difficult sample types. Heijnen et al. showed that digital PCR may be utilized to detect and quantify mutations in SARS-CoV-2 in raw sewage samples from the cities of Amsterdam and Utrecht in the Netherlands. With its sensitivity and precision in quantification, dPCR was quickly identified as a suitable choice for monitoring SARS-CoV-2 in wastewater. For quantifying human-associated fecal markers in water, dPCR has been found to display superior precision and reproducibility compared with qPCR. With dPCR, the sample analysis cost and processing time are higher than with qPCR. For the quantification of pathogens, dPCR can be a viable alternative when enhanced analytical performance (i.e., accuracy and sensitivity) is essential (see the quantification sketch below).
Whole genome sequencing
Profiling bacterial diversity and potential pathogens in wastewater has been a widely used application of sequencing, a robust analytical tool. For surveillance and outbreak investigations, the state of the art is shifting toward whole genome sequencing (WGS) as a replacement for conventional molecular techniques. WGS study of the complete pathogen genome has the potential to transform outbreak analysis by making it possible to distinguish even closely related bacterial lineages. As demonstrated by Christoph et al., numerous SARS-CoV-2 genotypes were found through sequencing of viral concentrates and RNA recovered directly from wastewater. Fumian et al. identified Norovirus GII genotypes through genome sequencing from a wastewater treatment plant in Rio de Janeiro, Brazil. Mahfouz et al. analyzed whole genome sequences of the indicator species E. coli from the inflow and outflow of a sewage treatment plant, revealing that nearly all isolates were multi-drug resistant and that many were potentially pathogenic. Recently, Mbanga et al. reported the genomics of an antibiotic-resistant Klebsiella grimontii novel sequence type, ST350, isolated from a wastewater source in South Africa. Recent improvements in sequencing technologies and analysis tools have rapidly increased the output and analysis speed of WGS and reduced its overall costs. Nevertheless, genomic surveillance is still challenging due to low target concentrations, complex microbial and chemical backgrounds, and a lack of robust nucleic acid recovery procedures.
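The partition-count arithmetic behind the dPCR quantification mentioned above is worth making concrete. Because a partition can receive more than one target copy, the positive fraction p is Poisson-corrected: the mean copies per partition is λ = -ln(1 - p), and the concentration is λ divided by the partition volume. The sketch below is illustrative only; the droplet volume and counts are hypothetical values, not data from any study cited here.

```python
import math

def dpcr_concentration(positive: int, total: int, partition_volume_ul: float) -> float:
    """Estimate target concentration (copies/uL) from dPCR partition counts.

    Applies the standard Poisson correction: with a fraction p of positive
    partitions, the mean number of copies per partition is lambda = -ln(1 - p).
    """
    p = positive / total
    if not 0.0 <= p < 1.0:
        raise ValueError("positive fraction must be in [0, 1); an all-positive run is saturated")
    lam = -math.log(1.0 - p)           # mean copies per partition
    return lam / partition_volume_ul   # copies per microliter of reaction mix

# Hypothetical example: 4,500 positive droplets out of 20,000,
# each droplet ~0.85 nL (0.00085 uL), a typical droplet-dPCR volume.
print(f"{dpcr_concentration(4500, 20000, 0.00085):.0f} copies/uL")
```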
MALDI-TOF
Matrix-assisted laser desorption ionization time of flight mass spectrometry (MALDI-TOF MS) is a rapid and accurate method for identifying bacterial and fungal isolates in the laboratory. The identification of microorganisms is based on the protein fingerprint unique to each microorganism. V. cholerae non-O1 isolates from wastewater were identified by MALDI-TOF MS by Eddabra et al., and V. alginolyticus isolated from Perna perna mussels was efficiently identified by MALDI-TOF MS by Bronzato et al. Numerous studies have proven the use of MALDI-TOF MS on bacterial and fungal isolates. Croxatto et al. reported that numerous studies have attempted direct testing of urine using MALDI-TOF MS; the method could achieve up to 94% accuracy, but only if the bacterial count is at least 10⁵/mL. Nachtigall et al. found that MALDI-TOF was 80% concordant with RT-PCR in identifying SARS-CoV-2 from nasal mucus secretions. Rybicka et al. found that MALDI-TOF performed better than RT-PCR in detecting SARS-CoV-2. Gerbersdorf et al. have shown that dextran, gellan and xanthan from anaerobic microbial aggregates can be differentially demonstrated by MALDI-TOF MS in different wastewaters. The exopolysaccharides in biofilms are found to be important in microbial adhesion and aggregation. Picó et al. found that MALDI-TOF can be adapted for rapid detection and characterization of proteins in wastewater. However, MALDI-TOF MS has relatively low resolving power compared with other high-resolution mass spectrometers, and the accuracy of identification depends on the quality of the reference database.
Challenges of wastewater-based epidemiology
6.1 Complexity of wastewater matrix
Although wastewater-based epidemiology (WBE) offers appealing advantages for the monitoring of public health, it comes with several challenges. One major challenge is the level of biomarkers (chemical and/or biological compounds), which are far more dilute in wastewater and therefore difficult to trace. The complex matrix is also challenging for pathogen detection. Nucleic acid-based polymerase chain reaction (PCR) is the primary technique for analyzing pathogens; however, wastewater contains a variety of PCR inhibitors, including fat, protein, and other compounds, that might affect PCR analysis.
6.2 Estimation of population size
Dynamic estimation of population size is another challenge. For example, it may be difficult to determine whether the presence of a pathogen in wastewater was caused by visitors passing through or by residents of the community in the concerned area. However, the presence of pathogens in wastewater, whether or not from the local population, undoubtedly provides valuable information, which may indicate an outbreak of disease in the community, thereby providing real-time data for proper preparedness and response. This also allows WBE to provide timely warning of infectious disease outbreaks.
6.3 Detection methods
The problems facing detection methods include the physical distinctions between the major pathogen groups, the presence of inhibitors in the sample, the need for established standard techniques for sample collection, culture-independent detection methods, and identification of pathogen host origin. Specificity, sensitivity, repeatability of results, rapidity, automation, and low cost are the most significant prerequisites for reliable analysis. Furthermore, because human pathogens that reside in a viable but non-culturable (VBNC) form, such as
E. coli, Helicobacter pylori, and V. cholerae, have a wide environmental dispersion, culture-dependent approaches may provide false-negative results.
Economics of wastewater surveillance
Performing clinical testing for mass surveillance puts a huge financial burden on low- and middle-income countries (LMICs), because WHO-recommended testing protocols are costly to implement. In addition, the recently recommended real-time surveillance of pathogens of concern requires prohibitively expensive next-generation sequencing technology that is less affordable for LMICs. While clinical surveillance will always be vital for the response to infectious diseases, wastewater-based surveillance allows for quick and economical surveillance, even in areas that are currently unexplored. Wastewater monitoring enables quantification of community prevalence and rapid detection of pathogens. At sites where wastewater from the population collects and mixes, so does a diverse array of microbes shed by individuals. Pathogen concentrations can accurately estimate prevalence (the number of current infections in the population), and, given that wastewater trends often precede corresponding clinical detections, they may allow for early detection. To summarize, because wastewater surveillance covers a wide-scale population, the additional cost per resident would be very small, even when focusing on an institutionalized population.
Primary screening with wastewater surveillance is highly likely to be more economically justifiable and scalable than primary screening with clinical tests, while providing results in real time. However, progressing toward more equitable and sustainable surveillance will require continued development of local, self-sustaining scientific ecosystems through laboratory and computational methods development and training, capacity-building efforts, and financial support of domestic scientific enterprise.
Conclusion
Wastewater surveillance has shown great potential in providing complete health status information in a comprehensive and near-real-time manner at the community level. It offers a unique perspective on the spread and evolution of pathogens, aiding in the prevention and control of disease epidemics. This review underscores the importance of continued research and development in this field to overcome current challenges and maximize the potential of wastewater surveillance in public health. It also offers a framework and evidence foundation to guide laboratories in selecting the most suitable tools for implementing wastewater surveillance. Since many emerging pathogens are causing illnesses and waterborne outbreaks, pathogen indicators need to be continually strengthened. Optimizing presently available technologies could increase our understanding of infectious pathogens, our ability to predict pathogen contamination, and our potential to safeguard public health. These technologies would be able to identify causal agents more precisely and quickly, detect viable microorganisms and characterize them according to microbial communities, and enable the creation of accessible data. If wastewater monitoring is conducted consistently, it may be utilized to locate possible pathogen carriers, provide comprehensive data, determine the origin of infections, and deliver reliable early warning. However, there is still a lot of work to be done for adoption on a broader scale.
SS: Supervision, Visualization, Writing – original draft, Writing – review & editing. AmA: Writing – original draft. SuA: Writing – original draft. ShA: Writing – original draft. AsA: Writing – original draft. GO: Writing – original draft. MC: Writing – review & editing. SBS: Writing – original draft. GB: Writing – review & editing. WE: Writing – original draft, Writing – review & editing.
Mechanical properties of a polylactic 3D-printed interim crown after thermocycling
275d82af-d033-410a-be86-abd8dcda810b
11781676
Dentistry[mh]
Interim restorations are an integral part of prosthodontic treatment. Interim restorations are used to protect the tooth structure until the final restoration is placed, maintain aesthetics and function during the healing period, evaluate patient acceptance, and determine the feasibility of transitioning to the final restoration. For these purposes, appropriate physical properties, mechanical strength, color, ease of fabrication, retention of properties in the intraoral environment, and in vivo biocompatibility are essential. Dental polymers primarily used as interim materials are limited to biomaterials such as polymethylmethacrylate (PMMA), bisphenol resins, and polyaryletherketone (PAEK). Although each material has certain limitations, such as the high shrinkage of PMMA and the low flowability of bisphenol A-glycidyl methacrylate (Bis-GMA) and urethane dimethacrylate (UDMA), they have been optimized for use in dental applications, minimizing the impact of these weaknesses. As environmental issues such as material consumption and pollution escalate, subtractive manufacturing technology is being replaced by additive manufacturing of restorations via various 3D printing technologies. Compared with subtractive manufacturing, the additive method can reduce the consumption of material and energy and the wear of cutting tools. In addition, dental polymers for 3D printing are becoming increasingly diverse, and various types of materials, such as liquids, filaments, granules, and powders, are available for use with 3D printing technologies such as fused deposition modeling (FDM), stereolithography (SLA), digital light processing (DLP), and selective laser sintering (SLS). In FDM, products are fabricated via the extrusion of liquefied filaments or granules through a moving nozzle to layer materials on a scaffold. Compared with SLA and DLP, FDM has disadvantages, such as rougher surfaces and limited reproduction of fine detail, but it is widely used for fabricating diagnostic models, customized individual trays, and provisional restorations in dentistry because of its advantages in terms of time and cost. Unlike other dental polymers, such as polymethylmethacrylate (PMMA) and bisphenol resins, which contain residual monomers and eluting additives that can cause local or systemic cytotoxicity, polylactic acid (PLA), derived from nontoxic natural renewable resources, is considered one of the most biocompatible and biodegradable biopolymers for use in suture materials, surgical membranes, medical implants, and orthopedic devices. This suitability is due to the alpha-hydroxy acids in PLA byproducts, which do not interfere with tissue healing and are excreted as water and carbon dioxide through the tricarboxylic acid cycle of the human body. Owing to the advantages of PLA, such as its biocompatibility, biodegradability, ease of fabrication, moderate strength, low cost, and low energy demand for manufacturing, there has been interest in and research into the possibility of using PLA to produce interim restorations via additive manufacturing. A few studies have investigated the possibility of using PLA in dental prostheses and reported a clinically acceptable marginal fit. A study reported that a three-unit provisional fixed dental prosthesis (FDP) fabricated from PLA via FDM showed only deformation and not fracture, owing to its greater flexibility compared with PMMA specimens fabricated via SLA or DLP.
Our previous study on the mechanical properties of PLA bar-shaped samples, in which intraoral conditions such as temperature and saliva were not considered, revealed that PLA FDM samples have lower flexural strength and surface roughness and a higher elastic modulus than milled PMMA and SLA-printed bisphenol samples, and that the mechanical properties of PLA FDM samples are within the clinically acceptable range. However, few studies have investigated the clinical characteristics, including physicochemical and mechanical properties, in the intraoral environment. This information is important because an interim restoration should be able to withstand external stress from functional loads, saturated humidity, and changes in temperature for a period. Therefore, this study aimed to evaluate the potential clinical use of PLA as a material for interim crowns by comparing its mechanical properties, such as fracture strength (FS), Shore D hardness, and surface roughness, with those of conventionally used CAD/CAM dental polymers, specifically PMMA (via subtractive manufacturing) and bisphenol (via SLA). Thermocycling was performed to replicate aging in the intraoral environment. The hypothesis was that the mechanical properties of PMMA, bisphenol, and PLA would not significantly differ after thermocycling. The PLA used in this study was approved as a material for interim crowns and bridges by the Korea Food and Drug Administration (KFDA) after a series of tests, such as skin sensitization, intracutaneous reactivity, oral mucosa irritation, in vitro cytotoxicity, and dental device tests, which were conducted by the Yonsei University Medical Center.
Test specimens and materials
Three types of conventional CAD/CAM polymers for interim restorations were tested. PMMA samples were fabricated via subtractive manufacturing. PLA samples were fabricated via FDM additive manufacturing, and bisphenol samples were fabricated via SLA manufacturing. The parameters applied to this investigation were the values recommended in previous studies or by the manufacturers, including the layer height, nozzle size, ejection speed, and curing time. The bisphenol SLA samples were immersed in 100% isopropyl alcohol to remove resin monomers (Medifive, Tornado, Korea), and postpolymerization was performed for 210 s using a UV light polymerization unit (LC-3D print box, NextDent, Netherlands); the samples in the FDM group did not undergo postpolymerization processing. For the interim single crown samples, a right first molar phantom tooth of the mandibular typodont system (Nissin Dental Product Inc., Tokyo, Japan) was prepared with 6° convergence and chamfer-ended margins, with an axial reduction of 1.5 mm and an occlusal reduction of 2.0 mm. A total of 75 dies of the prepared tooth were made of POLYROCK (Cendres Metaux) through a conventional polyvinylsiloxane (PVS) impression procedure. Virtual images of the crown samples were obtained with a laboratory model scanner (Trios 4, 3Shape, Denmark). The samples were virtually designed with a CAD software program (exoCAD, exoCAD GmbH, Germany), converted into stereolithography (STL) format, and then milled or 3D printed into crowns 8.0 mm in height with preset CAD parameters of 0.05 mm cement space and 0.05 mm above the margin line to be seated on the abutment. The samples for each group were cemented on individual dies with Temp-bond E (Kerr, Brea, USA) under 50 N constant pressure for six minutes by one technician.
The samples used for the Shore D hardness and surface roughness tests were milled or 3D printed into round discs with a diameter of 5 mm and a thickness of 2 mm according to ISO 868. All of the crown samples were thermocycled for 10,000 cycles (5°C/55°C) with a dwell time of 30 s and a transition time of 10 s using a thermocycling machine (Thermal Cyclic Tester RB 508, R&B Inc., Korea) to simulate one month in the oral environment. The sample size of 25 interim crowns for each group was determined via a sensitivity power analysis with 80% power, a 5% significance level, and an effect size of 0.4 using a software program (G*Power, v3.1.9.2; Heinrich-Heine-Universität Düsseldorf).
Fracture strength and fracture mode
FS was measured after the thermocycling process. The test was conducted with a universal testing machine (Instron 3366; Instron Corporation). The samples, cemented on individual dies using temporary cement (Kerr Dental, Brea, CA, USA), were placed on a holding jig, and a vertical load was applied to the center of each sample with a 10 kN load cell at a cross-head speed of 1.0 mm/min using a 9.5 mm diameter steel ball until the sample fractured. The FS values were recorded in newtons (N).
Shore D hardness
The Shore D hardness was measured in both the before-thermocycling and after-thermocycling subgroups. Five measurements were performed at 25°C for each sample according to ISO 868 by placing the sample under the indenter of a Shore durometer (HPSD; Schmidt), and the mean value was recorded.
Surface roughness
Surface roughness was measured in both the before-thermocycling and after-thermocycling subgroups. The surfaces of two samples from each group were analyzed at 9 locations per sample via a 3D optical surface roughness analyzer with a vertical resolution of 0.05 nm and a root mean square (RMS) repeatability of 0.01 nm (Contour GT-X3 BASE; Bruker). The centerline average roughness (Ra) and ten-point median height (Rz) were calculated. The objective magnification was 50×, and the zoom was 2×. The size of the field of view was 0.09 × 0.066 mm².
Scanning electron microscopy (SEM)
To assess the surface topography, one sample per group was observed with field emission SEM (FE-SEM) (JEOL-7800F; JEOL, Ltd.) at an acceleration voltage of 2 kV and magnifications of ×100 and ×5000. The sample from each material group was left to dry at room temperature for 24 hours and then sputter-coated with gold and palladium for 180 s before FE-SEM examination.
Statistical analysis
All the statistical analyses were performed using the SPSS 20 Statistics package (IBM SPSS; IBM Corp.) and reviewed by an independent statistician. Descriptive analysis was performed, and normality tests were conducted using the Shapiro–Wilk test. To compare fracture strength after thermocycling and surface roughness before and after thermocycling, an independent t test was performed. Analysis of variance (ANOVA) was conducted to check for significant differences among the test groups, and the Bonferroni post hoc correction was used for multiple comparisons between individual groups. For Shore D hardness, the Wilcoxon rank sum test was performed due to the small sample size (N = 5) per group. A significance level of 0.05 was set for all statistical analyses.
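The sample size calculation reported above can be approximated outside G*Power. The sketch below is a rough reconstruction under a stated assumption: the study reports only the power, significance level, and effect size, so a one-way ANOVA family across the three material groups (Cohen's f = 0.4) is assumed here, and the resulting n may not exactly match the 25 crowns per group used in the study.

```python
# Approximate the reported sensitivity power analysis in Python rather than
# G*Power. Assumption (not stated in the paper): one-way ANOVA, three groups.
from statsmodels.stats.power import FTestAnovaPower

power_analysis = FTestAnovaPower()
# Solve for the total number of observations across all groups.
n_total = power_analysis.solve_power(
    effect_size=0.4,  # Cohen's f (conventionally a "large" effect)
    alpha=0.05,       # 5% significance level
    power=0.80,       # 80% power
    k_groups=3,       # PMMA, bisphenol, PLA
)
print(f"Total N = {n_total:.1f} (about {n_total / 3:.0f} crowns per group)")
```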
Fracture strength and fracture modes
The FS was highest in the PMMA group (2787.93 N), followed by the bisphenol (2165.47 N) and PLA (2088.78 N) groups, but there was no significant difference between the bisphenol and PLA groups. The PMMA group predominantly exhibited vertical fractures directly beneath the steel ball, but the fragments were retained without dislodgement. In the bisphenol group, midline fracture with loss of more than half of the crown sample was mainly observed. The PLA group was characterized by near-tearing of the material or deformation of the die rather than crown fracture.
Shore D hardness
The highest Shore D hardness was observed in the bisphenol group both before and after thermocycling, followed by the PMMA and PLA groups. While the Shore D hardness of the PMMA and bisphenol groups did not significantly change after thermocycling (p>0.05), the PLA group showed a significant increase (p<0.05). The Shore D hardness values decreased in the PMMA and bisphenol groups, resulting in insignificant differences between PMMA and bisphenol after the thermocycling procedure.
Surface roughness
In terms of Ra, a measure of average roughness, the PMMA and PLA groups had similar values, whereas the bisphenol group had the lowest roughness. The ten-point median height Rz increased in the order of the bisphenol, PMMA, and PLA groups. Within each material, the PMMA group showed no change in surface roughness after thermocycling. For the PLA group, a significant increase in roughness was observed in Ra, but no change was observed in Rz. Notably, in the bisphenol group, significant increases in both Ra and Rz were observed after thermocycling, making it the roughest of the three materials (p<0.0001).
SEM observations
Representative FE-SEM images revealed that the PLA samples were generally smooth, but uniform irregularities in the form of layers due to filament stacking were observed. In the PMMA group, a uniform pattern due to the cutting process was observed, but the irregularities were less pronounced than those in the PLA group.
The bisphenol samples were produced by photopolymerization, and no patterned roughness was observed. However, the bisphenol group showed a significant increase in roughness after thermocycling, which was confirmed by SEM images at ×5000 magnification.
This study aimed to assess the mechanical properties of a PLA interim single FDP after thermocycling to simulate intraoral conditions. The initial hypothesis that the mechanical properties of PMMA, bisphenol, and PLA would not significantly differ after thermocycling was partially confirmed. The thermocycling method used to simulate one month of interim crown use, based on ISO 7405, follows previous studies in which the number of cycles varied from 5,000 to 60,000 to simulate up to six months of use in the oral environment. A total of 10,000 thermocycles were performed to simulate one month of use in this study. This study aimed to compare the mechanical properties of PLA with those of conventionally used CAD/CAM interim dental restorations. The parameters in this investigation were based on recommendations from previous studies and manufacturers.
For PLA, postprocessing annealing is the main process for improving mechanical properties; hence, the bed temperature is usually kept above the glass transition temperature (Tg, approximately 60°C) to maximize bonding between the deposited layers. In this study, the nozzle temperature and bed temperature were maintained at approximately 200°C and 65°C, respectively, to balance the degree of crystallinity and postprocessing annealing. Previous studies have explored the effects of the printing angle on the mechanical properties of 3D-printed interim dental restorations. Alharbi et al. reported that a sample printed perpendicular to the load direction exhibited higher compressive strength, whereas Osman et al. recommended 135° for DLP. Additionally, one study reported that printing at 30° resulted in the highest FS. In this study, a 135° printing orientation was consistently used for both PLA FDM and SLA, based on prior research indicating that this angle produces optimal mechanical properties. In this study, the PMMA group presented the highest fracture resistance (2787.78 N) among the three groups, whereas the PLA (2088.78 N) and bisphenol (2165.47 N) groups showed similar strengths without significant differences. These values, recorded in newtons, were much greater than those reported in previous studies, in which the strengths of single- or three-unit FDPs ranged from 540 N to 1350 N depending on the materials used. The large difference in the absolute values is thought to result from the sample design, test method, and CAD/CAM manufacturing parameters. Generally, the molecular alignment, weight, crystallinity, and postprocessing annealing of a printed polymer are affected by printing parameters, including the temperature, output position, build angle, number of layers, and configuration of the support structure. The average maximum bite force is reported to vary widely, ranging from 250–286 N for the anterior teeth to 580–727 N for the posterior teeth. Thus, compared with the PMMA group produced via subtractive manufacturing and the bisphenol group produced via SLA additive manufacturing, the FS of the PLA group produced via FDM additive manufacturing after thermocycling could be acceptable for clinical use. The fracture pattern in the PLA group differed from that in the other groups, showing a torn pattern instead of the fractures or cracks observed in the PMMA and bisphenol groups. This is similar to the results of a previous study reporting that conventional PMMA, DLP-PMMA, and SLA-PMMA exhibited crack or fracture patterns, whereas the flexural strength of the PLA group was difficult to measure because the samples deformed without breaking. Our previous study reported that the elastic modulus of the PLA group was greater than those of the PMMA and bisphenol groups, which might be related to the fracture pattern of the PLA group. During practical chewing, the ability of the PLA product, as an interim FDP, to deform rather than fracture and fail under functional loads may be advantageous, as long as the functional load is within the sustainable range of the PLA FDP. The fracture pattern in each group can also be explained by the Shore D hardness results. In this study, the bisphenol group presented the highest value, followed by the PMMA and PLA groups, regardless of the thermocycling procedure, consistent with the results of our previous study.
It could be inferred that the high Shore D hardness of the bisphenol group causes destructive fracture patterns and dislodgement when forces are concentrated at the occlusal contact. In contrast, the PLA group, with a lower Shore D hardness, was more likely to deform than to fail or fracture. One of the main issues is whether the mechanical properties of an interim crown made of PLA are maintained over a provisional period in the humid environment of the oral cavity: how do temperature changes in the oral cavity, combined with humidity, affect the mechanical properties of PLA? PLA degrades through hydrolysis of the backbone ester groups, and the rate of degradation depends on the crystallinity, molecular weight and distribution, morphology, water diffusion rate, and stereoisomer content of PLA. Because PLA is a hydrophobic, aliphatic polyester, the initial hydrolysis rate at the end of the polymer chain is very slow. More than 90% of the material has been reported to remain after 133 days at 37°C and after 28 days at 60°C. Furthermore, hydrolysis is rapidly accelerated when carboxyl groups are formed at the end of the chain, forming water-soluble oligomers. In the present study, the Shore D hardness in the PLA group increased after thermocycling, although the difference was not statistically significant. These findings suggest that PLA can maintain its molecular structure without undergoing hydrolysis even in a humid intraoral environment during the provisional function period. In addition, the Shore D hardness significantly differed between the groups before thermocycling, but after thermocycling, a significant difference was observed only between the PLA group and the bisphenol group, owing to the increased value of the PLA group and the decreased values of the PMMA and bisphenol groups; this could further support the clinical potential of the PLA interim crown. Surface roughness is an important factor to consider for provisional restorative materials, as excessive increases in surface roughness in the intraoral environment could lead to concerns such as plaque accumulation, particularly for materials such as PLA with hydrolytic properties. Notably, the bisphenol group presented the lowest surface roughness values in terms of Ra and Rz before thermocycling, but after thermocycling, it presented the highest values, with statistically significant differences among the experimental groups. The Ra values of the PLA group before and after thermocycling differed significantly but were not significantly different from those of PMMA. The Rz values before and after thermocycling were not significantly different. Regarding bacterial adhesion, some in vivo studies have indicated a threshold surface roughness for bacterial retention (Ra = 0.2 μm), above which plaque accumulation significantly increases, heightening the risk of caries and periodontal inflammation. Based on this, bisphenol may not be ideal for a long-term provisional FDP, whereas PLA could serve as a suitable alternative material for an interim prosthesis. The FE-SEM analysis and surface roughness results were not mutually supportive, even though the FE-SEM images selected corresponded to the mean surface roughness values of the test groups. This discrepancy is due to different sites being analyzed in the FE-SEM and surface roughness tests.
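As background to the roughness comparisons above, the sketch below shows how Ra and a ten-point Rz can be computed from a one-dimensional height profile. It is illustrative only: the optical analyzer used in the study reports these parameters directly, the trace here is synthetic, and taking the five highest and five lowest samples is a simplification of the formal ten-point peak/valley definition.

```python
import numpy as np

def ra_rz(profile):
    """Return (Ra, Rz) for a 1-D height profile in micrometers.

    Ra: arithmetic mean of absolute deviations from the mean line.
    Rz: simplified ten-point height, mean of the five highest samples
        minus mean of the five lowest samples (after mean-line removal).
    """
    z = profile - profile.mean()       # deviations from the mean line
    ra = np.abs(z).mean()
    z_sorted = np.sort(z)
    rz = z_sorted[-5:].mean() - z_sorted[:5].mean()
    return ra, rz

# Synthetic profilometer trace: a periodic texture plus measurement noise.
rng = np.random.default_rng(0)
trace = 0.05 * np.sin(np.linspace(0.0, 20.0 * np.pi, 500)) + rng.normal(0.0, 0.01, 500)
ra, rz = ra_rz(trace)
print(f"Ra = {ra:.3f} um, Rz = {rz:.3f} um")
```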
A few limitations of the present investigation should be addressed. This in vitro study could not reflect the complicated and diverse conditions of the oral cavity. In addition, even when an identical 3D printer or milling device is used to manufacture a provisional prosthesis, the mechanical properties of an interim prosthesis vary depending on the related parameters or conditions. Along with technical improvement of FDM to increase the accuracy of PLA products and further research on PLA materials, such as their flexibility, clinical trials are recommended to expand the analysis of the mechanical properties of PLA interim FDPs produced by additive manufacturing, with a focus on aspects such as biocompatibility, color stability, and reparability. Within the limitations of this in vitro study, the following conclusions were drawn: After the thermocycling process, the PLA group produced via FDM additive manufacturing showed fracture strength, Shore D hardness, and surface roughness similar to those of the PMMA group produced via subtractive manufacturing and the bisphenol group produced via SLA additive manufacturing. The PLA single interim FDP printed via FDM manufacturing maintained appropriate mechanical properties after thermocycling simulating a one-month provisional period; thus, PLA could be used as an alternative to conventional interim restoration materials.
Cardiac Computed Tomography Measurements in Pulmonary Embolism Associated with Clinical Deterioration
4367f179-1cda-48b2-8d86-edf3cc55c997
11931709
Cardiovascular System[mh]
Established pulmonary embolism (PE) risk-stratification guidelines employ binary assessments of hemodynamic stability and right ventricular dysfunction (RVD) using imaging modalities and troponin. The main imaging modalities for assessing RVD are echocardiography and computed tomography pulmonary angiogram (CTPA). Comprehensive echocardiography provides multifaceted RVD assessment; however, it rarely confirms a diagnosis of PE and may not be immediately available. A CTPA diagnoses PE and identifies limited parameters of RVD, usually as right ventricle (RV) dilatation. Radiologists usually report RVD as a binary variable of the RV to left ventricle diameter ratio (RV:LV) using a range of cut-offs from 0.9 to 1.5. Right ventricular dysfunction on CTPA, when expressed as a continuous variable, may be a better predictor than its binary version. Consistent reporting of RVD measurements may be labor intensive for radiologists. Artificial intelligence (AI) algorithms have been developed to assist radiologists' workflow by simultaneously interpreting the presence of filling defects and measuring cardiac chamber sizes. While RVD by CTPA or echocardiography is an independent predictor of acute clinical deterioration, there have been inconsistent results regarding its relationship with 30-day mortality. Echocardiography studies have shown that as RVD severity increases, both the risk of clinical deterioration and the use of advanced interventions increase. We aimed to characterize the association of AI-derived CTPA cardiac measurements with in-hospital clinical deterioration (primary outcome) in a registry of patients with intermediate- to high-risk PE. The secondary objective was to compare retrospectively derived AI measurements in patients with or without use of advanced interventions (secondary outcome). For our exploratory objectives, we compared 1) radiologist vs AI-derived CTPA categorization of RV:LV and 2) AI vs echocardiography measurements. If, by retrospective study, we were to show that AI-derived CTPA measurements are strongly associated with acute clinical deterioration, then capturing immediately available CTPA cardiac measurements within the clinical workflow could improve PE risk stratification.
Study Setting and Design
We conducted a retrospective analysis of data in our Clinical Outcomes Pulmonary Embolism Research Registry (COPERR). The COPERR is populated with adult patients identified as intermediate- or high-risk PE at presentation to any of eight Atrium Health emergency departments (ED) in North Carolina. We extracted data for registry patients who were treated between June 6, 2018 and August 31, 2023. In November 2023, we requested a retrospective, remote AI analysis of CTPAs with confirmed index PE from this population of registry patients.
Selection of Participants
Using the COPERR database, we identified adult patients (≥18 years) presenting to a participating ED who had 1) acute symptomatic PE as the primary ED diagnosis (by positive CTPA) and 2) intermediate- or high-risk PE classification. The PE risk was classified by emergency clinicians using European Society of Cardiology (ESC) guidelines and our PE response team's (PERT) "Code PE" pathway.
For the exploratory objective, we included above-mentioned patients with comprehensive transthoracic echocardiography (TTE) and RV-focused measurements completed within 24 hours of PE diagnosis. We included patients with intermediate- or high-risk PE at ED presentation with CT images of 1-mm slice thickness available for AI analysis for the primary objective and with any AI analysis for the secondary objective. We excluded the following: patients with PE diagnosed only by high-probability ventilation/perfusion nuclear imaging; those whose point-of-care TTE findings were highly suspicious of PE but PE was not confirmed by CT; and those whose CTPA was not for index PE. We also excluded CTPAs that could not be analyzed by AI algorithm. Data Collection and Processing Data entered in COPERR and available for analysis included demographics; clinical presentation features (including initial and worst vital signs within three hours of ED presentation); comorbidities; PE risk factors; criteria used for PE risk stratification; radiologist report of RV:LV; TTE measurements, dates, and times; PERT notification dates and times; laboratory measurements; PE-related outcomes and interventions; and adverse events. , , Trained data extractors retrieved information from the electronic health record and entered data in the registry. During real-time clinical care of index PE hospitalization, RV:LV was measured by board-certified radiologists, and TTE was performed by certified cardiac sonographers from an echocardiography laboratory accredited by the Intersocietal Commission for the Accreditation of Echocardiography Laboratories. Given this was a retrospective study, the radiologists and sonographers were not aware of the study or its objectives. Radiologists measured RV:LV on the minor cardiac axis on CTPA. Measurements were at the widest points between the inner free wall of each ventricle to the inner wall of the ventricular septum. Radiologists used RV:LV cut-off of 1.0, with less than 1.0 considered negative for RV dilatation. Sonographers used standard or RV-focused apical views to measure end-diastolic RV inner diameter at the base. The LV basal end-diastolic measurements were performed in the parasternal long axis view. Images were uploaded into a secure local server and portal system Merge Cardio (Merative LP, Ann Arbor, MI [formerly IBM Watson Health]). Board-certified cardiologists interpreted images and measurements and were blind to study and clinical outcomes. Only initial echocardiography measurements for index PE hospitalization were used in this study. For each registry patient included in the study, we exported the fully anonymized digital imaging and communications in medicine (DICOM) file for each CTPA to share with the AI vendor for analysis. We transferred DICOM data from our study center to the server of an AI operating system (Aidoc, Tel Aviv, Israel) using encrypted secure file transfer protocol. Prior to transfer, all data were de-identified per the safe harbor de-identification protocol defined by the Health Insurance Portability and Accountability Act. The de-identified accession number was extracted from the DICOM header of shared studies. The study center used the key pair of de-identified accessions and identified accessions computed at the data anonymization step to re-identify data for the study. The Aidoc PE algorithm is FDA cleared via the 510(k) premarket notification pathway required of all AI software medical devices. 
Aidoc's use in detecting PE on CTPAs has been previously reported. The prototype of the PE detection algorithm was developed using input from anonymized 1-mm series of CTPA reconstructions; it is based on a deep convolutional neural network with a ResNet architecture and was trained and validated on over 25,000 CTPAs taken from many institutions. The Aidoc algorithms had specific CTPA inclusion criteria, including slice thickness, kernel, and contrast phase, to allow analysis. Aidoc has two software components: one for software analysis of CTPA DICOM files, and another for real-time analysis and reporting of interpretations to clinicians and radiologists. Only the first component was used in this study. The AI analyses of CTPAs and measurements were not performed during real-time clinical care. Each CTPA was analyzed by two AI algorithms independently. For the first algorithm, if a PE was detected, AI determined whether the PE was a central clot or not. A central clot was defined by the following locations: pulmonary trunk; saddle (bifurcation of the main pulmonary artery trunk); or right or left main pulmonary arteries or lobar pulmonary arteries. For the second algorithm, AI measured the largest diameter of the RV and of the LV (between inner walls) and calculated the ratio of RV to LV. This was produced in a four-step process comprising ventricular detection, ventricular segmentation, interventricular septum detection, and caliper positioning and measurement. The AI algorithm also identified patients with large central PEs. It is important to note that a subsegmental PE did not produce a positive result. This was done to allow the AI-augmented clinical workflow to accurately identify acute PEs with RV dilatation as necessary conditions for intermediate- and high-risk PE classification. The AI-based algorithm variables included the following categorical values: 1) did the Aidoc algorithm analyze the data (yes or no); and 2) did the CTPA contain a PE (yes or no)? The AI-based continuous variables were RV basal diameter, LV basal diameter, and RV:LV. All data for AI-derived CTPA variables were matched to pertinent study IDs and uploaded into a standard electronic form within Research Electronic Data Capture (REDCap) tools at our institution.
Outcomes
The primary outcome was PE-related clinical deterioration, defined as a composite of one or more of the following clinical deterioration events within days of the index PE hospitalization: death; cardiac arrest; sustained hypotension treated with vasoactive medications; or rescue respiratory intervention (mechanical or positive pressure ventilation). The secondary outcome was use of advanced PE-specific interventions, including systemic thrombolysis, catheter-directed interventions, extracorporeal membrane oxygenation (ECMO), or surgical embolectomy.
Statistical Analysis
Sample size was determined by the number of patients eligible for study analysis. To determine association with PE-related clinical deterioration (primary outcome), we used various statistical methods. We used bivariable analysis with the Student t-test or chi square to stratify by primary outcome groups. We conducted multivariable analyses for the primary outcome in two ways. First, we used least absolute shrinkage and selection operator (LASSO) regression to develop two models, one with AI assessment variables only and one with all independent variables. We reported missingness of each variable and used complete case analysis.
We expressed strength of association as odds ratios with 95% confidence intervals (CI). Second, we used random forest (RF) to statistically infer the strength of the association of all independent variables in the dataset and to identify the top 20 predictors of PE-related clinical deterioration (primary outcome) in a variable importance plot. For each model's prognostic performance on the primary outcome, we reported discrimination as area under the curve (AUC) and calibration as calibration plots with calibration statistics, including Brier score, scaled Brier score, intercept, and slope. Performance for the RF and LASSO logistic models was based on out-of-bag samples and 10-fold cross-validation, respectively. Finally, to address the trade-off between false positives and false negatives, we used the Youden index to determine optimal cut-offs of RV:LV and other AI-derived measurements for prognosis of clinical deterioration (illustrated in the sketch at the end of this section). For the selected optimal RV:LV and other AI cardiac measurements, we determined sensitivity, specificity, likelihood ratios, and AUC with 95% CI. To determine association with the use of advanced interventions (secondary outcome), we used bivariable analysis with the Student t-test or chi square to stratify by secondary outcome groups. To measure reliability between AI-derived and radiologist CT classification of RV:LV ≥ 1.0 vs < 1.0, we used the Cohen kappa with its 95% CI. We used the suggested guidelines of Landis and Koch to describe the strength of agreement for the κ statistic: less than 0 = poor; 0 to 0.20 = slight; 0.21 to 0.40 = fair; 0.41 to 0.60 = moderate; 0.61 to 0.80 = substantial; and 0.81 to 1.00 = almost perfect. We reported mean and standard deviation time intervals in hours between PERT notification and TTE for the middle 95%. We used two methods to assess agreement between AI-derived CT cardiac and TTE measurements for RV, LV, and RV:LV. First, we used Pearson correlations with 95% CIs for continuous variables to test the magnitude and direction of linear relationships. Second, we used Bland-Altman plots to depict the relationship of difference and mean for each pair of CTPA and TTE measurements.
Disclosures
Regarding the relationship with the company that developed and markets the AI-based PE algorithm used in this study, we declare that Aidoc had no role in the design of the study; the collection, analysis, and interpretation of data; or the preparation of the published manuscript. We further declare that we have not received and will not receive any compensation, direct or indirect, from Aidoc or any of its affiliates. We do not own stock in the company.
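To make the Youden-index step in the statistical analysis above concrete, the sketch below computes an ROC curve and selects the RV:LV cutoff that maximizes J = sensitivity + specificity - 1. The arrays are hypothetical stand-ins for AI-derived RV:LV values and the deterioration outcome; they are not data from this study.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical RV:LV values for 200 patients without and 50 with deterioration.
rng = np.random.default_rng(42)
rv_lv = np.concatenate([rng.normal(1.1, 0.2, 200),   # no deterioration
                        rng.normal(1.5, 0.3, 50)])   # deterioration
outcome = np.concatenate([np.zeros(200), np.ones(50)])

fpr, tpr, thresholds = roc_curve(outcome, rv_lv)
youden = tpr - fpr                     # J = sensitivity + specificity - 1
best = np.argmax(youden)               # index of the J-maximizing threshold
print(f"AUC = {roc_auc_score(outcome, rv_lv):.2f}")
print(f"Optimal RV:LV cutoff = {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```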
To determine association with the use of advanced interventions (secondary outcome), we used bivariable analysis with the Student t-test or chi-square test to compare groups stratified by the secondary outcome. To measure reliability between the AI-derived and radiologist CT classifications of RV:LV ≥ 1.0 vs < 1.0, we used the Cohen kappa with its 95% CI. We used the suggested guidelines of Landis and Koch to describe the strength of agreement for the κ statistic: less than 0 = poor; 0 to 0.20 = slight; 0.21 to 0.40 = fair; 0.41 to 0.60 = moderate; 0.61 to 0.80 = substantial; and 0.81 to 1.00 = almost perfect. We reported mean and standard deviation time intervals in hours between PERT notification and TTE for the middle 95%. We used two methods to assess agreement between AI-derived CT cardiac and TTE measurements for RV, LV, and RV:LV. First, we used Pearson correlations with 95% CIs for the continuous variables to test the magnitude and direction of linear relationships. Second, we used Bland-Altman plots to depict the relationship of difference and mean for each pair of CTPA and TTE measurements.
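These agreement analyses can be sketched in base R as follows (simulated paired measurements; illustrative only, not study data):

# Illustrative agreement workflow in base R on simulated paired measurements
# (ct = AI-derived CTPA RV:LV, tte = echocardiographic RV:LV; invented data).
set.seed(2)
n   <- 300
ct  <- rnorm(n, mean = 1.4, sd = 0.3)
tte <- ct - 0.3 + rnorm(n, sd = 0.25)   # simulate a systematic negative bias

# Cohen kappa for the binary classification RV:LV >= 1.0, computed by hand
tab <- table(ct >= 1.0, tte >= 1.0)
po  <- sum(diag(tab)) / sum(tab)                      # observed agreement
pe  <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # chance agreement
(po - pe) / (1 - pe)                                  # kappa

# Pearson correlation with 95% CI for the continuous measurements
cor.test(ct, tte)

# Bland-Altman plot: difference against mean, with bias and limits of agreement
d <- ct - tte
m <- (ct + tte) / 2
plot(m, d, xlab = "Mean of CTPA and TTE RV:LV", ylab = "CTPA minus TTE")
abline(h = mean(d))                                   # bias
abline(h = mean(d) + c(-1.96, 1.96) * sd(d), lty = 2) # limits of agreement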
Disclosures
Regarding the relationship with the company that developed and markets the AI-based PE algorithm used in this study, we declare that Aidoc had no role in the design of the study, the collection, analysis, and interpretation of data, or the preparation of the published manuscript. We further declare that we have not received and will not receive any compensation, direct or indirect, from Aidoc or any of its affiliates. We do not own stock in the company.

We conducted a retrospective analysis of data in our Clinical Outcomes Pulmonary Embolism Research Registry (COPERR). The COPERR is populated with adult patients identified as having intermediate- or high-risk PE at presentation to any of eight Atrium Health emergency departments (ED) in North Carolina. We extracted data for registry patients who were treated between June 6, 2018 and August 31, 2023. In November 2023, we requested a retrospective, remote AI analysis of CTPAs with confirmed index PE from this population of registry patients. Using the COPERR database, we identified adult patients (≥18 years) presenting to a participating ED who had 1) acute symptomatic PE as the primary ED diagnosis (by positive CTPA) and 2) an intermediate- or high-risk PE classification. The PE risk was classified by emergency clinicians using European Society of Cardiology (ESC) guidelines and our PE response team’s (PERT) “Code PE” pathway, which shows the structure, function, and logistics of PERT activation, triaging, multispecialty notification, and considerations for advanced PE interventions based on PE severity and bleeding risk. For the exploratory objective, we included the above-mentioned patients with comprehensive transthoracic echocardiography (TTE) and RV-focused measurements completed within 24 hours of PE diagnosis. We included patients with intermediate- or high-risk PE at ED presentation with CT images of 1-mm slice thickness available for AI analysis for the primary objective, and with any AI analysis for the secondary objective. We excluded the following: patients with PE diagnosed only by high-probability ventilation/perfusion nuclear imaging; those whose point-of-care TTE findings were highly suspicious for PE but whose PE was not confirmed by CT; and those whose CTPA was not for the index PE. We also excluded CTPAs that could not be analyzed by the AI algorithm. Data entered in COPERR and available for analysis included demographics; clinical presentation features (including initial and worst vital signs within three hours of ED presentation); comorbidities; PE risk factors; criteria used for PE risk stratification; the radiologist report of RV:LV; TTE measurements, dates, and times; PERT notification dates and times; laboratory measurements; PE-related outcomes and interventions; and adverse events. Trained data extractors retrieved information from the electronic health record and entered data in the registry. During real-time clinical care of the index PE hospitalization, RV:LV was measured by board-certified radiologists, and TTE was performed by certified cardiac sonographers from an echocardiography laboratory accredited by the Intersocietal Commission for the Accreditation of Echocardiography Laboratories. Given that this was a retrospective study, the radiologists and sonographers were not aware of the study or its objectives. Radiologists measured RV:LV on the minor cardiac axis on CTPA. Measurements were taken at the widest points, from the inner free wall of each ventricle to the inner wall of the ventricular septum. Radiologists used an RV:LV cut-off of 1.0, with less than 1.0 considered negative for RV dilatation. Sonographers used standard or RV-focused apical views to measure the end-diastolic RV inner diameter at the base. The LV basal end-diastolic measurements were performed in the parasternal long-axis view. Images were uploaded into a secure local server and portal system, Merge Cardio (Merative LP, Ann Arbor, MI [formerly IBM Watson Health]). Board-certified cardiologists interpreted the images and measurements and were blinded to study and clinical outcomes. Only initial echocardiography measurements for the index PE hospitalization were used in this study. For each registry patient included in the study, we exported the fully anonymized Digital Imaging and Communications in Medicine (DICOM) file for each CTPA to share with the AI vendor for analysis. We transferred DICOM data from our study center to the server of an AI operating system (Aidoc, Tel Aviv, Israel) using encrypted secure file transfer protocol. Prior to transfer, all data were de-identified per the safe harbor de-identification protocol defined by the Health Insurance Portability and Accountability Act. The de-identified accession number was extracted from the DICOM header of the shared studies. The study center used the key pair of de-identified and identified accessions, computed at the data anonymization step, to re-identify data for the study.
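Schematically, this key-pair re-identification amounts to a simple join of the AI output with a locally retained lookup table; a minimal R sketch with hypothetical table and column names:

# Schematic of the key-pair re-identification step (illustrative only;
# table and column names are hypothetical).
key_pair <- data.frame(                      # retained only at the study center
  deid_accession = c("X001", "X002"),
  accession      = c("ACC-48213", "ACC-51877")
)
ai_output <- data.frame(                     # returned by the remote AI analysis
  deid_accession = c("X001", "X002"),
  rv_lv          = c(1.62, 0.94)
)
merge(ai_output, key_pair, by = "deid_accession")  # join results back to identified cases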
Study Flow
We screened 1,809 patients with CTPA-confirmed acute PE diagnosed in the ED. Of these, 1,664 (92.0%) had a CTPA associated with the index PE diagnosis and anonymized DICOM files transferred for AI analysis. Radiologists provided categorical RV:LV classification for 1,467 of 1,664 (88.2%) CTPAs. The AI vendor analyzed 1,660 of the 1,664; four cases were excluded because of inadequate CTPA slice thickness for AI analysis. The AI assessment for central clot was successful in all (100%) analyzed CTPAs, and 1,267 (76.3%) were found to have a large central PE by the algorithm. The AI-derived cardiac measurements were obtained for 1,617/1,660 (97.4%).
The AI failed to analyze 43 CTPAs because 1) they did not meet study inclusion criteria (i.e., slice thickness, kernel, contrast phase), or 2) the RV:LV algorithm was unable to detect appropriate landmarks to perform the RV:LV analysis. Of 1,664 CTPAs, 733 (44.1%) had comprehensive TTE measurements during the index PE hospitalization. The mean (SD) time interval between CTPA and TTE for the middle 95% was 13.6 (11.3) hours. We were able to determine primary outcome responses for 1,639 unique patients and the secondary outcome for 1,643 unique patients. Of the 1,639, mean age was 63.0 ± 16 years, 805 (49.1%) were male, 997 (60.8%) were White, and 190 (11.6%) had one or more components of the primary outcome. Four patients had more than one ED visit for acute PE during the 2018–2023 study period. We reported PE-related clinical deterioration (primary outcome) for the first visit only.

Patient Characteristics
There were no significant differences between those with or without clinical deterioration for age, gender, race, or ethnicity. There were significant differences in mean values of vital signs. Patients who had PE-related clinical deterioration (primary outcome) had lower systolic blood pressure and oxygen saturation readings and higher respiratory and heart rates than patients without clinical deterioration. There was significantly increased use of systemic thrombolysis, ECMO, and surgical embolectomy in the primary outcome group. However, there were no significant differences in the use of catheter-directed interventions between outcome groups. For categorical cardiac CTPA assessments, radiologists’ binary categorization of right ventricular dilatation (RVD) using the RV:LV cut-off of 1.0 did not differ significantly between primary outcome groups. In contrast, the AI-derived binary RV:LV categorization did. For mean AI-derived CTPA measurements, there were significant differences in RV:LV and in RV and LV basal diameters between those with and without clinical deterioration. For the 733 patients with TTE, the TTE measurements were smaller than the AI-derived CT cardiac measurements, and only the LV basal diameter differed significantly between the primary outcome groups. Although the mean RV basal diameter was above normal limits, the difference between the outcome-negative and outcome-positive groups was not statistically significant.

Primary Outcome
Multivariable analyses with unadjusted LASSO for PE-related clinical deterioration (primary outcome) showed that the most significant independent AI-derived predictors were RV:LV (OR 19.28 [3.0–109.4]) and central clot by AI (OR 2.4 [1.6–3.6]). Both the adjusted LASSO and RF models considered all candidate database variables. Both the RF and adjusted LASSO prognostic models had excellent discrimination and calibration metrics for prognostic accuracy: for discrimination, adjusted LASSO and RF had AUCs of 0.88 (0.85, 0.90) and 0.87 (0.84, 0.89), respectively. Both models were well calibrated, with Brier scores of 0.07; the RF model was slightly less well calibrated than the LASSO model on the other calibration metrics. Cardiac arrest at presentation was the top predictor of in-hospital clinical deterioration in both multivariable models (LASSO and RF). Admission to the intensive care unit, lowest systolic blood pressure, lowest oxygen saturation, and highest heart and respiratory rates were also top predictors in both models. The CTPA cardiac measurements were among the top 11 predictors selected by LASSO.
Abnormal troponin was one of the top predictors by LASSO but had a lower influence on RF model accuracy than the CTPA assessments. The CTPA cardiac measurements and the finding of central clot location with RV:LV ≥ 1.0 were among the top 10 independent predictors of clinical deterioration in the RF model. The optimal cut-off of the AI-derived cardiac CTPA measurements for predicting PE-related clinical deterioration was an RV:LV of 1.54 (OR 2.5 [1.85–3.45]; AUC 0.6 [0.66, 0.70]). These cut-off values had high negative predictive values (NPV) but low positive predictive values (PPV).

Secondary Outcome
Bivariable analysis of cardiac assessments stratified by use of advanced interventions (secondary outcome) showed that, regardless of how the cardiac measurements were derived, there were significant differences in cardiac measurements (whether continuous or categorical) between those with and without advanced interventions. For example, AI-derived CTPA RV:LV means with SDs were 1.62 (0.33) vs 1.35 (0.32) for those with and without advanced interventions, respectively. With TTE, RV:LV means were 1.17 (0.29) vs 1.02 (0.27) for those with and without advanced interventions, respectively.

Exploratory Outcomes
There was agreement between AI and radiologists on RV:LV ≥ 1.0 for 1,224 cases and on RV:LV < 1.0 for 67 cases (88% overall agreement [kappa 0.36, 95% CI 0.28–0.43], data not shown). The RV:LV means with SDs were 1.48 (0.31) and 0.86 (0.11), respectively. There was disagreement for 178 (12.1%) cases; RV:LV means were 1.23 (0.23) and 0.92 (0.05) when AI reported abnormal RV:LV vs RV:LV < 1.0, respectively. For the comparison of AI-derived CTPA with TTE measurements, Pearson correlation coefficients for RV, LV, and RV:LV were 0.47 (0.42, 0.52), 0.58 (0.53, 0.62), and 0.50 (0.45, 0.55), respectively. All correlation coefficients were interpreted as moderate agreement per the Landis and Koch guidelines. Bland-Altman analysis showed a strong negative bias, with lower TTE measurements than CTPA measurements at presentation.
We found that AI-derived RV:LV measurements on CTPA were significantly greater in PE patients experiencing clinical deterioration or receiving advanced interventions than in those without these outcomes. There was significantly increased use of systemic thrombolysis, ECMO, and surgical embolectomy in the primary outcome group. In our models, which had strong discrimination and calibration, AI-derived RV:LV measurements were independent predictors of clinical deterioration, along with abnormal vital signs and cardiac arrest at presentation in one or both multivariable models. The optimal RV:LV cut-off of 1.5 had an odds ratio of 2.5 and an AUC of 0.6 for PE-related clinical deterioration (primary outcome). The AI-derived RV:LV measurements performed better as predictors of the primary and secondary outcomes than radiologists’ or AI-derived categorizations using the RV:LV cut-off of 1.0. Other reports have focused on outcomes similar to ours. Beigel et al. evaluated 179 intermediate-risk PE patients for predictors of short-term death and advanced interventions. Twenty-six patients required advanced intervention, which was significantly associated with echocardiographic evidence of severe RVD (42% vs 19%, P < 0.01) or a higher RV:LV measurement on CTPA (1.9 ± 0.6 vs 1.46 ± 0.5, P < 0.001). RV dilatation on TTE was an independent predictor of advanced interventions. This further corroborates the importance of objective cardiac measurements for risk-stratifying PE patients. Unlike TTE measurements, cardiac CTPA measurements are immediately available at the time of PE diagnosis for risk stratification. Other studies that assessed how CTPA cardiac measurements are associated with clinical outcomes had mixed results. A retrospective, single-center study by Foley et al. involving 101 patients with CT-proven PEs of any severity showed strong agreement (intraclass correlation 0.83 [0.77–0.88]) between radiologists’ and AI-derived CTPA measurements for RV:LV. In that study, RV:LV ranged from 0.67–2.43, with 65% being ≥ 1.0, and the optimal RV:LV cut-off for 30-day mortality was 1.18. The use of AI analysis in that study led to a change in risk stratification in 45% of patients. However, in a large prospective study of 1,950 CT-confirmed PEs by Beenen et al., RV:LV measurements by radiologists were not significantly different between those with and without short-term mortality. Similar to the Foley et al.
study, we found that an elevated RV:LV had a strong association with in-hospital clinical deterioration in our intermediate- and high-risk PE cohort. Our optimal RV:LV cut-off of 1.5 was higher than theirs. A previous report showed fair agreement (kappa 0.4) for categorical assessments of RV dysfunction between CTPA and TTE; our study found moderate agreement of RV:LV measurements by CTPA and TTE. We believe our findings underscore the importance of using immediately available CTPA measurements of RVD for risk stratification and prognosis. However, at many institutions, RV measurements are not routinely performed or interpreted on CTPA. One study in a large regional healthcare system with 21 sites showed that only 18.3% of 1,571 positive CTPA interpretation reports included RV measurements. The use of AI to detect PE and analyze CTPA cardiac measurements at the time of PE presentation may improve risk stratification for PERTs and provide quality assurance to enhance radiologists’ workflow. The diagnostic accuracy of AI should include a low number of false positives to minimize notification fatigue and the potential for medication mismanagement. In a retrospective multicenter study, Cheik et al. evaluated the diagnostic performance of the Aidoc PE algorithm on CTPAs and compared it with that of radiologists to determine the impact of AI PE detection. Of 1,202 patients included, the AI algorithm detected 219 suspicious PEs, of which 176 were true PEs, including 19 true PEs missed by radiologists. The highest sensitivity and NPVs were obtained with AI, while the highest specificity and PPV were found with radiologists. Our retrospective study focused on less subtle PE diagnoses; the AI analysis was specifically created to focus on non-segmental PE, and the AI agreed that PE findings were present in all CTPAs. Artificial intelligence further analyzed ventricular measurements on CTPA and determined central vs non-central filling defects. Although our comparison of CTPA RV:LV categorization by AI vs radiologists had 88% agreement, the kappa of 0.34 is interpreted as fair agreement. Agreement was more likely when RV:LV was well above or well below the 1.0 cut-off; the two sources were more likely to disagree when RV:LV was closer to 1.0. It is unknown whether AI-derived CTPA measurements might “correct” radiologist assessments in real time for those close to the 1.0 cut-off, or whether such a “correction” would have clinical significance for patient care and outcomes. Even with an optimal RV:LV cut-off of 1.5, we note the low PPV for PE-related clinical deterioration. Thus, an RV:LV cut-off of 1.5 is not sufficient to be the sole determinant of decision-making about disposition or advanced interventions. Similar to another report, our study showed that a combination of CTPA parameters (central clot location and RV:LV) had stronger associations with clinical deterioration than RV:LV alone (categorical or continuous). Incorporation of CTPA cardiac measurements in PE risk stratification may impact local and regional clinical practice or guidelines. Next steps may include prospective studies that include CTPA measurements as predictors of clinical outcomes and PERT risk stratification, as well as pragmatic comparisons of AI-assisted vs traditional workflow in which CTPA cardiac measurements, clinical management metrics, and patient-centered outcomes are assessed. Our study had several limitations. First, we conducted a retrospective, remote AI analysis of CTPAs with confirmed intermediate- and high-risk PE.
We did not study real-time AI analyses on recently completed CTPAs. Our study design and inclusion criteria, therefore, do not lend themselves to any interpretation about the diagnostic accuracy of the AI platform on CTs of patients with lower acuity PE or without PE. We cannot report on false positive or false negative interpretations or the potential impact on PERT notifications or clinical management, nor can we compare our results to previous reports of AI’s diagnostic accuracy for PE. Theoretically, we have shown that AI-derived measurements were better predictors of acute clinical deterioration than categorical radiologist assessment at the RV:LV cut-off of 1.0. However, to show the impact of AI on patient care by clinicians, there would need to be pragmatic, randomized controlled trials comparing usual care vs AI-assisted clinical care. Prospective studies would enable reporting the timeliness of AI analysis of CT and its effect on radiologist workload, physician notification of positive and significant findings, and the impact of measurements on risk assignment, resource utilization, advanced interventions, and clinical deterioration. Other limitations are specific to the exploratory objectives. Our study did not verify whether agreements between radiologist and AI for RV:LV ≥ 1.0 were correct; both interpretations could be incorrect. The study design could be improved by including a comparator, such as a reference standard (e.g., cardiac magnetic resonance imaging), use of an independent, blinded radiologist for separate measurements or to serve as an adjudicator, or earlier contemporaneous TTE measurements. For the second exploratory objective, we did not determine the presence or absence of interventions in the interval between CT and TTE. The TTE and CTPA were performed at different times, often more than 12 hours apart; therefore, the differences between these measured variables may be due to worsening or improving cardiac burden during the intervening time. Not all patients in the cohort had TTE, and the high missingness of TTE measurements limited their comparison with the AI-derived CTPA measurements. The differences observed in these mean measurements may be due to differences in imaging modality or the time interval between studies. The subgroup that had TTE likely represented those with higher acuity at presentation.

Right ventricle to left ventricle (RV:LV) measurements of 1.5 or more on the initial CT pulmonary angiogram had strong associations with in-hospital clinical deterioration and advanced interventions in a large database of intermediate- and high-risk patients with pulmonary embolism. This study points to the potential of capitalizing on immediately available CTPA RV:LV measurements for gauging PE severity and for risk stratification.
Early arthroscopic debridement of posterior cruciate ligament calcification after symptom presentation led to immediate recovery: a case report
cb26132d-45a1-4b4d-a280-b8da64ba3b1d
11365158
Debridement[mh]
Knee ligament calcification is rare, although calcification of vessel walls and the rotator cuff is commonly observed. Patients with calcification in a joint experience severe pain even at rest and often cannot sleep because of the pain. The literature in PubMed includes several reports of medial collateral ligament (MCL) calcification among the knee ligaments and a few reports of anterior cruciate ligament (ACL) calcification. However, there is only one report of posterior cruciate ligament (PCL) calcification and one report of PCL ossification. In these two cases, arthroscopic debridement was performed more than a year after symptoms had appeared. Herein, we report a patient in whom arthroscopic debridement of calcium deposits was performed two days after symptoms had appeared. A 71-year-old man presented to our outpatient clinic complaining of left popliteal pain that had begun the day before. His symptoms had appeared in the morning and worsened acutely that night, without any history of trauma. The pain was severe even during rest, and the patient was not able to sleep. On the following day, he presented to our department. His past medical history included diagnoses of diabetes and hyperlipidemia. He had not experienced any fever since the symptoms first appeared, but the popliteal fossa was tender. The patient’s knee was swollen and had a positive ballottement test. The skin of the knee had a normal temperature and did not exhibit any redness. However, the patient could not flex his knee at all (0 degrees of flexion), and he could not walk because of the pain. Blood tests at first admission showed a white blood cell count of 9,700/µL, a CRP level of 1.45 mg/dL, a glucose level of 202 mg/dL, a hemoglobin A1c of 7.4%, and a uric acid level of 2.4 mg/dL. X-rays revealed a high-density mass within the intercondylar notch (Fig. ). Multi-planar computed tomography (CT) showed a mass with heterogeneous density behind the PCL (Fig. A). Magnetic resonance imaging (MRI) showed the mass behind the PCL with mild osteoarthritic changes, accumulation of synovial fluid in the articulation, and inflammation of the synovial membrane in the popliteal fossa (Fig. B). Synovial fluid was collected, and its analysis did not reveal any crystals. Although the patient’s knee joint was injected with steroids, he was still in severe pain the next day. We performed arthroscopic surgery two days after symptoms had first appeared in order to conduct further examinations and initiate treatment. Since the pain was preventing the patient from sleeping at night, he wanted to be diagnosed and treated as soon as possible. Intraoperatively, a partial medial meniscus posterior root tear (MMPRT) that appeared to be old and degraded was found (Fig. A), the lateral tibial cartilage was fibrillated (Fig. B), and a white, soft, toothpaste-like tissue was noted in the synovial membrane behind the PCL. A minimal portion of the synovial membrane of the PCL was removed, and the calcification was pushed out to retain as much of the PCL as possible. The MMPRT was not treated, since the medial meniscus posterior root did not show any instability. The majority of the unknown tissue behind the PCL broke apart in the synovial fluid, but a part of the tissue was collected for further histological analysis (Fig. A, B). H&E staining revealed sparse fibers and multi-nucleated giant cells within the tissue (Fig. A).
Von Kossa staining showed calcium deposits in most of the fibrous tissue (Fig. B). The patient’s symptoms resolved completely after surgery. He was allowed full range of motion and could walk without pain. A small amount of calcification was observed on postoperative CT scans (Fig. ). One month later, X-rays with a posterior gravity sagging view showed no posterior sagging of the proximal tibia, and the patient remains asymptomatic one month after surgery. This is the first reported case of early debridement of PCL calcification and ossification performed soon after symptoms appeared; in addition, this procedure led to complete recovery. There is only one report of PCL calcification and one report of PCL ossification in the literature, and in both cases arthroscopic debridement was performed more than a year after symptoms had appeared. The diagnosis of ligament calcification was based on the MRI and arthroscopic findings of this case and on previous reports. Arthroscopic images showed that the calcification was in the synovial membrane of the PCL and were similar to findings observed in a previous report of ACL calcification. Synovial mesenchymal stem cells can also differentiate into chondrocytes, which may have given rise to the calcification in the synovial membrane. However, there have been no reports of intrasynovial calcification other than in solid tumor tissue. In our case, the calcification in the synovial cavity developed among sparse fibers in the absence of tumor tissue. One possible scenario is that the lining integrity of the synovial barrier was disrupted during joint inflammation, and monocytes, which can undergo autophagy, migrated to the synovial cavity. The differential diagnosis in this case included MMPRT and crystal-induced arthritis. It was difficult to diagnose the MMPRT on MRI because it was an incomplete tear (type 1) and had no extrusion (stage 0). MMPRTs can sometimes lead to severe pain. However, the pain experienced by our patient could not have been due to osteoarthritis of the knee or the MMPRT, because he had severe pain even at rest, the MMPRT was not fresh, and only the PCL calcification was treated through debridement. Additionally, the patient did not have crystal-induced arthritis, because synovial fluid analysis did not reveal any crystals. Thus, we concluded that the pain had been caused by the PCL calcification. Arthroscopic surgery was selected for treatment and also for conducting further examinations, since the patient could not sleep at night due to the pain and wanted to be diagnosed and treated as soon as possible. During the operation, a minimal amount of the synovial membrane of the PCL was removed, and the calcification was pushed out to retain as much of the PCL as possible. This led to some of the calcification being retained behind the PCL, as observed on postoperative CT scans. However, the patient was able to recover completely, which suggests that the calcification does not have to be removed completely for an optimal outcome. A previous study also demonstrated that ultrasound-guided debridement of MCL calcification led to early, complete recovery with a small calcification left behind. In our case, arthroscopic surgery was performed to determine whether there was another cause for the pain, but ultrasound-guided debridement might have had a similar outcome. The histological analysis led us to believe that calcification was present. Calcification is defined as the deposition of calcium salts in tissue.
In contrast, ossification is defined as the formation of bone (calcification within a collagen matrix), whether or not bone marrow is present. In our case, calcium deposits were found in the fibrous tissue, but the fibers were sparse and there was no bone-like tissue. Therefore, we speculated that the calcium deposits were not indicative of bone formation but of calcification. Calcification can occur in vessels, muscles, tendons, and ligaments. Trauma, overuse, and metabolic disorders such as diabetes can cause calcification in the articulation. However, the pathogenesis of that process remains unclear. Our patient had a history of diabetes. The reference range for serum uric acid in humans is 1.5–6.0 mg/dL for women and 2.5–7.0 mg/dL for men, while hypouricemia is commonly diagnosed when levels drop to 2.0 mg/dL or less. There is currently no report regarding any correlation between calcification in the joint and uric acid. However, it has been reported that hypouricemia can increase oxidant stress, which in turn can lead to vessel calcification. In our case, low blood uric acid levels could also have led to low uric acid levels in the joint, resulting in oxidant stress and calcification. Our patient had diabetes as well as relatively low blood uric acid, either of which could have induced calcification in the body. However, it remains unclear why calcification occurred behind the PCL in this case, and the underlying mechanism for calcification within the knee joint remains unclear. Calcification of the rotator cuff is induced by chondrocyte-like cells, which may be involved in endochondral ossification. In addition, calcific tendonitis of the rotator cuff can lead to severe pain during the resorption phase because of inflammation around the calcification. In this phase, the deposit has a creamy or toothpaste-like consistency, whereas in the calcific phase it forms a stiff, chalk-like mass. In our case, the calcium deposit may have been in the resorption phase, because the tissue contained multi-nucleated giant cells and was soft enough to break apart in the synovial fluid. This could have led to inflammation of the synovial membrane and severe pain. In conclusion, PCL calcification is rare. In our case, a metabolic disorder may have been the cause of the calcification. This is the first report in which debridement of the calcification was performed soon after symptoms first appeared, leading to complete recovery.
How far can we go? A 20-year meta-analysis of dental implant survival rates
11fc7698-b1d0-484d-922e-c2a85a73edb5
11416373
Dentistry[mh]
Dental implantology has emerged as a cornerstone of modern dentistry and oral surgery. Projections suggest that the prevalence of dental implants in the United States will soar to 23% by 2026. Considering this growth, questions regarding long-term durability become increasingly pertinent. Extensive research provides compelling evidence of survival rates exceeding 90% even after ten years. These findings are substantiated not only by many individual studies but also by comprehensive systematic reviews and high-quality meta-analyses. It is worth noting that the difference between a 10-year and a 20-year lifespan has substantial implications for treatment planning. If dental implants continue to exhibit such outstanding results over 20 years, it would necessitate a reevaluation of the decision-making process between preserving natural teeth (endodontics, periodontal therapy) and choosing implant insertion. Attempting to delay implantation may carry the risk of additional infections and may significantly complicate future implant-prosthetic treatments due to potential bone deficits. In some cases, this could lead to a considerably costly reconstruction of the prosthetic work for both the patient and, if applicable, the clinician. On the other hand, a systematic review concluded that prosthetic treatments on periodontally compromised teeth resulted in fewer complications than implant treatments. This raises questions about how effective we truly are and how far we can go. Do dental implants genuinely offer a lifelong solution? Furthermore, it should be noted that, due to demographic changes, higher survival rates are not only being achieved but also increasingly demanded. When assessing treatment alternatives, questions may arise about the feasibility of repeat operations in subsequent years, considering the patient's health. However, if predictably high survival rates are consistently achieved over a 20-year period, this would significantly impact treatment approaches, offering many patients, even in advanced age, an improved quality of life through fixed prosthodontic care. Previous meta-analyses were limited to a 10-year follow-up. Additionally, advances in technology have transformed the characteristics of commercially available implants. The traditional implant with a machined surface is now rarely found and can no longer be described as state-of-the-art. Moreover, cylindrical and hollow-cylinder implants, although frequently included in earlier studies, have largely vanished from clinical practice. For these reasons, the aim of this systematic review with meta-analysis was to assess the survival rate of screw-shaped dental implants with a rough surface after 20 years. This study seeks to provide a practical and realistic guide for clinicians while also identifying potential areas for future research and shedding light on any existing deficiencies. Considering the information mentioned in the Introduction, the following PICO criteria were defined:
P: Patients over 18 years ("adults")
I: Insertion of a screw-shaped dental implant with a rough surface
C: No control intervention was recorded; the goal was to determine implant survival
O: 20-year survival rate of dental implants
While conducting this systematic review, we adhered to the PRISMA guidelines and followed the corresponding checklist. The protocol was registered on PROSPERO (CRD42023402989).
Inclusion and exclusion criteria

Study designs
In a first exploratory search, the number of prospective studies was considered too small to focus on them alone. Also, the reported data differed significantly in presentation and quality. For these reasons, it was decided to include both prospective and retrospective studies; observational as well as interventional studies were considered. Specifically, the following study types were included: observational studies (prospective or retrospective cohort, case–control, cross-sectional, and longitudinal studies) and interventional studies (randomised and non-randomised controlled trials, controlled and uncontrolled trials). Publications with fewer than 10 implants inserted were excluded. There were no restrictions regarding the publication date. The last search was conducted in February 2024, serving as the upper time limit. Only English-language publications were included.

Intervention
To increase relevance and realism, strict rules were established for the type of implant. They had to be screw-shaped implants made of titanium or a titanium alloy, and the surface had to be rough (e.g., acid-etched, sandblasted). Obsolete or rarely used implant systems, such as implants with a turned surface (e.g., Branemark), hollow-screw, or hollow-cylinder implants, were excluded. Likewise, ceramic implants were not included. In many studies, the superstructure was categorized as single crowns, fixed partial prostheses, fixed full-arch prostheses, or overdentures. The focus of this review is solely on the implant itself; hence, the types of restorations were recorded but were not a basis for inclusion or exclusion. We excluded populations consisting solely of patients with severe conditions directly affecting bone regeneration, such as those on antiresorptive therapy or with osteoporosis. However, diabetes, for example, was not an exclusion criterion.

Setting
The study setting was not limited, allowing for a diverse range of environments such as university teaching hospitals, specialist dental practices, and general dental practices. This inclusive approach ensures that the results obtained reflect real-world scenarios and contribute to a more comprehensive understanding of the topic.

Search strategy
A systematic electronic literature search was conducted in the following databases: MEDLINE (PubMed), Cochrane, and Web of Science. The reference lists and citations were also searched for relevant studies. There were no restrictions regarding the publication date, to avoid missing any results. There were no restrictions regarding language during the search process, but only English-language literature was included. Under these conditions, all subheadings, MeSH terms, as well as the title and abstract, were reviewed extensively following the strategy mentioned below. In addition, PROSPERO was thoroughly searched to identify any ongoing or recently completed systematic reviews. The following terms were used for all databases, with adapted subheadings and syntax. In the final step, the three complexes were connected with "AND".

Complex 1: dental implants
Dental implant*[MeSH Terms] OR tooth[Title/Abstract] OR teeth[Title/Abstract] OR dental[Title/Abstract] OR oral[Title/Abstract] OR implant*[Title/Abstract] OR osseointegrat*[Title/Abstract]

Complex 2: exclusion of animal studies (inclusion of studies with animals AND humans)
NOT (Animal*[MeSH Terms]) NOT (human*[MeSH Terms] AND Animal*[MeSH Terms])

Complex 3: twenty years of follow-up
20 NEAR year*[Title/Abstract] OR Twenty NEAR year*[Title/Abstract]
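For illustration, the combined query can be assembled programmatically in R; the snippet below simply mirrors the three complexes as strings (the syntax shown would still need the per-database adaptation described above, and the NEAR operator is database-specific):

# Illustrative assembly of the combined search query from the three complexes.
complex1 <- "(Dental implant*[MeSH Terms] OR tooth[Title/Abstract] OR teeth[Title/Abstract] OR dental[Title/Abstract] OR oral[Title/Abstract] OR implant*[Title/Abstract] OR osseointegrat*[Title/Abstract])"
complex2 <- "(NOT (Animal*[MeSH Terms]) NOT (human*[MeSH Terms] AND Animal*[MeSH Terms]))"
complex3 <- "(20 NEAR year*[Title/Abstract] OR Twenty NEAR year*[Title/Abstract])"

query <- paste(complex1, complex2, complex3, sep = " AND ")  # final connection step
cat(query)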
The search was documented using commercially available spreadsheet software (Microsoft Excel). Using the citation software EndNote 20, the results were collected, and duplicates and triplicates were excluded. Two of the authors (J.R.K. and E.S.) independently reviewed the results and selected suitable studies based on titles and abstracts. In case of discrepancies, a joint discussion was held to decide whether the study met the inclusion criteria. All authors re-examined the full texts for suitability, and study authors were contacted in case of missing or incomplete data.

Risk of Bias
J.R.K. assessed the risk of bias for all studies using a tool by Hoy et al., specifically tailored for prevalence studies, which was considered the most suitable in this case. The results were reviewed and confirmed by all authors.

Data
J.R.K. extracted the relevant data from the studies, and the other authors verified the results for accuracy. Any disagreements were resolved through joint discussions. Apart from outcome data, the names of authors, publication dates, and other study identification details were recorded, as well as the study type. During data collection, it became apparent that certain assumptions had to be made for the studies to obtain data on implant survival: for controlled studies or those with multiple treatment groups, the groups were summed and the resulting overall survival rate for the study group was calculated. Explanations are given in the Results section for every study where necessary. Conversely, for studies where only one group was relevant for this review, only that group was included.

Imputation method
It is well known that long-term studies in particular have a high rate of patients or implants lost to follow-up. Therefore, an appropriate imputation method was chosen to obtain more realistic data for prospective studies. For this purpose, we relied on a publication by Akl et al., which recommends estimating the proportion of failed implants as five times higher in the lost-to-follow-up (LTFU) group than in the group that could be tracked. This appears reasonable, especially considering that targeted follow-up increases implant survival and reduces the incidence of peri-implantitis. Nevertheless, this represents a conservative estimation that likely underestimates the actual survival rate. It should also be noted that this approach is supported by limited evidence and refers to a general procedure for follow-up data in medicine, not specifically for dental implantology. However, it is worth mentioning that Howe et al., in their meta-analysis of implant survival over 10 years, followed a similar approach.
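The adjustment can be written as a small helper function; a minimal R sketch under the stated 5x assumption (the example numbers are invented):

# Minimal sketch of the LTFU adjustment described above: implants lost to
# follow-up are assumed to fail at five times the rate observed among
# implants with complete follow-up (capped at 100%).
impute_ltfu <- function(failures_followed, n_followed, n_ltfu, factor = 5) {
  rate_followed <- failures_followed / n_followed
  rate_ltfu     <- min(factor * rate_followed, 1)   # failure rate assumed for LTFU
  total_failed  <- failures_followed + rate_ltfu * n_ltfu
  c(survival_complete_case = 1 - rate_followed,
    survival_imputed       = 1 - total_failed / (n_followed + n_ltfu))
}

# Example: 6 of 120 followed implants failed; 40 implants were lost to follow-up
impute_ltfu(failures_followed = 6, n_followed = 120, n_ltfu = 40)
# survival_complete_case = 0.95, survival_imputed = 0.90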
Outcome measures
A complete case analysis was conducted for the primary evaluation of prospective studies. The analysis focused on implant survival rates rather than patients. For each study, the 95% confidence interval was individually calculated using the confint.binom function in R, from which the standard error (SE) was derived.

Secondary outcomes
The evaluation of retrospective studies did not require imputation, since all studies used Kaplan–Meier curves, a more realistic method for handling implants without complete follow-up. Consequently, the meta-analysis was conducted by gathering data on the survival rate, and if a confidence interval was not provided, the 95% confidence interval was calculated as previously described.

Data synthesis
We utilized the statistical software environment R in conjunction with RStudio. The metaprop routine, a function in the R package meta, specializes in meta-analyses of binary data, particularly proportions. It employs a random-effects model with the DerSimonian-Laird estimator to account for heterogeneity among studies. This is necessary due to the inherently high heterogeneity expected: the study objectives vary significantly, as do the applied methods, implant systems, patient characteristics, prosthetic restorations used, and other factors. The number of implant losses was calculated from the extracted proportions of losses and the total number of implants. No adjustments were made for censoring or for the fact that multiple implants per patient had been placed, because the papers did not provide sufficient information; hence, the precision is overestimated (confidence intervals are too narrow). The results were graphically presented as forest plots, and a funnel plot was also generated to visualize publication bias. As a result, separate graphics were produced: first for the complete case analysis of prospective studies, second for the dataset after imputation, and third for the retrospective studies. The forest plots provide a visual representation of the combined effect sizes and their corresponding confidence intervals, allowing for an assessment of the overall impact of the interventions. The funnel plot aids in detecting potential publication bias, which can arise if studies with significant results are more likely to be published.
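A minimal sketch of this workflow on invented study data (metaprop is called here as a function from the meta package; the three studies are fabricated for demonstration only):

# Illustrative random-effects proportion meta-analysis with meta::metaprop
# and the DerSimonian-Laird estimator, on invented data (not review data).
library(meta)

events <- c(4, 11, 7)       # implant losses per study
n      <- c(102, 256, 140)  # implants inserted per study

m <- metaprop(event = events, n = n,
              studlab = c("Study A", "Study B", "Study C"),
              sm = "PLOGIT",        # logit-transformed proportions
              method.tau = "DL")    # DerSimonian-Laird between-study variance

summary(m)
forest(m)   # forest plot of per-study and pooled proportions
funnel(m)   # funnel plot to visualize potential publication bias

# Exact per-study 95% CI in base R (Clopper-Pearson), e.g. for the first study:
binom.test(events[1], n[1])$conf.int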
Setting The study setting was not limited, allowing for a diverse range of environments such as university teaching hospitals, specialist dental practices, and general dental practices. This inclusive approach ensures that the results obtained reflect real-world scenarios and contribute to a more comprehensive understanding of the topic. In a first exploratory search, the number of prospective studies was considered too small to focus solely on them. Also, the reported data significantly differed in presentation and quality. For this reason, it was decided to include both prospective and retrospective studies. Observational as well as interventional studies were considered. Specifically, the following study types were included: Observational studies (prospective or retrospective cohort, case–control, cross-sectional and longitudinal studies), interventional studies (randomised and non-randomised controlled trials, controlled and uncontrolled trials). Publications with less than 10 implants inserted were excluded. There were no restrictions regarding the publication date. The last search was conducted in February 2024, serving as the upper time limit. Only English-language publications were included. To increase relevance and realism, strict rules were established for the type of implant. They had to be screw-shaped implants made of titanium or a titanium alloy. The surface had to be rough (e.g., acid-etched, sandblasted, etc.). Obsolete or rarely used implant systems such as implants with a turned surface (e.g., Branemark), hollow screw, or hollow cylinder implants were excluded. Likewise, ceramic implants were not included. The superstructure was divided in many studies into single crowns, fixed partial prostheses, fixed full-arch prostheses, and overdentures. The focus of this review is solely on the implant itself, and hence the types of restorations were recorded but not a basis for exclusion or inclusion. We excluded populations consisting solely of patients with severe conditions directly affecting bone regeneration, such as those on antiresorptive therapy or with osteoporosis. However, diabetes, for example, was not an exclusion criterion. The study setting was not limited, allowing for a diverse range of environments such as university teaching hospitals, specialist dental practices, and general dental practices. This inclusive approach ensures that the results obtained reflect real-world scenarios and contribute to a more comprehensive understanding of the topic. A systematic electronic literature search was conducted in the databases: MEDLINE (PubMed), Cochrane, and Web of Science. The reference list and citations were also searched for relevant studies. There were no restrictions regarding the publication date to avoid missing any results. There were no restrictions regarding language during the search process, but only English language literature was included. Under these conditions, all subheadings, MeSH terms, as well as the title and abstract, were reviewed extensively following the strategy mentioned below. In addition, PROSPERO was thoroughly searched to identify any ongoing or recently completed systematic reviews. The following terms were used for all databases with adapted subheadings and syntax. In the final step, the three issues were connected with “AND”. 
Complex 1: dental implants
Dental implant*[MeSH Terms] OR tooth[Title/Abstract] OR teeth[Title/Abstract] OR dental[Title/Abstract] OR oral[Title/Abstract] OR implant*[Title/Abstract] OR osseointegrat*[Title/Abstract]

Complex 2: exclusion of animal studies (inclusion of studies with animals AND humans)
NOT (Animal*[MeSH Terms]) NOT (human*[MeSH Terms] AND Animal*[MeSH Terms])

Complex 3: twenty years of follow-up
20 NEAR year*[Title/Abstract] OR Twenty NEAR year*[Title/Abstract]

The search was documented using commercially available spreadsheet software (Microsoft Excel). The results were collected in the citation software EndNote 20, and duplicates and triplicates were removed. Two of the authors (J.R.K. and E.S.) independently reviewed the results and selected suitable studies based on titles and abstracts. In case of discrepancies, a joint discussion was held to decide whether the study met the inclusion criteria. All authors re-examined the full texts for suitability, and study authors were contacted in case of missing or incomplete data. J.R.K. assessed the risk of bias for all studies using the tool by Hoy et al., which is specifically tailored to prevalence studies and was considered the most suitable in this case. The results were reviewed and confirmed by all authors. J.R.K. extracted the relevant data from the studies, and the other authors verified the results for accuracy; any disagreements were resolved through joint discussion. Apart from outcome data, author names, publication dates, other study identification details, and the study type were recorded. During data collection, it became apparent that certain assumptions had to be made to obtain implant survival data: for controlled studies or those with multiple treatment groups, the groups were pooled and an overall survival rate for the study population was calculated; explanations are given in the results section for every study where necessary. Conversely, for studies in which only one group was relevant to this review, only that group was included.

Imputation method
It is well known that long-term studies in particular have a high rate of patients or implants lost to follow-up. An appropriate imputation method was therefore chosen to obtain more realistic data for the prospective studies. For this purpose, we relied on a publication by Akl et al., which recommends estimating the proportion of failed implants in the lost-to-follow-up (LTFU) group as five times that in the group that could be tracked. This appears reasonable, especially considering that targeted follow-up increases implant survival and reduces the incidence of peri-implantitis. Nevertheless, this is a conservative estimate that likely understates the actual survival rate. It should also be noted that this approach is supported by limited evidence and refers to a general procedure for follow-up data in medicine, not specifically to dental implantology. It is worth mentioning, however, that Howe et al. followed a similar approach in their meta-analysis of implant survival over 10 years.

Outcome measures
A complete case analysis was conducted for the primary evaluation of the prospective studies. The analysis focused on implant survival rates rather than patients. For each study, the 95% confidence interval was calculated individually using the binom.confint function in R (binom package), from which the standard error (SE) was derived.
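To illustrate these two steps, the following minimal R sketch applies the Akl et al. adjustment and then derives an exact binomial confidence interval; all counts are invented for illustration, and binom.confint from the binom package is assumed to be the interval function used.

  library(binom)

  n_followed <- 150  # implants with complete follow-up (hypothetical)
  f_observed <- 9    # failures observed among them (hypothetical)
  n_ltfu     <- 60   # implants lost to follow-up (hypothetical)

  # Akl et al.: assume the failure proportion among LTFU implants is
  # five times the proportion observed in the tracked group
  p_obs   <- f_observed / n_followed
  f_ltfu  <- round(5 * p_obs * n_ltfu)
  f_total <- f_observed + f_ltfu
  n_total <- n_followed + n_ltfu

  # exact (Clopper-Pearson) 95% CI for the survival proportion
  ci <- binom.confint(x = n_total - f_total, n = n_total, methods = "exact")
  se <- (ci$upper - ci$lower) / (2 * qnorm(0.975))  # SE derived from CI width

With these invented numbers, imputation raises the failure count from 9 of 150 to 27 of 210 implants, which illustrates why the imputed survival rates reported below are markedly lower than the complete-case estimates.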
Secondary outcomes
The evaluation of the retrospective studies did not require imputation, since all of them used Kaplan–Meier curves, a more realistic method of handling implants without complete follow-up. Consequently, the meta-analysis was conducted by gathering data on the survival rate; where the 95% confidence interval was not provided, it was calculated as described above.

Data synthesis
We used the statistical software environment R in conjunction with RStudio. The metaprop function of the R package "meta" specializes in meta-analysis of binary data, particularly proportions. It was run with a random-effects model using the DerSimonian-Laird estimator to account for heterogeneity among studies. This is necessary given the high heterogeneity to be expected: the study objectives vary significantly, as do the applied methods, implant systems, patient characteristics, prosthetic restorations, and other factors. The number of implant losses was calculated from the extracted proportions of losses and the total number of implants. No adjustments were made for censoring or for the fact that multiple implants per patient had been placed, because the papers did not provide sufficient information; the precision is therefore overestimated (the confidence intervals are too narrow). The results were presented graphically as forest plots, and a funnel plot was also generated to visualize publication bias. This yielded graphics for the complete case analysis of the prospective studies, for the dataset after imputation, and for the retrospective studies. The forest plots provide a visual representation of the combined effect sizes and their corresponding confidence intervals, allowing an assessment of the overall impact of the interventions. The funnel plot aids in detecting potential publication bias, which can arise if studies with significant results are more likely to be published.
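In outline, the pooling step can be reproduced as in the following minimal sketch; the study labels, event counts, and implant totals are invented placeholders, not the reviewed data.

  library(meta)

  events <- c(7, 3, 12)    # implant losses per study (hypothetical)
  totals <- c(98, 50, 89)  # implants per study (hypothetical)
  labels <- c("Study A", "Study B", "Study C")

  # random-effects pooling of proportions; method.tau = "DL" selects the
  # DerSimonian-Laird estimator of the between-study variance
  m <- metaprop(event = events, n = totals, studlab = labels,
                sm = "PLOGIT", method.tau = "DL")

  forest(m)  # forest plot of per-study and pooled proportions
  funnel(m)  # funnel plot to inspect potential publication bias

The logit transformation (sm = "PLOGIT") is the package default for proportions; pooling untransformed proportions instead would change the weighting of studies with rates near 0% or 100%.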
Study selection
The initial database search across PubMed, Web of Science, and the Cochrane Library yielded a total of 805 results. After eliminating duplicates and triplicates, 621 unique records remained and were screened by title and abstract. Subsequently, 572 articles were excluded, and full texts were retrieved for the remaining 49 articles. Following this stage, 8 articles (3 retrospective and 5 prospective studies) were deemed eligible for both qualitative and quantitative analysis. The primary reasons for exclusion after full-text review were a follow-up period of less than 20 years (n = 11) and the use of excluded implant systems (e.g., machined-surface or outdated systems) or data in which turned and rough surfaces could not be distinguished (n = 18). Additionally, articles were excluded as duplicates published under different titles (n = 3) and as case reports or studies with small or highly specific study populations (e.g., Papillon-Lefèvre syndrome) (n = 5). Further articles were excluded because they were unavailable (n = 2), contained implausible data (n = 1), or did not report survival as an outcome parameter (n = 1). A detailed PRISMA flowchart is provided in Fig. .

Characteristics of the studies
The general characteristics of the studies are summarized in Table . Straumann and Astra Tech were the most commonly used implant systems. Most studies were conducted in Europe, with one each from Germany and Sweden and two each from Belgium and Italy; in addition, there was one study from Asia (Japan) and one from America (USA). Half of the studies were conducted in specialized practices, the others in university centers.
The retrospective studies were exclusively cohort studies, while the prospective studies had various designs, including a randomized controlled trial, a prospective cohort study, and a study with a split-mouth design. The earliest patient data date back to 1984, and the most recent publications appeared in 2022. The study populations in the publications by Roccuzzo, Donati, Jacobs, and Mangano exclusively comprised patients with fixed prostheses; Horikawa, Becker, Vrielinck, and Cheng also included patients with removable prostheses. Overall, the sample size ranged from 18 to 371 patients, and the number of implants varied from 50 to 415. The rate of implants lost to follow-up in the prospective investigations ranged from just under 44% in Roccuzzo's study to 48% in Donati's and Jacobs' studies. The absolute numbers of implants and patients lost to follow-up (LTFU) were: Donati et al., 36 implants, 26 patients; Jacobs et al., 24 implants, 7 patients; Roccuzzo et al., 125 implants, 65 patients. The retrospective studies did not disclose the number of patients lost to follow-up but compensated for this factor through Kaplan–Meier estimation.

Risk of bias
Since all studies exhibited a similar risk of bias, no weighting was applied in this regard (Table ). Only one study received industry funding; the other authors either declared no conflicts of interest or did not receive external funding.

Summary of evidence quality across studies
The overall GRADE assessment indicated that the quality of evidence across studies is very low. Both prospective and retrospective studies were included, and high selection bias is to be expected, primarily due to the high rate of loss to follow-up. Detailed comments can be found in Table and are not repeated here for clarity.

Data synthesis
Primary outcome
For the prospective studies, a complete-case analysis was conducted, resulting in a mean survival rate of 92% (95% CI: 82%-97%). A total of 237 implants were included. There was a moderate level of heterogeneity at 54%, which was not statistically significant (p = 0.11) (Fig. ). A best-case analysis is available in the supplementary documents. After imputation, the number of included implants increased to 422, and the survival rate was markedly reduced to 78% (95% CI: 74%-82%). Heterogeneity was negligible at 0% (p = 0.39) (Fig. ). The retrospective studies included a total of 1440 implants and showed a survival rate of 88% (95% CI: 78%-94%) based on Kaplan–Meier analysis. Heterogeneity was very high at 95% (p < 0.01) (Fig. ). Funnel plots of all analyses can be found in the supplementary document, as their interpretive power is limited by the small number of studies.

Descriptive results
The studies displayed heterogeneous approaches to data analysis and to the variables considered. To provide insight into potential risk factors, the parameters with a significant impact on implant survival are summarized descriptively below. Whether failure was attributed to biological complications or to fractures varied by study: Donati recorded six fractures and one disintegration, while Roccuzzo recorded eleven losses due to peri-implantitis and one due to a fracture in a prosthetic restoration with a cantilever. The retrospective studies either did not differentiate between these two causes or did not specify the reason for the lost implants.
In the study by Horikawa et al., implant type, keratinized mucosa width (over 2 mm), and gender were the only factors found to affect the prevalence of peri-implant infections. They reported hazard ratios of 40.09 (p = 0.0012) for maxillary versus anterior mandibular implants and 18.69 (p = 0.0013) for maxillary versus posterior mandibular implants. Cheng also reported higher survival rates for implants in the posterior mandible. Regarding implant position, Donati, Mangano, Becker, and Jacobs had a nearly equal distribution between mandible and maxilla but did not analyze differences in prognosis; Vrielinck placed implants only in the anterior maxilla. Single crowns were associated with a better prognosis, and diabetes worsened the outcome. Cheng also analyzed differences between groups with osteoporosis; these groups were not included in our statistical analysis, only the healthy control group. It should be noted, however, that adequate antiresorptive therapy (survival rate: oral, 94% (CI: 90-96%); injectable, 90% (CI: 78-97%)) can mitigate the increased risk in implant patients with osteoporosis/osteopenia (84% (CI: 79-88%)). Vrielinck et al. also reported a higher implant survival rate for fixed prostheses compared with removable restorations. Short implants and patients with bruxism had a higher likelihood of failure, with all such losses occurring within the first month after implant placement. Becker identified smoking and implantation type (according to the ITI Consensus Conference) as significant factors. Bone grafting was likewise not an exclusion criterion in our analysis. Among the included studies, only Becker et al. and Cheng et al. analyzed the difference between augmentation and no augmentation, and neither found significant differences; conversely, Donati et al. and Roccuzzo et al. excluded cases with bone augmentation. Notably, in Cheng et al.'s study, bone augmentation in the osteoporotic group led to reduced survival rates. Since many studies included periodontally compromised patients, radiological bone loss was occasionally reported. Donati observed an average bone loss of -0.83 mm (95% CI: -1.38 to -0.28), with 34% of implants showing no bone loss at all. Jacobs noted a loss of only 0.13 ± 0.34 mm (SD), ranging from -0.44 to 0.92 mm. Becker reported bone loss exceeding 2.5 mm in 18.5% of implants, and nearly 10% of implants showed signs of peri-implantitis. No study explicitly excluded patients with periodontal disease per se. In the case of Donati, patients had moderate or advanced periodontitis, yet only 5 implants exhibited signs of peri-implantitis (bleeding on probing/suppuration and bone loss exceeding 1 mm). Roccuzzo compared groups with different levels of periodontal disease but found no differences in survival rates. In contrast, Becker observed implant loss in 17 patients with periodontitis and in only 7 without. Horikawa detected peri-implant infections in 48 implants, with an incidence of 21% after 15 years and 27.9% after 25 years.
Strengths and relevance
For the very first time, this meta-analysis compiles data on the survival of dental implants after 20 years. The data are derived exclusively from more recent publications, reflecting the significant technological advancements in dental implantology over the past decades. At present, the most commonly used format is the screw-shaped titanium implant with a rough surface, and only these were therefore included in the analysis. Compared with meta-analyses covering 5- and 10-year survival, it is evident that implant survival rates consistently remain well above 90% within these shorter time frames. This, however, does not guarantee a decrease in complication rates during the second decade in which the implant remains in function. This factor should be considered particularly in light of increasing life expectancy, which requires more implants to remain functional for longer durations. Even on a broad comparison, therapy with dental implants can be regarded as a successful concept, especially against total or unicondylar knee replacement, which show survival rates of 82% and 70% after 25 years, and against total hip arthroplasty, with a survival rate of only 60.4%-77.7% after 20 years. Moreover, it remains open for discussion whether the prognosis of complex periodontal therapy compares favorably with the high success rates of implant treatment; comparative and well-planned studies conducted over very long periods could provide valuable insights. Nevertheless, as demonstrated here, a non-negligible proportion of implants is lost after 20 years. The recommendation can therefore only be that, as described by Pjetursson, implants should replace truly lost teeth, not the natural tooth itself.
This review was conducted in collaboration between statistically trained scientists and clinically experienced practitioners, providing a well-founded and practical source for both clinical practice and research. Each individual study was scrutinized for statistical plausibility. Survival rates are a frequently requested parameter of high clinical significance in both medicine and dentistry. Determining them often requires statistical models that account for information loss due to increasing lost-to-follow-up (LTFU) rates, with Kaplan–Meier analysis being a particularly well-established method. While some previous reviews included only prospective studies, which in principle adhere to higher quality standards, these studies frequently did not employ Kaplan–Meier analysis, as is also the case in our sample. We therefore included retrospective works, as they contributed crucially to achieving a sufficient sample size for this review. A pure complete case analysis often overestimates the success of a treatment. To address data loss due to LTFU, we used an imputation method following Akl et al., which should be considered a rather conservative approach, as it likely overestimates the loss rate in the LTFU group. No studies have addressed whether patients who were LTFU did not return for follow-up because the implant was complication-free or because it was lost; however, a study by Lee et al. reports that patients with poor compliance have a risk of tooth loss twice as high as a regular-compliance group. The authors of the individual studies unanimously state that the reported data likely overestimate the survival rate. Our results support this view, and even if the imputed data are too pessimistic, the confidence intervals still overlap with those of the retrospective studies. It is therefore well substantiated to claim that approximately 4 out of 5 implants survive after 20 years. Considering the extended timeframe, a sample size of 1440 plus 237 implants for this review is deemed sufficient. The most recent meta-analysis by Howe et al. included 2688 implants for the 10-year survival rate; previous works, depending on the research question, encompassed between 101 and 1435 implants for the comparatively short period of just 10 years. The long follow-up period is also extraordinary, given that in 2010 the median length of follow-up in RCTs was one year. The literature search employed general terms and was supplemented by a search of the reference lists of the included publications, presenting a comprehensive view of the literature. All works are of recent date, reflecting the current state of treatment. They originate from six countries and three continents, providing a worldwide perspective. This is particularly significant within systematic reviews in implantology, which in the majority of cases involve single-center studies. The studies were conducted in both university clinics and private practices, further enhancing external validity. In absolute numbers, more implants are placed in private practice, while most studies are conducted in university clinics; the inclusion of data from both settings is therefore particularly valuable. For example, Da Silva found a significantly lower survival rate after 5 years in private practice compared with data from university centers, although this may be largely attributable to the pre-selected patient population.
Limitations
In general, studies in dentistry and implantology face the common challenge of a high rate of patients lost to follow-up. This factor is particularly pronounced in the prospective studies, as evident from the different results after imputation, and suggests a very high risk of bias. Nevertheless, the data can be deemed sufficient for estimating the survival rate as a reference for clinical practice. Presenting it as "4 out of 5 implants survive after 20 years" aids patient comprehension, as opposed to abstract percentage figures. The decision to include retrospective studies offers the advantages mentioned above but also comes with limitations: retrospective data collection always carries the risk of data being inaccurately remembered or incompletely documented. The inability to establish a causal relationship does not play a crucial role for the central question concerning the survival rate. While the retrospective studies conducted Kaplan–Meier analyses, this method was missing in the prospective studies. Only three publications provided a 95% confidence interval, and one publication's data were deemed implausible, leading to its exclusion. It is worth noting that only one RCT was included among the prospective studies. While this may be a disadvantage for comparative analysis, it has little influence on the survival rate results; furthermore, the external validity of cohort studies tends to be greater. One drawback in the statistical analysis across all studies was the failure to consider death as a competing risk, an analysis applied in other fields such as kidney transplantation and sketched below. Although the number of patients dropping out of a study due to death was partially recorded, the corresponding implant numbers were lacking and the factor was not considered in the Kaplan–Meier analyses. Future studies in implantology need to account for this, considering that an aging population retains more implants to the end of life. In conclusion, it is suggested that, in light of increasing demands on study quality, dental implantology can best meet scientific standards by consulting a statistically trained expert. Rather than a limitation of this meta-analysis, the lack of distinction between implant survival and success stems from missing studies. It cannot be emphasized enough that data based on a standardized definition of treatment success would be more insightful than survival rates alone.
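To illustrate the competing-risks analysis recommended above, the following minimal R sketch estimates the cumulative incidence of implant loss with patient death treated as a competing event; the cmprsk package is used, and all event times and codes are invented for illustration.

  library(cmprsk)

  # follow-up time in years and event code per implant (hypothetical)
  ftime   <- c(2.1, 5.4, 8.0, 12.5, 15.2, 19.9, 20.0, 20.0)
  fstatus <- c(1, 0, 2, 1, 2, 0, 0, 0)  # 0 = censored, 1 = implant loss,
                                        # 2 = death (competing event)

  # cumulative incidence functions for implant loss and for death;
  # 1 minus the CIF of implant loss at 20 years approximates the
  # survival proportion of interest
  fit <- cuminc(ftime = ftime, fstatus = fstatus)
  timepoints(fit, times = c(10, 20))

Unlike a Kaplan–Meier estimate that treats deaths as censored observations, which tends to overstate the cumulative probability of implant loss, the cumulative incidence function accounts for the fact that a deceased patient can no longer lose an implant.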
Descriptive results
In the present studies, various factors are mentioned that have a significant impact on implant survival. In general, they align with other literature and with studies conducted over shorter time frames. We do not discuss every single point extensively, since systematic reviews are available in most cases. The literature often attributes the lower survival rate in the maxilla mainly to its lower bone quality. The low survival rate in Vrielinck's study might also be attributable to the exclusive placement in the anterior maxilla. Horikawa's results are consistent with other authors who have also reported higher bone loss for implants placed in the maxilla. Data on how diabetes affects implant survival are heterogeneous; if anything, there is a tendency toward a negative influence, but two meta-analyses could not find a significantly increased relative risk. A meta-analysis showed that osteoporosis does not necessarily represent a risk factor for implant loss, while a systematic review sees a slight advantage regarding peri-implant bone loss with antiresorptive therapy; what can most likely be derived from the existing data is the importance of adequate treatment of the underlying condition. A systematic review from 2016 found no survival disadvantage for implants under 10 mm and considers them equivalent to longer implants; notably, most currently used implants are 10 mm or shorter. Another study, which defines short implants as 5-6 mm, also reports 5- to 10-year survival rates comparable to those of longer implants, with emphasis placed on correct indication and application. A systematic review by Kern et al. supports not only the aforementioned higher survival rate of implants in the mandible but also the better prognosis of fixed restorations, as recorded by Vrielinck; correct indication is indispensable here as well. Consistent with the results of Cheng, Jung et al. also report outstanding survival rates for implants restored with single crowns, at 96.3% (95% CI: 94.2-97.6%). In Vrielinck's study, losses in patients with bruxism occurred only in the first months after implantation, which matches the overall literature; therapy should be adapted accordingly (e.g., the number of implants: two instead of four). Smoking is widely known as a risk factor for periodontitis and in implant therapy; Mustapha et al. likewise conclude that smokers have a 140.2% higher risk of implant failure, and Naseri has shown that the risk of loss increases with the number of cigarettes smoked daily, supporting Becker's results. The ITI implantation type receives little attention in the literature, and reliable data are still lacking; a PubMed search for the term "ITI implantation type" in titles and abstracts yielded only the study by Becker et al. Even though periodontal diseases appear to be an important prognostic factor, as described by other authors, excluding these patients from this review would not be meaningful given the high prevalence in the population, with over 40% of people affected in the US, especially since Roccuzzo found no significant differences in the survival rate of dental implants. The inclusion and consideration of these patients contribute to the external validity of this study. None of the studies recorded patient-related outcomes. Especially after such a long duration, patient satisfaction would have been of great interest and should be included in the future. Although patient-reported outcomes are a relatively new topic that received increased attention in medical research only in the 1980s and 2000s, and challenges regarding validity still exist, they are essential to validate more patient-centered treatments in the future. The risk factors recorded by the included studies largely mirror results covered in other literature, suggesting that these studies provide realistic depictions of reality. It must be noted, however, that each factor in this review was identified by at most one study; there was no consensus, which can also be attributed to differing data and incomplete information. It can be assumed that the reasons for implant failure are multifactorial and cannot be attributed to a single cause or risk factor. It will be important to follow up closely with patients and to treat comorbidities such as osteoporosis, bruxism, or periodontitis consistently over a long period, as the initial status will not necessarily persist after 20 years.
To sum up, periodontitis, diabetes, and osteoporosis need not contraindicate implant placement when adequately treated, as the long-term prognosis is at most only slightly reduced. Although only one study indicated a reduced survival rate associated with nicotine use, we strongly recommend smoking cessation to minimize potential complications. Some risk factors, such as unfavorable placement, may show their negative effect only after years, for example through implant fracture due to material fatigue. Implant patients should not leave the practice without adequately planned follow-up after the operation; continuous check-ups will be key to preventing complications by identifying risk factors or uncontrolled health issues.
So how far can we go?
For the first time, this review consolidates data on dental implant survival over a 20-year period.
A survival rate of approximately 4 out of 5 implants is still considered remarkably good in the medical field for such a time frame. However, certain aspects have emerged that will require further attention in the future. The significant difference in survival rates between 10 and 20 years indicates that dental implant therapy does not conclude with the initial surgery but necessitates lifelong follow-up care. The research challenge ahead lies in pinpointing the pertinent risk factors within this timeframe and crafting strategies to ensure sustained implant survival, considering the likely multifactorial nature of implant failure. In this context, special attention to quality standards is crucial to avoid overestimating the effectiveness of current treatments due to statistical errors. High-quality and reliable therapies have been developed in dental implantology, but the conclusion remains: we can go even further.

Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 1132 KB)
Adapting an Electronic STI Risk Assessment Program for Use in Pediatric Primary Care
e87615c0-604c-4f87-bbda-fae180203ad8
10201180
Pediatrics[mh]
Adolescents continue to bear a high burden of the sexually transmitted infections (STIs) gonorrhea, chlamydia, and human immunodeficiency virus (HIV), with almost half of new STIs occurring in youth ages 15 to 24 years. Because of the significant impact of STIs on patients in this age group, the Centers for Disease Control and Prevention and the American Academy of Pediatrics (AAP) have recommended, and continue to recommend, STI screening in at-risk adolescents. However, screening and testing continue to lag behind the need. Numerous barriers to screening have been identified in the existing literature. Córdova et al interviewed adolescents and found that they widely perceived clinicians as uncomfortable and judgmental around the subject matter of STI testing. They also highlighted that maintaining confidentiality was a key aspect to foster communication around this issue, and that better communication would facilitate greater disclosure. Goyal et al surveyed primary care pediatricians on their STI screening habits, and 71% reported difficulty screening discreetly when the adolescent is accompanied by parents or guardians. We previously piloted and implemented an STI risk assessment program in our pediatric emergency department (ED). This platform utilized a novel electronic questionnaire and software platform that integrated into the electronic health record (EHR). A robust literature has already shown that adolescents prefer disclosing sensitive information such as sexual histories via digital means instead of face-to-face interviews. We demonstrated similar findings when implementing this program in our ED, and it led to increased STI diagnoses. The St. Louis region has among the highest incidences of STIs in the United States and has a critical need for improved STI screening among adolescents. Pediatric primary care clinics may be even better positioned for STI risk assessments, as they often offer greater privacy and confidentiality, a less stressful patient environment, and greater opportunity for longitudinal care. However, STI risk assessment and testing remain a challenge in this setting due to infrequent sexual history taking and STI testing in many pediatric primary care practices. The overall goal of our work is to adapt our ED-based STI screening program for use in pediatric primary care to improve STI screening in this setting; this effort is guided by the Consolidated Framework for Implementation Research, an implementation science framework. We previously reported on interviews we conducted with primary care pediatricians, clinic staff, adolescents, and their parents who receive care in practices in our region to understand facilitators and barriers to STI screening to inform this effort. We asked these same interviewees to evaluate the existing ED-based electronic STI risk assessment tool to assess its usability and to provide qualitative feedback to inform modifications prior to implementation in primary care, and we report those findings here. This study is being carried out in 2 stages: (1) qualitative interviews with clinics providing feedback to adapt the tool and (2) implementation of the tool for a minimum of 12 months. Change in outcomes for STI screening will be evaluated in a future report. We interviewed pediatricians, pediatric clinical staff from pediatric practices, and adolescents who receive care in these practices.
Practices were recruited in collaboration with the Washington University Pediatric & Adolescent Ambulatory Research Consortium (WU PAARC), a network of over 30 pediatric practices in our community that participate in research studies. We obtained qualitative and quantitative feedback on the electronic platform, the questionnaire content, and participants' perspectives on using it in the primary care setting. A significant aspect was evaluating the tool's usability, a crucial element for digital health interventions. The Human Research Protection Office at Washington University in St. Louis approved this study.
The Screening Tool and Workflow Integration
The current iteration of the program utilizes the Epic EHR to offer an STI risk assessment questionnaire to adolescents receiving care in our ED, regardless of the reason for their visit. While the EHR has changed from our original platform, the questionnaire content for ED patients is unchanged from what we previously published, with the only change being that the current EHR platform no longer provides an audio component. In the ED, 15- to 21-year-old patients and their families are given a brochure with a non-specific overview of the program that discusses the need for privacy and confidentiality while the patient answers the questionnaire. Patients using the questionnaire read a brief introduction explaining its purpose: to identify whether they should be offered STI testing during the visit. They are then asked a series of questions to obtain a sexual history; an integrated decision rule then provides recommendations as to whether the patient should be offered testing for gonorrhea/chlamydia and/or HIV during the visit. Recommendations are given on-screen to the patient, who can electronically "opt in" to testing. Questionnaire responses and STI test recommendations are integrated in real time into the EHR for the physicians and nurses to review.
Recruitment and Participation
We interviewed all physicians at 4 participating practices, up to 5 clinic staff at each practice, and a convenience sample of 5 adolescents aged 15 to 21 years and one of their parents from each practice. Data reported here were obtained during interviews that were also used to examine facilitators and barriers to STI care in each practice; those data have been reported separately. We did not show the electronic tool to parents, as it is only intended for use by adolescents and healthcare providers; thus, no parental data are included in this report. Healthcare workers were recruited via email, phone, or in person. We identified potentially eligible adolescents through the EHR by reviewing scheduled appointments for upcoming yearly preventive visits. Families were contacted by phone in advance of their appointment to screen for eligibility and to gauge interest. Written informed consent for the interviews was obtained from all participants 18 years and older, and written assent was obtained from all minor participants less than 18 years of age. All participants received a $50 gift card as remuneration.
Interviews and Data Collection
Interviews were conducted from April 2020 through May 2021. In the first half of each interview, we used semi-structured questions to understand participants' beliefs around STI screening and testing; these data have been reported previously. All data reported here were obtained in person during the second half of participant interviews, during which the tool was being evaluated for use in the primary care setting.
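To make the opt-in logic of the screening tool concrete, the following is a minimal Python sketch of a questionnaire-driven testing decision rule of the kind described under "The Screening Tool and Workflow Integration." The field names, risk markers, and branching are illustrative assumptions only; the study's actual decision rule is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class SexualHistory:
    ever_sexually_active: bool
    partners_past_year: int
    condom_use_always: bool
    prior_sti: bool

def recommend_testing(h: SexualHistory) -> dict:
    """Return on-screen recommendations for gonorrhea/chlamydia (GC/CT)
    and HIV testing. All thresholds below are hypothetical."""
    if not h.ever_sexually_active:
        return {"gc_ct": False, "hiv": False}
    # Hypothetical risk markers; any sexually active patient is offered GC/CT.
    higher_risk = (h.partners_past_year >= 2
                   or not h.condom_use_always
                   or h.prior_sti)
    return {"gc_ct": True, "hiv": higher_risk}

if __name__ == "__main__":
    # A sexually active adolescent with inconsistent condom use
    print(recommend_testing(SexualHistory(True, 1, False, False)))
    # -> {'gc_ct': True, 'hiv': True}
```

In the deployed program, the analogous recommendations are written back to the EHR in real time rather than printed.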
After completing the semi-structured interviews, participants were provided an overview of the existing electronic STI risk assessment tool used in our ED, given a demonstration of the platform on a tablet computer, and then allowed to trial it. The accompanying figures provide tablet screenshots representative of the questions shown to patients completing the questionnaire on the tablet. After using the questionnaire on the tablet, participants were asked to complete the System Usability Scale (SUS). The SUS is a validated, reliable instrument to measure the usability of many items, including hardware, software, websites, and applications. The SUS asks participants 10 questions using a five-point Likert scale with responses ranging from strongly agree to strongly disagree. Questions include items such as "I thought the system was easy to use" and "I found the various functions in this system were well integrated." Participants completed the SUS using REDCap. After completing the SUS, all participants were asked to provide open-ended, qualitative feedback that could be used to refine the electronic tool. All interviews were conducted by study team member VD. All interviews were audio-recorded, transcribed verbatim by a transcription service, and verified for accuracy by study team member VD.
Analysis
Quantitative survey data from the SUS were managed in REDCap. These data were analyzed using descriptive statistics in SAS 9.4 (SAS Institute Inc., Cary, NC, USA). SUS scores range from 0 (lowest usability) to 100 (highest usability); a SUS score of 68 points or higher indicates above-average usability. Open-ended responses were analyzed qualitatively. We developed a codebook using thematic coding techniques to describe users' evaluation of the electronic screening tool and perspectives on its use in pediatric primary care. Authors CM and PC reviewed each transcript and conducted inter-rater reliability checks on all data to ensure consistent interpretations; any differences were adjudicated by author VM. These data were analyzed using NVivo (version 20).
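The SUS itself is scored with a fixed published formula: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0 to 100 score. A short Python sketch of that computation and the descriptive summary used here (the example responses are made up):

```python
import statistics

def sus_score(responses):
    """Score one participant's 10 SUS items (Likert 1-5).
    Odd items (1st, 3rd, ...) contribute r - 1; even items contribute 5 - r;
    the sum is scaled by 2.5 onto a 0-100 range."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Made-up example responses for two participants
scores = [sus_score(p) for p in ([5, 1, 5, 1, 5, 1, 5, 1, 5, 1],
                                 [4, 2, 4, 2, 5, 1, 4, 2, 4, 1])]
print(statistics.median(scores))   # median across participants (here 91.25)
print([s >= 68 for s in scores])   # 68 = above-average usability cutoff
```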
We recruited 47 participants (14 physicians, 9 clinic staff, 12 adolescents, and 12 of their parents) across 4 pediatric primary care practices. All 14 physicians participated in interviews; however, one physician did not complete the SUS. We have previously reported demographics of this group. Briefly, 15/23 (65.2%) of the clinical staff were between the ages of 40 and 60, 19/23 (82.6%) were white, and 20/23 (87.0%) were female. Of the participating adolescents, the median age was 17 years; 7/12 (58.3%) were female, 9/12 (75.0%) were white, and 3/12 (25.0%) were black.
System Usability Scoring
Participants rated the tool highly using the SUS, with a median score of 92.5 and an interquartile range of 82.5 to 100 across adolescents, physicians, and clinic staff. Supplemental Appendix 1 provides a summary of responses for each SUS item across adolescents, physicians, and clinic staff. Adolescents rated the tool highly. All 12 adolescent participants indicated they would like to use the system and that it was easy to use. Only 1 of the 12 adolescents indicated they would need assistance using the tool, but all 12 indicated they could learn to use it easily. Physicians and clinic staff rated the tool highly as well, though their responses had slightly more variation than the adolescents'. The 13 physicians scored the tool highly; however, 2 indicated the system was not easy for them to use, and 3 indicated they found it complex. The 7 participating clinic staff members rated the tool higher than the physicians, with scores closer to those of the adolescents and near-universal agreement that the system was easy to use.
Electronic Tool Qualitative Feedback
Overall, the participants in our study gave our tool positive feedback. We identified 3 overarching themes: (1) workflow; (2) perceived need; and (3) honest responses. We placed additional relevant quotes with disparate themes into a general category. Physicians and clinic staff indicated the tool would fit well into their workflow, and adolescents indicated it was easy to operate and use ( , quotes 2.1-2.3). Having patients complete the questionnaire early in the visit was identified as the most important workflow criterion. Multiple participating clinics ask adolescents to complete the PHQ-9 depression questionnaire on paper during well-child checks, and the potential to integrate that or other questionnaires into this platform was of significant interest. The potential for our electronic tool to support screening for STIs was of particular importance ( , quotes 2.4-2.7). Multiple clinic staff commented on the high prevalence of STIs in our region and the need for an improved ability to screen adolescents. One physician commented that they do not routinely offer STI screening and that using this tool would facilitate obtaining the information they need from patients to more readily identify those in need of STI testing. Participants also indicated the confidential and private nature of the tool would facilitate honest disclosure of sensitive information ( , quotes 2.8-2.11). They believe adolescents' hesitance to discuss sensitive information directly with their physician, especially when a parent may be present, would be greatly ameliorated through use of this tool. Participants provided general feedback on the tool as well; representative quotes are provided in . Clinic staff indicated the ability to obtain a patient phone number for direct notification of test results was helpful.
Adolescents commented it was comfortable to use the tool, especially when compared to speaking directly to a physician. While a physician commented that the tool was easy to use, a different clinic staff member expressed concern that the tool may take too long to complete.
Questionnaire and Process Adaptations
We used the initial feedback from the participants to make modifications to the program prior to implementation in pediatric practices. We changed the content of the questionnaire to reflect the change of context from the ED to the primary care setting ( Supplemental Appendix 2 ). This primarily involved modifying the introduction and the questions related to test recommendations. We made very few changes to individual questions; however, we did remove several "introductory" questions that the pediatricians indicated were not needed in their environment. This included removing questions related to patients' self-reported grade level, access to healthcare, race, and ethnicity. The program was implemented in our first participating practice in November 2021. After the first few months of use, physicians requested we review the questionnaire to identify whether we could further reduce its length. After an in-person review, we further shortened the introduction. Integration of the questionnaire into primary care required process adaptations as well. Participating pediatricians currently offer the questionnaire to 15- to 21-year-old patients presenting for their yearly well-child checks. We created best practice advisories (BPAs), alerts in the EHR, to notify clinicians and office staff when a patient is eligible for the questionnaire. Patients and families are given a brochure, modeled on our ED brochure, describing the general purpose of the electronic screening without disclosing the topic. Adolescents are given privacy in a patient room to complete the questionnaire. After completion, a BPA alerts the clinician. The BPA provides a link to review questionnaire responses and STI testing recommendations, and prompts clinicians to order testing when recommended. Questionnaire responses do not appear in the online patient portal, preserving patient confidentiality even should a parent have proxy access to review their child's information.
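As an illustration of the eligibility logic behind the BPA, here is a minimal Python sketch; the visit-type labels and the exact trigger conditions in the production EHR are assumptions for illustration only.

```python
from datetime import date

# Assumed visit-type labels; real EHR encounter types will differ.
WELL_CHILD_VISITS = {"WELL CHILD", "PREVENTIVE VISIT"}

def age_in_years(dob: date, on: date) -> int:
    """Completed years of age on a given date."""
    return on.year - dob.year - ((on.month, on.day) < (dob.month, dob.day))

def questionnaire_bpa_fires(dob: date, visit_type: str, visit_date: date) -> bool:
    """Eligibility alert: 15- to 21-year-olds presenting for well-child checks."""
    return (visit_type.upper() in WELL_CHILD_VISITS
            and 15 <= age_in_years(dob, visit_date) <= 21)

print(questionnaire_bpa_fires(date(2007, 3, 2), "Well Child", date(2023, 6, 1)))
# -> True
```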
STI screening for adolescents continues to lag behind the need and recommendations. Providing this care to adolescents faces barriers not encountered with adults, owing to the challenges adolescents face in obtaining care independently and confidentially. We demonstrated that an STI screening tool developed for use in a pediatric ED has a high degree of acceptability among primary care pediatricians and adolescents. Based on the results of the SUS, adolescents, pediatricians, and clinic staff indicated our electronic platform was exceptionally easy to use, scoring well above the "average" usability score of 68, even before adaptations were made for their environment. This reflects the significant effort we spent originally developing the program, which included interviews of adolescents to review the questionnaire and the qualitative feedback provided by adolescents using it in our ED. A consistent theme emerged among all interviewees that this method would encourage honest responses from adolescents and facilitate this difficult conversation. This also mirrors our findings from the ED that adolescents prefer to disclose such sensitive information via electronic questionnaires instead of face-to-face interviews. While some physicians expressed concern over the length of the ED version of the questionnaire and how best to integrate it into their workflow, they were positive about its potential use in their practices and collaborative in making modifications to the questionnaire for use in their setting. The original ED version of the questionnaire was completed in a median of 8.3 min. With a shortened introduction and fewer questions, and completion in the calmer and more controlled primary care environment, we anticipate a shorter completion time. Our initial work exploring facilitators and barriers to STI screening in this setting, as well as the work of others, highlights the need for improved ways for adolescents and pediatricians to communicate about STI care. Adolescents and clinicians emphasized the importance of maintaining confidentiality when discussing STIs, and our tool offers 1 such opportunity. While confidential delivery of STI care to adolescent minors is still legally protected in all states, providing this care presents many challenges. Our tool provides an efficient and confidential way to discreetly screen adolescents, and we found adolescents as well as clinicians were supportive of this approach. This method is not the only electronic/web-based method to obtain this information from adolescents. Stalgaitis and Glick performed a systematic review of web-based diaries used to identify risky sexual behaviors. The studies they identified included only adults and were implemented for a limited time. While a real-time diary would reduce the potential for recall bias, it is more time-intensive and could raise privacy concerns with information being stored and transmitted online. These concerns are mitigated by our system, which is integrated into the EHR. Additionally, our patient responses are not viewable in the EHR patient portal, which is important for confidentiality as many parents can access adolescents' information in the portal.
Karas et al described implementation of a clinical decision support tool for a pediatric primary care network to increase STI testing during well-child checks, similar to the goal of our program. Their effort differed from ours in that it was only for adolescent females and utilized documentation and test results in the EHR, rather than patient questionnaires, to generate STI testing alerts for clinicians. They did see a doubling of chlamydia testing in their target population, demonstrating an alternative way to leverage EHR alerts for similar goals. Wood et al similarly pursued increased chlamydia screening for adolescent females in a pediatric primary care setting through EHR alerts combined with universal collection of urine specimens, and observed an increase in chlamydia testing. Wayal et al used a web-based survey to obtain STI risk information from adolescents and adults; their program was implemented at sexual health clinics in England, a higher-risk setting and population than ours, but they did find high acceptability of their intervention and potential to identify STI risk. A 2016 systematic review of STI screening in clinic settings identified 42 interventions to increase STI screening; however, none appeared to use EHR-integrated questionnaires and alerts, and few included men. Our program appears to be unique in its focus on pediatric primary care practices, inclusion of all female and male patients, and use of an EHR-integrated questionnaire; however, elements such as universal urine collection and enhanced use of existing data from the EHR could improve our process. We have implemented our tool with the updated questionnaire in a second practice, and will soon be implementing it in a third practice. We are monitoring STI testing practices and continue to obtain feedback from physicians to continue to improve the tool. Our integration into existing workflows and the overall use of the tool will be described in a future report on our implementation outcomes. Limitations of our work include recruiting a convenience sample of patients from a subset of practices in our region. As such, participating adolescents may not have been reflective of others in the region or nationally. Additionally, while participants rated our tool highly using the SUS, this evaluative mechanism has limitations. Broekhuis et al compared the SUS with other benchmarking instruments, found that think-aloud protocols were a more effective tool, and recommended against using the SUS as the sole evaluative method. While we did not have a formal think-aloud protocol, participants were able to provide qualitative feedback on the tool, and their comments were reflective of their overall positive view of it. We demonstrated that our electronic STI risk assessment tool, originally developed for use in a pediatric ED, had high usability and appeal in the pediatric primary care setting. With minor modifications, we were able to adapt the tool for use in primary care settings and subsequently implemented an STI screening program in pursuit of evaluating its clinical effectiveness in this new patient environment.
MDCT evaluation of dynamic changes in aortic root parameters during the cardiac cycle in patients with aortic regurgitation
54869a62-dae7-4b02-bbdd-a86e1b33b5e3
11937439
Surgery[mh]
Aortic regurgitation (AR) has a prevalence of approximately 4.9% in the global population, and its incidence increases with age, with diagnosis most common between the ages of 40 and 60. In Western countries, the prevalence of moderate or severe AR is around 0.5%, whereas in China the prevalence of moderate to severe isolated AR reaches as high as 1.2%. Additionally, in some regions, such as China, AR is more prevalent than aortic stenosis (AS) among the elderly, making it one of the most common types of valvular dysfunction. Common causes of AR include aortic root dilation, congenital bicuspid valves, or the presence of infective, rheumatic, degenerative, or calcific valve diseases, as well as recurrent regurgitation following aortic valve replacement surgery. Current guidelines recommend surgical aortic valve replacement only for patients with severe AR who exhibit significant symptoms, left ventricular ejection fraction < 50%, or left ventricular end-systolic diameter > 50 mm; transcatheter aortic valve replacement (TAVR) is considered relatively contraindicated in such cases. However, many patients with severe AR have significant left ventricular dysfunction, a history of stroke, or gastrointestinal bleeding, meaning that up to one-fifth of these patients are considered high-risk and unsuitable for surgery, leaving conservative treatment such as pharmacological therapy as the only option. For these patients, the annual global mortality rate reaches 10–20%. The use of TAVR in AR cases is relatively rare; between 2011 and 2019, AR accounted for less than 1.0% of all TAVR procedures in the United States. This is because most currently available transcatheter valve prostheses are primarily designed for AS: unlike severe AS, AR typically involves little aortic valve calcification, and patients with severe AR often experience marked degeneration of the elastic fibers in the valve's fibroelastic complex and the ascending aortic wall. This leads to impaired strain capacity and enlargement of the aortic root structure, including the aortic annulus (AA), which predisposes to inadequate valve anchoring and sealing, thereby challenging prosthesis fixation; these factors increase the risk of valve migration, the need for a second valve implantation, and significant paravalvular leaks. Therefore, surgical treatment remains the primary approach for AR patients; nonetheless, for high-risk AR patients with contraindications to surgery, TAVR may be the most effective treatment option. Recent studies have shown that, despite the challenges, TAVR is feasible for AR patients. The development of second-generation prostheses specifically designed for AR, such as the JenaValve (USA) and J-Valve (China), has significantly improved the success rate of TAVR in AR patients. In addition to improvements in prosthetic valve design, preoperative imaging assessment plays a crucial role in increasing the success rate of TAVR for AR. Transthoracic echocardiography is the first-line imaging modality used to evaluate the severity and progression of AR by utilizing Doppler and color flow imaging.
However, because multidetector computed tomography (MDCT) can more clearly assess the morphology of the aortic valve, the size and shape of the annulus, the degree and distribution of valve and vascular calcification, the risk of coronary artery obstruction, the dimensions of the aortic root, the optimal fluoroscopic projection angles for valve placement, and the selection of vascular access, MDCT is considered the gold standard for pre-TAVR evaluation in AR patients. Evaluating the anatomy of the aortic root is essential for selecting suitable candidates for TAVR. However, studies focusing on the characteristics of aortic root parameters collected through MDCT in AR patients are scarce. This study aims to explore the dynamic changes in multiple parameters of the aortic root at various phases of the cardiac cycle in AR patients using MDCT, providing a research basis for preoperative MDCT assessment in AR patients.
General information
This single-center retrospective study included 30 patients with AR who were treated at the Second Xiangya Hospital of Central South University between September 2021 and December 2023. Inclusion criteria:
1. Patients diagnosed with isolated severe AR via echocardiography.
2. Patients deemed unsuitable for surgical intervention and clinically assessed as requiring TAVR.
Exclusion criteria:
1. Patients with other valvular diseases in addition to aortic valve disease.
2. Patients with aortic aneurysm (ascending aortic diameter exceeding 5.0 cm) or aortic coarctation.
3. Patients with congenital heart diseases (e.g., atrial septal defect, ventricular septal defect, or patent ductus arteriosus).
4. Patients with primary or secondary pulmonary hypertension.
5. Patients with allergies to contrast agents or severe cardiac, hepatic, or renal dysfunction.
6. Patients with significant arrhythmias, such as atrial fibrillation or atrial flutter.
7. Patients with concomitant AS.
Clinical trial ethical considerations
This study was approved by the Ethics Committee of the Second Xiangya Hospital of Central South University (Approval No.: LYF20240152), and all methods were performed in accordance with the relevant guidelines and regulations. All participants gave their written informed consent to participate in the study.
MDCT scanning protocol and post-processing methods
A SIEMENS SOMATOM FORCE dual-source CT scanner (Germany) was used with a retrospective ECG-gated coronary scanning protocol synchronized with respiration. Automatic trigger scanning was employed, with the region of interest set at the ascending aorta and a trigger threshold of 100 HU. The scan range extended from 1 cm below the carina to the diaphragm. For contrast enhancement, a bolus injection was administered via the median cubital vein at a rate of 4.5 ml/s with 18 ml of saline, followed by 65 ml of iodixanol contrast agent (400 mg I/ml, Italy) at a flow rate of 3.2–3.5 ml/s, and then an additional 30 ml of saline at 2.5 ml/s. Scanning parameters were: CAREDOSE 4D intelligent tube voltage (kV) and current (mAs) matching mode; detector width 192 × 0.6 mm; matrix size 512 × 512; rotation time 0.25 s; and full-phase acquisition with adaptive pitch and volumetric scanning. After scanning, images were processed using the Ziostation2 (Japan) system. The raw data were reconstructed at 10% RR intervals across 10 phases of the cardiac cycle, from 10 to 100%, with a slice thickness of 0.6 mm and an interval of 0.3 mm. An iterative reconstruction algorithm based on the raw data was employed.
Definitions of early systole, late systole, early diastole, and late diastole
Post-processing software Ziostation2 (Japan) was used first to assess cardiac function, generating an individual left ventricular volume variation curve in which the abscissa corresponds to the time point within the cardiac cycle (Fig. ). Based on this curve, the specific phases of the cardiac cycle were determined for each patient: early systole corresponds to the phase when the left ventricular volume curve begins to decline; late systole is the phase where the volume decreases to its lowest point; early diastole is the phase immediately following the transition from the lowest point of the curve to the initial upward trend; and late diastole corresponds to the phase when the volume recovers to its maximum value.
Localization and measurement at AA, LVOT, and STJ
Using the post-processing software Ziostation2 (Japan) and following the expert consensus guidelines of the Society of Cardiovascular Computed Tomography, the lowest points of the three coronary sinuses are automatically or manually identified on oblique planes (oblique sagittal and oblique coronal views). This allows for the definition of the virtual plane of the AA, which is the plane connecting the bases of the three aortic valve leaflets (Fig. ). The software automates many of the steps required for the oblique plane measurements and helps reduce variability in aortic annulus measurements between observers. In cases of calcification at the AA, the aortic annulus contour should pass through the area of calcification at its shortest distance between two points. The left ventricular outflow tract (LVOT) plane is defined as 5 mm below the AA plane in the oblique sagittal view (Fig. ). The sinotubular junction (STJ) plane is defined as the transition between the aortic sinus and the ascending aorta (Fig. ). Measurements of the long diameter, short diameter, average diameter, area, perimeter, diameter derived from area (DA), and diameter derived from circumference (DC) at the AA, LVOT, and STJ were performed by two radiologists specially trained in TAVR. The formulas for DA, DC, and average diameter are as follows: $$\text{DA} = 2\sqrt{\text{Area}/\pi}$$, $$\text{DC} = \text{Perimeter}/\pi$$, and $$\text{Average Diameter} = (\text{Long Diameter} + \text{Short Diameter})/2$$. These measurements were automatically provided by the software. In cases where significant discrepancies were noted between the two radiologists' measurements, the results were discussed and agreed upon by both to reach a consensus for each measurement.
Statistical analysis
Data analysis was conducted using SPSS version 27.0 (IBM, Chicago, USA). For continuous variables, the Shapiro-Wilk test was used to check for normality. If normality was satisfied, the data were presented as means and standard deviations (x̄ ± s); otherwise, the data were presented as medians and interquartile ranges. To assess the consistency of measurements for AA, LVOT, and STJ taken by two different observers or by the same observer at different times, inter- and intra-observer intraclass correlation coefficients (ICC) were calculated. The consistency of the measurement data was categorized as follows: ICC ≥ 0.75 indicates strong consistency, 0.40 ≤ ICC < 0.75 indicates moderate consistency, and ICC < 0.40 indicates poor consistency. If the consistency was deemed good, paired t-tests or Wilcoxon signed-rank tests were performed to compare the average values of the measured parameters at the peak times with those at other times. A p-value < 0.05 was considered statistically significant.
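To make the derived-diameter formulas and the ICC bands above concrete, the following Python sketch applies them to illustrative values (not patient data):

```python
import math

def derived_diameters(area_mm2, perimeter_mm, long_d_mm, short_d_mm):
    """Diameters derived from one cross-sectional measurement (AA, LVOT, or STJ)."""
    da = 2 * math.sqrt(area_mm2 / math.pi)   # diameter derived from area
    dc = perimeter_mm / math.pi              # diameter derived from circumference
    mean_d = (long_d_mm + short_d_mm) / 2    # average of long and short axes
    return da, dc, mean_d

def icc_band(icc):
    """Consistency bands used in this paper."""
    return "strong" if icc >= 0.75 else "moderate" if icc >= 0.40 else "poor"

# Illustrative 25 x 22 mm elliptical annulus (ellipse area = pi * a * b)
area = math.pi * (25 / 2) * (22 / 2)
da, dc, mean_d = derived_diameters(area, perimeter_mm=74.0,  # assumed perimeter
                                   long_d_mm=25.0, short_d_mm=22.0)
print(round(da, 2), round(dc, 2), mean_d)    # ~23.45 ~23.55 23.5
print(icc_band(0.92))                        # "strong"
```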
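As an illustration of the phase definitions above, a simple sketch can locate the four phases on a reconstructed 10-phase left ventricular volume curve; the toy volumes below are assumptions for demonstration, not study data:

```python
def cardiac_phases(volumes):
    """Index the four phases on a cyclic LV volume curve (10%..100% RR).
    Late diastole = maximum volume; late systole = minimum volume;
    early systole = first phase after the maximum (volume begins to decline);
    early diastole = first phase after the minimum (volume begins to rise)."""
    n = len(volumes)
    late_diastole = max(range(n), key=volumes.__getitem__)
    late_systole = min(range(n), key=volumes.__getitem__)
    return {"early_systole": (late_diastole + 1) % n,
            "late_systole": late_systole,
            "early_diastole": (late_systole + 1) % n,
            "late_diastole": late_diastole}

# Toy LV volumes (ml) at the 10%, 20%, ..., 100% RR phases
lv_volumes = [148, 120, 95, 78, 70, 85, 110, 130, 142, 150]
print(cardiac_phases(lv_volumes))
# {'early_systole': 0, 'late_systole': 4, 'early_diastole': 5, 'late_diastole': 9}
```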
Clinical information
Nine patients were excluded for meeting the exclusion criteria, and one was excluded due to poor image quality. Ultimately, 20 patients (17 males and 3 females) were included in the study. The patients were aged between 50 and 81 years, with an average age of 66.45 ± 1.83 years and a mean BMI of 23.5 ± 3.1 kg/m².
Consistency of measurement metrics between two observers
The inter-observer consistency of measurements for AA, LVOT, and STJ in AR patients was analyzed between the two physicians, with results shown in Table . The results indicated that the consistency of the various metrics measured by the two physicians was strong, with absolute agreement ICC values all greater than 0.900 ( P < 0.001).
Consistency of measurement metrics by the same physician at different times
The intra-observer consistency of measurements for AA, LVOT, and STJ taken by the same physician, with a one-week interval between measurements, was analyzed. The results are shown in Table . The findings indicated that the consistency of the various metrics measured by the same physician at different times was strong, with consistency ICC values all greater than 0.900 ( P < 0.001).
Measurement parameters at different phases of the cardiac cycle in AR patients
At the AA site, the parameters of area, perimeter, DA, DC, and average diameter during early systole were all significantly greater than those during early diastole ( P < 0.05). However, although the area, perimeter, DA, DC, and average diameter reached their maximum values during early systole, there were no significant differences when compared with late systole and late diastole ( P > 0.05). The short axis diameter was largest during early systole, with statistically significant differences compared to late systole, early diastole, and late diastole ( P < 0.05). The long axis diameter was largest during late diastole, but there were no statistically significant differences when compared with the other phases (long axis diameters were 28.98 ± 2.99, 29.21 ± 2.89, 28.78 ± 3.33, and 29.25 ± 3.75 mm, respectively) ( P > 0.05). The results are shown in Tables , and .
At the LVOT, the morphological parameters including area, perimeter, DA, DC, and long diameter reached their maximum values in late diastole, with no significant differences compared to early or late systole ( P > 0.05). However, significant differences were observed between late diastole and early diastole ( P < 0.05). The average diameter was largest during early systole, but no significant differences were found when comparing early systole with late systole and late diastole ( P > 0.05); significant differences were observed between early systole and early diastole ( P < 0.05). The short axis diameter was largest during early systole, with significant differences compared to late systole, early diastole, and late diastole (short axis diameters were 24.37 ± 2.82, 23.31 ± 3.78, 21.61 ± 4.69, and 22.77 ± 5.06 mm, respectively) ( P < 0.05). The results are shown in Tables , and .
At the STJ site, the parameters for area, perimeter, DA, DC, long axis diameter, short axis diameter, and average diameter all reached their maximum values during late systole. However, there were no significant differences between late systole and early systole or early diastole ( P > 0.05). Significant differences were observed between late systole and late diastole ( P < 0.05). The results are shown in Tables and .
MDCT is widely recognized as the primary non-invasive imaging modality for preoperative assessment in TAVR. In addition to providing crucial measurements of the AA size, which are essential for selecting an appropriate transcatheter valve, MDCT also offers detailed anatomical information on aortic valve and annulus calcification, the morphology of the ascending aorta, and the angle between the ascending aorta and the LVOT, as well as potential vascular access routes for TAVR. Currently, TAVR faces several challenges in patients with AR; as a result, surgical aortic valve replacement (SAVR) remains the primary treatment for AR patients. However, for those unable to tolerate SAVR, TAVR is currently the best available option. Recent studies have demonstrated favorable outcomes with TAVR in appropriately selected AR patients.
Therefore, detailed MDCT assessment of aortic root changes in AR patients is necessary to identify suitable TAVR candidates. To date, research on the dynamic changes in aortic root structural parameters before TAVR in AR patients is scarce. In AR patients, the early systolic area, perimeter, DA, DC, and mean diameter at the AA site show no significant changes in late systole or late diastole, with only some variation observed in early diastole. This limited variation during the cardiac cycle contrasts with the more pronounced changes observed in AS patients, suggesting that the deformation of the AA in AR patients is less significant throughout the cardiac cycle. This may be due to the pronounced dilation of the AA in AR patients, which could lead to structural fragility and reduced elasticity and compliance of the AA. Some studies suggest that during TAVR with balloon-expandable prostheses such as the SAPIEN 3 in AR patients, the prosthesis should be sized 15–25% larger relative to the maximum AA size to reduce the risks of valve malposition and paravalvular leak. The finding that multiple critical parameters at the AA site in AR patients reach their maximum during early systole indicates that the consensus approach to valve prosthesis selection used for AS can also be applied to AR patients. Specifically, measuring AA parameters at the time of maximum AA size can maximize the accuracy of prosthesis selection, and this approach is currently employed clinically for prosthesis sizing. Therefore, we propose that in AR patients, early systolic AA measurements can also serve as the primary basis for prosthesis size selection. In this study, 20 subjects underwent evaluation for TAVR selection. However, because most patients declined TAVR for personal reasons and opted for conservative treatment or transferred to other hospitals, only 3 patients underwent TAVR at our hospital during this study. We primarily based the selection of prosthesis size on the values of DC and DA at the AA during early systole. Two of the three patients received the Medtronic Evolut R valve, and one received the J-Valve. The surgical outcomes were favorable, with no significant paravalvular leakage and no pacemaker implantation (Fig. ). Additionally, the study revealed that the maximum value of the long axis diameter at the AA site in AR patients occurred during late diastole. The maximum long axis diameter did not show significant differences compared to other phases, indicating that the long axis diameter remains relatively stable in AR valve disease. In contrast, the short axis diameter was significantly larger during early systole than in other phases, suggesting substantial changes in the elliptical shape of the AA throughout the cardiac cycle in AR patients. At the LVOT site, the area, perimeter, DA, DC, and long axis diameter in AR patients were largest during late diastole and significantly greater than those observed during early diastole; there were no significant differences compared to early or late systole. We attribute these findings to the fact that aortic regurgitation during early to late diastole causes prolonged overfilling of the left ventricle, leading to secondary dilation of the LVOT and a notable reduction in tissue compliance. We believe this may lead to a less stable fit between the valve and the outflow tract compared to AS patients.
Therefore, a self-expanding valve with a stronger seal may be required during TAVR to address the complexity of LVOT morphology and the risk of post-procedural regurgitation. Additionally, precise measurement of the LVOT size during late diastole is necessary preoperatively to avoid issues of undersizing or incomplete sealing due to dilation. The average diameter at the LVOT site was largest during early systole, with minimal changes compared to late systole and late diastole, but showed significant differences when compared to early diastole. The short axis diameter was also largest during early systole, with marked differences compared to other phases of the cardiac cycle. These observations suggest significant hemodynamic changes in this region throughout the cardiac cycle in AR patients. At the STJ site, in AR patients, the area, perimeter, DA, DC, long axis diameter, short axis diameter, and average diameter all reached their maximum values during late systole. However, while the values of these parameters during late systole were not significantly different from those observed during early systole and early diastole, they were significantly greater than those during late diastole. We speculate that this discrepancy is due to the peak ventricular ejection volume occurring during late systole and the maximum regurgitant flow during late diastole, leading to the largest differences in STJ parameters between these two phases. Although the significance of STJ parameters for prosthesis sizing requires further investigation, existing studies have shown that a larger STJ can increase the risk of prosthesis misalignment when using the Venus-A valve for TAVR . Therefore, assessing the STJ is beneficial for preoperatively predicting and identifying patients at high risk for severe prosthesis malposition . Additionally, research has demonstrated that a smaller AA, LVOT, and STJ are advantageous when using self-expanding valve systems such as the VitaFlow and Venus A in severe AR patients, as they provide better anchoring of the valve and help prevent valve displacement through a better fit of the valve’s “crown” design . In summary, we believe that when extending the use of prosthetic valves designed for AS to patients with AR, it is crucial to thoroughly assess the structural characteristics of the aortic root preoperatively. Fundamentally, the design of valve prostheses specifically tailored for AR should be considered for TAVR treatment. The study has several limitations. First, the small sample size of 20 patients restricts the statistical power of the analysis and limits the generalizability of the findings. With a larger sample, the results could be further validated, and the impact of potential confounders could be better controlled. Second, the single-center design of the study introduces a degree of homogeneity in the patient population, as patients from a single institution may share similar demographics, healthcare access, and treatment protocols, which may not reflect the broader population of AR patients. Additionally, the training and experience of physicians involved in the study were consistent across the center, which may have reduced variability in measurements but limits the external validity of the findings to institutions with different practices or expertise. Third, the majority of patients in this study did not undergo TAVR during the study period, despite being clinically assessed as suitable candidates.
This was largely due to patients opting for conservative treatment or transferring to other hospitals, which may indicate a potential selection bias in the study cohort. The relatively low number of TAVR procedures conducted limits the ability to assess the real-world effectiveness of prosthesis selection based on the MDCT parameters.

In AR patients, the selection of prostheses can be guided by MDCT parameters, with early systole serving as the optimal phase for prosthesis selection. Prospective studies focusing on patients with aortic regurgitation remain limited, highlighting the urgent need for further clinical research to validate the application and limitations of MDCT in this patient population.
Video-based robotic surgical action recognition and skills assessment on porcine models using deep learning
fa6f0de7-637e-451a-b816-70eeb28d889b
11870904
Surgical Procedures, Operative[mh]
Surgical performance is directly associated with the intraoperative and postoperative outcome . This also applies to robot-assisted surgery (RAS), where insufficient training and inadequate surgical skills can compromise the clinical outcome . A substantial amount of work has been done to create assessment tools for RAS, such as the Global Evaluative Assessment of Robotic Skills (GEARS) score, for use in surgical training and evaluation of surgical performance . However, these assessment tools often depend on an experienced surgeon being present or reviewing video recordings to assess performance . This is resource-demanding, time-consuming, and can be prone to rater bias and interrater variability . Recent emerging technologies based on artificial intelligence (AI) and the subfields of machine learning and deep learning have led to a new field in surgical data science that seeks to assess and evaluate surgical procedures automatically without the need for human assessors . Currently, deep learning models, particularly convolutional neural networks (CNN), combined with other approaches, are among the most common techniques for surgical action recognition . The state-of-the-art in surgical assessment involves the use of a base model, such as a CNN or Vision Transformer (ViT), with a temporal aspect, viewing frames as time series instead of individual pictures . Unlike a CNN, which uses convolutional filters to extract image features, the ViT processes different patches across the image simultaneously . Experimental methods, such as Fusion Key Value (Fusion-KV), integrate multiple data modalities, including video, event data, and kinematic data, alongside various machine learning and deep learning models for more precise results . Although promising, AI solutions generally fail to generalize when presented with different surgical procedures or new datasets from other settings, mainly because of the lack of large and diverse high-quality datasets . To address the pressing need for more robust machine learning models, the present study focused on the development of a deep learning model trained on a highly diverse dataset with multiple procedures and participants with varying surgical skill levels. We previously presented a method for acquiring and preparing video-based robotic surgical data for machine learning implementation using open-source solutions . Extracting data from video recordings enables access to large amounts of data from different surgical procedures . In this study, we present a method using video-based data to create a deep learning algorithm that can recognize basic surgical actions, such as dissection and suturing, in a diverse dataset with different intra-abdominal procedures performed on in vivo porcine models. Thus, we aimed to further classify surgeons based on short-segment analyses that can be used for immediate feedback in the future. Study setting and participants The study was conducted at the Biomedical Laboratory of Aalborg University Hospital, Aalborg, Denmark. We collected data from different RAS procedures using in vivo porcine models. All procedures on porcine models were approved by The Danish Animal Experiments Inspectorate under the Danish Veterinary and Food Administration (ID: 2018-15-0201-01392). The Robot Center Nord (ROCNord) at Aalborg University Hospital conducts several RAS courses annually, and participants were included if they participated in one of these RAS courses.
Based on prior work, we defined two groups of participants: experienced (> 100 RAS procedures) and novice (< 100 RAS procedures) . The study was approved by The North Denmark Region Committee on Health Research Ethics (ID: 2021-000438) and the Regional Research Review Board under the Danish Data Protection Agency (ID: 2021-246). All video footage and labels were anonymized and are available at our GitHub repository www.Github.com/NasHas for open-source use in future research. Furthermore, all Python scripts created and used in this study can also be found at our GitHub repository for open-source use . Data capture and extraction The method of collecting and preparing data was based on our previously published study on the acquisition and usage of robotic surgical data for machine learning algorithms . We only utilized step one and step four of our previous method and thus did not extract event data or movement data from the surgeons in the current study . The da Vinci Surgical System (DVSS) surgeon console was connected to two HDMI to USB Video Capture Cards (VCCS) from Ozvavzk (one for each ocular output of the surgical robot). Video footage was recorded using OBS Studio (Open Broadcaster Software Studio, Wizards of OBS, OBS Studio v. 27.2.4, 64-bit), with recording set at 15 frames per second (FPS) and 2560 × 720 pixels to capture both ocular outputs of the surgical robot (some initial videos were recorded at other resolutions and different FPS; please refer to our previous study on data collection ). The videos were cropped into separate right and left 1280 × 720 (or 1920 × 1080) views, and sequences where nothing of relevance occurred (such as cleaning of the camera and change of instruments) were cut out using Free Video Crop, RZSoft Technology Co. Ltd, v. 1.08, and the command-line software FFmpeg. Preparation of robotic surgical data In this study, we used temporal labeling, in which each surgical event was marked by a timestamp at its beginning and end. We marked two primary label categories representing the basic elements and actions of surgery: ‘suturing’ and ‘dissection’ . The suturing label covers a suture action from the initial suture positioning in the instrument to the final knot tying . We used two labels to indicate the type of suture (single or running) and two labels for suture actions (needle driving and suture handling, including positioning and knot tying). The dissection labels were divided into three subcategories: general dissection (blunt, sharp, or combined techniques such as two-handed dissection and coagulation with cutting), clip application (applying clips before cutting, hot or cold), and hemostasis control (locating and stopping bleeding) . Only general dissection was analyzed, while the other subcategories were excluded. All actions were labeled from their initiation to their conclusion. The labels were created manually using the Behavioral Observation Research Interactive Software (BORIS, v. 7.13.8) , and all labels were created by NH. Pre-processing The video footage was sampled at 1 FPS using our script in Python 3.8. To accommodate the temporal aspect of the network, we stacked five consecutive frames into sequences (all frames in one sequence had the same label; see Supplementary Fig. 1). This was also performed using a Python script. We found that there were three main groups of sequences: sequences containing only suturing, sequences containing only dissection, and sequences where both suturing and dissection occurred.
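As a rough illustration of the pre-processing just described, the sketch below samples a video at about 1 FPS, crops the information bars, resizes frames, and stacks five consecutive frames per sequence. The file name, crop margin, and OpenCV-based approach are assumptions; the authors' actual scripts are available in their GitHub repository.

```python
# Illustrative pre-processing sketch: 1 FPS sampling, info-bar cropping,
# resizing to 256x256, and stacking five consecutive frames per sequence.
import cv2
import numpy as np

def video_to_sequences(path, seq_len=5, size=256, bar_px=60):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 15
    step = int(round(fps))                 # keep roughly one frame per second
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frame = frame[bar_px:-bar_px, :]          # drop info bars (assumed margin)
            frames.append(cv2.resize(frame, (size, size)))
        i += 1
    cap.release()
    # Stack consecutive frames into (n_seq, seq_len, H, W, 3) sequences.
    n_seq = len(frames) // seq_len
    return np.stack(frames[:n_seq * seq_len]).reshape(n_seq, seq_len, size, size, 3)

seqs = video_to_sequences("procedure_01.mp4")   # hypothetical file name
print(seqs.shape)
```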
Because this last category contained only a small number of frames, we chose to omit it and ensured that the number of sequences in the remaining two groups was balanced as much as possible to avoid overrepresentation of one class. The information was saved as a CSV file. For the skills assessment, each video had a label revealing whether the video was conducted by a novice or experienced participant and was split into sequences of 10 s. The raw footage also contains information bars in the lower and upper parts of the picture; these parts of each frame were cropped, and all frames were subsequently resized from their original resolution (either 1280 × 720 or 1920 × 1080) to 256 × 256 pixels to lower the computational requirements. After creating image sequences, cropping, and resizing, the complete dataset was split into three groups: training, validation, and test sets to perform the final evaluation of the best model. We chose a split of ∼ 80% of the data for the training set, ∼ 10% for the validation set, and ∼ 10% for the test set. We also ensured that videos of each participant would only be used exclusively in one specific dataset, that is, no participant would be present in two datasets at the same time. Split and balancing were performed automatically based on the total number of sequences using our available Python script. Therefore, minor differences in the number of sequences between the suturing and dissection groups could occur (see Supplementary Tables 1 and 2). For skills assessment, we balanced the datasets manually so that procedures performed by both novice and experienced groups were included in the training set, and only procedure types used in the training set were used in the validation and test sets, see Supplementary Table 2. We then combined the training and validation sets (~ 90% of the data) and used K-fold cross-validation to estimate the performance of our model. K-fold cross-validation splits the dataset into K folds, in this case five, and leaves one fold for testing while training on the remaining folds. It then iterates through this process until all folds have been trained and tested. Performance accuracies are saved for each fold, and the mean performance and standard deviation of the cross-validation are presented (see Supplementary Table 3), which provides a more generalized estimate of how the model would perform on unseen data. We used K-fold cross-validation to fine-tune the network parameters before finally using the best parameters to train a new model on the full dataset (the training and validation set) and evaluate it on the unseen test set. Architecture of the neural network Our network combines a CNN to extract spatial features with a Long Short-Term Memory (LSTM) layer incorporating temporal information, see Fig. . For a more detailed explanation of the network structure, see Supplementary Text 1. We split the dataset into small batches of eight five-second sequences and ran up to 50 training cycles (epochs), using early stopping to avoid overfitting. We developed two versions of the network: one for classifying actions (suturing vs. dissection) and another for assessing skill level (novice vs. experienced). The skills assessment was limited by having only four videos available for testing, so we applied techniques like dropout, batch normalization, regularization, and one extra dense layer before the last layer to prevent overfitting.
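To make the architecture description concrete, here is a minimal Keras sketch of a CNN-LSTM of this general shape: a small per-frame CNN wrapped in TimeDistributed feeding an LSTM, with the dropout, batch normalization, regularization, and extra dense layer mentioned above. All layer sizes and hyperparameters are illustrative assumptions; the authors' exact architecture is given in their Supplementary Text 1.

```python
# Minimal CNN-LSTM sketch; layer sizes are assumptions, not the
# authors' exact network.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(seq_len=5, size=256, n_classes=2):
    # Small per-frame CNN for spatial features.
    frame = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(size, size, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
    ])
    # Apply the CNN to each frame, then model the sequence with an LSTM.
    model = models.Sequential([
        layers.TimeDistributed(frame, input_shape=(seq_len, size, size, 3)),
        layers.LSTM(64),
        layers.BatchNormalization(),
        layers.Dropout(0.5),
        layers.Dense(32, activation="relu",
                     kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
# Early stopping as described: halt when validation loss stops improving.
early = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
# model.fit(train_seqs, train_labels, batch_size=8, epochs=50,
#           validation_data=(val_seqs, val_labels), callbacks=[early])
```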
Model evaluation We evaluated our models by generating confusion matrices and calculating the accuracy, recall/sensitivity, precision, F1-score, true-positive rates, and false-positive rates. To increase interpretability, we calculated the predictive certainty of the action recognition network by performing predictions and obtaining probabilities for each class, and then saving the highest probability of each prediction and comparing them with the ground-truth labels. This was then plotted visually along with the maximum, mean, and minimum probabilities. The probability distribution can be viewed as an expression of the certainty behind the decision to classify a certain sequence as a certain class. For the skills assessment network, we used the true- and false-positive rates to plot the ROC curve and AUC for the entire test set as well as for the subsets. We also used Gradient-weighted Class Activation Mapping (Grad CAM) to produce a localization map that highlights regions of importance in the image sequences analyzed by the algorithm. For skill assessment, we plotted all predictions during a complete video to determine the sections in which misclassifications occurred.
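The evaluation step described above maps directly onto standard scikit-learn utilities. The sketch below uses synthetic placeholder predictions to show how the confusion matrix, summary metrics, AUC, and per-sequence certainty could be derived from softmax outputs; it is illustrative only and not the authors' code.

```python
# Evaluation sketch with synthetic stand-ins for test labels and
# model.predict() outputs.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support, roc_auc_score)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)        # placeholder test-set labels
probs = rng.dirichlet((2, 2), 200)      # stand-in for model.predict(test_seqs)

y_pred = probs.argmax(axis=1)
certainty = probs.max(axis=1)           # "predictive certainty" per sequence

print(confusion_matrix(y_true, y_pred))
print("accuracy:", accuracy_score(y_true, y_pred))
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred,
                                                   average="macro")
print(f"precision={prec:.3f} recall={rec:.3f} F1={f1:.3f}")
print("AUC:", roc_auc_score(y_true, probs[:, 1]))   # for the ROC analysis
print("certainty: mean={:.2%} min={:.2%} max={:.2%}".format(
    certainty.mean(), certainty.min(), certainty.max()))
```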
We included data from 21 surgeons who participated in the study (16 novices and 5 experienced RAS surgeons; Table ). In total, 16 different intra-abdominal RAS procedures were performed on in vivo porcine models (from either urological, general surgical, or gynecological courses; Table ) and our open-source dataset . From the 130 recorded procedural videos, the temporal annotations of suturing and dissection were turned into sequences of 5 s and 10 s for the training, validation, and testing of the neural networks (see Supplementary Tables 1 and 2). K-fold cross-validation in five splits resulted in a mean accuracy of 90% for action recognition and 72% for skill assessment.
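A group-aware splitter is one way to implement the participant-exclusive K-fold protocol behind these mean accuracies. The sketch below uses scikit-learn's GroupKFold with placeholder arrays to show how leakage between surgeons is prevented; the variable names and dummy data are assumptions.

```python
# Participant-exclusive K-fold sketch: all sequences from one surgeon
# stay in a single fold, so no surgeon appears in both train and test.
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.arange(20).reshape(-1, 1)          # stand-in for sequence features
y = np.random.randint(0, 2, 20)           # suturing vs. dissection labels
groups = np.repeat(np.arange(5), 4)       # surgeon ID for each sequence

accs = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
    assert not set(groups[train_idx]) & set(groups[test_idx])  # no overlap
    # fit the model on X[train_idx], evaluate on X[test_idx] ...
    accs.append(0.0)                      # placeholder for the fold's accuracy
print(f"mean accuracy over folds: {np.mean(accs):.2f} ± {np.std(accs):.2f}")
```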
The full results are from the last round of hyperparameter tuning and are presented in Supplementary Table 3. Primary action category network The CNN LSTM network for primary action classification, classifying suturing and dissection, reached the lowest average validation loss in the 8th epoch, and the model from this epoch was used for the final testing. The model was then used to predict the test set, which had been randomly selected and prepared. The accuracy of the model was 96.0%. This provided an average recall of 96.0%, precision of 96.0%, and F1-score of 96.0% for the model, as shown in Table . Finally, the predictive certainty had a mean value of 98.82%, with a maximum probability value of 100% and a minimum probability value of 50.19%. A plot of predictive data is shown in Supplementary Fig. 2. Gradient-weighted class activation mapping To produce a visual explanation of the spatial regions featured by the CNN LSTM network, we used a GradCAM filter. As shown in Fig. , examples from the network are illustrated for each category (suturing vs. dissection). We can see that the network focuses on the tissue and instrument tips during dissection and on the needle, suture, and instrument tips during suturing. Skills assessment network For the skill assessment, the training reached the lowest average loss at the 20th epoch. The final model of this epoch was saved for final testing. When assessed on the test set, it showed an accuracy of 81.3%, as listed in Table . This resulted in an average recall of 81.3%, a precision of 83.3%, and an F1-score of 80.9%, as shown in Table . The predictions can be plotted for the complete procedure video, providing insight into which parts of the procedure novices were misclassified as experienced and vice versa (see Fig. ). Figure shows procedure plots for all test videos and the ROC curves with respective AUC for the complete test set and subset predictions. The ROC curve for the complete test set shows an AUC of 0.88. Comparing the subsets shows an AUC of 0.87 between novice lymph node dissection and experienced bladder puncture and an AUC of 0.90 when comparing novice to experienced surgeons in lymph node dissection.
In this study, we used a novel method to acquire data and train a deep learning network with two sets of configurations for the classification of surgical actions and skills assessment. Both networks demonstrated good performance on a surgically diverse dataset. We used the same network architecture (CNN-LSTM) for both problems, demonstrating the flexibility of the model. Machine learning-based action recognition and surgical skill assessment in RAS have been increasingly investigated in recent years . Most studies use accuracy to evaluate model performance, as it measures the ratio of correct predictions to total predictions, unlike precision, which focuses specifically on the correctness of positive predictions. For image-level binary classification tasks, such as in this study, the Metrics Reloaded framework supports using accuracy as a primary metric . In addition to accuracy, we also include precision, recall, and F1-score for a more comprehensive evaluation of the results. Regarding action recognition, prior studies have achieved accuracies ranging from 68 to 90% . Similarly, in the context of skill assessment, accuracies range from 76 to 100% when only video data are used . Among the state-of-the-art methods for surgical video assessment is SAIS, which leverages a pre-trained ViT model with a temporal component to identify surgical gestures and evaluate surgical skills . This approach was tested across three hospitals and two surgical procedures (robot-assisted nephrectomy and radical prostatectomy), achieving AUC values exceeding 0.90 for gesture recognition and over 0.80 for both skills assessment and cross-procedure gesture recognition . However, SAIS was primarily evaluated on experienced surgeons and also used a more detailed discrimination of surgical gestures . Another notable study utilized a temporal segment network for surgical assessment, combining a CNN with a temporal aspect, like our network, to achieve 95% accuracy . This research was conducted using the JIGSAWS dataset for training and testing . Most prior studies have tested their models on the JIGSAWS dataset, which is a public dataset of video and kinematic data made in a highly standardized, controlled dry lab environment . The main limitation of using a small dataset in a controlled environment is overfitting, which occurs when an algorithm is not generalizable to new data from other environments or procedures . The small size of the JIGSAWS dataset makes it difficult to allocate each participant exclusively to training, validation, or test sets, as we have done in the current study, due to the limited number of experts in the JIGSAWS dataset (only two). This raises questions regarding the results of prior studies in this field, as many studies do not explicitly address how they avoid leakage from training to validation and test data .
Another problem when training and developing machine learning models based on dry lab data is that they do not generalize to clinical settings. However, data acquisition from clinical settings is difficult and expensive . More importantly, it may be impossible to collect clinical training data that include examples of poor or erroneous performance, which are needed to train a good model to assess different levels of clinical skills. In recent years, public datasets like CholecTriplet, HeiChole, SAR-RARP50, ESAD, and PSI-AVA have emerged, alongside non-public datasets such as SAIS and Theator, a surgical video database platform . These datasets all use endoscopic footage, similar to our dataset. However, CholecTriplet and HeiChole are specific to human laparoscopic cholecystectomy, excluding robotic surgery, while the SAR-RARP50, ESAD, PSI-AVA, and SAIS datasets focus on human robot-assisted radical prostatectomy (RARP) procedures . Moreover, these procedures were performed by experienced surgeons, resulting in little variance in both the procedures and the group of participants . Annotation methods also varied, with SAR-RARP50 using only visual annotations (bounding boxes), while the other datasets include both visual and temporal annotations, such as instrument segmentation and time labels . We used an in vivo porcine wet-lab setting to allow for the collection of data from multiple procedures across a large variety of surgeons with different skill levels, both novice and experienced . This enabled us to develop a model that was indifferent to the 16 different surgical procedures on which it was trained. We also chose to use temporal annotations of the videos, where each surgical action was defined by the time it occurred, because this represents the most basic and simple way of annotating surgical procedures, especially when aiming to collect larger datasets and streamline the data processing . Other methods, such as spatial annotation using bounding boxes or segmenting instruments or anatomic structures, are usually technically harder and require more defined criteria . We chose the two main categories of tasks, suturing and dissection, and left out subcategories to avoid unbalancing the classes and because of the skewed frequency with which the subcategories were used throughout the surgical procedures . We also omitted the ‘Other’ category, which was described in our previous study as a category for tasks such as suction and holding . We excluded the “Other” group because it overlapped with the “Suturing” and “Dissection” classes, creating a multilabel issue with skewed balance and technical complexity in this proof-of-concept study. Secondly, the “Other” class was inconsistent, containing varied actions that sometimes resembled “Suturing” or “Dissection” due to shared elements. Thus, the “Other” class was excluded from training and testing. Although previous research has identified subcategories within suturing and dissection, a widely accepted classification system and consensus has not yet been established . Therefore, we adopted broader definitions of suturing and dissection that encompass finer subcategories . Both the suturing and dissection labels were annotated as segments of time using timestamps, which is a function of BORIS . We based our subcategories on previous research which defines various subcategories of both suturing and dissection .
For example, our general dissection label included blunt dissection (which has previously been defined as spread, hook, push, and peel with any instrument), sharp dissection (hot and cold cut, burn and cut), and combinations (multiple peels, either blunt or sharp, and dissection with both instruments) . Because of the size of our dataset, a subcategorization of the surgical actions would lead to non-generalizable results . The use of generic tasks is supported by the SAGES framework for annotation of surgical videos . All labels were annotated by a medical doctor who is a clinical trainee in urology. Our use of temporal annotations aligns with the SAGES framework; however, because we did not annotate surgical phases or steps, defining relationships between different parts of the procedures is challenging . Additionally, we did not label segments where nothing of surgical relevance happened (such as cleaning of the camera and change of instruments); instead, we removed them during pre-processing, which serves the same purpose for which the SAGES framework recommends labeling them . We suggest that developing machine learning models in a wet-lab setting will allow easier generalization to the clinical setting, potentially using much smaller amounts of data for transfer learning, as demonstrated in other areas of data science . This will be the subject of future research. Skill assessment annotations were based on a binary classification (experienced vs. novice), defined by operative volume alone. Each procedure was labeled as either ‘experienced’ or ‘novice.’ While this quantitative approach is common, it may not accurately reflect individual technical skill levels, it lacks flexibility, and studies have shown considerable variability in how skill levels are determined . We chose a threshold of 100 cases to distinguish between novice and experienced surgeons. Still, previous studies have used a wide range, from 30 to over 1,000 cases, with thresholds differing by procedure type and medical specialty . A way of generating more flexibility could be to use more degrees of experience and actual ratings of clinical performance. However, our proof-of-concept model demonstrates continuous evaluation of shorter segments during the procedures, unlike assessments such as GEARS, which only give an endpoint evaluation. Continuous evaluation provides surgeons and trainees with identifiable segments of lower surgical quality during a procedure, allowing for targeted improvement in future procedures . Another limitation in the skills assessment task was our access to experienced robotic surgeons. Future research could benefit from a multicenter approach to gathering more data, addressing the challenge of having few experienced robotic surgeons at a single center . When models fail to deliver accurate results despite the best possible set of features, there are two main avenues for improvement: using techniques to prevent overfitting and underfitting or gathering more data . A clear limitation of the current study was the number of experienced participants and the overall size of the dataset. The dataset was reduced to comply with the criteria of participant exclusivity in the training, validation, and testing sets and also to balance both novice and experienced groups. Because of the limited skills assessment data, we used different machine learning techniques such as dropout, regularization, batch normalization, and an extra dense layer to make the network more robust and avoid overfitting.
All machine learning models, including those used in surgical video analysis, are inherently limited by the quality of the datasets on which they are trained . Biases introduced during the collection of training data can result in models that are less generalizable . A pitfall is the use of datasets with limited variability, which fail to account for differences in surgical approaches, differences in anatomy, or even institutional practices . However, because of the black-box nature of deep learning algorithms, we cannot be sure which features truly influence a model’s predictions . The lack of explainability and interpretability has been one of the reasons hindering implementation . GradCAM has been described as a way to increase the interpretability of deep learning algorithms, especially CNN . As shown in Fig. , the visual representation provides an interpretation of the decisions leading to the algorithm’s choices. Figure shows the individual frames from longer sequences that are input to the LSTM layer. However, GradCAM has limitations, such as problems with localizing multiple occurrences in an image, possible loss of signal because of the up- and down-sampling processes, and problems with the gradients of deep layers of a neural network . It is important to note that our model analyzes not only the spatial features highlighted by GradCAM but also the temporal changes in these regions using the LSTM layer, making decisions based on the entire sequence rather than isolated frames. The use of the LSTM layer allows the model to recognize sequences and patterns over time, which is crucial for distinguishing similar actions with different outcomes . Features that increase both interpretability and explainability are important for gaining the trust of clinicians and helping with the implementation of AI in clinical settings . Future research could focus on methods that incorporate transparency as part of the network architecture or include multiple features simultaneously to increase both interpretability and explainability . Moreover, studies are needed to determine how real-time machine learning feedback impacts the surgical workflow, surgeon attention, performance, and long-term learning. Our study demonstrated that machine learning can be used to automate surgical action recognition and skill assessment. The use of in vivo porcine models enables effective data collection at different levels of surgical performance, which is normally not available in the clinical setting. Future studies are needed to test how well machine learning models developed within a porcine setting can be used to detect errors and provide feedback and actionable skills assessment in the clinical setting. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 13 KB) Supplementary file2 (PNG 849 KB) Supplementary file3 (DOCX 13 KB) Supplementary file4 (PNG 43 KB) Supplementary file5 (PNG 19 KB) Supplementary file6 (PNG 15 KB) Supplementary file7 (DOCX 15 KB) Supplementary file8 (DOCX 14 KB) Supplementary file9 (DOCX 14 KB) Supplementary file10 (DOCX 20 KB)
Scoping review of happiness and well-being measurement: uses and implications for paediatric surgery in low- and middle-income contexts
1c9413ee-d955-41d8-b5f6-a8b6961c9b07
11808877
Surgical Procedures, Operative[mh]
The critical need for paediatric surgical interventions in low- and middle-income countries (LMICs) is well documented, with significant health implications highlighted by recent studies. A 2017 study by Butler et al found that roughly 85% of children in these regions are likely to need surgical care before they reach 15 years of age. This high percentage highlights the urgent need for accessible paediatric surgical services in these areas. In 2015, Bickler et al estimated that over 77.2 million disability-adjusted life years (DALYs) could be averted per year through essential surgical procedures. Furthermore, the WHO estimated that, in 2019, 51.8 million DALYs were due to congenital abnormalities, ranking them as the 10th leading cause of DALYs that year. Globally, addressing general surgical needs has been the focus of a large number of organisations. In 2016, Ng-Kamstra et al identified 403 surgical non-governmental organisations (s-NGOs) working across 139 LMICs. However, adequate capacity for paediatric surgical intervention remains a problem in LMIC contexts. Furthermore, Krishnaswami et al argue that robust needs assessments and standardised measures of impact and quality of care should guide effective partnerships between s-NGOs and local institutions in LMICs. Despite efforts to improve paediatric care, the lack of a standard method for evaluating postsurgical happiness and well-being complicates resource allocation. Conventional outcome measures, such as clinician-reported and observer-reported measures, often fail to consider critical aspects like happiness and well-being. This highlights the need for tailored patient-reported outcome measures and patient-reported experience measures that can more accurately address these dimensions. Many health interventions’ economic assessments rely heavily on gross domestic product (GDP) as the primary measure. GDP, which quantifies the monetary value of all goods and services produced within a country, correlates with specific health outcomes related to better infrastructure and services available in wealthier nations. However, GDP does not account for non-financial aspects of happiness and well-being, leading to a shift towards more holistic measures that consider the broader impacts of medical interventions. While many indices evaluate well-being at the international and national levels, they often fail to connect to medical intervention contexts directly. This multitude of options has led to significant ambiguity across evaluations of intervention outcomes. Without consistent measures, it is challenging to compare the efficacy of different interventions or to justify the distribution of resources, particularly in LMICs. This lack of standardisation hinders the ability to identify areas most needing improved surgical services. The current evaluation methods often yield ambiguous data, lacking spatial and temporal precision. Due to this ambiguity, the effect of increasing investment in global surgical capacity remains largely unknown, hindering the identification of specific areas within countries most in need. Further compounding these issues, the specificity of well-being evaluations varies: general well-being indices may consider a greater breadth of well-being domains. In contrast, health-related well-being assessments may only consider indicators directly associated with physical or mental health.
This diversity in methodologies, from specific surgical impact assessments to broader happiness indices, emphasises the need for a more integrated and comprehensive evaluation tool that links paediatric surgical intervention to greater individual and population well-being. This scoping review aims to map and compare existing happiness and well-being indices methodologies and examine their application to paediatric surgical interventions in LMICs. This review seeks to highlight effective practices and identify gaps or overlooked measures by organising and contrasting different methods. The insights garnered are intended to support the development of industry standards for assessing paediatric surgical needs and inform policymakers about the broader implications of healthcare disparities. Ultimately, this could lead to a more informed and equitable allocation of healthcare resources, enhancing the well-being of children in LMICs. Rather than a traditional systematic review focussed on outcomes, a scoping review was conducted to explore existing methodologies for happiness and well-being indices and examine their application to paediatric surgical interventions in LMICs. Scoping reviews follow five key steps: identifying the research question, identifying relevant studies, study selection, charting the data, and data summary and synthesis. This study adhered to the methodological framework developed by the Joanna Briggs Institute, along with the methodological updates by Peters et al , and followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. Stage 1: identifying the research question The principal research question of this scoping review is: which methodologies are currently used to measure happiness and well-being in populations? In line with Peters et al’s guidance, a secondary research question was included to specifically address the context of paediatric surgical interventions in LMICs: how are indicators of happiness and well-being used to assess the needs and impacts of paediatric surgical interventions in LMICs? Stage 2: identifying relevant studies We conducted our literature search via multidisciplinary electronic databases, including PubMed, ScienceDirect and Google Scholar, and bibliographies of relevant studies . Our search was limited to literature published in English no earlier than the year 2000. The time restriction ensures the inclusion of studies with up-to-date data, modern healthcare frameworks and relevant evaluation tools that are essential for assessing current global health challenges. We included search terms relating to happiness and well-being, happiness index methodologies, well-being index methodologies, surgical intervention, paediatric surgical intervention, LMICs, happiness in health and development, and surgical intervention impact. These search terms were selected to capture literature relating to existing methodologies for happiness and well-being indices used in global health, LMIC contexts, paediatric surgical contexts and any literature tangentially associated with these topics. The search terms were used consistently across all three databases, and two researchers searched each database separately. This search strategy was initiated in October 2023 and continued until April 2024. In some cases, particularly for sources surrounding national and international well-being indices, a secondary search was conducted to clarify methodologies.
Any sources identified in this secondary search went through the same data charting steps as sources from the initial search. Stage 3: study selection Studies were included if they met at least one of the specified inclusion criteria and excluded if they met any exclusion criteria, ensuring they were relevant to the research questions, especially regarding LMICs, surgical procedures and well-being or happiness outcomes. Eligible sources included research articles, review articles and technical reports.
Inclusion criteria
Paediatric surgical interventions: studies on paediatric surgical interventions specifically in LMIC settings, particularly those that assess well-being or happiness outcomes.
Health and well-being measurement: research in global health contexts that involves measurements of well-being or happiness, including subjective and objective approaches.
Happiness and well-being indices: methodologies or indices (eg, Gross National Happiness) measuring happiness or well-being, emphasising applications in LMICs and relation to paediatric health and surgery.
Surgical needs and outcomes assessments: studies assessing surgical needs or outcomes in paediatric surgery, focussing on well-being and happiness as measured or implied outcomes.
Specific conditions and populations: studies targeting particular conditions (eg, congenital heart disease (CHD), cleft lip and palate (CLP)) within LMICs and their impact on well-being and happiness in children.
Exclusion criteria
Studies published in a language other than English.
Studies with unclear methodologies or data sources.
Undefined or poorly defined concepts of happiness or well-being, when applicable.
Studies primarily focussed on socioeconomic indices.
Studies emphasising surgical techniques over intervention outcomes.
Studies published before the year 2000.
Stage 4: charting the data Two reviewers (JH and CP) independently extracted data, including study title, authors, publication year, country of origin, aims, population, sample size, methodology and key findings. This approach was piloted in three studies to ensure the extraction was consistent with the research question (CD). Data for each category was compiled into an Excel spreadsheet for validation and coding. Stage 5: data summary and synthesis The fifth and final stage summarises and reports findings, which are presented in the subsequent section. Patient and public involvement No patient or public level data were used in this study.
As depicted in , the study selection process began with identifying 51 records, 48 sourced through database searches and 3 through other means. After removing duplicates, all 51 unique records were screened, with none being excluded at this stage.
Subsequently, the full texts of these 51 records were assessed for eligibility. During this phase, 23 studies were excluded for various reasons, including focussing on the economic impacts of surgery rather than well-being, clinical techniques for specific procedures rather than the broader impact of the intervention, or inapplicability to the LMIC context. Ultimately, 28 studies met the inclusion criteria and were included in the qualitative and quantitative synthesis . Among the included studies, 39% focussed exclusively on lower-middle-income countries, 14% on upper-middle-income countries and 10% on high-income countries. An additional 25% covered multiple income categories, including high- and low-income settings, providing a broader global perspective. Furthermore, 10% of studies were theoretical or methodological, with no specific geographic sample. Geographically, South Asia appeared most frequently, with representation in nine studies, followed by Southeast Asia (three studies) and Europe (two studies). North America and East Asia were the least represented, with only one study each. Additionally, eight studies covered multiple regions, and four had no specific geographic focus due to their study design. In terms of well-being indices, the Bhutan Gross National Happiness Index (BGNHI), Organisation for Economic Co-Operation and Development Better Life Index (OECD BLI), Human Development Index (HDI) and Happy Planet Index (HPI) were the most frequently referenced. However, none specifically addressed surgical or child-centred measures. These general indices were chosen for their broad applicability to global health. The review identified two primary types of well-being measures: subjective and objective. Subjective measures were used in 18% of studies, objective measures in another 18% and a combination of both in 64%, illustrating diverse methodological approaches. Health emerged as a critical indicator in 27 studies (96%), underscoring its central role in assessing well-being and happiness outcomes. Methodologies currently used to measure happiness and well-being in populations When considering methodologies for measuring population happiness and well-being, we identified two categories: subjective and objective well-being. Choon et al identified this distinction as ‘inner’ (subjective) versus ‘outer’ (objective) indicators, meaning indicators that relate to an individual’s perceived emotional or physical experience and indicators related to an individual’s environment or physical state, respectively. Many existing indices that measure well-being within global health and development contexts focus primarily on objective indicators. Several prominent non-GDP-based indices and their methodologies are outlined as follows. Bhutan Gross National Happiness The BGNHI is drawn from national survey data and based on four Bhutanese principles: sustainable and equitable economic development, conservation of the environment, preservation and promotion of culture, and good governance. The Gross National Happiness Index (GNHI) was explicitly developed to provide a more holistic alternative to GDP for measuring national well-being and success. Bhutan was the first nation to include happiness as a component of state policy. The index consists of nine equally weighted domains: psychological well-being, health, time use, education, cultural diversity and resilience, good governance, community vitality, ecological diversity and resilience, and living standards.
These nine domains comprise 33 clustered indicators with 124 variables of differing weights: objective indicators are given higher weights, while subjective indicators are given lower weights. While some indicators resemble those of other well-being indices (literacy rates, education, etc), the GNHI is unique in that Bhutan's values and traditions are reflected in several indicators, such as respect for the sacredness of nature. It reflects the Bhutanese philosophy that happiness is more than a feeling or emotional state: it is a concept rooted in the interconnectedness of living beings. The mathematical structure is based on the Alkire-Foster method, where a sufficiency level (rather than a deprivation level) is attached to each variable. The GNHI is then calculated as a value between 0 and 1 using one of the two following equations:

$$\text{GNHI} = 1 - H_n \cdot A_n \quad (1)$$

$$\text{GNHI} = H_h + (H_n \cdot A_s) \quad (2)$$

where $H_h$ is the proportion of the population with a sufficiency score greater than or equal to 66%; $H_n$ is the proportion of the population with a sufficiency score below 66%; $A_s$ is the percentage of domains in which people who are not yet happy experience sufficiency (similar to an 'intensity' value); finally, $A_n$ is the percentage of domains in which not-yet-happy people lack sufficiency. The GNHI has since been cited in several articles exploring happiness and well-being, serving as a basis for new happiness index development.

OECD Better Life Index

The OECD BLI emphasises two well-being categories: current and future well-being. The framework for current well-being has four features that guide the dimensions of the index: (1) a focus on people, meaning the experience and community relations of individuals and households, rather than the economy; (2) a focus on well-being outcomes rather than inputs or outputs, assessed by both objective (non-self-reported) and subjective (self-reported) measures; (3) consideration of the distribution of well-being outcomes across populations; and (4) consideration of subjective experiences as well as objective assessments of well-being. In total, 11 dimensions measure current well-being: health status, work-life balance, education and skills, social connections, civic engagement and governance, environmental quality, personal security, subjective well-being, income and wealth, jobs and earnings, and housing. Future well-being is assessed through indicators of different types of capital, such as economic, natural, human and social capital, which drive well-being over time. The index has a three-level hierarchical structure in which level 1 comprises the individual indicators that form the 11 dimensions, level 2 comprises the 11 dimensions and level 3 is the OECD BLI. Each indicator value is normalised via the equation:

$$I_x = \frac{\text{actual value} - \text{minimum value}}{\text{maximum value} - \text{minimum value}}$$

where the 'actual value' is the country value for the indicator, the 'minimum value' is the global minimum for the indicator and the 'maximum value' is the global maximum for the indicator. A composite index for the education dimension is obtained by averaging the indices for expected years of schooling and mean years of schooling. If the indicator measures a negative aspect of well-being (ie, higher values are worse), the normalisation formula is inverted:

$$I_x = 1 - \frac{\text{actual value} - \text{minimum value}}{\text{maximum value} - \text{minimum value}}$$

The normalised values for all indicators within a dimension are then averaged with equal weights to obtain a single aggregate dimensional value.
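To make the BLI-style normalisation and equal-weight aggregation concrete, the following minimal Python sketch implements the two formulas above. The indicator values and global minima/maxima here are invented for illustration; they are not OECD figures.

```python
def normalise(value, vmin, vmax, negative=False):
    """Min-max normalisation to [0, 1]; inverted when higher values are worse."""
    ix = (value - vmin) / (vmax - vmin)
    return 1 - ix if negative else ix

def dimension_score(indicator_indices):
    """Equal-weight average of the normalised indicators in one dimension."""
    return sum(indicator_indices) / len(indicator_indices)

# Hypothetical 'education' dimension (values and bounds are assumptions):
expected_years = normalise(15.2, vmin=5.0, vmax=23.0)
mean_years = normalise(9.1, vmin=1.5, vmax=14.0)
print(round(dimension_score([expected_years, mean_years]), 3))  # -> 0.587
```

A negative indicator (eg, long-term unemployment) would be passed with negative=True so that a worse raw value maps to a lower index.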
However, the OECD has not adopted a singular method of aggregating the 11 dimensions to obtain the total OECD BLI value. Instead, users of the OECD BLI interface can assign dimensional weights manually. This system reflects an ongoing debate surrounding how best to weight complex multidimensional indices: assigning equal weights incorrectly assumes that each dimension has an equal bearing on well-being, but assigning differential weights risks manipulating results. Balestra et al conducted a study using OECD BLI website data to identify which dimensions are weighted the highest on average. The results show that health, education and life satisfaction are weighted the highest by users of the OECD BLI. Furthermore, a growing body of literature has explored the development of non-compensatory methods to overcome the compensation effect, whereby success in one indicator compensates for a deficit in another indicator in a composite index. Koronakos et al proposed a Multiple Objective Programming assessment framework for the BLI, incorporating public opinion to create weight restrictions and thereby reduce the compensation effect. Carlsen conducted a study using partial data ordering to address the compensation effect in the World Happiness Index (HI), an index calculated by the arithmetic addition of its seven indicators. In doing so, Carlsen considered all seven indicators simultaneously for 157 countries, leading to a different international HI ranking. Thus, the compensation effect is also a concern for how countries are ranked and compared based on their index value.

Human Development Index

The HDI, initially designed by Mahbub ul Haq in 1990, has been implemented by the United Nations Development Programme to measure global development. The HDI consists of four indicators (life expectancy, expected years of schooling, mean years of schooling and per capita income) making up its three dimensions: health, education and income. In the same manner as the OECD BLI, indicator variables are normalised and transformed to a unitless index value between 0 and 1 using the following formula:

$$I_x = \frac{\text{actual value} - \text{minimum value}}{\text{maximum value} - \text{minimum value}}$$

For the income dimension, the same equation is used but with the natural logarithm of each value:

$$I_x = \frac{\ln(\text{actual value}) - \ln(\text{minimum value})}{\ln(\text{maximum value}) - \ln(\text{minimum value})}$$

The HDI is then obtained as the geometric mean of the health, education and income indices:

$$\text{HDI} = (I_{\text{health}} \cdot I_{\text{education}} \cdot I_{\text{income}})^{1/3}$$

Unlike the OECD BLI, equal weights are assigned to each dimension of the HDI, inviting the same criticisms: the assumption that each parameter matters equally to well-being, and concerns over the compensation effect. The HDI has also been criticised as redundant with other measures of human development, or as only of limited use. Ranis et al conducted a study in which the HDI was tested for correlation with 39 indicators across 11 broad domains of human development. The HDI was correlated with only 8 of the 39 indicators, suggesting it is not a strong indicator of broad human development. However, when the same test was performed against under-five mortality and per capita income, two of the most common development indicators, the HDI performed equally as well as under-five mortality and better than per capita income as a measure of broad human development.
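A minimal Python sketch of the HDI computation described above follows. The goalpost minima and maxima and the sample country values are assumptions chosen for illustration, not official UNDP figures.

```python
import math

def index(value, vmin, vmax, log=False):
    """Min-max index on [0, 1]; the income dimension uses natural logs."""
    if log:
        value, vmin, vmax = math.log(value), math.log(vmin), math.log(vmax)
    return (value - vmin) / (vmax - vmin)

def hdi(life_expectancy, expected_schooling, mean_schooling, income_per_capita):
    i_health = index(life_expectancy, 20, 85)             # goalposts assumed
    i_education = (index(expected_schooling, 0, 18)
                   + index(mean_schooling, 0, 15)) / 2    # simple average
    i_income = index(income_per_capita, 100, 75_000, log=True)
    return (i_health * i_education * i_income) ** (1 / 3)  # geometric mean

print(round(hdi(71.5, 12.8, 8.1, 11_200), 3))  # ~0.707 for this hypothetical country
```

Because the geometric mean penalises imbalance across dimensions more than an arithmetic mean would, a very low score in one dimension cannot be fully offset by high scores elsewhere, which partially mitigates the compensation effect discussed above.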
Happy Planet Index

The HPI measures sustainable well-being through three domains: life expectancy, experienced well-being (the average of individual responses ranking oneself on a ladder of life from 0 to 10) and ecological footprint (the average amount of land needed, per person in the population, to sustain typical consumption patterns). The index is calculated as follows:

$$\text{HPI} = \frac{\alpha \cdot \text{life expectancy} \cdot \text{experienced well-being} + \beta - \gamma}{\text{ecological footprint} + \varepsilon}$$

where α = 0.75 and γ = 54.92, both of which are scaling constants; β = 2.92, which ensures the coefficient of variation is equivalent for well-being and life expectancy; and ε = 6.39, which ensures that the coefficient of variation for ecological footprint is equivalent to that of the 'happy life years' measure (life expectancy multiplied by experienced well-being).

Subjective well-being

A growing body of literature emphasises that subjective well-being, meaning well-being as identified by the individual rather than by 'objective' data, is heavily influenced by health status in childhood and throughout life. Arguments have been made for broadening standard population-level health indicators beyond morbidity and mortality to include a third indicator encompassing biological health and 'lived health'. Stucki and Bickenbach referred to this third indicator as 'functioning', intended to capture an individual's capacity and performance with respect to any physical limitations or health conditions. Thus, health is relevant to subjective well-being, and to capture a holistic measure of population health, a more subjective indicator of 'lived health' may be necessary. Subjective well-being is incorporated into existing non-GDP-based well-being indices to varying degrees. The GNHI and the OECD BLI are the most inclusive of subjective measures. However, the GNHI assigns lower weights to indicators with higher levels of subjectivity, and the BLI does not provide any guidance on dimensional weights. The HPI has an 'experienced well-being' category, but it comprises only one subjective indicator, a 'ladder of life', which has been shown to possess good convergent validity with other emotional well-being measures, specifically in children. However, indices specific to subjective well-being typically lack objective measures, resulting in a similar loss of holistic measurement. Additionally, the validity of cross-cultural and cross-national comparisons when relying solely on subjective well-being is controversial. The Pemberton Happiness Index (PHI), developed by Hervás and Vázquez, is a subjective well-being-based index tested across multiple regions to validate its consistency across geographic and cultural boundaries. The index was developed to capture both remembered well-being (a retrospective, memory-based assessment) and experienced well-being (a momentary assessment of the active state of well-being). The final structure consists of 11 items capturing general, eudaimonic, hedonic and social domains of remembered well-being and 10 items capturing experienced well-being. The index value is obtained by adding the scores (0–10 scale) of the 11 remembered well-being items to the score for experienced well-being (the sum of positive items experienced and negative items not experienced) and then dividing by 12. However, the PHI was not designed to be a national index or a tool in development contexts; instead, it was constructed based on existing indices and measurements for clinical contexts. It does not include any 'objective' indicators necessary for creating a holistic index for global health and development needs assessments.
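The PHI scoring rule just described is simple enough to express in a few lines of code. The Python sketch below illustrates only that arithmetic; the example responses are hypothetical, and no claim is made about the published instrument beyond the scoring described above.

```python
def phi_score(remembered, experienced_items):
    """
    Pemberton Happiness Index arithmetic as described in the text.
    remembered: 11 item scores, each on a 0-10 scale.
    experienced_items: 10 pairs (was_experienced, is_positive_item).
    """
    assert len(remembered) == 11 and len(experienced_items) == 10
    # One point for each positive item experienced and each negative item avoided.
    experienced = sum(1 for exp, pos in experienced_items if exp == pos)
    return (sum(remembered) + experienced) / 12

# Hypothetical respondent: 11 remembered scores plus 10 experienced items
# (7 positive items experienced, 2 negative items avoided, 1 positive item missed).
remembered = [8, 7, 9, 6, 8, 7, 9, 8, 6, 7, 8]
experienced = [(True, True)] * 7 + [(False, False)] * 2 + [(False, True)]
print(round(phi_score(remembered, experienced), 2))  # -> 7.67
```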
Choon et al developed an integrated happiness framework for sustainable development based on existing non-GDP-based happiness and well-being indices and the Positive Emotion, Engagement, Relationships, Meaning and Accomplishment (PERMA) psychology model. It consists of eight outer dimensions (environment, education, governance, culture, community, health, safety and economics), based on the GNHI, the OECD BLI and the Malaysia Happiness Index (there was not sufficient literature or empirical evidence available to warrant inclusion of the Malaysia Happiness Index in our study), and five inner dimensions (positive emotion, engagement, relationships, meaning and accomplishments), based on the PERMA model. To test the model, a questionnaire was created with four sections: happiness and value of life, external environment, positive psychology and demographics. Each dimension of the model consists of three questions, and each question, excluding those of the demographics section, is scored on a 7-point Likert scale. The same normalisation method used by the OECD BLI and the HDI is applied to convert the indicators into indices. Bridging the gap between the objective and the subjective has been the focus of an entire branch of literature. van Praag et al assert that subjective well-being, or 'self-reported satisfaction', is a key tool for developing and assessing socioeconomic policy, claiming that subjective questions and responses may be used as proxies for individual satisfaction, and that general domain satisfaction is explainable by objective variables. In other words, rather than treating subjective and objective as two different categories, van Praag et al interlinked them, using a model where general satisfaction, $GS$, consists of six domains of satisfaction, $DS_1, \dots, DS_J$, each dependent on observable, objective variables, $x$. Salameh et al used ordered logit and tobit models to identify socioeconomic determinants of subjective well-being and found that income, education, government effectiveness, absence of perceived corruption and perceived institutional quality improve well-being, while lower trust in family and friends, poor health status, living in rented housing and dissatisfaction with hospital services are negatively associated with subjective well-being. Sujarwoto et al conducted a three-level logit regression study to explore a multilevel data structure with individual, household and district data for Indonesia. The findings show that happiness and life satisfaction are significantly associated with individual, household and district government-level factors. Thus, well-being comprises a complex combination of objective and subjective factors spread across individual, household and institutional domains. In nearly all included studies, health is highlighted as a key indicator of happiness and well-being. For example, Salameh et al identified poor individual-level health and dissatisfaction with health systems as factors that decrease well-being. Similarly, in a study exploring the significance of happiness in relation to health system efficiency, See and Yen found that health system inefficiency and happiness levels are inversely related. Paediatric health has been identified as an important indicator of happiness throughout life.
Sujarwoto et al found that the association between poor childhood health and adult happiness was highly significant, with a 32% magnitude linking the presence of an emotional, nervous or psychiatric episode in childhood to happiness levels in adulthood. Ettinger et al studied how best to support general child well-being through a community-based participatory research study exploring community-rooted definitions of, and approaches to, child and youth thriving. Study participants identified 104 unique items associated with child thriving, sorted into seven domains: a healthy environment, safety, positive identity and self-worth, caring families and relationships, strong minds and bodies, vibrant communities, and fun and happiness. This brings us to our secondary question.

Utilisation of happiness and well-being indicators to assess the needs and impacts of paediatric surgical interventions in LMICs

Research on well-being and happiness within the context of surgical interventions in LMICs is sparse, but the limited existing research suggests that they are strongly associated. For example, Feeny et al found statistically significant improvements in all assessed well-being categories in a study on the non-monetary benefits of cataract surgery. After surgery, the percentage of patients who reported some level of difficulty with autonomy in mobility, self-care or performing activities decreased from 30% to 10%, and the percentage of patients who self-reported health as 'poor' declined from 46% to 6%. The percentage who self-reported mental health as 'very good' or 'excellent' increased from 6% to 51%, the percentage who reported moderate to high levels of anxiety or depression decreased from 88% to 24%, average emotional well-being scores increased from 39 to 73 (on a 100-point scale), average self-assessed life satisfaction increased from 5.1 to 7.6 (on a 10-point scale), average hope values increased from 27.2 to 37.5 (on a 100-point scale) and average self-efficacy increased by 4 points. Results from paediatric-specific surgical interventions similarly link well-being and surgery. Ladak et al conducted a study exploring postoperative health-related quality of life (HRQOL) in children and adolescents with CHD. The study used the PedsQL 4.0 Generic Core Scale to assess domains of physical, emotional, social and school functioning; the PedsQL Cognitive Functioning Scale to explore cognitive functioning; and the PedsQL 3.0 Cardiac Module to assess disease-specific HRQOL. HRQOL was significantly lower in CHD subjects than in their age-matched healthy siblings across all domains, particularly emotional, psychological, physical and school functioning. Similarly, in a study exploring the impact of CLP surgery on adolescent life outcomes, Wydick et al found that children with CLP experience statistically significant losses in indices of speech quality (−1.59σ), physical well-being (−0.32σ), academic and cognitive ability (−0.37σ) and social integration (−0.32σ). The results also show that surgical intervention restores social integration and inclusion, that speech outcomes are vital for social inclusion, and that early surgery produces strong speech outcomes and restores general human flourishing (a composite index of all assessment indices). Subjective social status is also directly associated with happiness, according to a study exploring perceived position on community respect and economic ladders and happiness levels measured in birth cohorts from Guatemala, the Philippines and South Africa.
Thus, linking surgical intervention and social inclusion also links surgical intervention directly to happiness. Well-being and surgical outcomes appear to have a reciprocal relationship: surgical outcomes impact well-being and, according to Ladak et al, non-health-related domains of well-being impact surgical outcomes. In a qualitative study using the Social Ecological Model (SEM), Ladak et al explored parental perspectives on the influence of sociocultural and environmental factors on the HRQOL of CHD patients. The SEM includes intrapersonal and interpersonal, institutional, sociocultural and public policy factors, all of which were found to have a substantial impact on the HRQOL of children following CHD surgery. Thus, understanding and measuring both health-related and non-health-related indicators of well-being is vital to supporting paediatric health outcomes. The link between surgical intervention and well-being also applies to caregivers. Ladak et al found that mothers frequently face detrimental impacts on their social and emotional well-being when serving as the sole primary caregiver to a child with CHD. Evidence suggests surgical intervention can restore well-being levels for caregivers as well as patients. Feeny et al found significant improvements in all measures of well-being for both patients and caregivers after cataract surgery. Specifically, the percentage of caregivers who self-reported health as 'very good' or 'excellent' increased from 13% to 45%, the percentage of caregivers who self-reported mental health as 'very good' or 'excellent' increased from 13% to 57%, emotional well-being scores (on a 100-point scale) increased from 47 to 76, life satisfaction values increased by 1.7 points on a 10-point scale, average values of hope increased from 33.1 to 39.0 (on a 100-point scale) and, finally, self-efficacy increased by 5 points.
This scoping review explores existing happiness and well-being index methodologies and assesses their application to paediatric surgical interventions in LMICs. A key strength of the review is its ability to identify research gaps, offering valuable guidance for future studies. By exploring a wide range of studies, the review captures the multidimensional nature of well-being, showcasing different perspectives that can inform the development of standardised approaches for assessing paediatric surgical needs and addressing broader health disparities. This is particularly relevant for clinicians and stakeholders seeking to improve healthcare equity and understand the impact of surgical interventions on well-being in LMICs. In terms of clinical utility, the findings of this review can guide paediatric surgeons and healthcare providers by emphasising the importance of integrating well-being assessments into clinical practice. Standardising the measurement of happiness and well-being in paediatric surgery could help surgeons assess immediate surgical outcomes and gauge the broader impact of surgery on a child's long-term quality of life. This could facilitate more holistic care planning, tailoring interventions not only to address physical outcomes but also to enhance the emotional and psychological well-being of patients. For example, understanding the cultural and social dimensions of well-being in LMICs can help paediatric surgeons improve communication with families and align treatment goals with the broader needs of the child. However, this review has some limitations. As a scoping review, it does not evaluate the quality of the included studies or synthesise evidence into conclusive statements. The wide range of methodologies examined also results in inconsistencies in study design, sample sizes and measurement tools, which may limit the reliability of conclusions. The scope of the review was confined to research articles, review articles and technical reports in English, potentially excluding valuable insights from non-English sources. Furthermore, the inclusion of diverse methodologies for measuring happiness and well-being may result in high heterogeneity, making it challenging to draw consistent conclusions or recommend standard practices for LMICs and paediatric surgical settings. Finally, while this review emphasises quantitative methodologies, excluding qualitative studies may limit the depth of insights into subjective well-being. This limitation underscores a tension between the study's focus on measurable, quantitative indices and its conclusion advocating for a more comprehensive integration of subjective well-being in clinical practice. Future studies should, therefore, consider incorporating qualitative methodologies to enrich well-being assessments, offering a more holistic view that aligns with the ultimate goal of patient-centred care.
Despite these limitations, this broad review identifies gaps in current research and highlights the need for more standardised, holistic measures that include subjective and objective indicators. By synthesising a diverse range of studies, the review offers valuable groundwork and direction for future research, emphasising areas where more focussed, outcome-driven studies could improve well-being assessment and enhance global health interventions in LMICs. The review highlights the urgent need for improved, standardised methodologies to assess well-being and happiness in paediatric surgical interventions. The lack of consistency makes it difficult to draw definitive conclusions or compare results across studies. While some research links surgical interventions to improvements in well-being, measuring these outcomes alongside surgical interventions is not yet standard practice in paediatric surgery. The adoption of well-being metrics in clinical settings could provide paediatric surgeons with valuable insights into patient recovery and long-term quality of life, allowing for more comprehensive postsurgical care that addresses both physical and emotional outcomes. Moreover, many existing methodologies lack the spatial and temporal precision required to determine where interventions are most needed or to assess their long-term effects, complicating efforts to deliver targeted healthcare. Additionally, most well-being indices focus too heavily on either objective or subjective measures, failing to capture the full range of experiences faced by children undergoing surgical interventions. The lack of cultural adaptability in many of these methodologies further limits their effectiveness in LMIC settings, where cultural differences may influence perceptions of well-being and happiness. This could lead to skewed data and less effective clinical interventions. There is a pressing need to develop robust, integrated and culturally sensitive methodologies that can more effectively assess well-being outcomes following surgical interventions to address these significant gaps. Such methods would improve the understanding of the impacts of healthcare interventions, enabling paediatric surgeons to make better-informed decisions and promote more equitable healthcare delivery. By incorporating well-being assessments into everyday clinical practice, paediatric surgeons can offer more holistic care that improves physical outcomes and enhances the overall quality of life for children undergoing surgery in LMICs. This review sets the stage for future research and calls for concerted efforts to bridge these gaps, reduce disparities and enhance the well-being of children following surgical interventions in LMICs.
Remote consultation in times of coronavirus: information for primary care physicians
Technology offers tools that improve our work as healthcare staff: electronic health records, and wearables that can monitor heart rate, oxygen saturation and weight and can even connect to a smartphone or computer through various technologies (USB, Bluetooth, Wi-Fi), creating a personal area network (PAN). In periods such as the current pandemic, many patients receive large amounts of information (often difficult to understand) while also worrying about their symptoms ('Do I have coronavirus?'), which in some cases produces additional anxiety and concern. Remote consultation (using telematic means) is a potential solution to these problems, and can even prevent infection of patients and healthcare staff. These technological solutions are also useful as a form of triage, classifying the urgency of patients' care needs according to the symptoms they report before the consultation with the physician. The literature also describes the possibility of using these solutions to support decision-making, building classifiers that segment physicians by geographic area, with the capacity to resolve remote specialist consultations, for example in radiology, assessing radiographs and computed tomography scans between the USA, Canada and Iran. Nationwide remote consultation systems have even been established and implemented within the respective national health systems. Remote consultations offer potential advantages to patients, such as avoiding the cost and inconvenience of travelling from one place to another, and they make it possible to access care promptly and when needed. For the health system, remote consultations provide more cost-effective patient care capabilities. In both cases, reducing the likely risk of coronavirus transmission is a further advantage. Considering video consultations alone, some studies associate them with high patient satisfaction, no worsening of the current illness, lower costs and even better treatment adherence. Several studies also show positive results for this type of consultation in postoperative, diabetic, chronically ill and orthopaedic patients, and even in the management of patients with mental health problems, among others. As early as 2015, a study by Armfield identified 27 published studies on the use of the Skype® online calling application and similar technologies in medicine. Some studies show that remote consultations avoided almost 88% of face-to-face visits, especially those related to laboratory results, medical information and medication prescriptions. However, not everything is beneficial: in some cases the remote format proved to be the wrong type of consultation, and some physicians preferred conventional visits.
Beyond the promising capabilities of teleconsultations, it is evident that they are not used in many regions or countries, which tends to pose an operational challenge (sometimes with inadequate patient interaction due to potential misunderstandings) and a technical challenge for the staff in charge of the telematic side. Remote consultation, or teleconsultation, could be an important tool in cases where the physician or the patient is isolated because of suspected or confirmed coronavirus infection. It is also very useful when the patient has symptoms that can feasibly be assessed remotely, when they need information for the management of their (current or chronic) illness, or when they are in a state of anxiety caused by probable contact with COVID-19. Its use is also feasible in cases where the physician and the patients are geographically far apart and the healthcare personnel who should support that population are severely strained (care workload, poor equipment, inability to travel, etc.). Before a remote consultation, it is very important to ask whether this is what the patient really needs, whether it is safe for them, and whether there is sufficient support to carry it out. There are different ways of classifying the need for teleconsultations; a relevant one is by how the interaction between patient and physician takes place. In this sense, the best-known and most widely used classification specifies whether contact with the patient occurs in real time (patient and physician interact at the same time, for example by videoconference: synchronous mode) or in deferred time (patient and physician interact, but not at the same time, for example by email: asynchronous mode). From here on, we will refer only to synchronous teleconsultation and to the guidelines for its appropriate use with patients who need it during the pandemic. For the appropriate use of remote consultation, the type of consultation must be assessed, for example in chronic diseases (in stable patients), administrative procedures (such as sick leave), patient access to information, and triage services, among others. In particular, attention should be paid to any problem that could worsen if the patient leaves home (multimorbidity, advanced cancer, terminal illness or severe disability). The possible difficulty of conveying remotely what we mean must always be considered: the patient may misinterpret our attitude and think that we are in a hurry or angry. To avoid this, it is often better to use video consultations, because the patient can see us and so does not misread our mood. Another strategy is to prepare a written guide on how to conduct remote consultations with patients. It should be highlighted that, within synchronous teleconsultations, two types stand out as the most widely used: the telephone consultation and the video consultation. Although both are the most frequent scenarios, the preparation and technological requirements of each are different.
Regarding the use of these tools, video consultation demands greater preparation from users, and its baseline technology requirements are also higher (computer capacity, mobile phones, Internet bandwidth, etc.). The University of Oxford has prepared a document on the subject that identifies the following steps for an adequate telephone consultation, particularly oriented toward the management of patients with coronavirus:
1. Planning: we must have the relevant patient information and choose the right moment to make the call, since it can be stressful (for both the patient and the physician), as it is not the usual type of consultation. We must also plan where the patient will be referred if the telephone consultation is not successful (the emergency department? is there a dedicated area for managing these patients?).
2. Opening the conversation: we must tell the patient who we are and the reason for the call. It is essential that this happens at the start of the consultation so that the patient feels safe. We must follow confidentiality criteria during the consultation to avoid disclosing data to anyone other than the patient (for example, by asking for their social security number or national ID). We must avoid sounding like "robots" that repeat the same thing in every contact, since patients tend to listen better when they feel they are being contacted personally.
3. Necessary information: this step takes place during the consultation. We must take into account the patient, their associated illnesses and their main symptoms. It is important to ask ourselves: a. Are the symptoms the patient is experiencing due to COVID-19, or to asthma, chronic obstructive pulmonary disease, bronchitis or pneumonia? b. Is there a way to measure their temperature, pulse, glucose or even oxygen saturation? Some chronic patients have monitoring equipment at home. c. Does the patient show any sign of worsening? For example, are they unable to carry out their activities of daily living, do they have dyspnoea, are they confused?
4. Appropriate management: before ending the telephone consultation, we must assess whether this form of consultation was adequate, that is: do we have a probable diagnosis? Are there potential problems that could worsen the patient's current condition? Most importantly, did the patient understand the advice given or the treatment prescribed?
5. Clear advice on symptoms and staying at home: if the patient is suspected of having coronavirus but has no complications at the time of the consultation, the reasons why they should stay at home must be explained. We should advise them clearly but not repetitively, so that they do not lose interest; we can point them to sources of information (for example, pacientesemergen.es) and always tell them that they can contact us if they get worse.
6. Safety net: patients must be reminded that, if they worsen (dyspnoea, chest pain, confusion), they should contact healthcare personnel. This is also the moment for the patient to raise any questions or doubts about the consultation.
7. Conventional consultation: if a diagnosis, or the cause or severity of the patient's symptoms, cannot be validated, the patient should be advised to attend a conventional consultation (having first assessed where) to determine their state of health and provide the necessary care.
For video consultations, unlike telephone consultations, other more technical aspects must be considered. For example, we must assess whether this type of consultation is the most appropriate for our patient (elderly patients tend to have less access to this technology). In many cases a telephone consultation may be sufficient but, in others (for example, patients with a prior diagnosis of anxiety), a video consultation may offer greater benefits and even be more reassuring. There are various options for this type of consultation: some regions already have systems linked to the electronic health record that guarantee an adequate, encrypted connection to prevent the patient's information from being seen by other people. In Denmark, because of the large number of people affected by the coronavirus, the Government authorized the use of non-conventional means (Skype, WhatsApp, FaceTime) for this type of consultation, considering the health problem to be more important than the confidentiality problem. We must check whether there is a good Internet connection, whether the camera works properly, and whether our position within the consulting room (facing the window, with the sun behind us, etc.) makes it easier or harder for us to be seen. It is often highly advisable to make test calls to iron out problems. Unlike telephone consultations, which can be made without an appointment, it is recommended that video consultations be scheduled, since the patient may need adequate Internet access and must also be comfortable to take the call. The initial greeting should be made by waving a hand in front of the camera, since connectivity problems often delay the audio. Likewise, it is important to have other means of contact in case of technical problems (loss of network or electricity, for example). The important phases to keep in mind are similar to those of the telephone consultation, but with one extra item:
1. Planning: establishing a process is very effective in the case of video consultations; although each healthcare professional is free to design their own process, it must anticipate possible contingencies and how to resolve them (system failure, poor Internet access).
2. Necessary information: we must have the patient's history and medication at hand (as in the telephone consultation), as well as find a suitable area for the consultation (a well-lit room).
3. Starting the consultation: similar to the telephone consultation but, in addition to opening with a greeting, we will have to check that the patient can see and hear us properly before continuing.
4. Consultation: the equivalent of points 3, 4 and 5 of the telephone consultation. In addition, we must keep the patient informed at all times of what we are doing, for example that we are taking notes on what they are telling us or that we are reading a prescription. In this way, we prevent the patient from feeling that they are not important or not being taken into account.
5. Closing the video consultation: by this phase we should have enough data to reach a diagnostic impression, and we should be sure that the patient has understood and correctly interpreted our explanations and treatment. In some cases it is necessary to repeat key points or provide some clarification before ending the consultation with a farewell.
It is very important to stress to the patient that they should contact us again if they get worse (assessing options for follow-up and for possible complications).
6. Conventional consultation: as with the telephone consultation, if we consider that we do not have enough data for an adequate diagnosis, or if the patient presents severe symptoms, we must refer them to a healthcare centre appropriate to their symptoms and probable diagnosis. We consider it important to add that a record of what was done during the teleconsultation must always be kept in the patient's medical record (reason for consultation, problems, history, etc.) to ensure continuity of the patient's care process.
• Prepare the consultation in advance, with the patient's data and related information, for example the opening hours for dropping off a urine sample if the consultation concerns a probable urinary tract infection.
• Test calls can be made beforehand with colleagues to work out the best way to conduct this type of consultation.
• Stay relaxed: done properly, this type of consultation can give us a great deal of information about the patient's problem.
• Place the microphone on a firm, flat surface, as close as possible, to improve audio quality and minimize background noise.
• Ask patients to speak clearly, at their normal volume, and to switch their mobile phones off or to vibrate mode.
• Look at the screen; it is not necessary to keep your eyes on the camera.
• Ask the patient to use their camera to assess a lesion, for example eczema.
• Remember to "close the consultation", that is, make sure that the patient has understood it, before saying goodbye and ending the video call. This closing phase is very important because it makes the patient feel engaged with the consultation and improves adherence.
• Leave a record in the patient's medical record of the teleconsultation and of the information obtained during the virtual encounter.
The authors received no funding of any kind for the preparation of this document. The authors have no conflicts of interest.
Proteomic profiling reveals biological processes and biomarkers involved in the pathogenesis of occult breast cancer
7ae1c746-8009-477e-93bf-1e8828a07a83
11812265
Biochemistry[mh]
Occult breast cancer (OBC), characterized by axillary lymph node (LN) or other metastases without any evidence of a primary lesion in the breast on physical, radiological or pathological examination, accounts for 0.3% to 1.0% of breast cancer cases. Given the rarity of OBC, there are few supportive data from large population studies on clinical management strategies. In particular, the treatment strategy for the breast remains debatable. Mastectomy with axillary lymph node dissection (ALND) is usually recommended for T0N+M0 patients. However, accumulating evidence indicates that either breast radiation plus ALND or mastectomy plus ALND could improve the survival outcomes of OBC patients. Some studies have shown that, compared with node-positive breast cancer patients, OBC patients have a better survival outcome, while several have reported the opposite conclusion. To date, few studies have focused on the pathogenesis or origin of OBC, and no specific molecular signature of OBC has been identified. Mitsuo et al. speculated that OBC may originate from ectopic breast tissue present in axillary lymph nodes, but this hypothesis needs to be confirmed by further clinical and basic science research. Given that the surgical pathology reports of tissue specimens removed from OBC patients suggested that the metastases in regional lymph nodes (LNs) originated from the breast, we considered that these metastatic lesions arose from a breast primary tumor (PT) that acquired metastatic ability much earlier than usual. To date, the natural history of OBC remains elusive, and the reason why OBC carries a superior prognosis compared with stage II or III breast cancer is unknown. It appears that, during the early stages of OBC, metastasis occurs unusually early while growth of the primary tumor is suppressed for some reason. Elucidating the exact underlying mechanisms is essential for a better understanding of OBC biology and would help in developing novel therapeutic approaches for breast cancer patients. Proteomics, an advanced technology for the identification and quantification of proteins in different tissue samples, is a vital tool for understanding the pathogenic mechanisms of disease. Since proteins undergo posttranslational modifications, whose changes cannot be captured by genomic or transcriptomic profiling, proteomics provides the most relevant data on protein expression levels. Liquid chromatography-tandem mass spectrometry (LC–MS/MS) based proteomics, a high-throughput technique, has advantages over conventional antibody-based proteomic methods: it not only saves time in proteome experiments and data analysis but also improves the precision of quantitation and the depth of proteome coverage. High-throughput proteomics has been widely utilized to explore the proteome signatures of various diseases. However, the proteomic characteristics of OBC have not been elucidated, and the biology of OBC is poorly understood. In this study, we compared the prognosis of OBC and Non-OBC patients enrolled from the SEER database, investigated the prognostic factors and survival outcomes of OBC patients, and analyzed the effects of different local therapeutic strategies on the outcomes of OBC patients.
Additionally, we used LC–MS/MS based quantitative proteomics to analyze tissue samples of metastatic lymph nodes from 3 OBC patients (OBC-LN) and paired tissue samples of metastatic lymph nodes (Non-OBC-LN) and primary tumors (Non-OBC-PT) from 3 Non-OBC patients. With a data-independent acquisition (DIA) approach, we detected more than 7000 proteins across the 9 samples. We characterized the proteome profiles of the 3 sample groups and explored the differences in protein levels between OBC-LN and Non-OBC-LN samples. Many extracellular matrix (ECM) proteins were expressed at higher levels, and epithelial-mesenchymal transition (EMT) was highly enriched, in the OBC-LN samples according to the functional enrichment analyses. Our data revealed that the EMT program is likely involved in the pathogenesis of OBC. SEER database study Data source and patient selection We conducted a retrospective population-based cohort study based on the SEER database of the National Cancer Institute in the United States. The data we used were released in April 2023 and included data from 17 population-based cancer registries, representing approximately 26% of the USA population. The inclusion criteria were as follows: females aged 18 years or older; diagnosed between January 1, 2010 and December 31, 2020; breast cancer as the first and only cancer diagnosis according to the International Classification of Diseases-Oncology (ICD-O), version 3 site; Derived American Joint Committee on Cancer (AJCC) 7th edition stage (2010–2015), Derived SEER combined stage (2016–2017) or Derived EOD combined stage (2018+) T0-4N1-3M0; unilateral breast cancer; one or more positive LNs; and type of reporting source other than death certificate or autopsy. Patients whose breast surgery type was recorded as unknown were excluded. In total, we screened 121,587 eligible patients. A total of 507 patients (T0N1-3M0) were considered OBC patients, and 121,080 patients (T1-4N1-3M0) were considered Non-OBC patients (Fig. A). Specifically, patients who underwent breast-conserving surgery (breast surgery codes 20–24) were excluded from the OBC cohort because they may not have met the definition of OBC. Variables We extracted detailed data on age, race, year of diagnosis, marital status at diagnosis, laterality, grade, stage, T, N, and M stage, number of examined LNs, number of positive LNs, status of estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 expression, breast subtype, breast surgery type, survival months, and causes of death. The SEER database provides information about whether patients received radiation and chemotherapy, but some details (such as radiation dose and side, and chemotherapy regimens) are absent. Marital status was classified into four categories: married, unmarried (including divorced, separated, single, unmarried or domestic partner), widowed and unknown. The extent of regional LN dissection is not interpreted in the database. According to the National Comprehensive Cancer Network (NCCN) guidelines and clinical experience with breast cancer, 1 to 6 examined/removed LNs was considered sentinel lymph node dissection (SLND), and more than 10 examined LNs was considered axillary lymph node dissection (ALND).
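To make the cohort definitions concrete, the following is a minimal sketch, not the authors' code, of how the OBC/Non-OBC split and the SLND/ALND rule could be derived from a SEER case listing; the data frame and all column names ("sex", "t_stage", "nodes_examined", etc.) are assumptions for illustration.

```r
# Hypothetical derivation of cohort flags from a SEER extract `seer`
library(dplyr)

classify_cohort <- function(seer) {
  seer %>%
    filter(sex == "Female", age >= 18,
           t_stage %in% c("T0", "T1", "T2", "T3", "T4"),
           n_stage %in% c("N1", "N2", "N3"),
           m_stage == "M0") %>%
    mutate(
      # T0 with positive nodes and no distant metastasis ~ OBC
      group = if_else(t_stage == "T0", "OBC", "Non-OBC"),
      # 1-6 nodes examined ~ SLND; >= 10 nodes examined ~ ALND
      node_surgery = case_when(
        nodes_examined >= 1 & nodes_examined <= 6 ~ "SLND",
        nodes_examined >= 10                      ~ "ALND",
        TRUE                                      ~ "Unclassified"
      )
    )
}
```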
Outcomes and endpoints Breast cancer-specific survival (BCSS) was defined as the duration from the date of diagnosis to the date of last follow-up or death from breast cancer, while overall survival (OS) was defined as the time between diagnosis and death from any cause. Both were used to evaluate survival outcomes. Patients who were alive at the time of the last follow-up were censored. Because the April 2023 submission of the SEER database contains complete death data through 2020, the cutoff date of the population-based study was December 31, 2020. Statistical analysis Demographics and clinicopathological features were compared between the OBC group and the Non-OBC group via the Pearson chi-square test or Fisher's exact test. For survival analysis, the Kaplan–Meier method was used to estimate survival curves, and the log-rank test was used to compare survival between groups defined by different variables. Variables that were significant in the univariate analysis, or that correlated with the prognosis of patients with breast cancer according to prior studies, were included in the multivariate analysis, which was conducted through Cox proportional hazards regression. To control for confounders, 1:1 propensity score matching (PSM) was conducted with R software ver. 4.3.2 (R Core Team, 2014), with a caliper of 0.02. The standardized mean difference (SMD) was used to assess whether the baseline characteristics were balanced between the two groups. All the statistical analyses of the public data were performed via SPSS software, version 27.0 for Windows (IBM Corp., Armonk, NY, USA) and GraphPad Prism version 7.0.0 for Windows (GraphPad Software, Boston, Massachusetts, USA, www.graphpad.com ). A p value less than 0.05 was considered to indicate statistical significance. Proteomic profiling Tissue acquisition For the proteomic study, tissue samples were all taken from treatment-naïve breast cancer patients. Only three female patients with a pathological diagnosis of OBC whose tumor tissues were available at the First Affiliated Hospital of Xi'an Jiaotong University were enrolled in this study. To control for confounding factors, we selected three Non-OBC patients with similar baseline characteristics (Table ). Tissue samples of metastatic lymph nodes from the 3 OBC patients (OBC-LN), and paired tissue samples of metastatic lymph nodes (Non-OBC-LN) and primary tumors (Non-OBC-PT) from the 3 Non-OBC patients, were collected from surgical pathology files at the First Affiliated Hospital of Xi'an Jiaotong University. We used formalin-fixed paraffin-embedded (FFPE) tissues, which have been shown to have qualitative and quantitative proteomic features similar to those of fresh-frozen tissue samples and remain an inexpensive archival format. All three OBC patients in our study had axillary LN metastasis, but postoperative pathology did not reveal any cancerous tissue in the breast. Sample preparation for LC–MS/MS analysis Protein extraction and enzymatic digestion were performed on the samples to obtain peptides, which were then separated on an ultrahigh-performance liquid phase system. Before deparaffinization, all the slides were heated for one hour to melt the wax. Then, we used xylene to dewax the FFPE samples and used graded alcohols (100%/95%/85%/75%/50%) to rehydrate the embedded tissue. The tissues were then carefully transferred to 0.2 ml Eppendorf tubes, and four volumes of lysis buffer (1% SDC, 1% protease inhibitor, and 1% phosphatase inhibitor) were added.
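As a companion to the statistical methods above, here is a minimal sketch of the survival and matching workflow in R, not the authors' original script. It assumes a data frame `cohort` with columns time (months), status (1 = event), group ("OBC"/"Non-OBC") and the listed covariates; the covariate names are assumptions.

```r
library(survival)
library(MatchIt)

# Kaplan-Meier estimates and log-rank test between the two groups
km <- survfit(Surv(time, status) ~ group, data = cohort)
survdiff(Surv(time, status) ~ group, data = cohort)

# Multivariate Cox proportional hazards regression
coxph(Surv(time, status) ~ group + age + grade + n_stage + er + pr,
      data = cohort)

# 1:1 nearest-neighbour propensity score matching with a 0.02 caliper
cohort$treat <- as.integer(cohort$group == "OBC")
m <- matchit(treat ~ age + grade + n_stage + er + pr, data = cohort,
             method = "nearest", ratio = 1, caliper = 0.02)
matched <- match.data(m)  # balanced dataset for the post-PSM analyses
```

The `summary(m)` output reports standardized mean differences, which is how balance checks such as SMD < 0.1 are typically read off after matching.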
Ultrasound-assisted extraction, in which power ultrasonics is used to release bioactive components, was then applied. A noncontact ultrasonic technique was used, meaning that the sample container was immersed in the sound field without direct contact between the samples and the ultrasonic source. The samples were subjected to power ultrasonics for 3 min and centrifuged at 12,000 × g for 10 min at 4 °C to remove cell debris, and the supernatants were collected in new centrifuge tubes. For determination of the protein concentration with a BCA protein assay kit, 5 μL of each sample was collected. The procedure was as follows: 1) the standard was added to the wells of the microplate strip at 0, 5, 10, 15, or 20 μL and topped up to 20 μL with sample diluent, with replicate wells measured for each point; 2) 5 μL of each protein sample was added to a well and topped up to 20 μL with sample diluent, also with replicate wells; 3) 200 μL of bicinchoninic acid working reagent was added to each well, and the plate was left at 37 °C for 30 min; 4) absorbance at 570 nm was determined with a microplate reader (the optimal absorption wavelength is 562 nm, and other wavelengths between 540 and 595 nm can also be used); and 5) the protein concentration of each sample was calculated from the standard curve and the sample volume used. According to the determined protein concentrations, an equal amount of protein from each sample was taken for enzymatic digestion and adjusted to the same volume with lysis buffer. Dithiothreitol was then added to a final concentration of 5 mM, and the proteins were reduced at 56 °C for 30 min. Iodoacetamide was added to a final concentration of 11 mM, and the mixture was incubated at room temperature for 15 min in the dark. Trypsin was added at a ratio of 1:50 (protease:protein, m/m), and the mixture was digested overnight; trypsin was then added at a ratio of 1:100 (protease:protein, m/m), and digestion continued for 4 h. LC–MS/MS analysis LC–MS/MS analysis was conducted on a timsTOF Pro mass spectrometer. The peptides were dissolved in mobile phase A (an aqueous solution containing 0.1% formic acid and 2% acetonitrile) and separated on a NanoElute ultrahigh-performance liquid chromatography system. Mobile phase B was an acetonitrile-aqueous solution containing 0.1% formic acid. The liquid phase gradient was programmed as follows: 0–8 min, 9–24% B; 8–12 min, 24–35% B; 12–16 min, 35–80% B; 16–20 min, 80% B. The flow rate was maintained at 450 nl/min. The separated peptides were injected into the capillary ion source for ionization and then analyzed by TOF. The source voltage was set to 1.60 kV, and the parent ions of the peptides and their secondary fragments were detected and analyzed via TOF. The data-independent parallel accumulation serial fragmentation (dia-PASEF) method was used for MS data acquisition because it provides high speed and sensitivity, increasing proteomic depth when minimal sample amounts are available. The MS1 spectra were acquired in the range of m/z 100–1700, and each MS1 scan was followed by ten scans in dia-PASEF mode using an isolation window of 25 m/z. The MS2 spectra covered the range of m/z 400–1200. Proteomic MS data processing The acquired raw MS data were processed with DIA-NN (ver. 1.8).
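To illustrate step 5 of the BCA procedure, a short worked sketch of the standard-curve calculation follows; every number here is hypothetical and serves only to show the arithmetic (linear fit of absorbance against standard amount, then back-calculation of the loaded samples).

```r
# Hypothetical BCA read-out: fit the standard curve, then read off samples
std_ug  <- c(0, 2.5, 5, 7.5, 10)            # assumed ug of BSA standard per well
std_abs <- c(0.06, 0.29, 0.51, 0.73, 0.97)  # assumed A570 readings
curve   <- lm(std_abs ~ std_ug)             # linear standard curve

sample_abs <- c(0.62, 0.41)                 # assumed sample A570 readings
sample_ug  <- (sample_abs - coef(curve)[1]) / coef(curve)[2]
sample_ug / 5                               # concentration in ug/uL (5 uL loaded)
```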
The database used was "Homo_sapiens_9606_SP_20220107.fasta," containing 20,389 protein sequences. The enzyme cleavage specificity was set to Trypsin/P, and up to 2 missed cleavage sites were permitted. The fixed modifications were N-terminal methionine excision and cysteine carbamidomethylation. The theoretical spectral library was constructed by a deep learning algorithm, and a decoy (reversed) library was added to estimate the false discovery rate (FDR) caused by random matching. The FDR for precursor and protein identification was set at 1%. Each identified protein contained at least one unique peptide. We obtained 7271 unique proteins across all the samples, 63 of which were not quantified in some of the samples; thus, a total of 7208 comparable proteins were ultimately retained. Quality control of the raw data, including the peptide length distribution and the number of peptides per protein, was carried out. Approximately 96% of the peptides ranged from 7 to 20 amino acids in length, and most proteins were identified by two or more unique peptides, both of which indicated that the raw data were of high quality (Figure ). Quantification and statistical analysis The search results provide a normalized intensity for each protein in each sample, i.e., the original protein intensity normalized across samples. We obtained the relative quantitation ($R$) from the normalized intensity ($I$) through a centering transformation, where $i$ indexes the sample and $j$ the protein: $R_{ij} = I_{ij}/\mathrm{Mean}(I_j)$. The ratio of the mean relative quantitative values between two comparable groups was taken as the fold change (FC). For example, when comparing protein levels between group A and group B, with $k$ indexing the protein: $\mathrm{FC}_{A/B,k} = \mathrm{Mean}(R_{ik}, i \in A)/\mathrm{Mean}(R_{ik}, i \in B)$. To determine the statistical significance of the differences, the relative quantitative values of each protein in the two sample groups were compared by Student's t test, and the corresponding p value was calculated, with p < 0.05 as the default threshold. To make the data conform to the normal distribution required by Student's t test, we applied a log2 transformation to the relative quantitative values before analysis: $p = \text{t.test}(\log_2(R_{ik}, i \in A), \log_2(R_{ik}, i \in B))$. We considered FC > 1.5 with p < 0.05 as significant upregulation and FC < 1/1.5 with p < 0.05 as significant downregulation. Proteins whose changes in expression level reached these thresholds were regarded as differentially expressed proteins (DEPs). Functional enrichment analysis was performed on Gene Ontology (GO) categories (biological processes, cellular components, and molecular functions) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways to better understand the functions of the DEPs. The analysis was conducted with the clusterProfiler R package (v4.10.0), and the results were visualized with the ggplot2 R package (v3.4.4). We used Fisher's exact test to determine the significance of the DEP enrichment; p < 0.05 was considered statistically significant.
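The quantification scheme above maps directly onto a few lines of R. The following toy re-implementation assumes `intensity` is a proteins × samples matrix of normalized intensities with protein rownames; the sample names are hypothetical.

```r
# R_ij = I_ij / Mean(I_j): divide each protein's intensities by its row mean
relquant <- sweep(intensity, 1, rowMeans(intensity), "/")

grpA <- c("OBC_LN_1", "OBC_LN_2", "OBC_LN_3")          # assumed sample names
grpB <- c("NonOBC_LN_1", "NonOBC_LN_2", "NonOBC_LN_3")

# FC_{A/B,k} = Mean(R_ik, i in A) / Mean(R_ik, i in B)
fc   <- rowMeans(relquant[, grpA]) / rowMeans(relquant[, grpB])
# Student's t test on log2-transformed relative quantities
pval <- apply(relquant, 1, function(r)
  t.test(log2(r[grpA]), log2(r[grpB]), var.equal = TRUE)$p.value)

up   <- names(fc)[fc > 1.5     & pval < 0.05]  # significantly upregulated
down <- names(fc)[fc < 1 / 1.5 & pval < 0.05]  # significantly downregulated
```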
Benjamini–Hochberg correction was used for multiple hypothesis testing, and the adjusted p values were transformed as x = −log10(p value). The DEPs from the three comparisons were combined, and hierarchical clustering was conducted in R (v4.3.2) using the pheatmap package (v1.0.12). Gene set enrichment analysis (GSEA), which evaluates gene expression data at the gene set level, has been shown to provide many insights into cancer-related datasets. A preranked list of genes and FC values was prepared, and annotated gene collections were downloaded from the Molecular Signatures Database (MSigDB v7.0: H (hallmark gene sets), C2 (curated gene sets), and C5 (GO gene sets)). We performed GSEA on the hallmark, GO and KEGG gene sets using the clusterProfiler R package (v4.10.0), and the results were visualized with the GseaVis (v0.0.9) and ggridges (v0.5.5) packages. Normalized enrichment scores (NESs) were calculated; adjusted p values < 0.05 and FDR values < 0.05 were regarded as significant. Pearson correlation coefficients were calculated with the psych package (v2.4.3), and the correlation matrix was presented graphically with the pheatmap package (v1.0.12). We also constructed a protein–protein interaction (PPI) network among the DEPs via the STRING database ( https://cn.string-db.org/ ). We used the full STRING network with a minimum required interaction score of 0.400 (medium confidence). The network was imported into Cytoscape software (v3.9.1), and the degree (DC), betweenness (BC), closeness (CC), local average connectivity (LAC) and network centrality (NC) of every node were calculated with the CytoNCA plugin. Proteins ranking in the top 20 for DC, BC, CC, LAC and NC were regarded as hub proteins. The Kaplan–Meier plotter ( https://kmplot.com/analysis/ ), which can assess the correlation between gene expression and the survival of breast cancer patients using data from GEO, EGA and TCGA, was used to analyze the correlation between overexpression of the hub proteins and the overall survival of breast cancer patients. Furthermore, we used the molecular complex detection (MCODE) plugin in Cytoscape to select clusters of the PPI network, with the default settings: degree cutoff = 2, node score cutoff = 0.2, K-core = 2, and max depth = 100. We screened the top 3 clusters with MCODE scores ≥ 5 and performed KEGG pathway enrichment analysis of the proteins in these 3 subnetworks. Immunohistochemistry Tissue samples of metastatic lymph nodes from 6 patients (3 OBC patients and 3 paired Non-OBC patients) were also collected from surgical pathology files at the First Affiliated Hospital of Xi'an Jiaotong University. The demographics and clinicopathological characteristics of the patients are summarized in Table S7. All the samples involved in the immunohistochemistry (IHC) study were treatment-naïve. FFPE tissues were sectioned and subjected to deparaffinization and rehydration. Antigen retrieval was then achieved by boiling in citrate buffer. The sections were blocked in PBS containing 3% bovine serum albumin (BSA) for 30 min and then incubated with rabbit primary antibodies at 4 °C overnight (Table S8). Several washes with PBS were performed prior to incubation with a horseradish peroxidase (HRP)-conjugated secondary antibody (HRP-conjugated goat anti-rabbit IgG, #GB23303; Servicebio) for 50 min at room temperature.
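For orientation, here is a sketch of the preranked GSEA step using clusterProfiler, with the msigdbr package standing in for a manual MSigDB download; `dep` is a hypothetical table holding gene symbols and fold changes from the comparison of interest, so this is an illustration of the approach rather than the authors' exact pipeline.

```r
library(clusterProfiler)
library(msigdbr)

# Hallmark gene sets as a TERM2GENE table (gs_name -> gene_symbol)
hallmark <- as.data.frame(
  msigdbr(species = "Homo sapiens", category = "H")[, c("gs_name", "gene_symbol")])

# Preranked list: genes sorted by decreasing log2 fold change
ranks <- sort(setNames(log2(dep$fc), dep$gene), decreasing = TRUE)

gsea <- GSEA(ranks, TERM2GENE = hallmark, pvalueCutoff = 0.05)
head(gsea@result[, c("ID", "NES", "p.adjust")])  # e.g., EMT among top hits
```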
The sections were stained with DAB substrate kits (Cat. #G1212, Servicebio) and visualized under light microscopy. For quantification, 2 pathologists with over 15 years of experience scored the immunohistochemistry stains independently. They were blinded to the features of the samples and to each other's assessments and reviewed the slides under light microscopy at 20× power. An IHC scoring system that accounts for staining intensity was used to assess protein quantity. The percentage of positively stained cancer cells was scored as 0 (−, completely absent or < 1% of cancer cells stained), 1 (−, 1%–10% stained), 2 (+, 11%–50% stained), 3 (++, 51%–75% stained), or 4 (+++, ≥ 76% stained). The staining color was scored as 1 (light-yellow particles), 2 (brown-yellow particles), or 3 (brown particles). The final IHC score was defined as the positively stained cell percentage score multiplied by the staining color score. Specifically, for proteins detected mainly in the tumor stroma, we quantified the stained area fraction and average optical density (AOD) via ImageJ 1.54g ( http://imagej.org ). Five fields per slide were randomly selected for analysis. A paired t test was used to examine the difference in the IHC scores of COL1A2, MMP2 and LUM, and a Welch's t test was used to examine the difference in the stained area fraction and AOD of COL1A1 and COL3A1 between the OBC-LN group and the Non-OBC-LN group.
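To make the composite score concrete, a small worked example follows: a case with 60% positively stained cancer cells (percentage score 3) and brown-yellow staining (color score 2) receives a final IHC score of 3 × 2 = 6. The helper function below is an illustrative encoding of the published bins, not part of the original analysis.

```r
# Composite IHC score = percentage score (0-4) x color score (1-3)
ihc_score <- function(pct_positive, color_score) {
  # bins: <1%, 1-10%, 11-50%, 51-75%, >=76% -> scores 0..4
  pct_score <- cut(pct_positive, breaks = c(-Inf, 1, 11, 51, 76, Inf),
                   labels = FALSE, right = FALSE) - 1
  pct_score * color_score
}
ihc_score(60, 2)  # -> 6
```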
Demographics and clinicopathological characteristics A total of 507 OBC patients and 121,080 Non-OBC patients in the SEER dataset were included in our analysis. The demographics and clinicopathological characteristics of the two comparable groups are summarized in Table . Among the 507 OBC patients, the median follow-up period was 45 months and the median age at diagnosis was 60 years; the median follow-up time of the 121,080 Non-OBC patients was 50 months, and the median age at diagnosis was 57 years. Compared with the Non-OBC patients, a greater percentage of the OBC patients were ≥ 60 years old (52.67% vs 43.06%), had a widowed status (13.61% vs 10.19%), had N3 stage disease (14.79% vs 8.48%), had ER negative status (30.97% vs 17.50%), PR negative status (53.85% vs 27.84%), HER2 positive status (28.21% vs 17.61%), and the triple negative breast cancer subtype (18.73% vs 10.87%). In addition, OBC patients were more likely to receive chemotherapy (83.04% vs 69.27%) and less likely to undergo mastectomy (36.09% vs 55.85%) than Non-OBC patients. These data demonstrated that the demographics and clinicopathological characteristics differed between the two groups. Survival analysis between OBC patients and Non-OBC patients After summarizing the clinicopathological characteristics of the two groups, we used Kaplan–Meier survival curves to assess the 10-year OS and BCSS (Fig. B, C). There were no significant differences in OS (log-rank test: p = 0.755) or BCSS (log-rank test: p = 0.966) between the two groups. Furthermore, we performed Cox regression analysis to determine the influence of clinicopathological characteristics on the survival outcomes of these breast cancer patients. Multivariate Cox proportional hazards analysis (Table ) was conducted based on the results of the univariate analysis. The results indicated that OBC may be an independent prognostic factor for better OS and BCSS in breast cancer patients, and patients in our study with widowed status, higher grade, more positive LNs, and negative ER and PR status appeared to have worse OS and BCSS (HR > 1, p < 0.001). In addition, refusal of chemotherapy or radiation was associated with inferior OS and BCSS.
To further investigate the survival outcomes of OBC and Non-OBC patients, we used PSM to control for confounding. After 1:1 PSM, 489 OBC patients and 489 Non-OBC patients were retained. The SMD of each variable was less than 0.1, indicating that the clinicopathological features were well balanced (Table S2, Figure S2B). Propensity scores were evenly distributed between the two groups (Figure S2A). Compared with Non-OBC patients, OBC patients in the PSM dataset had better OS and BCSS (Fig. D, E; log-rank test: p < 0.01). The 5-year OS rates for OBC versus Non-OBC patients were 82.75% ± 2.01% vs. 68.46% ± 2.65%, and the 10-year OS rates were 76.28% ± 3.02% vs. 55.73% ± 3.52%; the 5-year BCSS rates were 87.01% ± 1.78% vs. 76.46% ± 2.46%, and the 10-year BCSS rates were 84.46% ± 2.18% vs. 68.70% ± 3.13%. Because of the disparity in sample size between the OBC group (507 patients) and the Non-OBC group (121,080 patients), we then carried out a multivariate Cox proportional hazards analysis of the PSM dataset (Table ). Patients with advanced nodal disease (N2–N3) had worse BCSS (N2: HR = 1.925, 95% CI 1.047–3.541, p = 0.035; N3: HR = 1.873, 95% CI 1.077–3.258, p = 0.026). Non-OBC patients had worse OS (HR = 1.910, 95% CI 1.424–2.561, p < 0.001) and BCSS (HR = 1.820, 95% CI 1.284–2.579, p < 0.001). In short, OBC patients had a better prognosis than Non-OBC patients.

Effect of local treatment on the survival outcomes of OBC patients

Owing to the low incidence of OBC, data available to guide clinical management are limited, and the most controversial issue is the treatment of the breast in OBC patients. To identify the optimal local treatment for OBC patients, we generated Kaplan–Meier survival curves (Fig. ). The population was classified into four groups: Radiation + Mastectomy, Radiation only, Mastectomy only, and No breast treatment. The No-breast-treatment group had the worst prognosis, with a 5-year OS of 64.46% ± 4.88% and a 5-year BCSS of 73.88% ± 4.55%. Interestingly, the other three groups had similar 5-year BCSS rates, but their BCSS diverged during subsequent follow-up: the 10-year BCSS rates of the Radiation-only, Mastectomy-only and Radiation + Mastectomy groups were 85.98% ± 3.61%, 95.25% ± 2.69%, and 89.09% ± 3.79%, respectively. However, the differences in OS (log-rank test: p = 0.13) and BCSS (log-rank test: p = 0.17) between OBC patients who underwent mastectomy only and those who received radiation only did not reach statistical significance. NCCN guidelines recommend mastectomy plus axillary nodal dissection or axillary nodal dissection plus whole-breast irradiation for patients with T0, N1, M0 disease, whereas for patients with T0, N2–N3, M0 disease, mastectomy plus axillary nodal dissection should be considered.
Because the NCCN guidelines provide different local treatment options for OBC patients with N1 versus N2–N3 disease, we likewise examined these subgroups separately: there were no significant differences in OS (Figure S3A, log-rank test: p = 0.14) or BCSS (Figure S3B, log-rank test: p = 0.18) between N1 OBC patients who underwent mastectomy and those who received radiation, and no statistically significant differences in OS or BCSS between N2–N3 OBC patients who underwent mastectomy and those who received radiation (Figure S3C, OS: log-rank test, p = 0.76; Figure S3D, BCSS: log-rank test, p = 0.75). Overall, either mastectomy or breast radiation improved the survival outcomes of OBC patients. We then conducted a multivariate Cox proportional hazards analysis of the OBC cohort to adjust for other prognostic factors (Table ). Receiving no breast treatment was associated with unfavorable OS (HR = 2.926, 95% CI 1.369–6.252, p = 0.006) and BCSS (HR = 2.921, 95% CI 1.243–6.867, p = 0.014). No significant differences in OS or BCSS were observed among patients who underwent both mastectomy and radiation, patients who underwent mastectomy only (OS: p = 0.632; BCSS: p = 0.541), and patients who received breast radiation only (OS: p = 0.698; BCSS: p = 0.745). Compared with SLND (1–6 LNs examined), ALND (≥ 10 LNs examined) was an independent protective prognostic factor for OBC (OS: HR = 0.356, 95% CI 0.150–0.848, p = 0.020; BCSS: HR = 0.200, 95% CI 0.067–0.601, p = 0.004). In addition, HER2-negative status (OS: HR = 2.621, 95% CI 1.361–5.049, p = 0.004; BCSS: HR = 2.273, 95% CI 1.081–4.779, p = 0.030) and more than 4 positive LNs (OS: HR > 4, p < 0.05; BCSS: HR > 3, p < 0.05) were both related to an unfavorable prognosis.

Proteomic signatures of metastatic LNs in OBC, Non-OBC and paired primary tumor

To identify the proteome signatures of metastatic LNs in OBC and Non-OBC and of the paired primary tumors, we collected tissue samples of metastatic lymph nodes from 3 OBC patients (OBC-LN), along with paired samples of metastatic lymph nodes (Non-OBC-LN) and primary tumor (Non-OBC-PT) from 3 Non-OBC patients, for a total of 9 samples. All tissue samples were treatment-naïve. To control for confounding, we selected 3 Non-OBC patients with the same subtype, menstrual status, AJCC stage and AJCC N category as the 3 OBC patients. The detailed clinicopathological characteristics and survival information of the 6 patients are summarized in Table . Using LC–MS/MS-based DIA proteomics, we performed quantitative proteomic profiling of the 9 human tissue samples to characterize the proteomic landscape and identify DEPs (Fig. A). A total of 7,208 comparable proteins were identified, and quality control of the raw data was performed (Figure S1). Principal component analysis (PCA) demonstrated a clear distinction among the OBC-LN, Non-OBC-LN and Non-OBC-PT samples (Fig. B). We identified DEPs in each comparison using Student's t test (Fig. C, F; FC ≥ 1.50 or ≤ 0.67, p < 0.05). The Non-OBC-LN vs Non-OBC-PT comparison yielded 260 upregulated and 242 downregulated proteins, and the OBC-LN vs Non-OBC-PT comparison yielded 186 upregulated and 242 downregulated proteins. The two comparisons shared 98 upregulated and 74 downregulated proteins (Fig. D, E). Most of the commonly upregulated proteins were enriched in the immune system (Table S4) and are considered a proteomic feature of lymph node tissue.
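A minimal sketch of the DEP-calling rule stated above (FC ≥ 1.50 or ≤ 0.67 with Student's t test p < 0.05), assuming a hypothetical matrix expr of normalized protein intensities with proteins as rows and named sample columns:

# hypothetical column names for the two groups of three samples each
obc    <- c("OBC_LN_1", "OBC_LN_2", "OBC_LN_3")
nonobc <- c("NonOBC_LN_1", "NonOBC_LN_2", "NonOBC_LN_3")

fc <- rowMeans(expr[, obc]) / rowMeans(expr[, nonobc])        # fold change
p  <- apply(expr, 1, function(x)                              # Student's t test
  t.test(x[obc], x[nonobc], var.equal = TRUE)$p.value)

up_deps   <- rownames(expr)[fc >= 1.50 & p < 0.05]   # upregulated in OBC-LN
down_deps <- rownames(expr)[fc <= 0.67 & p < 0.05]   # downregulated in OBC-LN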
Interestingly, the fewest DEPs were detected between OBC-LN and Non-OBC-LN (131 upregulated and 45 downregulated proteins), indicating that these two types of axillary LN metastases showed the smallest proteomic differences. Furthermore, we performed hierarchical clustering based on the DEPs identified across all samples, which yielded four proteomic clusters (Fig. G). To better understand the functional features of the DEPs in the different clusters, pathway enrichment analysis based on KEGG categories was carried out. Among the significantly enriched pathways, we selected the top five with the highest gene counts and displayed them on the right, color-coded by cluster. Proteins in cluster 1 generally had higher abundance in the Non-OBC-PT group than in the other two groups and were enriched in complement and coagulation cascades, coronavirus disease - COVID-19, regulation of actin cytoskeleton, protein digestion and absorption, and systemic lupus erythematosus. In cluster 2, protein abundance was decreased in the Non-OBC-LN group, and the proteins were associated with protein processing in the endoplasmic reticulum and protein digestion and absorption. Many proteins in this cluster, including HSP90B1, DDOST, SSR1, SSR4, DAD1 and RRBP1, which localize to the endoplasmic reticulum and participate in protein translocation and processing, were upregulated in the OBC-LN group compared with the Non-OBC-LN group, indicating that protein modification and transport were likely more active in OBC-LN samples. In addition, multiple collagens and some integrin subunits, involved mostly in the PI3K-Akt signaling pathway, focal adhesion and ECM–receptor interaction, were elevated in OBC-LN and Non-OBC-PT samples. Focusing on the comparison of OBC-LN and Non-OBC-LN, these results indicated that the functions of intercellular adhesion molecules and the patterns of ECM–cell interaction in OBC-LN samples likely differ from those in Non-OBC-LN samples. KEGG enrichment analysis of cluster 3, in which proteins had lower abundance in the OBC-LN group than in the other groups, yielded only 3 enriched pathways; these proteins were integrin subunits (such as ITGA2 and ITGA3, distinct from those in cluster 2) and laminin subunits (such as LAMA3 and LAMB2). The expression levels of proteins in cluster 4 were increased in both types of ALN samples, and these proteins were enriched mainly in immune-related pathways, consistent with the proteomic characteristics of lymph node tissue.

Differences in proteome profiles between OBC-LN samples and Non-OBC-LN samples

OBC presents as axillary LN or other metastases without a detectable primary tumor in the breast, and its origin and evolution remain challenging to trace. To date, the proteomic characteristics of OBC are poorly understood. Here, we compared the proteome profiles of OBC-LN and Non-OBC-LN samples to explore possible alterations in protein levels during OBC progression. Protein subcellular localization classification (Fig. A) revealed that the 131 upregulated proteins were distributed mainly in the extracellular region (44, 33.59%) and plasma membrane (24, 18.32%), whereas the 45 downregulated proteins were localized mainly in the nucleus (10, 22.22%), cytoplasm (10, 22.22%), extracellular space (8, 17.78%) and mitochondria (8, 17.78%).
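For the hierarchical clustering and per-cluster KEGG enrichment described above, a hedged R sketch (the row-scaled DEP matrix dep_mat and the use of bitr() for gene ID conversion are assumptions; the paper names only the pheatmap package and the KEGG categories):

library(pheatmap)          # heatmap plus row dendrogram
library(clusterProfiler)   # bitr() and enrichKEGG()
library(org.Hs.eg.db)      # human annotation needed by bitr()

# dep_mat: hypothetical row-scaled intensity matrix of all DEPs across samples
ph       <- pheatmap(dep_mat, cutree_rows = 4, show_rownames = FALSE)
clusters <- cutree(ph$tree_row, k = 4)    # cluster label per protein

for (k in 1:4) {
  ids <- bitr(names(clusters)[clusters == k], fromType = "SYMBOL",
              toType = "ENTREZID", OrgDb = org.Hs.eg.db)$ENTREZID
  ek  <- enrichKEGG(gene = ids, organism = "hsa")   # over-representation test
  print(head(ek@result[, c("Description", "Count", "p.adjust")], 5))
}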
Many signaling pathways that promote cell–cell communication involve physical interactions between extracellular proteins, which play a vital role in the development of tissues and organisms. Approximately one-third of the upregulated proteins were localized in the extracellular region, indicating that cell–cell interactions and communication are likely more active in OBC-LN samples. To interpret the functions of the DEPs, we performed functional enrichment analysis using GO and KEGG categories. Compared with Non-OBC-LN samples, proteins with higher expression in OBC-LN samples were enriched mainly in GO terms related to the ECM, extracellular structure and collagen fibrils (Fig. B, Table S5). The proteins enriched in these terms were mainly collagens, including COL1A1 (ratio = 2.19, p < 0.05), COL1A2 (ratio = 1.88, p < 0.05), COL3A1 (ratio = 2.17, p < 0.05), COL8A1 (ratio = 2.81, p < 0.05) and COL5A1 (ratio = 2.09, p < 0.05). These chains constitute multiple collagen types that have repeatedly been correlated with tumor progression and metastasis. Notably, several subunits of the integrin family, which are crucial linkers between the ECM and the cytoskeleton and are correlated with malignant invasion and metastasis of mammary epithelial cells, were upregulated in OBC-LN tissue. Previous studies have demonstrated that silencing of integrin β3 (ITGB3; OBC-LN vs Non-OBC-LN ratio = 1.68, p < 0.05) inhibits TGF-β–mediated EMT and breast cancer metastasis in vivo and in vitro, and that overexpression of integrin αV (ITGAV; ratio = 1.545, p < 0.05) induces breast cancer metastasis via upregulation of PXN. The enriched KEGG pathways of the upregulated proteins were mainly related to protein processing in the endoplasmic reticulum, N-glycan biosynthesis, protein digestion and absorption, protein export, ECM–receptor interaction and proteoglycans in cancer (Fig. C), indicating that protein production and modification were possibly enhanced in OBC metastases and that many ECM-related proteins were elevated. Multiple proteins enriched in the protein processing in endoplasmic reticulum (ER) pathway, such as RRBP1 (ratio = 1.919, p < 0.05), SSR1 (ratio = 1.784, p < 0.05) and SSR4 (ratio = 1.629, p < 0.05), are ER-related proteins associated with protein synthesis and transport. Thus, the protein profile of OBC-LN reflects an active cellular metabolic state, closer intercellular junctions, and a stronger propensity for metastasis. However, the downregulated proteins yielded no significantly enriched KEGG pathways. GSEA against MSigDB was performed with the GO gene sets (Fig. D), KEGG pathways (Fig. E) and hallmark gene sets (Fig. F). GSEA based on the GO gene sets revealed that, compared with Non-OBC-LN samples, protein signatures associated with collagen and collagen fibrils were significantly upregulated in OBC-LN samples, partly in line with the GO annotation enrichment results, whereas proteins associated with chromosome organization and epithelial structure maintenance were downregulated. Interestingly, several biological processes related to immune responses, especially immunoglobulin-mediated immunity, were more active in OBC LN metastases.
GSEA of the KEGG pathways revealed that proteins associated with the Rap1, cAMP and estrogen signaling pathways and with the B-cell and T-cell receptor signaling pathways were downregulated in OBC-LN samples, while the upregulated terms were essentially the same as those in the KEGG enrichment analysis. Moreover, we obtained only three enriched hallmark gene sets: EMT, interferon gamma (IFN-γ) response, and UV response up. Studies have demonstrated that EMT is involved in the migration and invasion of malignant cells during cancer progression. IFN-γ, a major mediator of infection, inflammation and antitumor immunity, plays a controversial role in antitumor immunity: its antitumor or protumorigenic activities are affected by its concentration in the tumor microenvironment. We speculate that EMT and IFN-γ signaling are critical in OBC pathogenesis and that their functions may help explain the early metastasis and the inhibition of primary tumor growth seen during the course of OBC.

Expression profiling of EMT-related genes across all samples

GSEA hallmark analyses were also carried out for the other two comparisons (Fig. A, B). The only enriched gene set shared by all three comparisons was EMT. The leading-edge subset for EMT was obtained, and the corresponding protein expression levels in each sample were visualized (Fig. C). In contrast to the LN metastases, breast PT tissues showed upregulated expression of EMT-associated proteins. Between the two types of LN metastases, the OBC-LN samples generally showed higher expression of these EMT-related proteins.

Hub protein identification

We used the STRING database to construct the PPI network, which captures both functional correlations and physical associations. The DC, BC, CC, LAC and NC of every node were calculated, and proteins ranked by DC value are shown in the network in Fig. A. A correlation heatmap was generated to display the relationships between the top 20 DEPs with the highest DC and several EMT markers (Fig. B). Among these 20 DEPs, only 2 proteins were downregulated in the OBC-LN samples (COL4A1 and LAMA3). The heatmap indicated strong correlations between most of the DEPs and the EMT markers. We selected the top 20 proteins with the highest DC, BC, CC, LAC and NC values and identified the overlapping proteins as hub proteins: COL1A1, COL1A2, COL3A1, MMP2 and LUM (Table ). The correlations between the gene expression levels of the 5 hub proteins and the survival of breast cancer patients were assessed with the Kaplan–Meier plotter (KM plotter, https://kmplot.com/analysis/ ); overexpression of each of the 5 hub proteins was associated with better 10-year OS ( n = 1879, HR < 1, Fig. C). Furthermore, significant subclusters (MCODE score ≥ 5) were identified (Fig. D, E, F), and KEGG pathway enrichment analysis of the proteins in the three clusters was performed (Fig. G, H, I). Proteins in Clusters 1 and 3 were both enriched in ECM–receptor interaction, focal adhesion and the AGE-RAGE signaling pathway in diabetic complications, while proteins in Cluster 2 were related mainly to protein processing in the endoplasmic reticulum. These results indicate that, compared with Non-OBC-LN samples, OBC-LN samples have an active ECM and are likely more active in protein processing, which we consider supportive evidence for early metastasis during the pathogenesis of OBC.
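The hub-protein rule described above (overlap of the top 20 proteins for each of the five centrality measures) reduces to a simple intersection; a sketch assuming a hypothetical data frame cent exported from the CytoNCA plugin, with one row per node and columns protein, DC, BC, CC, LAC and NC:

metrics <- c("DC", "BC", "CC", "LAC", "NC")

top20 <- lapply(metrics, function(m)
  head(cent$protein[order(cent[[m]], decreasing = TRUE)], 20))

hub_proteins <- Reduce(intersect, top20)   # in the top 20 of all five metrics
hub_proteins                               # here: COL1A1, COL1A2, COL3A1, MMP2, LUM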
Validation of the hub proteins by immunohistochemistry

To further validate the 5 hub proteins identified by proteomic profiling, we performed IHC on tissue samples of metastatic lymph nodes from 3 OBC patients (OBC-LN) and paired samples of metastatic lymph nodes from 3 Non-OBC patients (Non-OBC-LN). Metastases were microscopically apparent in the LNs, with adenocarcinoma cells arranged in clumps or cords in the subcapsular sinus, paracortex and medulla. Stronger staining of the 5 proteins was observed in the OBC cases (Fig. A). Notably, COL1A2, MMP2 and LUM stained positively in the cytoplasm and membrane of the cancer cells, and semiquantitative analysis demonstrated that these 3 proteins were increased in OBC-LN (Fig. B–D; paired t test, p < 0.05). Additionally, COL1A1 and COL3A1 showed mainly tumor stromal staining in the metastatic lymph nodes (Fig. A).
Compared with the Non-OBC-LN group, both the stained area fraction and the AOD of COL1A1 and COL3A1 were greater in the OBC-LN group (Fig. E; Welch's t test, p < 0.05). Interestingly, we also observed cytoplasmic and membranous COL3A1 staining in one of the OBC-LN samples (Fig. G), indicating that COL3A1 expression was elevated not only in the tumor stroma but also in the cancer cells.

OBC, a rare type of breast cancer, initially presents as LN metastasis without a primary tumor in the breast. Given its extremely low incidence, data to guide clinical management are consistently lacking and the understanding of its pathogenesis is limited. Here, we used data from the SEER database to describe the clinicopathological features of OBC patients, evaluate their survival outcomes and determine the optimal local treatment, providing insights for clinical management. In addition, we employed LC–MS/MS-based proteomics to characterize the proteomes of metastatic LNs in OBC and Non-OBC and of paired primary tumors, to better understand the mechanisms of OBC pathogenesis, and we used IHC to validate the hub proteins identified through proteomics. In the population-based study, age, race, grade, stage, ER and PR status, number of examined LNs, number of positive LNs, breast surgery type, breast radiation and chemotherapy were independent prognostic factors for breast cancer; for OBC patients, the numbers of examined and positive LNs were prognostic factors. In addition, widowed status was associated with poor prognosis in the whole population, which may reflect psychological distress: studies have shown that women who are widowed or divorced are prone to generalized anxiety disorder, and NCCN guidelines for cancer of unknown primary (CUP) highlight the psychosocial distress of patients, who often feel anxious and depressed after receiving a diagnosis of an unknown primary cancer. Psychotherapy is therefore also indispensable in clinical management. Our data indicated that OBC patients had better survival outcomes than Non-OBC patients after controlling for confounders, in line with some prior analyses. Regarding local treatment of OBC patients, various authors have reported similar OS and BCSS between patients who received breast radiation and those who underwent mastectomy, but a single-center retrospective study reported that patients who underwent mastectomy plus ALND had superior disease-free survival compared with those who received breast radiation with ALND. In our study, we did not observe any significant survival benefit of mastectomy over breast radiation, whereas OBC patients distinctly benefited from ALND compared with SLND. Our data therefore suggest that either breast radiation plus ALND or mastectomy plus ALND is recommended for OBC patients. Given the small number of N2–N3 OBC patients enrolled in our population-based study, multicenter studies with larger sample sizes are still needed for further analysis. To explore the natural history of OBC and investigate how metastases can present without a detectable primary breast tumor, high-throughput quantitative proteomics was conducted on OBC-LN, Non-OBC-LN and Non-OBC-PT samples.
We found that, compared with Non-OBC-LN samples, collagen-related and ECM-related proteins were significantly upregulated and EMT was highly enriched in OBC-LN samples. These results indicate that the proteome of OBC LN metastases features a highly active ECM and a strongly enriched EMT signature, providing a better understanding of this rare disorder. EMT, especially type 3 EMT, contributes to the invasion and migration of carcinoma cells and plays an important role in the metastasis of various cancers. During this process, carcinoma cells transition from epithelial to mesenchymal phenotypes and become capable of disseminating to distant tissues. Proteolytic degradation of the ECM and collagen secretion are necessary steps in EMT and metastasis: adhesion and junctions between carcinoma cells are lost, and cells invade adjacent normal tissue by breaking down the basement membrane and ECM. They then intravasate into the bloodstream or lymphatic system and become circulating tumor cells (CTCs). CTCs undergo phenotypic transition, even a gradation of phenotypic states, presenting intermediate epithelial/mesenchymal (E/M) states; these cells maintain the ability to adapt to various microenvironments during the multiple phases of metastasis, and a minority of them survive and form micrometastases. Single-cell transcriptomic analysis has shown that partial EMT localizes to the leading edge of the PT, close to the surrounding tumor stroma, and partial EMT has been considered an independent predictor of LN metastasis. These findings suggest that EMT likely plays a vital role in promoting the migration of carcinoma cells within PTs and thus induces tumor metastasis. In the present study, many EMT-associated genes showed their highest protein abundance in Non-OBC-PT samples, indicating the potential for malignant cell migration and invasion (Fig. C); compared with the LN samples, the PT samples showed a predominance of the EMT program. Collagen biosynthesis, metabolism and binding, and ECM structural organization were highly enriched in OBC-LN samples compared with Non-OBC-LN samples, and the EMT process was highly upregulated in OBC-LN samples. These results indicate that OBC LN metastases possess an active ECM and an expanded EMT program, demonstrating a stronger tendency toward migration and dissemination. Thus, the initiation and progression of OBC can possibly be interpreted as metastasis occurring before the primary tumor grows into a mass identifiable by physical examination or imaging. With the activation of EMT, these cells have a strong ability to escape from the primary site and migrate into normal tissues. Owing to the early occurrence of metastasis during the course of OBC, the number of CTCs is presumably quite small, and CTCs must then withstand systemic immune defenses, a lack of supportive nutrients, and shear forces in the circulation. Thus, CTCs must possess a superior ability to metastasize during the pathogenesis of OBC, probably adopting more complex E/M states to survive in new environments such as axillary LNs, blood vessels or distant organs. The hub proteins we identified and validated are mostly involved in the EMT program. Specifically, COL1A1 has been shown to induce EMT in various invasive and metastatic tumor cells, including human breast carcinoma cells. Type I and type III collagens and MMP-2 are considered mesenchymal proteins and are more strongly produced when EMT initiates.
Upon stimulation with transforming growth factor-β (an EMT inducer), MMP2 expression increases, promoting carcinoma cell invasion through proteolytic degradation of the matrix. Interestingly, the effect of lumican on tumor progression is debated: studies have demonstrated that lumican can inhibit the migration and invasion of tumor cells and induce EMT/MET reprogramming. Three of the five hub proteins are members of the collagen family: COL1A1, COL1A2, and COL3A1. During tumor progression and metastasis, the content and distribution of collagen are altered and the ECM undergoes structural changes. Collagen type I, composed of COL1A1 and COL1A2, frequently promotes tumorigenesis in multiple cancers, not only by regulating EMT and cell proliferation but also by encouraging tumor angiogenesis and modulating cancer stemness. Liu et al. reported that elevated COL1A1 inhibits cell apoptosis in cervical cancer tissues by affecting the caspase-3/PI3K/AKT pathways. However, the roles of collagen type I in cancer development are tumor type– and tissue type–dependent, as collagen type I sometimes represses cancer progression. Brisson et al. reported that collagen type III deficiency induces a tumor-favorable microenvironment with denser collagen, promoting the malignant progression of human breast cancer in vivo and in vitro. Collagen is the major component of the ECM. Fibrillar collagen crosslinking contributes to ECM remodeling and plays a vital role in breast tumor progression, and the levels of collagen-crosslinking enzymes such as lysyl oxidase (LOX), which mediates collagen crosslinking in tumor tissues and thereby promotes tissue fibrosis and focal adhesion, are elevated in breast cancer tissues. In the present study, we detected even higher expression of many collagens (COL1A1, COL1A2, COL3A1, COL5A1 and COL8A1) in OBC-LN samples, and several proteins upregulated in OBC-LN samples were highly enriched in ECM organization, extracellular structure organization and collagen fibril organization. We speculate that carcinoma tissue in OBC LNs has unique collagen alignments and ECM structures that might function differently in tumorigenesis. Importantly, LOX expression was higher in OBC-LN samples, although the difference was not significant (ratio = 2.375, p = 0.10). Moreover, epithelial discoidin domain-containing receptor 1 (DDR1), one of the collagen receptors, was upregulated in OBC-LN samples (OBC-LN vs Non-OBC-LN ratio = 1.929, p = 0.01). DDR1 is a crucial regulator of cell proliferation and of the response to the ECM, and it promotes collagen fiber alignment to induce immune exclusion and encourage tumor growth in triple-negative breast cancer. We suspect that during the early stage of OBC progression, collagen synthesis and crosslinking are extensively enhanced, and that the active ECM, with altered stiffness, cooperates with oncogenes to drive the malignant invasion of mammary epithelial cells and the premature metastasis of carcinoma cells. In addition to their roles in EMT, MMP2 and LUM are involved in other biological processes correlated with tumor development and progression. MMP2 belongs to the gelatinase family because it can degrade denatured collagen (gelatin). Previous studies have shown that MMP2 is involved in cancer progression and angiogenesis in many settings, but the underlying mechanisms remain to be determined.
A meta-analysis revealed that MMP2 overexpression was associated with poor prognosis and a high risk of metastasis (OR = 2.69, P = 0.005). Littlepage et al. reported that MMP2 contributes to the progression of prostate cancer, and Du et al. observed that MMP2 deficiency resulted in tumor cell apoptosis and extended the survival of mice. In addition, increased LUM expression has been detected in colon, gastric and pancreatic cancer and in melanoma. Interestingly, studies have shown that LUM inhibits breast tumor growth and progression partly by reversing the EMT program, whereas in other cancers it contributes to tumor development by regulating cell growth, migration, invasion and adhesion. Furthermore, Karamanou et al. reported that LUM reduces the expression of matrix metalloproteinases (including MMP-7 and MMP-14). We found that the expression levels of MMP2 and LUM in OBC-LN tissues were 2.17-fold and 2.22-fold those in Non-OBC-LN tissues ( p < 0.05), respectively (Table ). Thus, we suspect that the MMP2–LUM interaction may play an important role in the pathogenesis of breast cancer. In line with several prior studies, our results suggest that OBC patients tend to have a more favorable prognosis than stage II–III Non-OBC patients. The apparently indolent biological behavior of OBC may be explained as follows. First, OBC patients have a lower tumor burden than overt breast cancer patients of the same stage. Second, our study demonstrated that the ECM components of OBC LN metastases differ from those of Non-OBC-LN samples; this unique ECM organization may function distinctly in the tumor microenvironment, exerting a protective effect on the prognosis of OBC patients. Specifically, overexpression of the 5 hub proteins we identified correlated with favorable prognosis in breast cancer patients (Fig. C), and all 5 were overexpressed in OBC-LN tissue samples. In addition, we found that IFN-γ signaling was upregulated, immunoglobulin molecules were overexpressed, and immunoglobulin-related pathways were highly enriched in OBC-LN samples (Figs. F, D, F). IFN-γ signaling has been proven to mediate diverse host responses that contribute to antitumor immunity, and some immunoglobulins secreted by tumor-infiltrating B cells promote antibody-dependent cellular cytotoxicity or complement-dependent cytotoxicity in the tumor microenvironment, delaying tumor growth. Cancer-associated stromal cells in the active ECM may interact with tumor-infiltrating immune cells and other immunomodulatory molecules, thereby improving antitumor immunity in the OBC tumor microenvironment and leading to shrinkage of the primary tumor. However, further studies are required to determine the underlying mechanisms. Prior studies on OBC were mainly population-based observational studies focused on describing clinicopathological characteristics, analyzing survival outcomes and recommending breast treatment. For CUP, molecular profiling and gene sequencing have been used to identify the tissue of origin and to search for potential therapeutic targets. Here, through proteomic profiling, we identified hub proteins and key pathways associated with OBC LN metastases and constructed a protein–protein interaction network of the DEPs in the OBC-LN vs Non-OBC-LN comparison; the ECM and EMT play crucial roles in the pathogenesis of OBC.
Next, we will focus on the identified hub proteins and use molecular biology and molecular mechanics approaches to elucidate the effects of these molecules on ECM structure and the EMT phenotype, with the aim of modeling the distinctive clinical presentation of OBC, building research models, and clarifying its pathogenesis. Our study has several limitations. First, the sample sizes of the proteomic and IHC studies were small owing to the rarity of OBC; this increases the likelihood of type II errors in DEP identification, and the limited samples are not fully representative of the entire population, which may bias the results and weaken the conclusions. We will prospectively collect more treatment-naïve OBC samples through multicenter cooperation, and the biomarkers and key pathways identified in this study will be validated in a larger cohort. The underlying mechanisms of OBC pathogenesis also remain to be determined; because of the small sample size of our proteomic study, we were unable to perform genomic and transcriptomic studies, so prospective sample collection is needed to enable further multiomics analyses. In addition, because the radiation therapy data in the SEER database are not sufficiently detailed, radiation records of "None/Unknown" and "Recommended, unknown if administered" were treated as "did not receive radiation therapy"; thus, the effect of radiation on patient prognosis could be underestimated. In summary, we found that OBC patients had a more favorable prognosis than Non-OBC patients and that either breast radiation plus ALND or mastectomy plus ALND is recommended for OBC patients. Proteomic analysis suggested that, during the early course of OBC, metastasis occurs prematurely, before the primary tumor can be detected, possibly because of the active ECM and the involvement of the EMT program. Our study provides a novel perspective on the pathogenesis of OBC, and further studies are required to confirm our findings.

Supplementary Material 1: Supplementary tables (Tables S1–S8).
Supplementary Material 2: Supplementary Figure 1 (Figure S1).
Supplementary Material 3: Supplementary Figure 2 (Figure S2).
Supplementary Material 4: Supplementary Figure 3 (Figure S3).
Endothelial response to blood-brain barrier disruption in the human brain
The blood-brain barrier (BBB) is a complex network of multiple cell types that lines the neurovasculature. It plays a crucial role in safeguarding the central nervous system (CNS) from harmful substances and maintaining an environment optimal for neuronal function. The cerebral endothelial cells (ECs) are a key component of this barrier, forming a continuous layer that restricts transcellular and paracellular transport through specialized tight junction (TJ) complexes, characteristic suppression of transcytosis, a dense basement membrane, and selective membrane transport proteins for essential nutrients and metabolites. Cerebral ECs are integral to CNS homeostasis. Loss of barrier integrity has been implicated as a secondary mechanism of neuronal injury in acute neurological disease states, ranging from traumatic brain injury (TBI) to ischemic stroke. Animal studies have utilized electron microscopy to examine how sudden BBB disruption results in ultrastructural changes to ECs at the cerebral microvasculature, including the breakdown of TJs, increased transcytosis, and the breakdown of the basement membrane. These structural changes are part of the "barrier breakdown" that results in pathological neurovascular permeability and contributes to the cerebral edema that serves as a secondary mechanism of neuronal injury in these disease states. While this pathological permeability might be reversible, little is known about the process of barrier repair and the return to BBB homeostasis in humans. Animal models have shown that acute BBB disruption induces cerebral ECs to alter transcription of genes related to intercellular adhesion, cytoskeletal organization, and attachment to the extracellular matrix. This implies that ECs are sensitive to barrier compromise and likely play a role in a repair process that involves structural reorganization of their attachments to each other and to the surrounding environment. Studying how human cerebral ECs respond to acute BBB disruption could elucidate how barrier integrity is restored, how CNS homeostasis is regained, and how to mitigate permanent neurological injury in these disease states. The use of low-intensity pulsed ultrasound with microbubbles (LIPU/MB) has emerged as a technique to enhance the brain concentrations of systemically administered drugs for the treatment of tumors and other CNS diseases. In previous reports, including ours, LIPU/MB via a skull-implantable ultrasound array (the SonoCloud-9 or SC9, Carthera) was used to induce temporary BBB disruption in patients with recurrent glioblastoma (GBM). This method has proven to be a safe, reproducible, and feasible means of enhancing concentrations of multiple drugs in the human brain. Using contrast-enhanced magnetic resonance imaging (MRI), we showed that BBB opening within brain regions targeted by LIPU/MB (hereafter referred to as "sonication") resolves rapidly, as permeability to gadolinium contrast was mostly reduced within an hour after this procedure. Having established the feasibility and kinetics of sonication-induced BBB opening, we leveraged LIPU/MB as a means of studying acute BBB disruption within the human brain in a controlled and consistent timeframe. Through our phase I clinical trial NCT04528680, we used intraoperative sonication to induce transient BBB disruption in patients undergoing resection of recurrent GBM.
After opening the BBB, we sampled noneloquent sonicated peritumoral brain within minutes of the procedure (when the BBB was most permeable) and again at approximately 45–60 minutes afterward, along with nonsonicated control tissues. Using single-cell RNA sequencing (scRNA-Seq) and transmission electron microscopy (TEM), we then studied the effects of ultrasound-mediated BBB disruption on the transcriptome and ultrastructure of microvascular ECs in the human brain.

Transcriptional response of human cerebral endothelium to ultrasound-mediated BBB disruption

We used scRNA-Seq to characterize the transcriptional response of human cerebral ECs to acute BBB disruption via sonication. As described previously, BBB disruption within sonicated peritumoral brain was mapped using fluorescein and fluorescence-based microsurgery ( ). Fluorescein, which is normally restricted from crossing an intact BBB, accumulated in areas where the BBB was disrupted by LIPU/MB ( , ). Thus, sonicated brain with increased BBB permeability exhibited notable fluorescence compared with adjacent nonsonicated brain not targeted by the ultrasound. A summary of the intraoperative LIPU/MB procedure and peritumoral biopsy process is shown in . Each peritumoral brain sample was processed fresh into a single-cell suspension and subjected to scRNA-Seq library preparation (1 nonsonicated and 1 late-sonicated peritumoral brain sample per patient; N = 6 patients, 12 brain samples in total).

Unsupervised analysis yielded 14 distinct gene expression–based cell clusters, designated as oligodendrocytes, microglia, T cells, ECs, monocytes, pericytes, oligodendroglial progenitor cells, natural killer cells, B cells, or glioma/astrocytes. We focused our analysis on ECs given their central role in barrier function at the neurovasculature. We analyzed 2,643 ECs: 1,470 from sonicated and 1,173 from nonsonicated peritumoral brain specimens ( ). A uniform manifold approximation and projection (UMAP) plot of these ECs was generated, with cells labeled according to whether they derived from late-sonicated or nonsonicated control samples ( ).

Gene set enrichment analysis (GSEA) of the EC transcriptomes revealed significant alterations in gene transcription following sonication, affecting several ontology themes of interest in the context of the BBB (adjusted P < 0.05). Notably, there was downregulation of the Gene Ontology (GO) themes Regulation of Endocytosis (normalized enrichment score [NES] = –2.01), Blood Vessel Morphogenesis (NES = –2.08), Cell Matrix Adhesion (NES = –2.13), Abnormality of Cerebral Vasculature (NES = –2.01), Structural Component of Cytoskeleton (NES = –1.96), and Cell-Cell Adhesion (NES = –2.04). Conversely, the theme Active Transmembrane Transporter Activity was upregulated (NES = 2.23) ( ). The heatmap in shows the expression changes of individual genes within these GO themes, comparing sonicated and nonsonicated brain ECs. Notable changes included altered transcription of genes previously implicated in neurovascular biology and barrier function, such as downregulation of GPR4 (log2 fold-change [log2FC] = –0.37, adjusted P = 1.08 × 10⁻¹⁸), a pH-sensing G protein–coupled receptor in cerebral ECs that modulates cAMP signaling and is crucial for cerebrovascular integrity ( , ).
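As an illustration of how such an enrichment analysis can be run, the following sketch ranks EC genes by log2 fold-change and performs GSEA against GO terms with clusterProfiler (the package named later in Methods). The input object ec_markers and all parameter choices shown here are illustrative assumptions, not the study's exact configuration.

# Hedged sketch: GSEA of the sonicated vs. nonsonicated EC comparison.
# 'ec_markers' is an assumed data frame (e.g., from Seurat::FindMarkers)
# with gene symbols as row names and an avg_log2FC column.
library(clusterProfiler)
library(org.Hs.eg.db)

ranked <- sort(setNames(ec_markers$avg_log2FC, rownames(ec_markers)),
               decreasing = TRUE)            # gseGO requires a decreasing ranking

gsea <- gseGO(geneList      = ranked,
              OrgDb         = org.Hs.eg.db,
              keyType       = "SYMBOL",
              ont           = "ALL",         # BP, MF, and CC themes
              pvalueCutoff  = 0.05,          # adjusted-P threshold, as in the text
              pAdjustMethod = "BH")

head(gsea@result[, c("Description", "NES", "p.adjust")])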
Other alterations were in genes associated with selective transcytosis across the BBB. For example, we observed downregulation of transcripts of LDLR, which encodes the LDL receptor expressed in cerebral ECs that mediates transcytosis (log2FC = –0.29, adjusted P = 1.61 × 10⁻⁹) ( , ). Notably, sonication was also associated with significantly altered expression of various genes within the solute carrier (SLC) and solute carrier organic anion (SLCO) superfamilies of membrane transport proteins. These transporters have previously been associated with influx and efflux of various substances across the neurovasculature and maintain a cerebrospinal fluid (CSF) ionic milieu that is conducive to proper neuronal development and function ( ). Notable expression changes within these families included upregulation of SLC38A3 (log2FC = 0.74, adjusted P = 1.14 × 10⁻²⁸) and SLC38A5 (log2FC = 0.28, adjusted P = 3.34 × 10⁻⁴), both coding for transporters specific for nitrogen-rich amino acids that can remove excess glutamine/glutamate from the CSF to the endothelium, likely as a means of avoiding excitotoxicity ( , ); upregulation of SLC7A5 (log2FC = 0.76, adjusted P = 9.47 × 10⁻⁴⁴), a transporter of various neutral amino acids that also plays a role in glutamine/glutamate homeostasis in the CSF ( , ); downregulation of SLC4A7 (log2FC = –0.33, adjusted P = 4.78 × 10⁻¹¹), a sodium/bicarbonate cotransporter responsible for maintaining appropriate ionic concentrations and pH in the CSF ( ); and upregulation of SLCO1A2 (log2FC = 0.55, adjusted P = 4.25 × 10⁻¹⁵) and downregulation of SLCO4A1 (log2FC = –0.31, adjusted P = 2.64 × 10⁻¹⁴), both sodium-independent uptake transporters thought to play a role in drug delivery across the neurovasculature ( – ).

BBB disruption alters EC and basement membrane morphology in brain capillaries

Previous animal studies employed TEM to examine the effects of sonication on the ultrastructure of the cerebral microvasculature and ECs. These studies highlighted structural changes associated with acute BBB disruption, including irregular "opening" of the TJs between ECs that could facilitate paracellular drug delivery following sonication ( , ). We therefore used TEM to study the ultrastructural changes induced by sonication in cerebral ECs of human peritumoral brain specimens. For this, we acquired early-sonicated (within 15 minutes of LIPU/MB), late-sonicated (at least 45 minutes after LIPU/MB), and nonsonicated peritumoral brain biopsies (taken at least 45 minutes after LIPU/MB) from 3 patients (N = 9 tissue biopsies, 3 per patient). Using TEM, we then imaged capillary cross sections from each tissue specimen (nonsonicated N = 17, early-sonicated N = 18, late-sonicated N = 21). These electron micrographs were analyzed by an expert in cell biology and vascular pathology, who conducted a blinded review of the images and was able to classify vessels as sonicated (either time point) or nonsonicated with 100% accuracy.

A spectrum of key morphological features distinguished sonicated from nonsonicated vessels. First, the basement membrane of sonicated capillaries frequently displayed granular or amorphous deposits that disrupted its continuity ( ). Second, sonicated ECs often showed evidence of cytosol rarefaction and disorganization of the cytoskeleton ( ). Third, TJ complexes of sonicated ECs occasionally appeared less "dense" than their nonsonicated counterparts, sometimes with irregular spaces and "opening" of the intercellular cleft ( ).
In line with these observations, our scRNA-Seq analysis revealed that sonication was associated with altered transcription of genes coding for structural components of the basement membrane and the TJ/adherens junction complex. Notable changes included downregulation of COL4A1, coding for the collagen type IV alpha 1 chain (log2FC = –0.34, adjusted P = 7.9 × 10⁻¹⁰), an essential component of the endothelial basement membrane linked to BBB integrity; mutations in this gene have been implicated in intracerebral hemorrhage in mice ( , ). We also noted downregulation of CDH5, which codes for cadherin-5 (log2FC = –0.29, adjusted P = 3 × 10⁻⁷), a major component of the adherens junctions found between cerebral ECs ( , ). Conversely, there was upregulation of CGNL1, which codes for paracingulin, a protein localized to the cytoplasmic region of the apical portion of the TJ/adherens complex of brain ECs (log2FC = 0.46, adjusted P = 8.2 × 10⁻³) ( , , ). We also observed downregulation of ACTB, coding for β-actin (log2FC = –0.86, adjusted P = 2.8 × 10⁻⁶), a cytoskeletal protein whose remodeling has been implicated in reorganization of endothelial TJs during periods of BBB permeability following mechanical stimuli and ischemic injury to the endothelium ( , , ).

In sum, the combined results from our TEM and scRNA-Seq analyses indicate that LIPU/MB-induced BBB disruption is associated with marked changes to the morphology and transcriptional activity of human cerebral ECs that could be related to increased neurovascular permeability. These changes appear to affect the intercellular junctions, basement membrane, and cytoskeleton most strongly.

Ultrasound-mediated BBB disruption alters cerebral endothelial caveolar pit density in a time-dependent fashion

Building on previous animal studies suggesting that enhanced caveolar transcytosis in sonicated capillaries acts as a secondary mechanism of drug delivery across the BBB following sonication ( ), we assessed the density of endothelial caveolae in sonicated and nonsonicated peritumoral brain tissues. Since we previously found that peak BBB permeability occurred within 15 minutes of LIPU/MB ( ) and that barrier integrity returned quickly thereafter ( ), we collected peritumoral brain specimens within this 15-minute window of maximum permeability. We counted well-formed caveolar pits (approximately 40–80 nm in diameter) attached to the basal and luminal membranes of the ECs ( ). Using a linear mixed-effects model, we noted a significant effect of sonication on the frequency of luminal caveolar pits (χ², P = 0.0154). Post hoc analysis showed decreased numbers of luminal caveolae in peritumoral brain collected at the early-sonicated time point compared with the nonsonicated time point (χ², P = 0.0185), with a nonsignificant trend between late-sonicated and nonsonicated ECs (χ², P = 0.0734). In the same mixed-effects model, sonication had no significant effect on the frequency of basal caveolae (χ², P = 0.1049; ). Post hoc analysis likewise showed no significant differences in the frequency of basal caveolae in capillary cross sections when comparing nonsonicated with early-sonicated (χ², P = 0.0983), nonsonicated with late-sonicated (χ², P = 0.3794), or early- with late-sonicated time points (χ², P = 0.6751).
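The mixed-model comparison used for these counts can be sketched as below; the data frame vessels and its columns (patient, condition, luminal_density) are assumed names for per-vessel caveolar densities, not the study's actual variables.

# Hedged sketch: likelihood ratio test for a sonication effect on
# per-vessel luminal caveolar density, with patient as a random intercept.
library(lme4)
library(emmeans)

null_model <- lmer(luminal_density ~ 1 + (1 | patient),
                   data = vessels, REML = FALSE)
full_model <- lmer(luminal_density ~ condition + (1 | patient),
                   data = vessels, REML = FALSE)

anova(null_model, full_model)   # chi-squared likelihood ratio test

# Post hoc pairwise contrasts among nonsonicated, early-sonicated, and
# late-sonicated time points (emmeans is one common approach).
emmeans(full_model, pairwise ~ condition)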
In line with these observations, our GSEA highlighted downregulation of the GO theme Regulation of Endocytosis in sonicated ECs, as noted above (NES = –2.01, adjusted P = 0.0432) ( ). Sonication also altered the transcription of genes related to caveolar transcytosis, including increased expression of MFSD2A (log2FC = 0.59, adjusted P = 1.31 × 10⁻⁶) and decreased expression of CAV1 (log2FC = –0.77, adjusted P = 1.14 × 10⁻³²) ( ). MFSD2A codes for a lysophosphatidylcholine symporter previously characterized as essential for maintaining BBB function and repressing caveolar transcytosis at the cerebral endothelium, while CAV1 codes for a protein component of caveolae ( , ). Moreover, selective enrichment of Mfsd2a in rats was shown to attenuate caveolar transcytosis, BBB permeability, and neuronal injury in the days following experimentally induced subarachnoid hemorrhage ( , ). Taken together, our TEM and transcriptional analyses suggest that, within the 1-hour time frame we explored after BBB disruption, caveolar transcytosis is not enhanced in human cerebral ECs.

BBB disruption by LIPU/MB leads to cytoplasmic vacuoles in ECs

Upon further examination of our electron micrographs, we observed that sonicated ECs demonstrated large cytoplasmic vacuoles more frequently than nonsonicated ECs. These structures varied greatly in size but were much larger than, and distinct from, the membrane-bound caveolae noted previously ( ). To determine whether these vacuoles were more frequent in sonicated blood vessels, and to explore any time-dependent relationship in their frequency, we quantified their numbers in the EC cytoplasm and normalized these counts to the cross-sectional surface area of the EC cytoplasm in the micrograph of each vessel. Using a linear mixed-effects model, we found that sonication had a significant effect on the frequency of these vacuoles (χ², P = 0.0043; ). Post hoc analysis highlighted a significant difference specifically between the late-sonicated and nonsonicated groups (P = 0.0036), whereas no significant differences were found between early- and late-sonicated groups (P = 0.3379) or between early-sonicated and nonsonicated groups (P = 0.1313). These findings indicate that LIPU/MB-mediated BBB disruption leads to notable morphological changes within ECs, particularly the formation of cytoplasmic vacuoles, that tend to increase over time.
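Expressed as a formula, the per-vessel quantity analyzed was a vacuole density (the symbols here are ours, introduced for clarity):

$$ \rho_{\text{vac}} = \frac{N_{\text{vac}}}{A_{\text{cyt}}} $$

where N_vac is the number of cytoplasmic vacuoles counted in a capillary cross section and A_cyt is the cross-sectional surface area of the EC cytoplasm in that micrograph.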
Here we have leveraged scRNA-Seq and TEM to characterize the transcriptional response and ultrastructural changes of human cerebral ECs in an acute state of BBB disruption following LIPU/MB. Our study provides human data on the processes related to BBB disruption and restoration shortly after insult. A summary of key structural and transcriptional changes is illustrated in .

Previous studies have characterized the transcriptome of human ECs in health and vascular pathology ( – ). Yet, to our knowledge, transcriptional and structural changes in response to acute BBB disruption have previously been studied only in animal models. Many of the gene expression changes we identified, such as those in CDH5 and COL4A1, involve proteins previously identified as structural components of the neurovascular unit and BBB, which impede passive diffusion of substances from the blood into the brain; abnormal organization or absence of these components has been implicated in enhanced BBB permeability ( , ). Other genes, such as MFSD2A, CAV1, LDLR, and the SLC/SLCO family transporters, have previously been implicated in regulating transcytosis or in the selective delivery of substances across the BBB ( , , , ). Our GO analysis also revealed that sonication induced significant changes in themes related to intercellular and cell-matrix adhesion, cytoskeletal organization, and vascular morphogenesis.
Given that the established mechanism of LIPU/MB-enhanced drug delivery involves mechanical separation of ECs, these transcriptional changes could reflect a transient suppression of EC genes coding for components of the neurovascular ultrastructure. This is consistent with our TEM analyses, wherein sonicated capillaries showed occasional disassembly of TJs, rarefaction of the EC cytosol, and amorphous/granular deposits in the basement membrane that might reflect mechanical perturbation of the microvasculature. Some of these TEM findings have also been described in other preclinical models of BBB disruption, including LIPU/MB, ischemic stroke, and TBI ( , , , ).

Using TEM, we observed a marked increase in the frequency of cytoplasmic vacuoles in sonicated ECs, which, to our knowledge, has not been described previously. Their functional relevance remains unknown. Prior in vitro studies utilizing scanning electron microscopy noted that the alternating acoustic pressures of ultrasound, with or without microbubbles, can form pores in cell membranes that render cells more permeable to drug delivery ( ). The vacuoles we identified on TEM could be cross sections of such pores channeling through ECs; alternatively, they could play some role in the pinocytosis of substances across the neurovasculature. However, we found these structures to be most frequent at a time point after sonication when permeability to gadolinium was already greatly diminished ( ). As permeability to gadolinium might differ from that of other substances, the potential contribution of these vacuoles to drug transport across the BBB remains to be determined.

With regard to transcytosis, our transcriptional analysis showed that sonication altered the expression of various genes previously implicated in EC transporter activity and the regulation of endocytosis. This included increased expression of various SLC/SLCO family genes that are established regulators of the concentrations of various metabolic substrates and ions in the brain interstitial space ( – ). It is possible that the increased expression of these transporters reflects a compensatory mechanism to correct abnormal concentrations of amino acids and ions that could accumulate in the brain following sonication.

Contrary to preclinical TEM studies of LIPU/MB ( , ), we did not find a time-dependent increase in the frequency of EC caveolae within an hour of sonication. Instead, we observed a decrease in the frequency of luminal caveolar pits 4–15 minutes after sonication, a time point not explored by earlier studies ( ). This discrepancy could result from differences in the timing of tissue acquisition after sonication. Another possible explanation is that, while prior studies reported an increase in the number of EC caveolae, this effect was statistically significant only in arterioles ( ). Given that our tissue biopsies were taken from the superficial cortex, most blood vessels we identified were capillaries, with rare arterioles and venules. We therefore restricted our analysis to capillaries and were unable to assess the consequences of LIPU/MB for caveolar transcytosis at noncapillary components of the cerebral microvasculature. Another possibility is that, in human cerebral ECs, caveolae do not play a substantial role in transcytosis following LIPU/MB.
This is suggested by the increased transcription of MFSD2A, which is known to inhibit caveolae-mediated transcytosis, as well as the decreased transcription of CAV1 that we noted in sonicated ECs. It has also been reported that caveolae have alternative functions unrelated to transcytosis, particularly at the neurovasculature. Prior in vitro electron microscopy studies reported that caveolae can "flatten" in response to mechanical forces such as uniaxial stretching; in this sense, they act as membrane redundancy, a "reservoir" that buffers mechanical stresses across the cell and protects it from rupturing ( ). Moreover, integrin detachment and altered cell adhesion can cause caveolar pits to flatten rapidly, their density decreasing and then normalizing within minutes upon readhesion ( ). Consistent with this, we encountered a decrease in caveolae only at the luminal membrane of sonicated ECs, which is more likely to be directly exposed to the pressure of microbubble cavitation. Given that LIPU/MB is thought to mechanically separate adjacent ECs, the initial decrease in membrane-bound caveolae we observed immediately after sonication might reflect EC detachment from intercellular connections and the underlying basal lamina. In line with this, normalization of caveolar pit density within an hour of sonication coincided with partial restoration of BBB integrity (as also evidenced by our radiographic studies). Thus, our TEM analysis suggests that the immediate decrease in the frequency of caveolae could contribute to cellular resilience to mechanical stress and to BBB homeostasis following microbubble cavitation, while enhanced caveolar transcytosis may not contribute to increased permeability in human cerebral capillary ECs in a state of BBB disruption, at least within 1 hour of LIPU/MB.

Our study assessed the response of cerebral ECs to BBB disruption at a very acute time point (within an hour of sonication). We chose this time point for logistical reasons pertaining to chemotherapy infusion during the surgery, but it also coincided with our previous estimates of the kinetics of BBB opening and the restoration of barrier function to gadolinium ( ). Imaging studies in patients undergoing transcranial focused ultrasound have reported variable timelines for the return of barrier function: some estimate persistent BBB permeability 2 to 6 hours after LIPU/MB, while others put it at 24 hours ( , , – ). This variability could be accounted for by differences in the acoustic parameters and the modality of ultrasound used to open the BBB in each study (transcranial versus skull-implantable). Our study utilized an acoustic pressure of 1.03 MPa, a parameter chosen after previous clinical trials found it optimal for safe and effective BBB disruption using the SC9, with which sound waves do not have to penetrate bone ( , ). In a recent publication by Carpentier et al., in which patients with recurrent GBM underwent serial sonication-enhanced chemotherapy with the SC9, hypointense lesions suggestive of microhemorrhages were reported on susceptibility-weighted MRI sequences in 6 of 52 sonications (11%) ( ). While we focused on the response of ECs to LIPU/MB, it is possible that the transcriptional and structural changes we report are not unique to LIPU/MB-based BBB disruption. Munji et al.
examined the transcriptional alterations within the cerebral ECs of rodents in various experimental models of acute BBB disruption, including TBI, ischemic stroke, seizure, and autoimmune encephalomyelitis ( ). Distinct transcriptional alterations were noted for each model, but there was also a core module of 54 genes whose transcripts were consistently enriched across all models at the time of peak BBB disruption. The authors speculated that this core module could reflect a conserved mechanism regulating EC permeability and BBB repair ( ). Some of these alterations involved GO themes similar to those in our analysis, including cell adhesion/ECM-receptor interaction and regulation of angiogenesis.

In conclusion, we characterized the transcriptional and ultrastructural alterations induced by LIPU/MB-mediated BBB disruption in human cerebral ECs at an acute time point. For this, we relied on intraoperative LIPU/MB of peritumoral brain to model this process in human cerebral tissues. We show that loss of BBB integrity is associated with altered expression of genes related to EC structure, attachment, and transcytosis. We also show that sonication alters the physical phenotype of ECs and the broader neurovascular ultrastructure. While our findings highlight acute changes seen after sonication, they bear some similarity to EC changes reported in acute neurological disease states in which BBB permeability has been implicated; thus, our data might provide insight into mechanisms of BBB homeostasis and the EC response to microvascular injury seen in various neurological pathologies of the human brain. Work should also be done to characterize changes to cerebral ECs at later time points than the ones we were able to explore. Though this presents obvious logistical hurdles, exploring the mechanisms of neurovascular permeability and recovery in the late stages of these diseases could reveal valuable targets for molecular therapies that may be used in the acute setting to attenuate permanent neuronal injury secondary to pathological BBB permeability.

Sex as a biological variable

Sex was not considered as a biological variable for the purposes of this study, owing to the availability of tissue samples. Tissues for this study were acquired from both male and female participants. The sex of each study participant is provided in ; supplemental material available online with this article; https://doi.org/10.1172/jci.insight.187328DS1

Intraoperative LIPU/MB-enhanced chemotherapy and stereotactic biopsy of sonicated peritumoral brain

Enrolled patients received treatment as described previously ( ). In brief, intraoperative corticosteroids and mannitol were avoided in all cases in which we performed intraoperative pharmacokinetics studies. Biopsy of noneloquent peritumoral brain was performed when feasible and justified, as per standard neurosurgical technique. For these studies, we decreased the fraction of inspired oxygen as much as tolerated (to as low as 20%), aiming for an arterial O₂ pressure < 100 mmHg, to model the outpatient setting in which patients breathe room air. We exposed the peritumoral brain to be excised, positioned the SC9 device in the cranial window, flooded the field with sterile saline, connected the device to the SC9 radiofrequency generator, and infused intravenous (IV) DEFINITY microbubbles (10 μL/kg; Lantheus) while sonicating the brain for 270 seconds at an acoustic pressure of 1.03 MPa, as used in our recent clinical trials with the SC9 system ( , ).
Immediately after sonication, we infused fluorescein 500 mg IV and initiated a 45-minute IV infusion of nab-paclitaxel chemotherapy (Abraxane). LIPU/MB-based BBB opening was visualized and mapped using fluorescence microscopy (ZEISS Yellow 560 nm filter). Sonicated peritumoral brain was identified by its fluorescence following fluorescein infusion, and nonsonicated peritumoral brain by the absence of fluorescence in this setting. Within 4–15 minutes of sonication (referred to as the early time point), we obtained biopsies of noneloquent sonicated peritumoral brain where feasible, which were immediately fixed for TEM. Following the remainder of the 45-minute infusion period, we further biopsied paired sonicated and nonsonicated noneloquent peritumoral brain for additional TEM analysis and scRNA-Seq. Samples intended for sequencing were transported in saline on ice and underwent immediate processing. Representative fluorescence photographs of the brain and corresponding stereotaxic coordinates were obtained for each biopsy. This was followed by standard tumor resection and permanent implantation of the SC9 at the end of the procedure.

scRNA-Seq

Patients whose tissues were used for scRNA-Seq analysis did not receive dexamethasone prior to these biopsies. scRNA-Seq was performed on paired sonicated and nonsonicated peritumoral brain specimens collected approximately 45 minutes after LIPU/MB for each patient. Peritumoral brain was defined as brain parenchyma that was nonenhancing on the contrast MRI used for stereotaxic navigation. Sonicated brain was identified by fluorescence microscopy following infusion of fluorescein, and nonsonicated brain by the absence of fluorescence in this setting. Each tissue sample was processed fresh into a single-cell suspension and subjected to scRNA-Seq library preparation. Samples were transported on ice, and single-cell dissociation was performed using the Miltenyi Biotec system on a gentleMACS Octo Dissociator according to the manufacturer's instructions. Isolated cells were washed with PBS containing 0.04% bovine serum albumin and filtered through a 40 μm cell strainer (MilliporeSigma). Cell concentration and viability were determined with a Countess II Automated Cell Counter (Thermo Fisher Scientific), with a final cell concentration of 700–1,200 cells/μL. scRNA-Seq libraries were generated using the Chromium Single Cell 3′ Reagent Kit (10x Genomics). The single-cell suspension was mixed with RT-PCR master mix and loaded together with Single Cell 3′ Gel Beads and Partitioning Oil into a Single Cell 3′ Chip (10x Genomics). The cDNA was amplified and used to construct a 3′ gene expression library according to the manufacturer's instructions. The size profiles of preamplified cDNA and sequencing libraries were examined on the Agilent 2100 High Sensitivity system. The scRNA-Seq libraries were sequenced on the Illumina NextSeq 500/550 platform.

Single-cell transcriptomic analysis

All scRNA-Seq data were aligned to the GRCh38 reference genome and quantified using the 10x Genomics Cell Ranger pipeline by running cellranger count. The filtered output from Cell Ranger was retained for further quality control (QC).

Doublet removal and QC

The filtered_feature_bc_matrix generated by the Cell Ranger pipeline was processed with Seurat ( ). Cells with fewer than 200 or greater than 4,000 unique genes were removed. The remaining cells in each sample were used as the input of DoubletFinder ( ). The first 20 principal components (PCs), with the proportion of artificial doublets (pN) = 0.25 and the proportion of nearest neighbors (pK) = 0.09, were used to identify doublets, and cells classified as doublets were removed. The remaining cells from the 12 samples were merged into a single Seurat object. To further remove dead or dying cells, we filtered out cells with greater than 15% mitochondrial reads or greater than 20,000 counts.
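A condensed sketch of this per-sample QC and doublet-removal step follows; the input matrix, the expected-doublet count nExp_estimate, and the metadata handling are assumptions for illustration (DoubletFinder's classification function is doubletFinder_v3 in older releases and doubletFinder in newer ones).

# Hedged sketch: per-sample gene-count filtering and doublet removal.
library(Seurat)
library(DoubletFinder)

seu <- CreateSeuratObject(counts = filtered_matrix)  # Cell Ranger filtered output
seu <- subset(seu, subset = nFeature_RNA > 200 & nFeature_RNA < 4000)

# DoubletFinder expects a normalized object with PCA computed.
seu <- NormalizeData(seu)
seu <- FindVariableFeatures(seu)
seu <- ScaleData(seu)
seu <- RunPCA(seu)

# pN = proportion of artificial doublets; pK = nearest-neighbor proportion.
seu <- doubletFinder_v3(seu, PCs = 1:20, pN = 0.25, pK = 0.09,
                        nExp = nExp_estimate)        # nExp estimated per sample

# Cells labeled "Doublet" in the metadata column added by DoubletFinder are
# dropped; samples are then merged, and cells with >15% mitochondrial reads
# or >20,000 counts are removed as described above.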
Batch effect removal, dimensionality reduction, clustering, and cell annotation

Cells from the merged Seurat object were analyzed with the standard Seurat workflow. First, NormalizeData was run with the LogNormalize method and a scale factor of 10,000 for cell-level normalization. Variable features were identified with FindVariableFeatures using the vst method with 2,000 features. The data were scaled to 10,000 unique molecular identifiers per cell, and PCs were computed with RunPCA. Batch effect correction was performed using Harmony ( ). UMAP embeddings were generated from the batch-corrected PCs. Cells were then clustered using FindNeighbors on 20 batch-corrected dimensions and FindClusters at a resolution of 0.5. Briefly, we determined the k-nearest neighbors of each cell and used the k-nearest neighbor graph to construct a shared nearest neighbor graph by calculating the neighborhood overlap (Jaccard index) between every cell and its k.param nearest neighbors, thereby determining unsupervised cell clusters. Cluster-specific marker genes were defined by Wilcoxon's test with adjusted P < 0.01 and average logFC > 0.5. Clusters were annotated to cell types by comparing the marker genes of each cluster to cell type markers from the PanglaoDB marker gene database ( ) corresponding to expected human brain cell types. For example, P2RY12 and PTGS1 were used to define microglia; CNP and PLP1 were used to define oligodendrocytes; and FLT1, LYZ, and IL7R were used to define endothelial cells, monocytes, and T cells, respectively ( , – ).

Differential expression and functional enrichment analysis

We performed differential expression analysis between sonicated and nonsonicated samples within each cell type using Wilcoxon's test, with the Benjamini-Hochberg method used to estimate the FDR, following the recommendations of Seurat. Differentially expressed genes (DEGs) were filtered using average logFC > 0.5 and adjusted P < 0.05. Functional enrichment analysis of DEGs between sonicated and nonsonicated samples was conducted using the clusterProfiler R package ( ).
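Under the parameters stated above, the merged-object workflow might be sketched as follows; the metadata column names ("sample", "condition") and the cluster label "Endothelial" are illustrative assumptions.

# Hedged sketch: normalization, Harmony batch correction, clustering,
# and sonicated vs. nonsonicated DEG testing on the merged object.
library(Seurat)
library(harmony)

merged <- NormalizeData(merged, normalization.method = "LogNormalize",
                        scale.factor = 10000)
merged <- FindVariableFeatures(merged, selection.method = "vst",
                               nfeatures = 2000)
merged <- ScaleData(merged)
merged <- RunPCA(merged)
merged <- RunHarmony(merged, group.by.vars = "sample")   # batch correction
merged <- RunUMAP(merged, reduction = "harmony", dims = 1:20)
merged <- FindNeighbors(merged, reduction = "harmony", dims = 1:20)
merged <- FindClusters(merged, resolution = 0.5)

# Cluster markers by Wilcoxon's test (annotation then uses PanglaoDB markers).
markers <- FindAllMarkers(merged, test.use = "wilcox",
                          logfc.threshold = 0.5, only.pos = TRUE)

# DEGs within the EC cluster; BH adjustment applied as described in the text.
ec <- subset(merged, idents = "Endothelial")
Idents(ec) <- "condition"
degs <- FindMarkers(ec, ident.1 = "sonicated", ident.2 = "nonsonicated",
                    test.use = "wilcox", logfc.threshold = 0.5)
degs$p_adj_BH <- p.adjust(degs$p_val, method = "BH")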
Electron microscopy analysis

Patients whose tissues were used for ultrastructural analysis of peritumoral brain by TEM did not receive dexamethasone prior to these biopsies. For electron microscopy, samples of brain tissue approximately 1–2 mm³ in size, subjected to LIPU/MB or not, were excised and fixed in a mixture of 2.5% glutaraldehyde and 2% paraformaldehyde in 0.1 M cacodylate buffer for 2–3 hours or overnight at 4°C. After fixation, tissue was exposed to 1% osmium tetroxide and 3% uranyl acetate, dehydrated in ethanol, embedded in Epon resin, and polymerized for 48 hours at 60°C. Ultrathin sections were then cut using an Ultracut UC7 ultramicrotome (Leica Microsystems) and contrasted with 3% uranyl acetate and Reynolds's lead citrate. Samples were imaged using an FEI Tecnai Spirit G2 transmission electron microscope operated at 80 kV, and images were captured with an Eagle 4k HR 200 kV charge-coupled device camera.

Caveolar pits were identified as membrane-bound invaginations (40–80 nm in diameter) directly attached to the basal and luminal surfaces of ECs. Caveolae were distinguished from clathrin-coated vesicles by their size, as well as by their membrane density and the absence of obvious protein spikes along their membrane surfaces. Only well-formed caveolae showing direct attachment to either of the endothelial membranes were counted for this analysis. Cytoplasmic vacuoles were identified as single-membrane vesicles of varying size (150–250 nm in diameter), in the majority of cases without any electron-dense content.

Statistics

We utilized a linear mixed-effects model to assess the effect of sonication status on the frequency of basal and luminal endothelial caveolae and of vacuoles relative to the cross-sectional surface area of the endothelial cytoplasm. For each type of cellular structure, a null model accounting for interpatient variability was compared with an alternative model that added sonication status as a fixed effect. P values were obtained by likelihood ratio tests of the full model against the model without the effect in question. P < 0.05 was considered statistically significant. Post hoc analyses were used to assess relationships between the frequency of these structures at the nonsonicated, early-sonicated, and late-sonicated time points.

Study approval

This study was approved by the institutional review board of Northwestern University Feinberg School of Medicine (STU00212298), and all patients provided written informed consent, which included consent for the translational pharmacokinetics study and for nonidentifiable data to be included in scientific publications. Quality assurance monitors from the Clinical Trials Office at the Robert H. Lurie Comprehensive Cancer Center of Northwestern University verified the underlying study data and confirmed the accuracy of the results presented in this article.

Data availability

The scRNA-Seq data have been deposited in the National Center for Biotechnology Information Gene Expression Omnibus (GSE208074). Values for all figures and analyses can be found in the Supporting Data Values file in the online supplemental material.
Cytoplasmic vacuoles were identified as single-membrane vesicles of varying size (150–250 nm in diameter), in most cases without any electron-dense content. We used a mixed-effects linear model to assess the effect of sonication status on the frequency of basal and luminal endothelial caveolae and vacuoles relative to the cross-sectional surface area of the endothelial cytoplasm. For each type of cellular structure, we compared a null model accounting for interpatient variability with an alternative model that added sonication status as a fixed effect. P values were obtained by likelihood ratio tests of the full model against the model without the effect in question. P < 0.05 was considered statistically significant. Post hoc analyses were also used to assess relationships among the frequencies of these structures at nonsonicated, early sonicated, and late sonicated time points. This study was approved by the institutional review board of Northwestern University Feinberg School of Medicine (STU00212298), and all patients provided written informed consent, which included consent for the translational pharmacokinetics study and for nonidentifiable data collected to be included in scientific publications. Quality assurance monitors from the Clinical Trials Office at the Robert H. Lurie Comprehensive Cancer Center of Northwestern University verified the underlying study data and confirmed the accuracy of the results presented in this article. The scRNA-Seq data have been deposited in the National Center for Biotechnology Information Gene Expression Omnibus (GSE208074). Data values for all figures and analyses can be found in the Supporting Data Values file in the online supplemental material. Single-cell suspension was performed by LC, CD, VAA, and BC. scRNA-Seq analysis was performed by YL, YH, and MY under the supervision of FY. Electron microscopy and related analyses were performed by FVK, AG, DZ, and MLIA. CA, RW, CG, JB, RS, and AMS managed the clinical and regulatory aspects of the clinical trial for the correlatives presented. GB and MC performed the imaging analysis and provided sonication-related technical assistance. The manuscript was drafted by AG, KH, VAA, and AMS. Statistical analysis was performed by VAA. Surgery and intraoperative LIPU/MB were performed by AMS with assistance from CA, CG, RW, AG, and JB. AMS designed and supervised the project.
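The likelihood-ratio procedure described in the statistics paragraph above maps directly onto lme4 in R. The sketch below is illustrative only, with a hypothetical data frame and column names (one row per analyzed field, structure frequency normalized to endothelial cytoplasm area, and a per-patient random intercept); it is not the authors' code.

library(lme4)

# Null model: interpatient variability only (random intercept per patient)
null <- lmer(freq_per_area ~ 1 + (1 | patient), data = em, REML = FALSE)
# Full model: adds sonication status as a fixed effect
full <- lmer(freq_per_area ~ sonication + (1 | patient), data = em, REML = FALSE)

# Likelihood ratio test of the sonication effect; P < 0.05 called significant
anova(null, full)

# Post hoc contrasts across nonsonicated / early / late sonication levels
# could be obtained, for example, with emmeans::emmeans(full, pairwise ~ sonication)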
Does preventive dental care reduce nonpreventive dental visits and expenditures among Medicaid‐enrolled adults?
0a74d73d-6e77-48b9-aa59-d0226bafb170
9643079
Dental[mh]
To maintain optimal oral health and avoid poor oral health outcomes, dental providers recommend routine preventive dental care. Previous evidence of the effectiveness of adult preventive dental care is limited and subject to bias from unobserved characteristics that may confound the relationship between preventive dental care and future adverse oral health outcomes. Studies on the effectiveness of certain dental procedures among Medicaid populations are needed to inform state administrators and decision makers who are trying to determine the optimal balance of covered services with limited budgetary resources. Previous year preventive dental visits are associated with fewer subsequent nonpreventive visits and lower dental expenditures among Medicaid-enrolled adults. From a Medicaid insurance program standpoint, supporting preventive dental care use may improve population oral health outcomes by reducing the number of nonpreventive visits and associated costs. Despite common recommendations for adults to have regular dental care, the number of Medicaid adult enrollees having at least one yearly dental visit was low and irregular. INTRODUCTION Poor oral health remains a significant public health challenge in the United States, particularly for low-income adults. Adverse outcomes such as caries, periodontal disease (advanced gum disease), and tooth loss are associated with pain, decreased chewing function, negative social perceptions, and reduced quality of life. To maintain optimal oral health and avoid these poor outcomes, dental providers recommend routine preventive dental care. The recommended frequency of preventive dental care is based on a dental provider's assessment of the individual's risk of (and from) oral disease. Typically, most adults are recommended to receive routine preventive dental care 1–3 times annually. Routine dental care allows for early identification of oral diseases, preventive care, and/or tailored delivery of oral hygiene education, all of which may prevent more serious or extensive disease(s) and treatment(s). However, evidence as to whether routine preventive dental care reduces nonpreventive dental services and expenditures among adults is limited. Some insurance payors have reported lower total dental expenditures and fewer dental emergencies among adult enrollees who receive preventive dental care than those who do not. One study of a sample of Medicaid-enrolled adults with chronic diseases found preventive dental care was associated with an increased likelihood of future nonpreventive dental visits yet lower total dental expenditures. However, previous evidence has been subject to bias from unobserved characteristics such as individual oral health behaviors, habits, and beliefs, which may confound the relationship between preventive dental care and future adverse oral health outcomes. Given the high prevalence of poor oral health and unmet dental needs among low-income adults, it is important to determine whether preventive dental care is effective against adverse oral health outcomes among this population, especially from a public insurance program perspective. States are not mandated to provide dental benefits for Medicaid-enrolled adults, and as a result, coverage varies greatly across states, ranging from no dental benefits whatsoever to "extensive" or comprehensive dental benefits.
Some states (n = 16) provide "limited" Medicaid dental benefits to eligible low-income adults and cover diagnostic, preventive, and some minor restorative services, but overall cover less than one-sixth of all dental procedures. Ultimately, little is understood about the effectiveness of preventive dental care, and how various state Medicaid dental benefit programs are related to oral health outcomes and expenditures. This study examined whether and to what extent preventive dental visits are associated with nonpreventive dental visits, nonpreventive expenditures, and overall dental expenditures among a population of low-income adults enrolled in a state Medicaid program. Specifically, we examine the Healthy Indiana Plan (HIP) Plus program, a "limited dental benefit program," during the first 4 years of its implementation following Medicaid expansion in February 2015. Our study design takes advantage of an econometric technique that controls for unobserved time-invariant characteristics that may confound the relationship between preventive dental care and nonpreventive dental care and expenditures, including individuals' intrinsic care-seeking attitudes and their level of health consciousness. Findings from this study may inform state administrators and decision makers who are trying to determine the optimal balance of covered services with limited budgetary resources. In addition, this study also contributes to evidence on the effectiveness of preventive dental care, which has thus far been very limited. METHODS This study used a repeated measures design with individual fixed effects at the person-year level to estimate the relationship between preventive dental visits (PDV) and nonpreventive dental visits (NPV) and dental expenditures among Medicaid-enrolled adults with dental coverage. 2.1 Population and data Our primary data were administrative enrollment and claims data from Indiana's Family and Social Services Administration Office of Medicaid Policy and Planning. Our inclusion criteria required adults to be continuously enrolled for 36 months in the HIP Plus program with no gap in coverage greater than 1 month between February 1, 2015, and December 31, 2018. Under the HIP Plus program, enrollees contribute a fixed monthly payment to a special savings account (referred to as a POWER account), which enrollees can use to help pay for their health care. Monthly payments range from $1 to $20, depending on the enrollee's income. As part of their coverage benefits, enrollees are able to receive two dental cleanings a year, up to four minor restorative services (e.g., fillings) every year, and one major restorative service (e.g., crown). Our primary data were also supplemented with data from the Area Health Resources File, which tracks whether a county is a dental health professional shortage area. Given our data were deidentified, our study received an exemption from review by the BLINDED Institutional Review Board. 2.2 Dependent variables For each 12-month period of enrollment, we computed the following three outcomes: (1) number of NPVs, (2) annual expenditures for NPVs, and (3) total annual expenditures for all dental visits. We defined an NPV as a dental claim with Current Dental Terminology (CDT) codes for restorative (D2000-D2999), endodontic (D3000-D3999), periodontic (D4000-D4999), prosthodontic (D5000-D5999, D6200-D6999), oral and maxillofacial surgery (D7000-D7999), and/or all other nonpreventive (D6000-D6199, D8000-D9999) dental procedures.
All dental services rendered by providers were counted, regardless of whether they were reimbursed or denied by Medicaid. Dental expenditures were calculated as the total amount paid by Medicaid for dental services over an annual enrollment period, adjusted for inflation using the 2019 Consumer Price Index. 2.3 Main explanatory variable Our main explanatory variable was a categorical variable indicating the total number of preventive dental visits in the prior year (0, 1, 2, 3, or more). We defined a preventive dental visit as the presence of a dental claim with CDT codes D0120 (periodic oral evaluation), D0150 (comprehensive oral evaluation), D1110 (adult prophylaxis), D1206 (topical application of fluoride varnish), D1208 (topical application of fluoride excluding varnish), D1351 (tooth sealant), or D1330 (oral hygiene instructions), and the absence of CDT codes D2000-D9999 on the same claim. 2.4 Analysis We characterized the adults included in the study and calculated summary statistics for expenditures and preventive, nonpreventive, and total dental visits conditional on having a dental visit within a 12-month enrollment period. Next, we analyzed two models at the person-year level for each of our outcomes of interest (i.e., number of NPVs, NPV expenditures, and total dental expenditures) using two-way fixed effects (individual and year) linear regressions. We examined whether and to what extent the previous year's PDVs are associated with each outcome of interest. Individual fixed effects treat each adult as their own control, thus reducing bias from time-invariant individual characteristics, even if unobserved. We also included controls for observable time-varying characteristics in our population, namely age, whether the enrollee resided in a county designated as a dental health professional shortage area, and year. Results can be understood as the average change in the outcome attributed to each level of preventive visits (i.e., 1, 2, 3, or more) versus none. We used SAS 9.2 for data management and Stata SE version 17 for all analyses. We conducted multiple sensitivity analyses. First, we examined the relationship between prior-year PDVs and current-year PDVs to assess overall utilization over time (Table ). Second, since our dependent variables were nonnegative, we estimated fixed-effects Poisson models (Table ). Third, we evaluated a more restrictive exclusion criterion for those without NPVs in the first 6 months of enrollment (Table ) to account for the possibility that these enrollees may have pent-up and previously unmet dental needs. Fourth, we analyzed cost outcomes using the modal value paid by Medicaid for each procedure, rather than the paid amount as it appeared in the claims (Table ), to assess any effect of Medicaid's benefit limits, such as the maximum of four covered minor restorative visits per enrollment year. Finally, to capture longer-duration outcomes, we examined the total number of PDVs in the previous 2 years associated with each outcome of interest (Table ).
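The authors report using SAS and Stata; purely as an illustration, the claim classification and the two-way fixed-effects specification described above could be expressed in R roughly as follows (all data frame and column names are hypothetical):

library(fixest)

# Classify claims by CDT code; fixed-width codes allow string range comparison
prev_codes <- c("D0120", "D0150", "D1110", "D1206", "D1208", "D1351", "D1330")
claims$prev_code <- claims$cdt %in% prev_codes
claims$np_code   <- claims$cdt >= "D2000" & claims$cdt <= "D9999"
# A preventive visit has a preventive code and no D2000-D9999 code on the claim

# Two-way fixed effects (individual and year); pdv_prior coded "0","1","2","3+"
m_npv  <- feols(n_npv ~ i(pdv_prior, ref = "0") + age + dental_hpsa |
                  person_id + year, data = panel, cluster = ~person_id)
m_cost <- feols(npv_spend ~ i(pdv_prior, ref = "0") + age + dental_hpsa |
                  person_id + year, data = panel, cluster = ~person_id)
etable(m_npv, m_cost)

# Sensitivity check: fixed-effects Poisson for the nonnegative count outcome
m_pois <- fepois(n_npv ~ i(pdv_prior, ref = "0") + age + dental_hpsa |
                   person_id + year, data = panel)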
RESULTS A total of 28,152 adults (constituting 108,349 observation years) met the study inclusion criteria. Population characteristics are presented in Table . Approximately 59% of the population were female, 76% were non-Hispanic white, and 45% were never married. On average, included individuals were enrolled continuously for approximately 43 months. Overall, 36.0% had a dental visit, 27.8% had a preventive dental visit, and 22.1% had a nonpreventive dental visit. Approximately 13% had at least one dental visit, and 9% had a PDV, in each year of their enrollment. Table presents summary statistics for enrollees' overall annual number of dental services and expenditures, and the number of dental services and expenditures by year of enrollment, conditional on any dental care use. On average, among adults who had dental care, enrollees had 2.35 dental visits (SD = 1.42) per enrollment period. This included 1.09 (SD = 0.78) preventive-only visits and 0.68 (SD = 0.93) nonpreventive-only visits. The median total cost for all dental visits in a 12-month enrollment period among adults with any dental visit was $263.99 (IQR = 148.35–497.22) per enrollee, $93.26 (IQR = 47.07–136.00) for preventive visits, and $97.59 (IQR = 0–298.89) for nonpreventive visits. Results from fixed-effects linear regression models predicting the total number of NPVs, total NPV expenditures, and total dental expenditures following PDVs in the prior year are shown in Table . Compared to having no PDVs in the prior year, having at least one PDV was associated with fewer NPVs (β = −0.13; 95% CI −0.12, −0.11), lower NPV expenditures (β = −$29.12; 95% CI −32.74, −25.50), and lower total dental expenditures (β = −$70.12; 95% CI −74.92, −65.31). Additional PDVs in the prior year were associated with fewer NPVs, lower NPV expenditures, and lower total dental expenditures relative to no PDVs. Full model output and sensitivity analyses, which were consistent with our main analysis, can be found in Tables . DISCUSSION We examined the relationship between PDVs and NPVs and dental expenditures among Medicaid-enrolled adults with dental coverage. When accounting for within-person characteristics, we observed that having any PDVs in the previous year (or in the previous 2 years) was associated with subsequently fewer NPVs, lower nonpreventive dental care expenditures, and lower overall dental expenditures. Our findings suggest preventive dental care may improve oral health by reducing the need for costly restorative care, or it may reduce the perceived need for services. We examined the first 4 years of expanded Medicaid dental benefits within a previously uninsured population.
Thus, we cannot exclude the possibility of pent-up demand for dental care use, especially since dental care utilization was inconsistent and all services, including PDVs, declined per person over time. Future research should examine perceptions of need and patterns of dental care utilization among adult Medicaid enrollees, including potential barriers to access and adverse selection. Our findings are similar to those of Pourat et al., who observed that preventive dental care was associated with lower overall dental expenditures among a sample of Medicaid-enrolled adults. Although Pourat et al. did not observe preventive dental care being associated with fewer nonpreventive dental care services, their findings support the notion that more frequent preventive services reduce the need for extensive and costly nonpreventive care. Our study, which accounted for time-invariant individual characteristics, provides evidence that preventive dental care may reduce both nonpreventive dental care use and associated expenditures. Optimal management of oral health relies on the early treatment of minor problems to prevent more invasive and more costly nonpreventive treatments. Thus, from a public insurance program standpoint, coverage of preventive dental care may translate to downstream improvements in population oral health outcomes among low-income adults. This is a particularly salient point for states considering whether to add dental benefits to their Medicaid programs, and for states with existing adult Medicaid dental coverage, as these benefits are optional and are often reduced or eliminated when state budgets are constrained. Importantly, Pourat et al. examined a sample of Medicaid-enrolled adults in California, a state with comprehensive or "extensive" dental benefits for its enrollees, whereas we examined a state that offers "limited" dental benefits for adults enrolled in the HIP Plus program. Similar to 15 other states, this level of generosity in dental benefits covers fewer than 100 of 600 dental procedures and generally focuses on preventive or emergency care while limiting options for restorative care (e.g., root canals are not covered). Given these benefit limits, public dental insurance programs may not be structured to incentivize optimal oral health across one's lifespan. Thus, beyond oral health outcomes, future research should also consider how quality of life is affected by the design of a state's dental insurance program. Despite common recommendations for adults to have regular dental care, few enrollees had a dental visit each year of enrollment or at least one PDV each year of enrollment. Although utilization may decline within an individual over time as oral health status improves, particularly when they may not have had dental coverage previously, it is unlikely that only 9% of the population was advised to have at least one PDV annually. This contrasts with other populations studied, particularly children in public insurance programs, in whom PDV utilization is much more frequent. Other barriers to regular care beyond coverage may exist. For example, lack of time to visit the dentist and inability to easily travel to see a dentist are consistently reported by Medicaid-enrolled adults as reasons why they forgo visiting a dentist annually. Regardless, additional research using robust mixed methods approaches is needed to determine the reasons for irregular use and the long-term consequences of such inconsistent care.
As a strength, this study employed a two-way fixed effects study design that allowed us to reduce bias from unobserved time-invariant confounders. Furthermore, we provided insights into the dental services covered in a state Medicaid program that provides "limited" dental benefits, which have not previously been explored. Still, some limitations are worth noting. First, our study design does not permit control for unobserved time-varying factors that may confound the relationship between preventive dental visits and nonpreventive dental visits and expenditures, such as health literacy campaigns or consumer incentives from managed care organizations. We cannot rule out the possibility of reverse causality, wherein NPVs lead to PDVs. We assumed individual characteristics remained constant (i.e., health consciousness, oral behaviors, and hygiene habits) but acknowledge some behaviors may have changed. However, if these behavioral changes were motivated by dental professionals during a preventive dental visit, this would be appropriately captured in the effect estimates of our analyses. Ultimately, since we lack relevant oral health diagnoses, we are unable to account for certain care-seeking behaviors and selection of treatment options. Additionally, we are unable to account for changes in an individual's diet, which may alter caries risk. Given the short study time period, we are unable to rigorously analyze cumulative, repetitive preventive dental care. Finally, our findings may not generalize to adults who disenroll prior to 36 months of coverage or to low-income adults who have coverage in a state Medicaid program with a different level of generosity in dental benefits. CONCLUSION Our findings suggest that prior-year PDVs are associated with fewer subsequent NPVs and lower dental expenditures among Medicaid-enrolled adults, as well as with subsequent PDVs. Thus, from a public insurance program standpoint, supporting preventive dental care use may translate into improved population oral health outcomes and lower dental costs among certain low-income adult populations, although barriers to consistent utilization of PDVs prohibit definitive conclusions.
A qualitative RE-AIM evaluation of an embedded community paramedicine program in an Ontario Family Health Team
f7161a96-bba7-4ea9-84d5-cdfa816a10d6
11929184
Community Health Services[mh]
Community paramedicine Traditionally, paramedics are seen as emergency responders; they drive ambulances and bring people to the hospital. However, in response to the growing demands of an aging population and their escalating health care needs, the paramedic role has expanded to include a subset of providers called community paramedics. The scope of practice of community paramedics includes preventive care and primary health care in home and community settings. They take a proactive approach to addressing the needs of complex, high-risk patients by providing care directly in patients' homes and performing activities such as health assessments, chronic disease management, medication administration, and monitoring. In Canada, most community paramedicine programs operate independently of typical primary care settings, such as physician- or nurse-led clinics. When needed, they consult with external primary care physicians or refer to other health care providers. Guo et al. conducted a review of various community paramedicine programs in Canada, Australia, England, and the United States, highlighting the diversity in program implementation and goals. Some programs primarily targeted older adults, while others focused on patients with complex chronic diseases. Regarding effectiveness, their review found that community paramedicine programs have been associated with reductions in emergency calls, emergency department visits, hospital admissions, and emergency transport charges. In Ontario, a community paramedic-led primary care clinic improved health outcomes for older adults living in subsidized housing by decreasing blood pressure levels and improving quality-adjusted life years. It also reduced 911 calls, lessening the strain on the health care system. Further evaluation is needed to establish the long-term impact and effectiveness of community paramedicine programs. With increasing pressure on the Canadian health care system from an aging population, federal ministers, including the Minister of Health, have highlighted the pressing need for health care systems to evolve and explore new ways to adapt resource allocation to improve patient care and system efficiency. An embedded community paramedicine program In 2014, a rural Family Health Team (FHT) in the West Ottawa region of Ontario piloted a community paramedicine program embedded in their clinical practice, which is not a commonly seen model in Canada. Ontario FHTs consist of interdisciplinary health professionals, such as family physicians, nurses, and allied health professionals (AHPs), who collaborate to deliver more comprehensive primary care in a single-payer health care system. By expanding their team to include community paramedics, the FHT improved access to care for their most complex patients. This initiative involved directing patients with multiple co-morbidities, limited social support, and high emergency department (ED) visits for non-emergency issues to the paramedicine program, with the goal of reducing health provider workload and improving patient reach and care, especially for high-risk patients living in rural settings. The FHT gave the paramedic team full access to their electronic medical records (EMR), medical supplies, and organizational supports. Currently, this stands as one of the few FHTs in Ontario that has successfully implemented and sustained a community paramedicine program embedded within their clinic.
This allowed for a unique opportunity for the study to evaluate the utility of having community paramedics embedded in a rural FHT.
Study design The study follows a qualitative descriptive design, commonly used in applied health research, including evaluations such as this RE-AIM framework-based study. This approach provides straightforward descriptions of participants' experiences and program outcomes. Moreover, the study follows a community-based research approach, emphasizing collaboration between researchers and community partners to address issues relevant to the community and improve the application of findings to real-world contexts. This study team includes community paramedics and staff from the FHT who actively participated in the research process. Framework We used the RE-AIM framework to evaluate the aims, strengths, and challenges of the FHT-embedded program. This framework has been used for over two decades to provide in-depth assessments of the successes and limitations of public health interventions across five dimensions: reach, effectiveness, adoption, implementation, and maintenance. The framework's strength lies in its comprehensive and flexible approach to tailoring evaluations to suit smaller communities and clinical settings, and it has been used to evaluate other community paramedicine programs. Using qualitative methods (e.g., staff interviews) within the RE-AIM framework provided more holistic insights into the program's adoption and implementation. Study team The study team includes researchers, community paramedics, and clinic staff from the FHT. Our community researchers assisted with recruitment, governance, and the co-development of outputs, such as the program blueprint. They were not involved in conducting interviews or analyzing the evaluation results, ensuring objectivity in the data collection and analysis phases. Data collection Recruitment The community paramedic team (G.B., K.H., T.I., K.S.) contacted FHT staff to gauge interest in participating in the study. The research team contacted interested participants for further information and to schedule interviews. L.K., K.K.M., and S.P. conducted interviews over Microsoft Teams or in person at the clinic, based on participant preference. Interview details are included in Appendix A. Interviews The interview guide used in this study was developed by the research team and informed by the RE-AIM framework (Supplementary Material 1). The interviewers obtained verbal consent from participants and conducted interviews lasting 30 to 70 minutes, which were transcribed verbatim. To support rigour, interviewers engaged in reflexive journaling and memo writing during the interview process. Journal notes on key codes and reflections during/after the interviews were discussed during bi-weekly research meetings with the multidisciplinary research team (K.K.M., S.P., L.K., S.T.). Documents To obtain a fulsome description of the program, the research team collected and analyzed documents on the community paramedicine program's organizational structure, including training protocols and referral resources. De-identified descriptive patient data The research team collaborated with the FHT's Health Informatics specialist (M.F.) to obtain de-identified data regarding the characteristics of patients who were or are enrolled in the program from its inception in 2014 to 2022. The dataset of 335 patients described age, sex, co-morbidities, and types of referrals made by the FHT.
Data analysis RE-AIM qualitative analysis Content analysis of documents and analysis of interviews were guided by the RE-AIM framework and managed using the qualitative software package MAXQDA. This approach combined deductive and inductive thematic analysis. Deductive analysis was applied to most RE-AIM domains, ensuring alignment with the framework's established evaluation criteria. However, for the "Effectiveness" domain of RE-AIM, an inductive approach was used to allow themes to emerge directly from the data, following Braun and Clarke's approach for thematic analysis. Since health administrative data were unavailable at the time of the study, this inductive approach allowed the study to focus on capturing the perspectives of FHT staff and community paramedics and identifying areas they viewed as contributing to the program's success. This combination of deductive and inductive analysis allowed for an evaluation of the program that highlighted key factors that made it effective from the perspective of its implementers. Our approach to qualitative content analysis aligns with established methods, and our deductive analysis follows the same RE-AIM methods as other studies that have used this framework for qualitative evaluations. A codebook based on the RE-AIM framework was created by S.P. and distributed to the research team for input (Appendix B). The first two transcripts were group-coded by three research team members (L.K., S.P., S.T.). The next two transcripts were consensus-coded by two members of the team (S.P., S.T.), with any unresolved discrepancies resolved by a third coder (L.K.). The remaining transcripts were divided and double-coded (S.P., S.T.). The codes were consolidated, and the research team identified content and themes relevant to the RE-AIM framework domains. Key quotes were identified as evidence of dominant themes and reviewed with the broader research team (C.B., G.B., K.F., K.H., T.I., K.S., K.K.M.). Descriptive statistical analysis The de-identified patient data were analyzed in RStudio (S.P.), focusing on descriptive statistics, including mean age, gender proportions, and the prevalence of co-morbidities and referral types. Ethics approval Ethical approval for the study was granted by the Bruyère Health Research Ethics Board (M16-23-023).
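The descriptive summary described above is straightforward to compute; a minimal R sketch, assuming a de-identified data frame with one row per enrolled patient and hypothetical column names:

library(dplyr)

# Mean age, sex proportions, and average comorbidity burden per patient
patients |>
  summarise(n = n(),
            mean_age = mean(age, na.rm = TRUE),
            pct_female = 100 * mean(sex == "F", na.rm = TRUE),
            mean_top10_comorbid = mean(n_top10_comorbid, na.rm = TRUE))

# Prevalence of each referral type
patients |> count(referral_type) |> mutate(pct = 100 * n / sum(n))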
Study population We interviewed 12 participants: the community paramedicine team (n = 4), which includes three community paramedics and their patient care coordinator, and other staff from the FHT working in adjacent positions (n = 8), including physicians, nurse practitioners, allied health professionals (AHPs), and the program director. Program description The program consisted of two full-time community paramedics, a patient care coordinator who was a paramedic on modified duties, and a clinical consultant, a physician who provided guidance for the program's medical directives. These paramedics visited the homes of patients on the FHT rosters to conduct clinical assessments and deliver treatments where possible. The paramedics managed around 100 patients (50 each) in coordination with the FHT clinic staff, conducting 3-5 home visits daily, each lasting 45-75 minutes. In addition to these visits, the paramedics and their patient care coordinator performed phone check-ins and triaged incoming patient calls as part of their daily responsibilities. About 70% of their patients were from the FHT roster, while the remaining 30% were referred by hospitals or home care. The program was funded by Ontario Health and a local hospital, with costs of approximately $100,000 per paramedic. The FHT was not responsible for recruiting community paramedics, as these staff were hired from an external paramedicine service and included both Primary Care Paramedics (PCPs) and Advanced Care Paramedics (ACPs). PCPs provide essential emergency medical care, including basic life support and patient stabilization, while ACPs have additional training in advanced medical procedures. To support both certification levels, the FHT's training program was designed to accommodate their varying skill sets, offering supplementary training as needed. Once selected to be part of the FHT's embedded community paramedic program, staff were trained by the lead community paramedic. Training included shadowing a community paramedic and FHT staff, e-learning, and workshops. This continued until the hired paramedic felt equipped to conduct home visits independently.
Reach Target population The initial criteria for recruiting patients into the community paramedic program were multiple chronic diseases, such as congestive heart failure, chronic obstructive pulmonary disease, and Parkinson's disease, as well as mental health conditions (e.g., dementia, bipolar disorder). They also included patients with limited social support and those with higher health care usage. These criteria established an initial pilot group of 155 patients for the program, identified through the EMR. As understanding of the program's service criteria grew, the referral process shifted to an ongoing basis in which physicians referred patients with complex health issues. Occasionally, patients would request a referral to the paramedicine program themselves after hearing about the program from other patients. Table describes the baseline characteristics of the community paramedicine program's participants (n = 335) from 2014 to 2022. Due to limited data availability, the table does not represent the 30% (n = 100-150) of external patients the program serves. Based on the clinic staff's responses and the demographic profiles of the paramedics' patients, the program was recruiting and serving the FHT's most complex and high-needs patients. These patients had an average age of 78 and typically presented with 2.7 of the top 10 prioritized comorbidities from the FHT, with dementia and mental health diagnoses being the most common (Table ). Mitigating fear The community paramedic team and FHT physicians shared that the primary barrier to patient recruitment into the community paramedicine program was patients' fear that accepting care from the paramedicine team could lead to an undesired placement in long-term care. These concerns were primarily addressed through initial phone conversations and first visits with the community paramedic, during which the program's goals and patient concerns were discussed. This strategy fostered rapport between patients and paramedics, as patients began to see these visits as supportive measures that enabled them to age in their homes longer. And it's going to take a few visits to figure it out because that first visit or two, they're doing to us what they do to the physicians. They're just making everything sound wonderful. And they're often really suspicious when the doctors send a paramedic into their home. They think that we're there to get them placed into long-term care or senior living. That's huge for these people. So, to really allay those fears early on, that we're just there to support them at home is really important. - Community Paramedic Team 2 Effectiveness Home visits offer deeper insights into patients' health circumstances A recurring theme was that in-home visits facilitate a deeper understanding of each patient's health situation. These visits highlighted aspects such as medication adherence, dietary practices, home safety, and hygiene, which are often difficult to assess in a clinical setting but contribute to the success of treatment plans and health outcomes. I think it's a lot of question asking. But it's also, visual cues of looking at their environments, looking at how things are maintained. If you find ways to check their insulin in the fridge, look at sort of their situation with respect to food, look at all the different mobility aids they have.
- Community Paramedic Team 4 For physicians who had uncertainties regarding a patient's living conditions or suspected that an element of their health was unaccounted for, paramedic home visits provided an opportunity to gain a more holistic view of something that can be masked or missed in a clinic setting. FHT staff have noted that patients with cognitive impairment or dementia can have difficulty accurately conveying their health needs, and these visits allow for a better understanding of their care requirements. It's been very useful to have someone go into the home to sort of just see how safe they are at home [...]. Also, medication-wise it can be useful to have because they can kind of go over the pill bottles and see if the patient seems to be taking them or not, and if they seem to have an organized system for it. - Physician 2 Moreover, by scanning and engaging with patients in their homes, community paramedics can better identify and address potential health risks, as they are often the first to detect these issues. "It may be that the paramedic is the first person to detect that things are not going all that well cognitively. Detection and assessment are probably one thing that they can do" (Physician 1). Patient care coordination with physicians Most of the patients on the current community paramedicine roster were referred by their family physicians. This emphasized one of the guiding aims of the community paramedicine program: fostering collaboration between community paramedics and physicians to offer additional and more tailored care. "I think that one of the things that's really valuable is we are the eyes and ears for the physicians at home" (Community Paramedic Team 4). Additionally, the program's integrated nature fostered seamless and quick exchanges between physicians and community paramedics, either through their EMR channels or in the office, enabling more coordinated and effective care. "We can see everything. We can see their charting. We can see when they have an upcoming appointment with the doctor. So usually if they have an upcoming appointment with the doctor within this month, I would not schedule a paramedic to go out and see them, because it's like duplicating the work." (Community Paramedic Team 3). This contrasted with communication gaps often observed with external health services. I don't hear from [Personal Support Worker] PSWs. They don't call me and say, did you know that he's had a fall, and he's got a sore on his leg, and he's been eating rotting bananas for a week? Like, I don't get that feedback from the PSW. Whereas with the paramedic, I get frequent updates if they're not doing well. - Physician 3 Finally, community paramedics played a crucial role in advocating for patients on their roster, helping them obtain resources or additional care from their FHT team and/or external health and social care services. "I'm just here to support you [patient], to stay in your home and stay safe any way I can do that and help communicate back to the physician because maybe you can't get to the clinic very easily" (Community Paramedic Team 2). Patient care coordination with nurses and allied health professionals (AHPs) An embedded community paramedicine program in an FHT offered access to a diverse pool of expertise, including nurse practitioners, pharmacists, and social workers.
“Often, it's just so nice to be able to walk upstairs and just pick the brain of any one of these experts in their fields when we're really hitting a brick wall with people” (Community Paramedic Team 2). In the pilot phase of the program, nurses and AHPs collaborated with the community paramedics to increase their scope of practice to better serve complex patients at home. This included performing medication reconciliation, providing wound care, conducting blood work analysis, and facilitating cognitive assessments at the patient’s home. Following onboarding, community paramedics have collaborated with nurses and AHPs as needed. For example, nurses and AHPs could request that the community paramedic conduct follow-ups or check-in on specific concerns for shared patients. “ I might say, ‘While you're there, can you check on their insulin injections? Can you teach them how to inject Ozempic?’” (Clinic Staff 3). Additionally, community paramedics helped persuade hesitant patients of the benefits of accepting an AHP referral. Increased collaboration was occasionally hindered by two main challenges: limited overlap in patient rosters, with paramedics often focusing on a smaller subset of highly complex cases, and operational limitations inherent in a rural FHT context, where AHP roles are often limited and subject to frequent turnover. Patient care coordination with social services It was repeatedly mentioned that community paramedics were well-versed in community resources, often being the first to identify and facilitate referrals with physician approval. “ They are very knowledgeable, or they have become very knowledgeable, about the other services available in the community” (Physician 1). However, the FHT and community paramedic team encountered significant challenges when collaborating with social services, primarily due to issues with service accessibility, reliability and communication. These challenges were particularly acute in rural settings where services are sparse, resulting in extended wait times or complete unavailability of services. Additionally, such issues disproportionately impacted patients who have minimal social support or transportation options. Especially with [our] area, even things like transport have been issues in the past, to get to day programs, to get to food banks, to get into for appointments. So I think a lot of the frustration is around what resources are available. And, even if they are available, are they serviced and what are the wait times like - Community Paramedic Team 4 As a result, community paramedics filled this role to the best of their ability while patients waited for services . It would be easier if the social services were able to step up more, right? I think we're gap fillers right now, so a lot of the work we do is not even what we set out to do, but we're just trying to keep these patients afloat until the right care kicks in. - Community Paramedic Team 2 Patient care coordination with external health services Community paramedics worked in conjunction with various other external health services, including geriatric teams, hospitals, and home care services. This collaboration was especially valuable for patients recently discharged from the hospital who needed temporary support to ensure a safe transition back to their homes. Knowing that community paramedics can provide follow-up care enables hospitals to reduce the length of patient stays without compromising patient safety. 
Community paramedics also aided in increasing capacity in the home care sector, as they worked in tandem with personal support workers (PSWs) and care coordinators to offer more frequent monitoring of patients.

So I feel like I increase their [home care] capacity quite a bit, because some people when they are in crisis, I can see them every 2-3 weeks or every two months depending what they need. Which the care coordinator will never be able to do, their caseload is just too big. - Community Paramedic Team 2

However, community paramedics in rural settings struggled with long wait times for allied health referrals such as occupational therapy, PSW, and physiotherapy. This often resulted in an increased workload for the paramedics, requiring more frequent visits and making it difficult to deliver care promptly.

Adoption

Community paramedicine – staff level

Community paramedics embedded in the FHT valued the opportunity to engage with patients as part of the clinic, as it fostered strong and long-lasting patient-paramedic relationships. “I like the idea of working with not just the patient themselves, but their entire care team, their family members, that sort of thing, getting rapport. Having time on scene is huge” (Community Paramedic Team 2). The community paramedic role offered a more multidisciplinary and preventive approach to patient care than a traditional paramedic role. Integration into the FHT empowered community paramedics to broaden their scope of practice, allowing them to provide care that traditionally falls beyond the purview of paramedicine. One of the main reasons they preferred an embedded practice setting over operating in a standalone community paramedicine clinic was that being part of the FHT guaranteed easy collaboration with physicians. “You have the buy-in from the docs, so that is so crucial in doing care planning” (Community Paramedic Team 1). Moreover, the embedded community paramedicine model served to dismantle the traditional barriers that often segregate and silo health care services. This was facilitated through direct communication in the clinic, via the EMR system, or through shared access to patient charts. It was particularly beneficial during initial home visits because paramedics had access to patient histories, enabling more informed care planning. However, it took the program several years to achieve its current state of care integration due to unsecured funding in its pilot phase. At the time of this evaluation, a significant challenge remained: the community paramedic role was not formally recognized as a funded position within FHTs at a systems level. Consequently, community paramedics still relied on alternative funding sources from different financial streams within Ontario Health, distinct from typical FHT positions. Furthermore, they had to report to an external paramedic service, which complicated their scope of practice and duties. For example, the paramedics began and ended their day at the external paramedic service and borrowed its vehicles.

I always wear multiple hats. It's difficult because when I work in the clinic, I can take orders from the doctors here, but if I'm driving to the clinic in a paramedic vehicle, then I'm also liable. Let's say if there's an accident in front of me, I'm expected to stop and provide care but I'm under a different medical director. - Community Paramedic Team 1

Family Health Team on community paramedics – staff level & setting level

Clinic staff expressed appreciation for having access to embedded community paramedics who conducted home visits, particularly for patients who had difficulty attending clinic appointments. This service not only enriched patient care by ensuring continuity of care but also offered staff reassurance about patients' ability to manage their health in their own environment. “Having eyes and ears in that patient's direct environment can allow us, as health care providers, to make better decisions about their health” (Clinic Staff 1). Moreover, the lead community paramedic who joined the pilot program in 2014 has stayed with the team, building long-term relationships and trust with the staff. The FHT staff expressed that all paramedic team members consistently demonstrated a willingness to engage and offer additional support when requested, showing their commitment to the team and the broader healthcare mission. “I've never had any issues communicating with any of my [community paramedic] colleagues. I've never had a hard time getting ahold of them. They're always really onboard and willing to help” (Physician 3).

Implementation

Staff training

All paramedics in the program expressed confidence in their training and highlighted that the program was designed to give them the flexibility to expand their scope of practice to better meet patient needs, with support from their clinical consultant and FHT director. “We just keep adding training as we learn more about our patients and their needs, so we're really open to whatever makes sense for that population” (Community Paramedic Team 2).

Internal barriers and adaptations

The key to sustaining the embedded community paramedicine program following the first two years of the pilot phase was developing funding partnerships with the local hospital and Ontario Health. The team advocated for baseline funding by highlighting the impact of community paramedics in reducing hospital stay length for complex patients. However, while this funding has been critical, limitations remain as it does not fully support scaling up the program's services. A critical evolution of the program was granting community paramedics access to the FHT's EMR. This allowed them to consult and update patient charts directly, eliminating the need for a separate documentation process.

Without the record, you're really relying on those notes that they have from their previous visit[...] Before if there was something urgent that they had to do that with, they'd have to pick up the phone, call the admin person, and the admin person would have to go into the patient record share whatever they could. - Clinic Staff 2

Community paramedics continued to face challenges in patient engagement, particularly in conducting assessments for conditions such as cognitive impairments using tools like the Montreal Cognitive Assessment. Patient hesitancy was often rooted in fear of potential consequences, such as long-term care placement or loss of driving privileges. Additionally, this reluctance extended to accessing social services due to pride or perceived stigma. This could impact the paramedics' ability to provide appropriately tailored care. “There's sort of a stigma around asking for help. […] It's hard, sometimes, to have them agree to go down that route” (Community Paramedic Team 4).
[…] It’s hard, sometimes, to have them agree to go down that route ” (Community Paramedic Team 4). External barriers A significant external barrier that impacted the community paramedicine program's implementation capacity was its operation in a rural setting. The geographical spread of patients limited the number of visits community paramedics could conduct daily, with travel time consuming a considerable portion of their schedule. This challenge was partially mitigated by phone check-ins and optimizing visit routes based on geography. “Some days, I feel that I really didn’t see that many people, but I drove 250 km” (Community Paramedic Team 1). Maintenance Program’ sustainability The sustainability of the community paramedicine program was closely tied to its alignment with the FHT’s mission. Assessing the program's cost-saving impact was challenging due to the paramedics' focus on upstream preventive care. However, it should be noted that any potential cost-savings would likely benefit hospitals and emergency services, not the FHT. While the FHT's increased investment in primary care may not directly reflect cost savings for them, it could reduce the burden on downstream hospital services. “ It’s theoretical cost-saving. It's an improved patient condition, it's an improved patient experience, and probably an improved provider experience, but maybe in the end doesn't actually save any money. But it may use our money more wisely ” (Clinic Staff 5). The FHT placed significant value on the program's ability to expand service capacity, thereby better serving their most vulnerable populations and reducing the need for in-person clinic visits. I see it as two pieces to it, somebody who needs to be seen but won't come in, or somebody who needs to be seen and would come in, but maybe not frequently enough to identify that they're going to end up in the hospital if they're not otherwise checked in on. - Clinic Staff 5 Recommendation for growth FHT clinic staff suggested expanding the scope of practice for community paramedics to allow more assessments to be conducted directly in patients' homes, thereby improving accessibility and efficiency of care. To support the sustainability and scalability of an embedded community paramedicine model across the province, it would be important to secure dedicated funding for this role in the primary care setting. “ To make it easy, community paramedicine should be its own thing in whatever area. Whether I’m at headquarters or at the clinic or I’m somewhere else. I should be able to do the same things” (Community Paramedic Team 1). We interviewed 12 participants: the community paramedicine team (n=4), which includes three community paramedics and their patient care coordinator, and other staff from the FHT working in adjacent positions ( n =8), including physicians, nurse practitioners, allied health professionals (AHPs), and the program director. The program consisted of two full-time community paramedics, a patient care coordinator who was a paramedic on modified duties, and a clinical consultant who was a physician who provided guidance for the program’s medical directives. These paramedics visited patients' homes on the FHT rosters to conduct clinical assessments and deliver treatments where possible. The paramedics managed around 100 patients (50 each) in coordination with the FHT clinic staff, conducting 3-5 home visits daily, each lasting 45-75 minutes. 
In addition to these visits, the paramedics and their patient care coordinator performed phone check-ins and triaged incoming patient calls as part of their daily responsibilities. About 70% of their patients were from the FHT roster, while the remaining 30% were referred by hospitals or home care. The program was funded by Ontario Health and a local hospital, at a cost of $100,000 per paramedic. The FHT was not responsible for recruiting community paramedics, as these staff were hired from an external paramedicine service and included both Primary Care Paramedics (PCPs) and Advanced Care Paramedics (ACPs). PCPs provide essential emergency medical care, including basic life support and patient stabilization, while ACPs have additional training in advanced medical procedures. To support both certification levels, the FHT's training program was designed to accommodate their varying skill sets, offering supplementary training as needed. Once selected to be part of the FHT's embedded community paramedic program, staff were trained by the lead community paramedic. Training included shadowing a community paramedic and FHT staff, e-learning, and workshops. This continued until the hired paramedic felt equipped to conduct home visits independently.

Target population

The program initially recruited patients with multiple chronic diseases, such as congestive heart failure, chronic obstructive pulmonary disease, and Parkinson's disease, as well as mental health conditions (e.g., dementia, bipolar disorder). The criteria also included patients with limited social support and those with higher health care usage. These criteria established an initial pilot group of 155 patients, identified through the EMR. As understanding of the program's service criteria grew, the referral process shifted to an ongoing basis in which physicians referred patients with complex health issues. Occasionally, patients would request a referral to the paramedicine program themselves after hearing about it from other patients. Table describes the baseline characteristics of the community paramedicine program's participants (n=335) from 2014 to 2022. Due to limited data availability, the table does not represent the 30% (n=100-150) of external patients the program serves. Based on the clinic staff's responses and the demographic profiles of the paramedics' patients, the program was recruiting and serving the FHT's most complex and high-needs patients. These patients had an average age of 78 and typically presented with 2.7 of the top 10 prioritized comorbidities from the FHT, with dementia and mental health diagnoses being the most common (Table ).

Mitigating fear

The community paramedic team and FHT physicians shared that the primary barrier to patient recruitment into the community paramedicine program was patients' fear that accepting care from the paramedicine team could lead to an undesired placement in long-term care. These concerns were primarily addressed through initial phone conversations and first visits with the community paramedic, during which the program's goals and patient concerns were discussed. This strategy fostered rapport between patients and paramedics, as patients began to see these visits as supportive measures that enabled them to age in their homes longer.

And it's going to take a few visits to figure it out because that first visit or two, they're doing to us what they do to the physicians. They're just making everything sound wonderful. And they're often really suspicious when the doctors send a paramedic into their home. They think that we're there to get them placed into long-term care or senior living. That's huge for these people. So, to really allay those fears early on, that we're just there to support them at home is really important. - Community Paramedic Team 2
Reach

The program served the FHT's most complex and high-needs patients, with an average age of 78 and typically presenting with 2.7 of the top 10 prioritized comorbidities identified by the FHT. Dementia and mental health diagnoses were the most common conditions among these patients. This patient profile aligned with other community paramedicine programs in Canada and globally, which focused on providing care for complex older populations with multiple morbidities and those in precarious housing situations. A systematic review demonstrated that community paramedics support older adults by aiding in chronic disease management, health assessment, and education, positively impacting the health of older patients. Although the program's rural base limits its reach and capacity, the rural setting also makes home visits especially important, particularly given how costly and time-consuming they can be for physicians and nurse practitioners to conduct. This research builds on existing literature demonstrating that community paramedicine programs in rural settings have the potential to significantly increase the accessibility of care in remote areas.

Effectiveness

From the perspective of FHT staff, community paramedics' ability to conduct home visits and assessments was instrumental in providing a holistic view of patients' health behaviours. This understanding allowed community paramedics, physicians, and AHPs to glean more accurate information, often unattainable or missed in clinical settings.
This corresponds with findings from previous research on the importance of home visits conducted by physicians or nurses in providing person-centred and context-driven care. Furthermore, these home visits enabled paramedics to conduct assessments, administer vaccines, and collect blood and urine samples at the patient's home. This reduced physical strain for patients with decreased mobility and transportation challenges, who would otherwise struggle to visit the clinic. A systematic review similarly highlights the important role of home care in supporting older adults aging in place. The review demonstrated that home care can reduce hospitalization by better managing chronic conditions, reducing emergency visits, and minimizing the length of hospital stays. This underscored the value of embedding community paramedic programs to improve primary care capacity and provide care at home for older adults.

Access and reliability issues in social and health services remain a major challenge in many rural areas in Canada. Patients and primary care staff encounter shortages in mental health services and social services, which is compounded by labour shortages in primary care and home care staff, such as PSWs, occupational therapists, and physiotherapists. This stresses the value of leveraging other forms of primary care to increase the capacity for health and social services in these rural environments. Expanding the role of paramedics to support primary health care can provide additional capacity to areas with limited service availability, helping patients who are waiting for care.

Adoption

As traditional paramedics increasingly seek to expand their roles beyond emergency response, there is a growing desire for their role to include preventive care and health promotion. Community paramedicine has emerged as an avenue for paramedics to be more involved in patient care. Embedding a community paramedicine program into an FHT can increase learning opportunities for paramedics while also facilitating a multidisciplinary approach to health care.

Implementation

Since the program was small and training was facilitated by the same paramedic, implementation of their duties was consistent across the paramedics who had been part of the program to date. This process will likely need to be adapted to work in a larger clinical setting. Embedded within an FHT, community paramedics benefited from increased access to patients' charts and notes and expanded their scope of practice with training from FHT physicians, nurses, and AHPs. This integration represented a key advantage of the embedded model over separate community paramedicine programs, which lack rostered physicians for their patients, have limited access to patient histories, and offer fewer opportunities for collaboration with other healthcare professionals. Additionally, although our study was unable to investigate the potential cost-saving impact of the program on avoidable emergency department visits and hospital admissions, prior research has demonstrated that primary care community paramedic clinics do alleviate the burden and costs on hospital services. Future research could explore these cost-saving effects in greater detail.

Maintenance and future directions

Integrating community paramedics into a primary health care team exemplifies a more holistic care model, reflecting a shift towards preventive and comprehensive health care strategies that align with Ontario's Ministry of Health and Long-Term Care priorities.
As part of its plan for connected and convenient care, this approach supports the goals of improving access to care at home and increasing the number of primary care providers and the expertise available to family physicians and nurse practitioners. Our findings demonstrate the potential for community paramedic roles to integrate seamlessly into the primary care model, increasing their scope of practice while supporting the workload of physicians. With Ontario and Canada overall facing a shortage of family physicians and ongoing staffing issues, expanding the role of traditional paramedics presents an opportunity to forge beneficial relationships with primary care providers. As shown in our findings, embedded community paramedics can help physicians manage large caseloads while also improving health care access for rural and high-needs patients. Moreover, this model creates opportunities to better support older adults as they age in place. This additional support is especially needed in rural areas, where travel time and factors such as mobility issues, cognitive decline, and dementia may prevent regular clinic visits.

Limitations

This evaluation has a few limitations. First, the evaluation did not include interviews with patients using the community paramedicine program, thereby missing the patient perspective. As a result, the RE-AIM evaluation does not fully capture the program's effectiveness from the patients' point of view. While provider anecdotes suggest improvements in patient care and experience, such as better care coordination and access, this evidence is indirect and insufficient to speak to patient experiences or outcomes. The study instead focused on how the program supports primary care teams and physicians by increasing primary care service capacity, informing care strategies, and improving access for high-risk, rural older adults. Future research should include patient perspectives to better assess the program's impact on patient outcomes and experiences. Second, the findings have limited generalizability due to the program's small scale and rural setting. As a novel program in Ontario operating within a specific context, the results and the role of community paramedics may not fully translate to larger or urban healthcare settings with different patient populations or resource availability. Our co-developed blueprint (Supplementary Material 2) aims to improve generalizability by providing initial steps for different primary care practices to consider when embedding community paramedics into their practice.
This corresponds with findings from previous research on the importance of home visits conducted by physicians or nurses in providing person-centred and context-driven care. Furthermore, these home visits enabled paramedics to conduct assessments, administer vaccines, and collect blood and urine samples at the patient’s home. This reduced physical strain for patients with decreased mobility and transportation challenges, who would otherwise struggle to visit the clinic. A systematic review similarly highlights the important role of home care in supporting older adults aging in place. The review demonstrated that home care can reduce hospitalization by better managing chronic conditions, reducing emergency visits, and minimizing the length of hospital stays. These findings underscore the value of embedding community paramedic programs to improve primary care capacity and provide care at home for older adults.
As the population in Ontario and Canada overall ages, the demand for accessible and effective health care is increasing. Profiling this innovative FHT-embedded community paramedicine model offers valuable insights not only for other FHTs and primary care settings in Ontario but also for international health systems seeking to manage the needs of complex patient populations. By collaborating with community paramedics, health care systems can alleviate the strain on emergency departments, hospitals, and primary care providers while improving patient access to timely care. This model could be adapted to meet the needs of aging populations in other regions with strained emergency services and limited primary care access. The next steps for this research involve disseminating a “how to” blueprint (Supplementary Material 2) for embedding the community paramedic role in primary care across various FHT and primary care settings throughout Ontario and Canada.
Supplementary Material 1. Supplementary Material 2.
Molecular diagnostics in the management of chronic hepatitis C: key considerations in the era of new antiviral therapies
2bb0b9fa-12ec-4c2b-a8ce-acd25d8e21c3
4160902
Pathology[mh]
Chronic hepatitis C virus (HCV) infection (CHC) is a global public-health problem, with approximately 170 million persons chronically infected who are at an increased risk of morbidity and mortality due to liver cirrhosis, hepatocellular carcinoma (HCC), and the extra-hepatic complications that can develop. The incidence of cirrhosis and HCC is projected to increase dramatically over the next decade in certain populations, such as the U.S. "baby boomer" birth cohort. With the development of potent, all-oral, interferon-free antiviral agents with fewer adverse effects, screening individuals and successfully treating at-risk CHC patients becomes both more important and more feasible. The CDC has previously recommended routine HCV screening for persons most likely infected with HCV based on the known epidemiologic risk factors and has published guidelines for laboratory testing using HCV antibody and HCV RNA assays. In 2012, the CDC amended testing recommendations to include one-time HCV testing for all persons born between 1945 and 1965 ("baby boomers") in the U.S. Screening for hepatitis C starts with anti-HCV antibody. The OraQuick HCV Rapid Antibody Test (OraSure Technologies) is a rapid assay for the presumptive detection of HCV antibody in finger stick capillary blood and venipuncture whole blood. In the U.S., this test is approved for use in doctor's offices or clinics that are able to use laboratory-based IVD tests. Rapid tests are also available in Europe as well as other parts of the world. The Recombinant Immunoblot Assay (RIBA) HCV 3.0 Strip Immunoblot Assay (Novartis Vaccines and Diagnostics), which was previously recommended for supplemental testing of blood samples after initial HCV antibody testing, is no longer available or recommended. In 2013, the recommendations for supplementary testing were updated, whereby the diagnosis of a current HCV infection (a positive antibody test) should be confirmed using a NAT test (Figure ). This is because an anti-HCV antibody test result can be positive in patients who were previously infected with HCV but have spontaneously cleared the infection and are no longer viremic. HCV RNA tests can detect the presence of an active HCV infection. In clinical practice guidelines, using a sensitive molecular method (LLOD <15 IU/mL) is recommended for the diagnosis of acute hepatitis and CHC. It is important to note, however, that no real-time PCR HCV RNA viral load monitoring test with a diagnostic intended-use claim supporting these recommendations has been reviewed or approved by any regulatory agency, including the FDA. Measurement of HCV RNA is essential for documenting an active infection at baseline, during treatment, at the end of treatment, and for detecting relapse after stopping antiviral therapy (e.g., 12 or 24 weeks later). Absence of viral replication as measured in the bloodstream 3 or 6 months after an antiviral treatment regimen indicates the patient is cured. Current molecular methods A variety of molecular methods have been used to manage CHC patients. The majority of tests used by routine clinical laboratories are based on real-time PCR technologies, which quantify HCV RNA during the exponential phase of amplification, with greater sensitivity and a broader linear dynamic range (~10 to 10⁸ IU/mL). Several commercial real-time PCR HCV RNA tests are available (Table ).
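To make the quantification step concrete, the sketch below shows the standard-curve arithmetic that underlies real-time PCR viral load reporting in general; the slope and intercept are hypothetical calibration values chosen for illustration, not parameters of any of the assays described here.

```python
def titer_from_ct(ct: float, slope: float = -3.32, intercept: float = 38.0) -> float:
    """Convert a threshold cycle (Ct) to an HCV RNA titer in IU/mL.

    Calibrators define a linear relationship during exponential
    amplification: Ct = slope * log10(titer) + intercept. A slope of
    -3.32 corresponds to ~100% amplification efficiency (one doubling
    per cycle). Values here are hypothetical, for illustration only.
    """
    return 10 ** ((ct - intercept) / slope)

# Late crossing (high Ct) means a low titer; early crossing, a high one.
for ct in (35.0, 28.4, 21.7, 15.1):
    print(f"Ct {ct:>4.1f} -> ~{titer_from_ct(ct):,.0f} IU/mL")
```

The several orders of magnitude spanned by these examples are what the linear dynamic range quoted above refers to.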
The COBAS® AmpliPrep/COBAS® TaqMan® HCV Quantitative and Qualitative Test, version 2.0 (TaqMan® HCV Test, v2.0) (Roche Molecular Systems) uses magnetic silica bead-based automated RNA extraction on the COBAS AmpliPrep platform followed by HCV target-specific (5′UTR) amplification and detection performed on the COBAS TaqMan thermal cycler. The assay is approved as an FDA-IVD and CE-IVD quantitative test and as a CE-IVD qualitative test. Both the quantitative and qualitative tests use a dual-probe approach in which two fluorescently labeled hydrolysis probes simultaneously detect amplicon, providing broader detection and quantification of rare genotype 4 sequences. The HCV RNA titer is calculated using a competitive quantitative standard, obviating the need for the laboratory to perform calibrations. Reagents are stored at 2–8°C. A manual version, the COBAS® TaqMan® Test v2.0 for use with the High Pure System (the HPS-TaqMan® HCV Test, v2.0) (Roche Molecular Systems), which instead uses a column-based manual extraction, is also available. This test has been predominantly used in the clinical trials for currently approved DAA-interferon-containing regimens. The Abbott RealTime HCV assay (Abbott Molecular) uses automated, magnetic particle-based nucleic acid extraction on the m2000sp platform followed by manual sealing of the reaction plate to prepare it for HCV target-specific amplification and detection on the m2000rt platform. To detect the HCV RNA target, a DNA probe with a covalently linked fluorescent moiety and a covalently linked quenching moiety is used. Since a noncompetitive internal control (derived from a pumpkin gene) is used, the laboratory is required to perform lot-specific calibrations. Reagents must be shipped and stored frozen. The Versant HCV RNA 1.0 test (Siemens Healthcare) is a real-time PCR assay that uses magnetic silica bead-based automated RNA extraction followed by automated amplification of the HCV genome and detection on the Versant Kinetic PCR (kPCR) Molecular System platform. This test replaces the quantitative, branched DNA (bDNA)-based signal amplification test as well as the qualitative TMA-based test. The Artus Hepatitis C QS-RGQ assay is a real-time PCR assay that uses magnetic particle-based automated RNA extraction on the QIAsymphony SP platform (Qiagen) followed by amplification of the HCV genome and detection on the Rotor-Gene Q platform. Other molecular methods used in the management of CHC patients include genotyping tests (for HCV genotypes 1-6), which help determine the type and duration of treatment as well as predict treatment outcomes. Currently, HCV genotyping tests use direct DNA sequencing (e.g., the TRUGENE® HCV Genotyping Assay, Siemens, Erlangen, Germany), in which genotype and subtype are characterized by bi-directional sequencing with two fluorescently labeled DNA primers, or a line probe assay (INNO-LiPA HCV II Genotype Test, Innogenetics, Ghent, Belgium), which simultaneously detects the 5′UTR and Core regions using a linear probe array to improve genotype 1 characterization. Several real-time PCR-based non-IVD tests (e.g., GenMark) are used, and the Abbott HCV Genotype Test has more recently become available as the only FDA-approved test.
In a recent report, the Abbott HCV Genotype Test (Abbott Molecular) was found to be useful for characterizing genotypes 2-6 but may require a confirmatory method for correct genotype 1 characterization. Non-molecular methods HCV core antigen serology tests have been proposed for use in either on-treatment monitoring or for assessing SVR, but this application may miss approximately half of the samples <2,000 IU/mL by PCR and may only be reliable at results >6,000 IU/mL. Therefore, HCV core antigen may not be suitable for detecting an active HCV infection. Unlike HCV core antigen tests, the clinical utility of using HCV RNA PCR-based tests in managing CHC patients is well established.
After a decade of using PEGα/RBV to treat CHC patients, boceprevir (VICTRELIS®, Merck & Co., Inc., Whitehouse Station, NJ) and telaprevir (INCIVEK®, Vertex Pharmaceuticals Incorporated, Cambridge, MA), NS3/4A protease inhibitors co-administered with PEGα/RBV, were approved for HCV genotype 1-infected patients in 2011 after demonstrating significant improvements in SVR rates. At the end of 2013, two more drugs were approved, demonstrating even greater improvements in SVR rates: simeprevir (OLYSIO™, Janssen Therapeutics, Titusville, NJ), an NS3/4A protease inhibitor, and sofosbuvir (SOVALDI™, Gilead Sciences, Inc., Foster City, CA), a potent HCV nucleotide analog NS5B polymerase inhibitor. Simeprevir (plus PEGα/RBV) was approved for HCV genotype 1-infected subjects with compensated liver disease (including cirrhosis), along with a screening requirement for patients with HCV genotype 1a infections for the presence of the NS3 Q80K polymorphism (in which case this therapy is not recommended). Sofosbuvir (combined with RBV) represents the first all-oral, interferon-free DAA-containing regimen, approved for treating patients with HCV genotype 2 or 3 infections. Sofosbuvir (plus RBV) has a shorter treatment duration for genotype 2 (12 weeks) than for genotype 3 (24 weeks). HCV genotype 1 or 4 infections can also be treated with sofosbuvir but require coadministration of PEGα/RBV for 12 weeks.
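The genotype-dependent choices just described reduce to a small decision table. The sketch below encodes only the 2013-era rules stated in this paragraph (sofosbuvir durations by genotype, PEGα/RBV coadministration for genotypes 1 and 4, and the NS3 Q80K caveat for simeprevir in genotype 1a); it is an illustration of the logic, not clinical guidance.

```python
def sofosbuvir_regimen(genotype: str) -> tuple[str, int]:
    """Return (regimen, duration in weeks) for a given HCV genotype,
    per the 2013-era approvals described above. Illustrative only."""
    rules = {
        "1": ("sofosbuvir + PEG-IFN/RBV", 12),
        "2": ("sofosbuvir + RBV", 12),
        "3": ("sofosbuvir + RBV", 24),
        "4": ("sofosbuvir + PEG-IFN/RBV", 12),
    }
    major = genotype[0]  # "1a" -> "1"
    if major not in rules:
        raise ValueError(f"no rule encoded for genotype {genotype}")
    return rules[major]

def simeprevir_recommended(genotype: str, q80k_present: bool) -> bool:
    """Simeprevir (plus PEG-IFN/RBV) was approved for genotype 1, with
    Q80K screening required for genotype 1a; the regimen is not
    recommended when the NS3 Q80K polymorphism is present."""
    if not genotype.startswith("1"):
        return False
    return not (genotype == "1a" and q80k_present)

print(sofosbuvir_regimen("3"))             # ('sofosbuvir + RBV', 24)
print(simeprevir_recommended("1a", True))  # False
```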
The AASLD/IDSA recommendations for testing, managing, and treating HCV were updated in 2014 in response to the changing landscape of HCV treatment options. HCV RNA Test results & interpretations A definition and description of terms used to describe HCV RNA levels is provided (Table ), and HCV RNA VL results and interpretations are described (Table ). Of note, if HCV RNA is detected by PCR at a level below the linear range of the test, the result is reported by the software as "HCV RNA detected, less than the Lower Limit of Quantitation (LLOQ)", even if the actual VL titer is below the sensitivity or Limit of Detection (LOD) of the test. Being able to "detect" HCV RNA that is below the LOD of the test may seem counterintuitive, since it is typically presumed that if the actual HCV RNA titer is below the LOD then there is nothing there to "detect". However, the LOD is defined and calculated by the test's ability to detect HCV RNA ≥95% of the time. This means that even at HCV RNA titers that are half the LOD, the PCR amplification may still detect HCV RNA roughly half of the time, in which case the result will be reported as "HCV RNA detected, <LLOQ".
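Why sub-LOD samples are still sometimes detected follows from the statistics of sampling a handful of template molecules. Below is a minimal sketch under an idealized Poisson model, in which a 95% LOD corresponds to an average of about three amplifiable copies per reaction; under this model, detection probability at half the LOD is still roughly 78%, and the additional variability of real assays flattens the curve further toward the figure quoted above. The 15 IU/mL LOD is a hypothetical value chosen for illustration.

```python
import math

LOD_IU_PER_ML = 15.0  # hypothetical 95% LOD, for illustration

def p_detect(titer_iu_per_ml: float) -> float:
    """Probability of sampling >=1 template molecule, assuming the
    copy number per reaction is Poisson distributed. Calibrated so
    detection is exactly 95% at the LOD: 1 - exp(-lam) = 0.95 gives
    lam ~= 3.0 copies per reaction at the LOD."""
    lam_at_lod = -math.log(1 - 0.95)  # ~3.0
    lam = lam_at_lod * titer_iu_per_ml / LOD_IU_PER_ML
    return 1.0 - math.exp(-lam)

for frac in (1.0, 0.5, 0.25, 0.1):
    titer = frac * LOD_IU_PER_ML
    print(f"{titer:5.2f} IU/mL ({frac:4.0%} of LOD): "
          f"detected {p_detect(titer):.0%} of the time")
```

The point of the sketch is qualitative: detection probability falls off gradually below the LOD rather than dropping to zero.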
Viral kinetics and RGT In patients treated with PEGα/RBV, the best predictor of an SVR was shown to be a rapid on-treatment HCV RNA decline to "undetectable" early in therapy. To this end, a rapid virological response (RVR), or "undetectable" (e.g., <50 IU/mL) by 4 weeks of PEGα/RBV, has been used to determine eligibility for shortening therapy (e.g., 24 weeks versus 48 weeks, genotype 1). New definitions for an "undetectable" HCV RNA VL While the goal of treating CHC patients is to eradicate the infection as measured by an "undetectable" HCV RNA result, "undetectable" has evolved alongside the treatment algorithm. For PEGα/RBV therapy, an "undetectable" result was any result <50 IU/mL (Table ). In contrast, for PEGα/RBV + boceprevir or telaprevir regimens, the term "undetectable" was defined as a "target not detected" result, which was required for patients to be eligible for shortened therapy; for SVR assessments, however, a "<25 IU/mL, HCV RNA detected" result was an acceptable endpoint. For the recently approved regimen containing simeprevir, a stopping-rule "cutoff" of 25 IU/mL is used at 4, 12, or 24 weeks, at which all therapies are discontinued if HCV RNA results are above this cutoff (Table ). For sofosbuvir, HCV RNA testing is recommended only after a fixed treatment duration, to assess SVR. Both regimens use "<25 IU/mL, HCV RNA detected" for defining "undetectable". In the real-world setting, patient compliance is likely to be lower than in the clinical trials. Therefore, it may be useful to investigate whether on-therapy HCV RNA VL "adherence monitoring" is worthwhile in patients suspected of noncompliance, especially when considering the high treatment costs. Given that the trials used a test with an LLOQ of 25 IU/mL, differences between tests' LLOQs are important. How should clinicians handle a quantifiable result of 22 IU/mL derived from a different test than the one used in the clinical trials (e.g., one with a lower LLOQ)? These are practical considerations that may cause uncertainty for clinicians. Using "target not detected" for shortening therapy With the introduction of boceprevir and telaprevir, new RGT rules were introduced, which led to considerable confusion in the terms used to define "undetectable" and when to apply this interpretation. These rules were based on a re-analysis of the boceprevir and telaprevir trial data published by the FDA, which concluded that an "HCV RNA detectable, <LLOQ" result predicted a significantly lower cure rate compared with an "undetectable" ("Target Not Detected") result. Based on this analysis, it was determined that a confirmed "detectable but below the LLOQ" HCV RNA result should not be considered equivalent to an "undetectable" ("Target Not Detected") HCV RNA result for the purposes of RGT. Therefore, a "Target Not Detected" result at both 4 and 12 weeks of PEGα/RBV + telaprevir therapy was required to shorten therapy (48 weeks to 24 or 36 weeks of PEGα/RBV). To further add complexity, stopping rules were also different for the boceprevir and telaprevir regimens (100 and 1,000 IU/mL, respectively). Differences between HCV RNA assays with DAA therapies Although all commonly used HCV RNA assays report results in standardized IU/mL, not all tests perform similarly. Several reports have demonstrated differences in how assays report results, particularly in detecting low amounts of HCV RNA. In these studies, concordance analyses have shown that differences in reporting results as "Target Not Detected" versus "HCV RNA detected, <LLOQ" have become apparent. This was particularly true in one study that compared results generated with the TaqMan® HCV Test, v2.0, used as part of a phase III clinical trial of simeprevir plus PEGα/RBV, against the Abbott RealTime HCV Test. Overall, there was good agreement between the two assays; however, a large number of samples (26%-35%) at week 4 of treatment had detectable HCV RNA levels (<LLOQ) with the Abbott RealTime assay that were "Target Not Detected" by the HPS-TaqMan® HCV Test, v2.0. These patients received shortened therapy based on the HPS-TaqMan® HCV Test, v2.0 TND result, and high SVR rates were achieved. Thus, if the Abbott RealTime assay results at week 4 of therapy had been used to determine treatment duration, these patients may have been over-treated by an additional 6 months. Since these DAA-containing triple therapies require HCV RNA to be TND at both weeks 4 and 12 in order to shorten therapy, differences between HCV RNA assays can affect key medical decisions, in this case resulting in a larger proportion of patients treated for longer durations (if the same cutoffs are used). It was therefore suggested that a cutoff of <12 IU/mL (detected) may be appropriate for the Abbott RealTime HCV Test. However, this cutoff has not yet been clinically validated, and further studies are needed. While boceprevir- and telaprevir-containing regimens have been replaced by more potent regimens, differences in the performance of HCV RNA tests may remain important, particularly if they are not clinically validated.
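Concordance analyses of this kind boil down to cross-tabulating paired categorical calls and inspecting the clinically meaningful discordant cell. Below is a minimal sketch; the category labels mirror those above, but the paired results are invented for illustration and are not data from the cited study.

```python
from collections import Counter

# Paired week-4 calls as (assay A, assay B); invented example data.
pairs = [
    ("TND", "TND"), ("TND", "detected <LLOQ"), ("TND", "detected <LLOQ"),
    ("detected <LLOQ", "detected <LLOQ"), ("quantifiable", "quantifiable"),
    ("TND", "TND"), ("TND", "detected <LLOQ"), ("quantifiable", "quantifiable"),
]

table = Counter(pairs)  # cross-tabulation of paired calls
agree = sum(n for (a, b), n in table.items() if a == b)
print(f"overall agreement: {agree}/{len(pairs)} = {agree / len(pairs):.0%}")

# The decision-relevant discordance: assay A says TND (eligible for
# shortened therapy under TND-based RGT) while assay B says
# detected <LLOQ (not eligible under the same rule).
a_tnd = sum(n for (a, _), n in table.items() if a == "TND")
disc = table[("TND", "detected <LLOQ")]
print(f"A=TND but B=detected <LLOQ: {disc}/{a_tnd} = {disc / a_tnd:.0%}")
```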
Therapies expected in the near future Faldaprevir, an HCV protease inhibitor administered once daily that is in late-stage phase 3 clinical trials, is being tested in combination with PEGα/RBV and in IFN-free regimens with other DAA agents. Sofosbuvir is also being investigated in combination with antiviral agents that target different virus proteins, such as daclatasvir and ledipasvir (nonstructural protein 5A [NS5A] inhibitors), with or without RBV. Preliminary results of phase 3 trials of the interferon-free sofosbuvir and ledipasvir combination regimen in patients with HCV genotype 1 infection have shown SVR12 rates of 93%-99% (Table ). AbbVie is evaluating an interferon-free 3-DAA combination regimen containing the ABT-450, ritonavir, and ABT-267 co-formulated tablet (ABT-450/r/ABT-267) and an ABT-333 tablet, administered with or without RBV. ABT-450 is an NS3/4A protease inhibitor; ABT-267 is an NS5A inhibitor; and ABT-333 is a non-nucleoside inhibitor of the NS5B polymerase. This 3-DAA regimen, with and without RBV, has reported SVR12 rates of 90%-100% in a phase 2 trial of patients infected with HCV genotype 1. Preliminary results of phase 3 trials of this 3-DAA regimen have shown very high SVR12 rates across different HCV genotype 1-infected patient populations (Table ).
Given the global burden of CHC and the advent of newer, more potent regimens with higher cure rates, increasing screening to identify at-risk CHC patients and linking them to care is ever more important. New guidelines that support screening are important, but linkage to care is an ongoing global challenge. With the first DAA-containing regimens, clinical decisions based on HCV RNA VL results (and new interpretations) created complexity for the laboratory and clinician. Further, tests were shown to perform differently in some DAA-containing regimens. Therefore, additional testing with each new DAA-containing regimen across the various commercially available HCV RNA tests is important. While the new interferon-free therapies have demonstrated greater efficacy, accurate HCV RNA quantification remains important. In addition, interferon-free regimens may have fixed durations, but on-therapy "adherence monitoring" may be helpful, particularly given the high cost of the new regimens. Therefore, for these and other reasons discussed here, measuring HCV RNA will likely continue to be important. CHC: chronic hepatitis C; CLIA: Clinical Laboratory Improvements Amendment; DAA: direct acting antivirals; HCC: hepatocellular carcinoma; HCV: hepatitis C virus; IU: international units; LLOD: lower limit of detection; LLOQ: lower limit of quantitation; LOD: limit of detection; NAT: nucleic acid amplification; NS5A: nonstructural protein 5A; PCR: polymerase chain reaction; PEGα/RBV: pegylated-interferon/ribavirin; RGT: response-guided therapy; RVR: rapid virological response; SVR: sustained virologic response; ULOQ: upper limit of quantitation; TMA: transcription mediated amplification. Bryan Cobb is an employee of Roche Molecular Systems Inc. Regis A. Vilchez is an employee of AbbVie. BC, GH and RV contributed to the data analysis and manuscript writing.
A scoping review of guidelines on caries management for children and young people to inform UK undergraduate core curriculum development
5c65fbe2-9740-4027-82e1-29f04b4039f4
11055302
Dental[mh]
Background Concepts on the management of caries have shifted significantly over the past twenty years, with the evidence base demonstrating the efficacy and/or effectiveness, and benefits, of minimally invasive dentistry (MID). Change in practice does not happen through the production of evidence, but through its implementation, and it can be difficult to change practitioners’ ways of working once these are established. One of the biggest opportunities to effect change in professional practice is through the undergraduate dental education of future clinicians. However, the change in evidence towards MID was not reflected in the findings of a recent national survey, which found wide disparity in the content and methods of teaching caries management in children and young people (CYP) to undergraduate dental students in the UK. There was wide variation in the paediatric caries management methods taught to the next generation of dental practitioners, with outdated practice still evident in teaching. This impacts on the appropriateness of care provided for CYP’s oral and dental health and emphasises the need for recommendations to support a national curriculum for the management of caries in CYP. Within this work we defined children and young people as those under the age of 18; this is the definition used by the UK government, the United Nations Convention on the Rights of the Child, and civil legislation in England and Wales. Rationale for the review There are a number of guidelines, produced by various organisations internationally, on the management of dental caries. Within paediatric dentistry, there are several international groups producing such guidelines for professional bodies, including the International-, American-, European- and British Associations for Paediatric Dentistry, and others within the UK alone, such as the Scottish Dental Clinical Effectiveness Programme and the Scottish Intercollegiate Guidelines Network. However, these are of variable quality and each is designed for a specific environment. High-quality guidelines that are UK-relevant should be informing education and practice within the UK and could be used for the development of recommendations for a core curriculum. To begin development of these, we aimed to map the recommendations from current guidelines through a scoping review. Scoping reviews can be defined as “a type of evidence synthesis that aims to systematically identify and map the breadth of evidence available on a particular topic, field, concept, or issue, often irrespective of source (i.e., primary research, reviews, non-empirical evidence) within or across particular contexts”. A preliminary search of MEDLINE, the Cochrane Database of Systematic Reviews and JBI Evidence Synthesis found no current or historic systematic reviews or scoping reviews on this topic. This evaluation of the current guidance on the management of caries in CYP forms part of a package of work to inform the development of a position statement and nationally agreed curricula on caries management teaching for undergraduate students within UK dental schools. This scoping review identifies and appraises the quality of clinical guidelines relevant to the management of caries in CYP and maps their recommendations. This will inform the development of a consensus on the curricula for teaching caries management to undergraduate Dentistry and Dental Hygiene and Therapy students at UK dental schools.
Aim and objectives The aim of the review was to evaluate current guidelines for caries management in CYP to inform undergraduate dental education in the UK. The specific objectives were to: Identify guidelines relevant to the management of caries in CYP; Appraise the quality of the guidelines using the AGREE II tool; Synthesise recommendations from relevant guidelines of acceptable quality to guide undergraduate teaching of caries management in CYP in the UK; and Identify gaps in the current guidelines regarding the management of caries in CYP.
The protocol for this scoping review was registered prospectively on 27/03/23 on Open Science Framework (10.17605/OSF.IO/SBHC3). The review was reported according to PRISMA-ScR (see supplementary material). Eligibility Inclusion criteria To be included, the publication must: Be a clinical guideline; Contain information on the management of dental caries in CYP; Have been developed using a structured guideline methodology; Provide recommendations on the management of dental caries in children and/or young people; Be endorsed or created by a recognised dental organisation; Be written by multiple authors; and, Be published from 2007 onwards. Guidelines from any country, relating to any dental setting (primary, secondary, or tertiary care) and written in English (these were considered likely to be most relevant to the UK setting) were considered. Studies published since 2007 were included as this was when the first clinical trial of the Hall Technique was published. This is generally considered the time when an institutional shift in the thinking behind caries management, towards biological management of caries, began to occur. The definition of, and ages at which people are considered to be, children and young people vary internationally; therefore, any publication that used the term children and/or young people was included to ensure relevant publications were not excluded on this point. Exclusion criteria Expert opinion papers, position statements and guidelines produced by industry were not considered for inclusion. Types of sources Only guidelines, or conference proceedings subsequently published as guidelines, endorsed or created by recognised dental organisations were considered. Selection of sources of evidence The search strategy aimed to locate both published and unpublished guidelines. An initial limited search of MEDLINE via PubMed was undertaken to identify articles on the topic. The text words contained in the titles and abstracts of relevant articles, and the index terms used to describe the articles, were used to develop a comprehensive search strategy (Appendix ). This search strategy, including all identified keywords and index terms, was adapted for each included database and information source. The databases searched included the Cochrane Library, MEDLINE via PubMed, the TRIP Medical Database and Web of Science.
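As an illustration of how such a database search can be scripted and kept reproducible, the sketch below runs a PubMed query through Biopython's Entrez interface. The query string and email address are hypothetical placeholders for illustration; they are not the registered strategy in Appendix 1.

```python
from Bio import Entrez  # Biopython

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address

# Hypothetical query combining caries, paediatric and guideline terms
# with the review's date window -- NOT the registered appendix strategy.
query = (
    '("dental caries"[MeSH Terms] OR caries[Title/Abstract]) '
    'AND (child*[Title/Abstract] OR paediatric[Title/Abstract] '
    'OR pediatric[Title/Abstract]) '
    'AND (guideline[Publication Type] OR guideline*[Title/Abstract]) '
    'AND ("2007/01/01"[Date - Publication] : "2023/04/21"[Date - Publication])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records; first PMIDs:", record["IdList"][:5])
```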
Sources of unpublished guidelines/grey literature were searched by contacting authors of existing guidelines to find out whether they were aware of any others underway. Webpages of major dental organisations in this field known to the authors were also searched, alongside a hand search of conference proceedings and a Google Web Search™ (Google LLC, California, United States of America). The reference lists of all included sources of evidence were screened for additional studies. Search strategy The search (Fig. ) was conducted for guidelines published between 01/01/2007 and 21/04/2023. The TRIP database search was further modified to include “guidance”, as “guidelines” yielded only five results. The search was repeated on 25/01/2024 and no new guidelines were identified that met the criteria for this review. Selection of guidelines Results from databases were screened independently and concurrently by two reviewers (FC and NI) against the inclusion criteria using Rayyan© software (Rayyan, Massachusetts, United States of America). Hand searching was conducted by one researcher (FC). All findings were compiled into a Microsoft® Excel® (Microsoft® Corporation, Washington, United States of America) spreadsheet (Appendix 2). Full texts were obtained, and reviewers met to discuss any disagreement for both database and hand searching. Guideline selection was guided by a minimum score of 4.5 on the overall AGREE II assessment as the quality standard (Table ), but reviewers also weighed wider considerations relating to relevance to UK education, the wider paediatric dentistry environment, and the paediatric caries-specific curriculum. Data extraction and charting Guidelines initially considered to meet the inclusion criteria were distributed between two teams of three reviewers for independent, duplicate data extraction (calibrated through data extraction and discussion of one guideline), with discussion to achieve a single agreed dataset. Microsoft® Forms® (Microsoft® Corporation, Washington, United States of America) was used for data extraction (Appendix 3) and quality appraisal. Quality appraisal Quality appraisal was undertaken by each reviewer within the same two teams, alongside data extraction, using the AGREE II criteria. Table details the AGREE II tool and the domains covered. Calibration was undertaken alongside calibration for data extraction. Reviewers were blinded and quality appraisal was undertaken using Microsoft Forms® (Microsoft® Corporation, Washington, United States of America). Any disagreements were discussed, and a consensus reached for each domain. Synthesis of results Results were collated by one reviewer in Microsoft® Excel® (Microsoft® Corporation, Washington, United States of America). Reviewers met to discuss any conflicts and agree the final dataset. Data from each guideline were tabulated and summarised in categories relating to the specific area of caries management, including depth of lesion and primary/permanent dentition.
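The AGREE II arithmetic is worth spelling out. Each appraiser rates every item in a domain from 1 to 7, and the domain score is scaled between the minimum and maximum possible totals; the overall assessment is a separate 1-7 rating, averaged across appraisers and compared here against the review's 4.5 threshold. The ratings in the sketch below are invented for illustration.

```python
def scaled_domain_score(ratings):
    """AGREE II scaled domain score.

    `ratings` is a list of per-appraiser lists of 1-7 item ratings
    for one domain. Scaled score = (obtained - min possible) /
    (max possible - min possible), expressed as a percentage.
    """
    n_appraisers = len(ratings)
    n_items = len(ratings[0])
    obtained = sum(sum(r) for r in ratings)
    min_possible = 1 * n_items * n_appraisers
    max_possible = 7 * n_items * n_appraisers
    return 100 * (obtained - min_possible) / (max_possible - min_possible)

# Domain 3 (rigour of development) has 8 items; three appraisers.
# Invented ratings for illustration.
domain3 = [
    [6, 5, 6, 7, 5, 6, 6, 5],
    [5, 5, 6, 6, 4, 6, 5, 5],
    [6, 6, 7, 6, 5, 6, 6, 6],
]
print(f"domain 3 scaled score: {scaled_domain_score(domain3):.0f}%")

overall = [5.0, 4.5, 5.0]  # per-appraiser overall 1-7 ratings
mean = sum(overall) / len(overall)
print(f"overall {mean:.2f} -> {'meets' if mean >= 4.5 else 'below'} the 4.5 threshold")
```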
The results of the search, screening, and agreement for guidelines published between 01/01/2007 and 25/01/2024 are shown in Fig. . Of the 581 guidelines identified from the search, 16 met the eligibility criteria for inclusion. Table shows the characteristics of the guidelines, and their quality appraisal is summarised in Table . Eight guidelines met the set quality standard and were considered appropriate for inclusion, and data extraction and synthesis were carried out for these (Tables and ). Based on guideline quality indicators and relevance to education on the management of dental caries in CYP within the UK setting, these eight guidelines were selected for synthesis of their clinical recommendations. The review was carried out to provide an evidence base to inform the development of a consensus for the undergraduate curriculum for caries management in CYP, specific to the UK. The need for this consensus was highlighted by a UK survey evaluating current teaching practices for caries management in children and young adults, which showed great variance in the content of teaching and a delay in modernising curricula to keep up with the best available evidence .
Initial screening identified 16 guidelines but, following quality appraisal using the AGREE II tool, only eight were suitable for inclusion in the data synthesis. The exclusion of eight of the 16 guidelines demonstrates the persistent problem of evidence quality and waste . In this case, the quality issues surrounded the development and reporting of guidelines. One of the most common reasons for exclusion of guidelines from synthesis in this study related to the lack of detail and transparency around the process for development of the guidelines, meaning that quality for inclusion could not be adequately determined. These guidelines did not have listed authors to contact to clarify this for inclusion.
Biological caries management approaches
Preformed metal crowns are recommended in all guidance for the restoration of multi-surface carious lesions. However, in UK guidance, it is specified that preformed metal crowns, placed using the Hall Technique, are the treatment of choice for managing lesions that require intervention but no pulpal therapy . Non-restorative cavity control is “the approach to make the cavitated caries lesions accessible to tooth cleaning by removal of overhanging enamel margins” . This is suggested as an option for the management of caries extending more than one third into dentine in primary teeth by SDCEP guidance . There is poor evidence on the suitability of this option, and the authors would be reluctant to suggest it other than in the rare situation when no other treatment is possible, the child is co-operative for this treatment alone, and excellent oral hygiene and dietary practices are in place at home. The use of silver diamine fluoride (SDF) in practice alongside restorative options, especially Atraumatic Restorative Treatment (ART), has been referred to as SMART (Silver Modified ART), or SMART Hall where the Hall Technique is used following SDF application . No guideline discussed this, but it is a recent technique and there is very little evidence apart from case reports and some very recent clinical trial work . Key themes from these guidelines include the move to selective caries removal and avoidance of complete caries removal except in specific circumstances in anterior teeth only . For “early lesions” in primary and permanent teeth, with and without cavitation, several guidelines recommend biological management including site-specific prevention and fissure sealants [ , , ].
Pulp therapy
In the guidelines, pulpotomy was recommended in primary teeth with a carious exposure in some circumstances, with pulpectomy only being recommended in exceptional circumstances for restorable teeth. Interestingly, within the context of the UK, pulp therapy is rarely undertaken in a primary care setting. A recent survey of general dentists in Scotland found that 91% do not offer vital pulp therapy to adult patients due to constraints such as their working contract and the costs of materials . Although this survey explored adult treatment, it would be unlikely that this group of dentists offers vital pulp treatment to children and not adults, if cost and materials are being cited as barriers. Whilst undergraduate teaching for dentists and therapists in many UK dental schools still includes pulp therapy, patients would typically be referred to a clinician with enhanced skills if this approach was required, in accordance with commissioning guidance .
As such, there is a need to gain a consensus on whether these recommendations should be taken forward in the development of a paediatric caries curriculum for undergraduate dental and therapy students in the UK, or whether these techniques should instead be taught as an advanced skill at postgraduate level . For permanent teeth with caries extending into the pulp, a partial pulpotomy was recommended in one guideline . This is an evolving area of research, with a randomised controlled trial currently underway in the UK to contribute to the evidence base on pulpotomy versus root canal treatment in primary care . Regenerative endodontic treatments were supported by one guideline . This was based on evidence from a position statement by the American Association of Endodontists and a ‘Colleagues for Excellence’ guide, with no precise indications for this option other than immature teeth with pulp necrosis . Most evidence surrounding regenerative endodontics relates to traumatic dental injuries. Although both dental trauma and dental caries can result in a loss of pulp vitality, the nature of the resulting infection is likely to be different, as may be the prognosis following this procedure.
Amalgam
One US-based guideline states that amalgam is not recommended except in some cases where a tooth is anticipated to exfoliate within two years; this has limited applicability for UK dental schools working within regulations such as the EU directive outlawing the use of amalgam in children under 15 except when unavoidable . No guidelines developed within Europe advocate the use of amalgam in CYP. This contrasts with current practice in the UK, shown in findings from the aforementioned evaluation of paediatric caries management teaching practices .
GIC
Glass ionomer cement (GIC) definitive restorations are advised against by some guidelines . This continues to be a contentious issue, with the type of glass ionomer cement probably the most important factor in its success .
Resin-based materials
Given the restrictions on the use of amalgam and the limitations of GIC, there is an increasing reliance on resin-based composite materials for definitive restorations. As such, it is unsurprising that these materials were advocated in all included guidelines, particularly in the AAPD Pediatric Restorative Dentistry and SDCEP Prevention and Management of Dental Caries in Children documents, due to composite's comparable success to amalgam .
Evidence gaps
Gaps in evidence were identified within the guidelines, for example, on how to manage early cavitated carious lesions of minimal depth which would require complete caries removal solely for the purposes of providing adequate depth for a retentive restoration . These gaps may have been addressed in some of the guidelines we did not include. Nevertheless, they are omitted from otherwise comprehensive and high-quality guidelines. The variability in terminology, for example, the continued use of non-specific terms such as “early lesions” and the use of Interim Therapeutic Restoration in US-based documents in place of Atraumatic Restorative Treatment, indicates there is still no widespread adoption of an international consensus on terminology .
None of the guidelines recommended tooth tissue removal for early carious lesions, in stark contrast with current teaching practices in the UK . In part because of inappropriate and inexact use of terminology, none of the guidelines specifically defined carious lesions limited to enamel or how these should be classified; this poses a challenge in selecting the most appropriate treatment option, as some clinical judgement is required for accurate diagnosis. Another challenge not addressed by the guidelines is monitoring caries lesion transition, which is recommended by some guidelines without specific detail on how [ , , ]. Current record keeping only allows for gross scoring of the presence or absence of carious lesions on a surface, so it is not possible to tell whether lesions have progressed over time. The International Caries Detection and Assessment System (ICDAS) or photographs may help with this but are rarely used, and there is no evidence on their accuracy in monitoring progression.
Context and relevance
This scoping review, undertaken to inform consensus discussions for the development of a UK undergraduate curriculum for caries management in CYP, has identified gaps in guidelines, including the classification of early carious lesions and how early cavitated lesions should be managed for CYP. These key findings must be considered in discussions with stakeholders in the UK, with consideration of the findings of preceding work that evaluated the current teaching of caries management in CYP . Areas for exploration in consensus discussions include total integration of biological caries management, selective caries removal, and consideration of whether a pulpotomy for the management of caries is a specialist treatment that requires onward referral. Furthermore, it is important to note that UK dental schools currently teach students who are due to graduate into, and work largely within, the National Health Service. There is an expectation that further postgraduate training would be required for delivery of more specialist-level procedures. This is, in part, related to current UK remuneration systems and possibly the lack of suitable guidelines for incorporation in teaching. As such, students are unlikely to be taught some of the techniques mentioned in the recommendations in these guidelines, such as the use of non-fluoride-based remineralisation agents, resin infiltration for proximal carious lesions, or regenerative endodontic treatments. Further discussion on whether these approaches should be included in a new curriculum would be warranted. These findings are relevant to those involved in undergraduate teaching of paediatric dentistry, those who develop undergraduate curricula, and policymakers.
Strengths and limitations
Rigorous methodology was used when undertaking this review. This involved blinded screening for eligibility, assessment of the quality of each guideline using the AGREE II tool, and independent review of each guideline by at least two researchers. Meetings were held to reach agreement and discuss results. Authors of relevant guidelines were also contacted for clarity and to ensure the inclusion of relevant sources. Limitations include the possibility of missed literature in the grey literature search, although every effort was made to find relevant guidelines. There was a lack of high-quality, methodologically transparent guidance.
Although initially 16 guidelines were eligible for inclusion, assessment of quality using the AGREE II tool meant that only eight guidelines were suitably rigorous to include in the analysis. There were also instances of contradictory recommendations.
This scoping review identified a limited number of high-quality guidelines suitable for shaping a UK undergraduate dental curriculum in caries management for CYP. However, the guidelines of sufficient quality for data synthesis were generally supportive of biological approaches, which is largely contradictory to current UK undergraduate teaching. There were some gaps in the evidence that need to be addressed in future research and guideline development. The evidence synthesised from this review will be used as the basis for deriving a consensus on the content of a new undergraduate curriculum for paediatric caries management.
Gamma-irradiated fowl cholera vaccines formulated with different adjuvants induced antibody response and cytokine expression in chickens
Fowl cholera, caused by Pasteurella multocida, is one of the serious infectious diseases of poultry ( ). According to Molalegne et al. ( ), the disease is endemic throughout the majority of Ethiopia and causes severe economic losses due to decreased productivity and mortality. Vaccines are one of the effective ways to control an outbreak within a flock ( ), and both live-attenuated and killed vaccines against fowl cholera are now available on the market ( , ). Compared to live-attenuated vaccines, inactivated vaccines often have a greater safety profile as the risk of reversion is avoided or significantly reduced. Additionally, they are less reactogenic, but they also have a lower immunogenicity and need multiple doses to produce a protective effect ( ). The development of vaccines using locally circulating strains is essential and desirable ( ). For the pathogen to effectively elicit an immune response, its structure needs to be properly conserved ( ). A variety of pathogen inactivation techniques, such as gamma irradiation ( – ), chemical treatment ( ), and heat inactivation ( ), are available for use. Chemical inactivation using formalin is a common practice in the vaccine industry. However, this could cause alteration or damage to the surface antigenic structures, thus affecting immunogenicity and efficacy. Furthermore, immunization with formalin-inactivated respiratory syncytial virus and measles vaccines resulted in disease enhancement that was ascribed to the low-avidity, non-protective antibodies elicited by the formalin-treated antigens ( ). The other hallmark of formalin-killed vaccines is Th-2-skewed immunity, which is implicated in vaccine-associated pathologies ( ). Similarly, even though effective in inactivating pathogens, heat inactivation is also reported to distort epitopes and induce lower antibody titers than formalin-treated vaccines ( ). Another study by Hashizume-Takizawa found that heat-killed recombinant Salmonella enterica serovar Typhimurium could elicit neither systemic IgG nor mucosal IgA ( ). Moreover, heat inactivation has been reported to yield inconsistent results ( ). The gamma irradiation technique uses ionizing radiation to specifically target nucleic acids while preserving surface antigenic proteins, making it preferable for developing safe and immunogenic vaccines ( ). It is also a convenient method as it precludes the need to remove the chemical agent post-inactivation. According to studies, gamma-irradiated vaccines possess better efficacy and shelf life when compared to live-attenuated and formalin-killed vaccines ( , , ). In addition, vaccines developed by irradiation have been tested and reported as strong inducers of mucosal and humoral immune responses ( , ). Mucosal vaccination is an effective and efficient method of immunization against mucosal pathogens as it is convenient to administer for large-scale campaigns and farm settings ( ), and can induce long-lasting humoral and cellular immunity ( ). Previously, we reported the feasibility of developing gamma-irradiated vaccines that induced both systemic and mucosal antibody responses with complete protection against homologous lethal challenge ( ). In the current study, we aimed to broaden our understanding of the immunogenicity of the gamma-irradiated vaccines by including peripheral blood mononuclear cell (PBMC) response analysis. Cytokines are an integral part of the immune response of avian species to infection ( ).
They are involved in both inflammatory and specific immune responses to invasive microbes, which evolved to protect the host from pathogens ( ). As regulators of the initiation and maintenance of host defenses, cytokines ultimately determine the type of response generated and the effector mechanisms used to mediate resistance ( ). Thus, in addition to serum IgG and mucosal IgA, we assessed PBMC proliferation and the expression of a range of cytokine genes (IL-1β, IL-6, IL-12, IFN-γ, IL-4, and IL-22) that modulate the immune response to infection and vaccination ( ), in response to gamma-irradiated and formalin-inactivated vaccines.
Experimental site
This study was carried out from December 2022 to November 2023 at the National Veterinary Institute (NVI) in Bishoftu; the National Institute for Control and Eradication of Tsetse and Trypanosome (NICETT) in Addis Ababa; and the Bio and Emerging Technology Institute (BETin) in Addis Ababa.
Sample size and experimental chickens
G*Power provided a sample size of 149 using the following parameters: effect size 0.3, power 0.8, number of experimental groups 6, and numerator df 5 (an equivalent calculation is sketched below). Thus, a total of 156 eight-week-old Bovans Brown chicks that were fowl cholera (FC)-specific antibody negative, hatched from fertile eggs obtained from the National Veterinary Institute (NVI), were used in this experiment. In addition, the parental stock had no history of vaccination against FC. The experimental chicks were reared under strict farm biosecurity measures. Before introducing the chicks, the room was covered with wood shavings, formalin-fumigated, and ventilated for three days. Throughout the experiment, the chickens had free access to food and water ad libitum.
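As a cross-check on the reported G*Power figure, the same one-way ANOVA power analysis can be reproduced with statsmodels. This is a minimal sketch assuming Cohen's f = 0.3 and α = 0.05; the solver should return a total close to the 149 reported, though tools differ slightly in rounding.

```python
from statsmodels.stats.power import FTestAnovaPower

# One-way ANOVA power analysis: effect size f = 0.3, alpha = 0.05,
# power = 0.8, six groups (numerator df = 6 - 1 = 5).
n_total = FTestAnovaPower().solve_power(
    effect_size=0.3, alpha=0.05, power=0.8, k_groups=6
)
print(round(n_total))  # total sample size across all six groups, ~149
```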
Preparation of vaccine and challenge bacteria
Inoculum of P. multocida used in the vaccine formulation and challenge study was prepared according to the NVI's standard operating procedure ( ). Briefly, a lyophilized avian P. multocida biotype A master cell bank obtained from the NVI (MK802880) was thawed, diluted with 2 ml tryptose soya broth (TSB), inoculated onto sterile tryptose soya agar (TSA) supplemented with 10% horse serum, and incubated overnight at 37°C. A single colony was then taken into 2 ml TSB supplemented with 10% horse serum and incubated for 7 h at 37°C. Next, 0.5 ml of the culture was transferred into 30 ml of TSB supplemented with 10% horse serum and incubated overnight at 37°C. The culture was scaled up by inoculating 300 ml of P. multocida biotype A production media with 7 ml of the overnight culture and incubating for 24 h at 37°C and 80 rpm. The culture was adjusted to 3.7 × 10^8 CFU/ml using the streak-plate method on serially diluted culture and was ready to be used in the preparation of the vaccines. Challenge bacterium was obtained by taking a pure TSA colony into 200 ml TSB and incubating for 7 h at 37°C. Adjustment was made so that each chicken received about 2.5 × 10^7 CFU/ml of inoculum when evaluating the protective efficacy of the vaccines.
Molecular characterization of P. multocida
The master seed obtained from the NVI was characterized microbiologically, biochemically, and molecularly to confirm its identity and purity. Similarly, the bacteria re-isolated from infection-challenged chickens were also confirmed molecularly. Master seed was cultured on TSA overnight at 37°C. Swab samples from liver, heart, and spleen were collected in PBS, incubated on TSA supplemented with 10% serum at 37°C for 18 h, and stored in a freezer until the next step. The genomic DNA was extracted using the DNeasy® Blood and Tissue Kit (Qiagen, Germantown, MD, USA) as per the manufacturer's recommendation. DNA was kept at −20°C awaiting PCR analysis. The capsular biosynthesis gene (capA), a 1044 bp gene, was amplified using the following primers: F: 5′-TGCCAAAATCGCAGTCAG-3′ and R: 5′-TTGCCATCATTGTCAGTG-3′. All PCR reactions were carried out in a final reaction volume of 25 µL comprising 12.5 µL of 2x PCR master mix (Promega, USA), 2 µL DNA template, 1 µL of 10 pmol of each primer, and 8.5 µL of dH2O. The PCR reaction consisted of an initial denaturation at 95°C for 5 min, followed by 35 cycles involving denaturation at 95°C for 30 s, annealing at 55°C for 30 s, and extension at 72°C for 30 s, with a final extension at 72°C for 5 min. As a negative control, a DNA sample from P. multocida capsular serogroup B was used. Gel electrophoresis of the PCR products was done using 2.0% (w/v) agarose gel. After electrophoresis, the DNA was stained for 10 min in ethidium bromide (0.5 μg/mL) and visualized using a UV trans-illuminator (Alpha Imager, Germany).
Gamma irradiation for inactivation of avian P. multocida
The radiation experiment took place at the NICETT Radiation Laboratory in Addis Ababa, Ethiopia. The culture containing the target bacterial titer (3.7 × 10^8 CFU/ml) was aliquoted into test tubes and spun at 4,000 × g at 4°C for 20 min. Then, the pellet was washed twice with PBS and resuspended in 20% trehalose. The bacterial cell pellet was subjected to gamma irradiation for a varying amount of time depending on the dose, ranging from 0.8 to 1.3 kGy, using a cobalt-60 irradiation machine (MDS Nordion, Canada) at a dose rate of 1.56 kGy/hr. The temperature of the gamma chamber was maintained at 37–40°C. After completion of the irradiation process, each tube was carefully taken out of the gamma chamber and immediately stored at 4°C until further use. A non-irradiated culture was used as a control. The inactivation capacity of the different radiation doses was evaluated by subculturing serial dilutions of treated culture on TSA plates and estimating the CFU/ml.
Formulation of gamma-irradiated vaccines
Previously, the 1 kGy gamma-irradiated avian P. multocida vaccine was shown to be immunogenic and efficacious in chickens ( ). Thus, it was selected for the vaccine preparation in this study. The avian P. multocida inoculum was prepared at a dose of 3.7 × 10^8 CFU/ml. Four different vaccines were formulated by mixing the bacterial inoculum with four different adjuvants. The adjuvant concentration varied according to the suppliers' instructions: 20% for Montanide/01 PR gel, 15% for Emulsigen®-P, 6% for Carbigen®, and 15% for the combination of Emulsigen®-D and Alum. These adjuvants have been documented to be safe and to enhance the immunogenicity and efficacy of various experimental and licensed vaccines ( , , ). The sterility and purity of the formulated vaccines were assessed using Gram's staining and culturing on sterility test media including Sabouraud dextrose agar, TSA, and TSB.
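Since the cobalt-60 source delivers a fixed dose rate, the exposure time for each target dose follows directly from exposure time = dose / dose rate. A minimal sketch of that arithmetic for the 0.8–1.3 kGy range used here:

```python
DOSE_RATE_KGY_PER_HR = 1.56  # dose rate of the cobalt-60 source stated above

def exposure_minutes(target_dose_kgy: float) -> float:
    """Exposure time (minutes) needed to deliver a target dose."""
    return target_dose_kgy / DOSE_RATE_KGY_PER_HR * 60.0

for dose in (0.8, 1.0, 1.3):
    print(f"{dose} kGy -> {exposure_minutes(dose):.1f} min")
# 0.8 kGy -> 30.8 min; 1.0 kGy -> 38.5 min; 1.3 kGy -> 50.0 min
```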
Experimental design
Vaccination, samples, and sampling schedule
Chickens were divided into six groups (G1 to G6) of 26 chickens each, based on the vaccine type they received, as follows: G1: vaccine adjuvanted with Montanide/01 PR gel intranasally (IN) at a dose of 0.3 mL; G2: vaccine adjuvanted with Carbigen® IN at a dose of 0.5 mL; G3: vaccine adjuvanted with Emulsigen®-D and Alum intramuscularly (IM) at a dose of 0.5 mL; G4: vaccine adjuvanted with Emulsigen®-P IM at a dose of 0.5 mL; G5: formalin-inactivated vaccine IM at a dose of 0.5 mL; and G6: unvaccinated control. A booster dose was administered 3 weeks after the initial dose. Blood samples were collected prior to vaccination and at days 21, 35, and 56 post-primary vaccination. Serum and PBMCs were separated for antibody and cellular immune response analysis, respectively. Four chickens per group were euthanized according to the indicated schedule to collect tracheal and crop lavage to study mucosal immunity. The remaining 10 chickens per group were challenged to assess vaccine efficacy ( ).
Safety assessment of the vaccines
Safety of the candidate vaccines was evaluated according to the harmonized requirements in VICH GL44 ( ), which is endorsed by the World Organisation for Animal Health (WOAH). Experimental chickens were monitored for adverse reactions daily for the entire period of the trial from the time of vaccination.
Serum and mucosal antibody response
Serum antibody response was assessed by quantifying IgG titer using a commercial indirect ELISA test kit (IDvet, France). Secretory IgA response was evaluated in tracheal and crop lavages using a sandwich ELISA (Chicken IgA ELISA Kit ab157691, MyBioSource, San Diego, USA). Optical density measurements were taken at 450 nm.
Enrichment of PBMCs and their in vitro stimulation
Individual blood samples collected in Na-citrate tubes (Greiner Bio-One, Kremsmünster, Austria) were pooled as per their groups. Pooled blood samples were diluted in PBS containing 2 mM EDTA at a ratio of 1:2. Then, 3% dextran solution was added at a ratio of 1:0.4 and the mixture was centrifuged at 50 × g for 20 min. The upper phase containing PBMCs was carefully layered onto 50 ml conical tubes containing Ficoll (Greiner Bio-One, Kremsmünster, Austria) (1:2) and centrifuged (Remi Lab World, Mumbai, India) at 800 × g for 35 min at 20°C with the brake off. After centrifugation, PBMCs were harvested from the interphase between the bottom Ficoll layer and the upper plasma. Next, the PBMCs were transferred into a 50 ml conical tube and washed twice with PBS by centrifuging at 1500 rpm for 7 min at 20°C. The resulting supernatant was decanted, and the pellet was reconstituted in 5 ml of RPMI-1640 media (UK). Then, cells were counted using an automated cell counter (EVE™, NanoEnTek) after mixing 10 µL of cell suspension with 10 µL of a 0.4% trypan blue solution. The PBMCs, at a density of about 5 × 10^7 cells/ml, were cultured overnight in RPMI supplemented with 10% fetal calf serum (Gibco™, Thermo Fisher Scientific), 5% chicken serum (Gibco™, Thermo Fisher Scientific), 100 U/mL penicillin, and 100 µg/mL streptomycin at 37°C and 5% CO2 in untreated flat-bottomed 24-well culture plates (Eppendorf, Hamburg, Germany). To analyze vaccination-induced expansion and activation of antigen-specific immune cells, PBMCs were treated with gamma-irradiated P. multocida antigen at a ratio of 1:1. Lymphocyte activation cocktail (LAC), a mixture of PMA (20–50 ng/ml) and ionomycin (0.5–1 µg/ml), was added and used as a positive control.
Cells were counted using a hemocytometer. Following the overnight incubation, non-adherent cells were removed by washing the monolayers with sterile PBS.
RNA extraction and cytokine gene analysis using RT-qPCR
RNA extraction from PBMCs was performed using the Direct-zol™ RNA MiniPrep kit (Cat #: R2052) as per the manufacturer's instructions. RNA concentration was determined spectrophotometrically (BioPhotometer Plus, Eppendorf) at 260 nm, and isolated RNA was kept at −80°C until the next step. Then, cDNA was synthesized from 1 µg of total RNA using a cDNA synthesis kit (Cat # K1612, Fermentas). Random hexamers were used to generate 15 µl of cDNA for every gene, and the concentration was adjusted to 100 ng/µl for RT-qPCR. The genes for the following cytokines were amplified using the primers indicated in : IFN-γ, IL-1β, IL-4, IL-6, IL-12p40, and IL-22. GAPDH was used as a reference gene. RT-qPCR was performed using SYBR® Green Supermix (Cat # 1708882, USA) in a real-time thermocycler (Mastercycler® ep realplex, model # 22331) using the 7500 Real-Time PCR System (Applied Biosystems). PCR conditions were the same for each targeted gene and were as follows: 50°C for 1 min, 95°C for 5 min, followed by 45 cycles of 95°C for 15 seconds and 58°C for 32 seconds. Cycling was terminated after 45 cycles with 95°C for 15 seconds, 60°C for 1 min, and 95°C for 15 seconds. Dissociation curves of the products were generated by increasing the temperature of the samples incrementally from 55 to 100°C as the final step of the real-time PCR. Then, melting-curve analysis of the amplified products was performed ( ).
Efficacy of the candidate vaccines
Ten chickens from each group were challenged intranasally with 2.5 × 10^7 CFU/ml of avian P. multocida biotype A one day after the last sampling and were followed for 14 days. Necropsy and bacterial isolation were performed on chickens that were found dead. Gross lesions were recorded, and tissue samples were taken from lungs, livers, and spleens for bacterial isolation using TSA with 10% serum. Identification was then performed using morphology, Gram staining, and PCR.
Data analysis
GraphPad Prism 8.4.3 (San Diego, California) was used to perform the statistical analysis and generate the graphs. Serum antibody titer was compared between different sampling time points within each group and across other groups using the Friedman and Kruskal-Wallis tests, respectively. Dunn's post-hoc test was used when a difference between groups existed. One sample from day 35 of G3 was excluded from the dataset for being an outlier (significantly higher than the average). Both intragroup and intergroup comparisons of mucosal antibody response were performed using the Kruskal-Wallis test followed by Dunn's test; the difference in the proliferative response of PBMCs from different groups was analyzed similarly (minimal sketches of these analyses are shown below). Cytokine gene expression analysis was performed using the Livak method (2^(−ΔΔCt)) for relative gene expression analysis ( ). Target gene expression was normalized to the endogenous control, GAPDH. Relative fold change was determined by dividing the expression ratio of each target gene by its expression ratio in the control samples. The survival of chickens after infection challenge was analyzed using Kaplan-Meier curve analysis. The data were presented as mean ± standard error of the mean (SEM). Statistical significance was set at p < 0.05.
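To make the between-group comparison concrete, the sketch below pairs SciPy's Kruskal-Wallis test with Dunn's post-hoc test from the scikit-posthocs package (an assumption; the paper does not name the software used for Dunn's test). The titer values and the Bonferroni adjustment are purely illustrative.

```python
import numpy as np
from scipy.stats import kruskal
import scikit_posthocs as sp  # pip install scikit-posthocs

# Hypothetical log2 ELISA titers for three groups at one time point.
g1 = np.array([8.2, 7.9, 8.5, 8.1, 7.7])   # Montanide/01 PR gel, IN
g3 = np.array([9.4, 9.1, 9.8, 9.0, 9.5])   # Emulsigen-D + Alum, IM
g6 = np.array([1.0, 1.2, 0.9, 1.1, 1.0])   # unvaccinated controls

h, p = kruskal(g1, g3, g6)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

if p < 0.05:
    # Pairwise Dunn's test; returns a matrix of adjusted p-values.
    print(sp.posthoc_dunn([g1, g3, g6], p_adjust="bonferroni"))
```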
Ethical consideration
The experimental protocol was approved by the Animal Research Ethics Committee of the NVI (reference number: NVI/1453). All chickens were handled and euthanized humanely according to the ethical standards set in the international guiding principles for animal experiment research ( ).
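The Livak 2^(−ΔΔCt) calculation referenced in the data analysis section reduces to a few lines. A minimal sketch with hypothetical Ct values, using GAPDH as the reference gene and the unvaccinated group as calibrator:

```python
def fold_change(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
    """Relative expression by the Livak 2^-ddCt method.

    *_s: mean Ct in the vaccinated sample; *_c: mean Ct in the calibrator
    (unvaccinated control). The reference gene here is GAPDH.
    """
    d_ct_sample = ct_target_s - ct_ref_s        # normalize to GAPDH
    d_ct_control = ct_target_c - ct_ref_c
    dd_ct = d_ct_sample - d_ct_control          # calibrate to control group
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: IFN-gamma 22.0 vs GAPDH 18.0 in a vaccinated pool,
# against 28.0 vs 18.0 in the control pool -> 2^6 = 64-fold upregulation.
print(fold_change(22.0, 18.0, 28.0, 18.0))  # 64.0
```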
The PCR reaction consisted of an initial denaturation at 95°C for 5 min, followed by 35 cycles of reaction involving denaturation at 95°C for 30 s, annealing at 55°C for 30 s, and extension at 72°C for 30 s with a final extension at 72°C for 5 min. As negative control, DNA sample from P. multocida capsular serogroup B was used. Gel electrophoresis of the PCR products was done using 2.0% (w/v) agarose gel. After electrophoresis, the DNA was stained for 10 min in Ethidium bromide (0.5 μg/mL) and visualized using a UV trans-illuminator (Alpha imager, Germany). P. multocida The radiation experiment took place at the NICETT Radiation Laboratory in Addis Ababa, Ethiopia. The culture containing the target bacterial titer (3.7 x 10 8 CFU/ml) was aliquoted into test tubes and spun at 4,000 x g at 4°C for 20 min. Then, the pellet was washed twice with PBS and resuspended in 20% trehalose. The bacterial cell pellet was subjected to gamma irradiation for a varying amount of time depending on the doses, ranging from 0.8 to 1.3 kGy, using a cobalt 60 irradiation machine (MDS Nordion, Canada) at a dose rate of 1.56 kGy/hr. The temperature range of the gamma chamber was maintained at 37–40°C. After completion of the irradiation process, each tube was carefully taken out of the gamma chamber and immediately stored at 4°C until further use. A non-irradiated culture was used as a control. The inactivation capacity of the different radiation doses was evaluated by subculturing serial dilutions of treated culture on TSA plates and estimating the CFU/ml. Previously, the 1 kGy gamma-irradiated avian P. multocida vaccine was shown to be immunogenic and efficacious in chickens ( ). Thus, it was selected for the vaccine preparation in this study. The avian P. multocida inoculum was prepared at a dose of 3.7 x 108 CFU/ml. Four different vaccines were formulated by mixing the bacterial inoculum with four different adjuvants. The adjuvant’s concentration varied according to the suppliers’ instructions: 20% for Montanide/01 PR gel, 15% for Emulsigen ® -P, 6% for Carbigen ® , and 15% for the combination of Emulsigen ® -D and Alum. These adjuvants have been documented to be safe and enhance immunogenicity and efficacy of various experimental and licensed vaccines ( , , ). The sterility and purity of formulated vaccines were assessed using Gram’s staining and culturing on sterility test media including Sabouraud dextrose agar, TSA, and TSB. Vaccination, samples, and sampling schedule Chickens were divided into six groups (G1 to G6) of 26 chickens each, based on the vaccine type they received, as follows: G1: vaccine adjuvanted with Montanide/01 PR gel intranasally (IN) at a dose of 0.3 mL; G2: vaccine adjuvanted with Carbigen ® IN at a dose of 0.5 mL; G3: vaccine adjuvanted with Emulsigen-D and Alum intramuscularly (IM) at a dose of 0.5 mL; G4: vaccine adjuvanted with Emulsigen ® -P IM at a dose of 0.5 mL; G5: formalin-inactivated vaccine IM at a dose of 0.5 mL; and G6 was used as an unvaccinated control. A booster dose was administered 3 weeks after the initial dose. Blood samples were collected prior to vaccination and at days 21, 35, and 56 post-primary vaccination. Serum and PBMC were separated for antibody and cellular immune response analysis, respectively. Four chickens per group were euthanized according to the indicated schedule to collect tracheal and crop lavage to study mucosal immunity. The remaining 10 chickens per group were challenged to assess vaccine efficacy ( ). 
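For the challenge-survival analysis, Kaplan-Meier curves and a log-rank comparison can be produced with the lifelines package (an assumption; the paper used GraphPad Prism). The survival times below are hypothetical, with survivors of the 14-day observation period treated as censored:

```python
import numpy as np
from lifelines import KaplanMeierFitter  # pip install lifelines
from lifelines.statistics import logrank_test

# Hypothetical days to death over the 14-day post-challenge follow-up;
# event = 1 means the chicken died, 0 means it survived (censored at day 14).
days_vax = np.array([14, 14, 14, 14, 9, 14, 14, 14, 14, 14])
died_vax = np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0])
days_ctl = np.array([2, 3, 3, 4, 5, 2, 6, 14, 3, 4])
died_ctl = np.array([1, 1, 1, 1, 1, 1, 1, 0, 1, 1])

kmf = KaplanMeierFitter()
kmf.fit(days_vax, event_observed=died_vax, label="vaccinated")
print(kmf.survival_function_)  # step-wise survival probabilities

result = logrank_test(days_vax, days_ctl,
                      event_observed_A=died_vax, event_observed_B=died_ctl)
print(f"log-rank p = {result.p_value:.4f}")
```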
Safety assessment of the vaccines Safety of the candidate vaccines was evaluated according to the harmonized requirements in VICH GL44 ( ) which is endorsed by the World Organization for Animal Health (WOAH). Experimental chickens were monitored for adverse reactions daily for the entire period of the trial since the time of vaccination. Serum and mucosal antibody response Serum antibody response was assessed by quantifying IgG titer using a commercial indirect ELISA test kit (IDvet, France). Secretory IgA response was evaluated in tracheal and crop lavages using Sandwich ELISA (Chicken IgA ELISA Kit ab157691, Mybiosource, San Diego, USA). Optical density measurements were taken at 450nm. Enrichment of PBMCs and their in vitro stimulation Individual blood samples collected in Na–citrate tubes (Greiner Bio-One, Kremsmünster, Austria) were pooled as per their groups. Pooled blood samples were diluted in PBS containing 2 mM EDTA at a ratio of 1:2. Then, 3% dextran solution was added at a ratio of 1:0.4 and centrifuged at 50 x g for 20 min. The upper phase containing PBMCs was carefully layered onto 50 ml conical tubes containing Ficoll (Greiner Bio-One, Kremsmünster, Austria) (1:2) and centrifuged (Remi Lab World, Mumbai, India) at 800 x g for 35 min at 20°C with the brake off. After centrifugation, PBMC was harvested from the interphase between the bottom Ficoll and the upper plasma. Next, the PBMCs were transferred into 50 ml conical tube and washed twice by centrifuging at 1500 rpm for 7 min at 20°C using PBS. The resulting supernatant was decanted, and the pellet was reconstituted using 5 ml of RPMI-1640 media (UK). Then, cells were counted using an automated cell counter (EVETM, NanoEnTek) after mixing 10µL of cell suspension with 10µL of a 0.4% trypan blue solution. The PBMCs, at a density of about 5×10 7 cells/ml, were cultured overnight in RPMI supplemented with 10% fetal calf serum (Gibco™, Thermo Fisher Scientific), 5% chicken serum (Gibco™, Thermo Fisher Scientific), 100 U/mL penicillin, and 100 g/mL streptomycin at 37°C and 5% CO 2 in untreated flat-bottomed 24-well culture plates (Eppendorf, Hamburg, Germany). To analyze vaccination induced expansion and activation of antigen specific immune cells, PBMCs were treated with gamma irradiated P. multocida antigen at a ratio of 1:1. Lymphocyte activation cocktail, a mixture of PMA (20–50 ng/ml) and ionomycin (0.5–1 g/ml), was added and used as a positive control. Cells were counted using hemocytometer. Following the overnight incubation, non-adherent cells were removed by washing the monolayers with sterile PBS. RNA extraction and cytokine gene analysis using RT qPCR RNA extraction from PBMCs was performed using Direct-zol™ RNA MiniPrep kit (Cat #: R2052) as per the manufacturer’s instructions. RNA concentration was determined spectrophotometrically (Biophotometer Plus, Eppendorf) at 260 nm, and isolated RNA was kept at -80°C until the next step. Then, cDNA was synthesized from 1µg of total RNA using a cDNA synthesis kit (Cat # K1612, Fermentas). Random hexamers were used to generate 15 µl of cDNA for every gene, and concentration was adjusted to 100 ng/µl for RT qPCR. The genes for the following cytokines: IFN-γ, IL-1β, IL-4, IL-6, IL-12P40, and IL-22 were amplified using the primers indicated in . GAPDH was used as a reference gene. RT-qPCR was performed using SYBR ® Green Supermix (Cat # 1708882, USA) in a real-time thermocycler (Mastercycler ® ep realplex, model # 22331) using 7500 Real Time PCR System (Applied Biosystems). 
PCR conditions were the same for each targeted gene and are as follows: 50°C for 1 min, 95°C for 5 min, followed by 45 cycles of 95°C for 15 seconds and 58°C for 32 seconds. Cycling was terminated after 45 cycles with 95°C for 15 seconds, 60°C for 1 min, and 95°C for 15 seconds. Dissociation curves of the products were generated by increasing the temperature of samples incrementally from 55 to 100°C as the final step of the real-time PCR. Then, melting-curve analysis of amplified products was performed ( ). Efficacy of the candidate vaccines Ten chickens from each group were challenged intranasally with 2.5x10 7 CFU/ml of avian P. multocida biotype A one day after the last sampling and were followed for 14 days. Necropsy and bacterial isolation were performed on chickens that were found dead. Gross lesions were recorded, and tissue samples were taken from lungs, livers, and spleens for bacterial isolation using TSA with 10% serum. Identification was then performed using morphology, Gram staining, and PCR. Chickens were divided into six groups (G1 to G6) of 26 chickens each, based on the vaccine type they received, as follows: G1: vaccine adjuvanted with Montanide/01 PR gel intranasally (IN) at a dose of 0.3 mL; G2: vaccine adjuvanted with Carbigen ® IN at a dose of 0.5 mL; G3: vaccine adjuvanted with Emulsigen-D and Alum intramuscularly (IM) at a dose of 0.5 mL; G4: vaccine adjuvanted with Emulsigen ® -P IM at a dose of 0.5 mL; G5: formalin-inactivated vaccine IM at a dose of 0.5 mL; and G6 was used as an unvaccinated control. A booster dose was administered 3 weeks after the initial dose. Blood samples were collected prior to vaccination and at days 21, 35, and 56 post-primary vaccination. Serum and PBMC were separated for antibody and cellular immune response analysis, respectively. Four chickens per group were euthanized according to the indicated schedule to collect tracheal and crop lavage to study mucosal immunity. The remaining 10 chickens per group were challenged to assess vaccine efficacy ( ). Safety of the candidate vaccines was evaluated according to the harmonized requirements in VICH GL44 ( ) which is endorsed by the World Organization for Animal Health (WOAH). Experimental chickens were monitored for adverse reactions daily for the entire period of the trial since the time of vaccination. Serum antibody response was assessed by quantifying IgG titer using a commercial indirect ELISA test kit (IDvet, France). Secretory IgA response was evaluated in tracheal and crop lavages using Sandwich ELISA (Chicken IgA ELISA Kit ab157691, Mybiosource, San Diego, USA). Optical density measurements were taken at 450nm. in vitro stimulation Individual blood samples collected in Na–citrate tubes (Greiner Bio-One, Kremsmünster, Austria) were pooled as per their groups. Pooled blood samples were diluted in PBS containing 2 mM EDTA at a ratio of 1:2. Then, 3% dextran solution was added at a ratio of 1:0.4 and centrifuged at 50 x g for 20 min. The upper phase containing PBMCs was carefully layered onto 50 ml conical tubes containing Ficoll (Greiner Bio-One, Kremsmünster, Austria) (1:2) and centrifuged (Remi Lab World, Mumbai, India) at 800 x g for 35 min at 20°C with the brake off. After centrifugation, PBMC was harvested from the interphase between the bottom Ficoll and the upper plasma. Next, the PBMCs were transferred into 50 ml conical tube and washed twice by centrifuging at 1500 rpm for 7 min at 20°C using PBS. 
Statistical analysis
GraphPad Prism 8.4.3 (San Diego, California) was used to perform the statistical analysis and generate the graphs. Serum antibody titers were compared between sampling time points within each group and across groups using the Friedman and Kruskal-Wallis tests, respectively. Dunn's post-hoc test was applied when a difference between groups existed. One sample from day-35 of G3 was excluded from the dataset as an outlier (significantly higher than the average). Both intragroup and intergroup comparisons of the mucosal antibody response were performed using the Kruskal-Wallis test followed by Dunn's test, and differences in the proliferative response of PBMCs between groups were analyzed similarly. Cytokine gene expression was analyzed using Livak's 2^(-ΔΔCT) method for relative gene expression ( ). Target gene expression was normalized to the endogenous control, GAPDH, and the relative fold change was determined by dividing the expression ratio of each target gene by the corresponding expression ratio in the control samples. The survival of chickens after the infection challenge was analyzed using Kaplan-Meier curve analysis. Data are presented as mean ± standard error of the mean (SEM). Statistical significance was set at p < 0.05.
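To make this analysis pipeline concrete, the sketch below reproduces the two core computations on invented numbers: a Kruskal-Wallis test followed by Dunn's post-hoc comparison, and a Livak 2^(-ΔΔCT) fold change. The titer and Ct values, and the scikit-posthocs dependency, are illustrative assumptions, not the study's actual data or scripts.

```python
# Minimal sketch (synthetic data): Kruskal-Wallis + Dunn's post-hoc test,
# then a Livak 2^(-ddCt) relative expression calculation.
# Requires scipy, pandas, and scikit-posthocs.
import pandas as pd
import scikit_posthocs as sp
from scipy.stats import kruskal

# --- Intergroup comparison of antibody titers (hypothetical values) ---
titers = {
    "G1": [8.2, 7.9, 8.5, 8.1],
    "G2": [6.1, 5.8, 6.4, 6.0],
    "G3": [9.0, 9.4, 8.8, 9.1],
}
h_stat, p_value = kruskal(*titers.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Dunn's test with a multiple-comparison correction
    long = pd.DataFrame(
        [(g, v) for g, vals in titers.items() for v in vals],
        columns=["group", "titer"],
    )
    print(sp.posthoc_dunn(long, val_col="titer", group_col="group",
                          p_adjust="bonferroni"))

# --- Livak 2^(-ddCt) fold change for one target gene (hypothetical Ct) ---
# dCt = Ct(target) - Ct(GAPDH); ddCt = dCt(vaccinated) - dCt(control)
d_ct_vaccinated = 18.1 - 18.9   # target vs. GAPDH, vaccinated pool
d_ct_control = 25.3 - 19.0      # target vs. GAPDH, unvaccinated pool
fold_change = 2 ** -(d_ct_vaccinated - d_ct_control)
print(f"Relative fold change: {fold_change:.1f}")
```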
Ethics statement
The experimental protocol was approved by the Animal Research Ethics Committee of the NVI (reference number: NVI/1453). All chickens were handled and euthanized humanely according to the ethical standards set in the international guiding principles for animal experiment research ( ).
Vaccine safety
All vaccinated chickens were followed up until challenge, and no abnormality was observed in either the formalin-inactivated or the gamma-irradiated groups.
Systemic IgG response
A significant serum IgG titer was detected three weeks after primary vaccination in G1, G3, and G5. IgG titers increased substantially in all vaccinated groups two weeks after the booster dose, and there was a significant difference when comparing G3 with G2 and G4 (p < 0.05). We observed a decline in antibody titer at day-56 in all vaccinated chickens; however, titers remained above baseline in G1, G2, and G3. As expected, no antibody response was observed in any pre-vaccination samples or unvaccinated controls ( ). The dynamics of the antibody response were similar within groups G1, G2, and G3, in that the IgG titer was significantly higher on days 21, 35, and 56 than at baseline (day-0), with no difference between those days ( ).
Mucosal antibody response
In this study, mucosal IgA was not detectable after the first dose of vaccine in any of the groups. A slight increase in IgA was measurable after the 2nd dose (day-35) in G1 and G2. Interestingly, on day-56, the IgA titer increased significantly in all groups except the formalin-inactivated and unvaccinated chickens. The IgA titer on day-56 was significantly higher in chickens injected with the Carbigen®-adjuvanted vaccine than in the formalin-inactivated and control groups (p < 0.05) ( ).
Cellular immune response
Isolation and culturing of PBMCs
PBMCs from vaccinated chickens responded to stimulation with gamma-irradiated P. multocida antigen with a notable increase in size and number, while PBMCs from unvaccinated chickens lacked any detectable response. A similar proliferative response was observed in PBMCs stimulated with LAC. Collectively, these results indicate that the gamma-irradiated vaccines successfully primed antigen-specific immune cells ( ).
Cytokine response
In this study, the expression of relevant cytokines was assessed, and there was variable upregulation of cytokines across all the vaccinated groups. In G1, IFN-γ, IL-6, and IL-12p40 showed increasing fold changes (FC) from day-21 through day-56 post-vaccination: 6 to 1234, 11 to 568, and 6 to 224 FC, respectively. In contrast, IL-1β and IL-4 levels remained unchanged at day-56 compared with baseline. The expression of IL-22 did not change to the extent of IFN-γ, IL-6, and IL-12p40, even though it increased by 26-fold on day-56 ( , ).
In G2, as in G1, IFN-γ, IL-6, and IL-12p40 had the highest expression levels, with fold changes of 440, 838, and 300, respectively, at day-56. The IL-22 level in this group increased by 100-fold at day-56 compared with baseline. This is contrary to G3, where the IL-22 level did not change during the study; however, IFN-γ (FC: 3019), IL-6 (FC: 430), and IL-12p40 (FC: 445) exhibited notable increases in expression. IL-1β and IL-4 levels dropped back to basal levels despite an initial peak at day-21 post-vaccination. The magnitude of change in cytokine expression in G4 was not as extreme as in the previous 3 groups, except for fold changes of 110 and 242 for IFN-γ and IL-12p40, respectively. Interestingly, in G5, where chickens were vaccinated with the formalin-inactivated vaccine, the cytokine response was not significantly affected except for IFN-γ (FC: 78) and IL-22 (FC: 77) on day-56 ( , ).
Evaluation of efficacy of formulated vaccines
The protective efficacy of the candidate vaccines was estimated in an infection challenge study. A total volume of 0.5 ml of bacterial suspension containing 2.5×10⁷ CFU/ml of avian P. multocida biotype A was administered to each experimental chicken, and the chickens were followed up for 14 days. The survival analysis is shown in . Vaccination with the gamma-irradiated vaccine containing Emulsigen®-D with alum provided complete protection against the challenge, while the vaccines adjuvanted with Montanide Gel 01 PR, Carbigen, and Emulsigen-P and the formalin-inactivated FC vaccine had 50%, 50%, 66.7%, and 66.7% efficacy against the challenge, respectively. Clinical signs such as lameness, diarrhea, and death were observed in challenged chickens at the varying frequencies indicated in .
Recovery of P. multocida after challenge
Following challenge, samples were collected randomly from the liver and lung tissues of chickens from each experimental group and analyzed using PCR to detect the presence of the P. multocida capsular serotype A (capA) gene. The results demonstrated that vaccination with both the Emulsigen-D with aluminum hydroxide gel (Alum) and the Montanide/01 PR gel adjuvanted vaccines protected against P. multocida infection: samples from these vaccinated groups tested negative for the capA gene, indicating a lack of detectable P. multocida capsular serotype A in the liver and lung tissues. Conversely, samples from all non-vaccinated groups tested positive for the capA gene, suggesting the presence of P. multocida capsular serotype A in these chickens following challenge ( ).
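To illustrate how the survival comparison above can be reproduced, the sketch below fits a Kaplan-Meier estimator to invented challenge data using the lifelines package; the follow-up times, the group label, and the lifelines dependency are assumptions for illustration, not the study's actual data or tooling. The 14-day efficacy (protection from death) is simply the estimated survival proportion at the end of follow-up.

```python
# Minimal Kaplan-Meier sketch with invented challenge data (lifelines assumed).
from lifelines import KaplanMeierFitter

# Follow-up time (days) and event indicator (1 = died) for one hypothetical
# group of 10 challenged chickens; values are illustrative only.
durations = [3, 5, 6, 14, 14, 14, 14, 14, 14, 14]
events    = [1, 1, 1,  0,  0,  0,  0,  0,  0,  0]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="hypothetical group")

# Survival probability at day 14 = proportion protected from death
surv_14 = kmf.predict(14)
print(f"Estimated 14-day survival (efficacy): {surv_14:.1%}")
```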
Discussion
In this study, gamma-irradiated FC vaccines containing different adjuvants were formulated and evaluated for their safety, immunogenicity, and efficacy in chickens. IgG is the most prevalent immunoglobulin type in chicken sera, while secretory IgA is essential for mucosal immunity and is produced locally by plasma cells found at mucosal surfaces ( , ). Thus, our assessment of systemic and mucosal immunity was based on serum IgG and secretory IgA, respectively. Accordingly, chickens vaccinated with Emulsigen-D+Alum seroconverted with a higher IgG titer than the Carbigen and Emulsigen-P groups (p < 0.05). This might be due to the combined effect of the two adjuvants, Emulsigen®-D and Alum, as multi-adjuvanted vaccines can stimulate broad and robust protective immune responses by activating a variety of immune mechanisms required to fight infectious diseases ( , ). In addition, we observed that the antibody titer persisted at day-56 post-primary vaccination in the groups receiving Montanide Gel 01 PR, Carbigen, and Emulsigen-D+Alum, but not in the formalin-inactivated vaccine group. This finding aligns with previous reports demonstrating that oil-based adjuvants induce a much more durable immune response than alum ( , ). It has been indicated that emulsions can form depots that release antigens gradually, generating a sustained stimulus to the immune system ( ). Another suggested mechanism is the induction of apoptosis in cells that are subsequently phagocytosed by DCs, which become activated as a result ( ). The choice of adjuvant needs to consider the cost and availability of the adjuvant as well as the protection conferred, rather than mere immunogenicity parameters, unless those parameters are known to correlate with protection. On the other hand, there was no significant difference between the mean antibody titers of chickens vaccinated with the vaccines containing Carbigen® and Montanide/01 PR gel and the formalin-inactivated FC vaccine. This is contrary to Dessalegn et al. ( ), who reported that Montanide/01 PR gel induced a higher antibody titer than the formalin-killed FC vaccine; the discrepancy could be due to the additional third booster dose included in their study. It is known that avian P. multocida infects poultry species via the mucosal surface of the upper respiratory tract. Mucosal vaccines are attractive and efficient because of their ability to induce both systemic and local immunity, the latter providing an immediate and effective response upon entry of the infectious agent ( ). When administered via the intranasal (IN) route, mucosal vaccines imitate the natural infection pathway of mucosal pathogens such as avian P. multocida, perhaps eliciting a more protective immune response than injectable formulations ( ). In this study, all the gamma-irradiated preparations induced a detectable IgA response, with no significant difference in titer between them. However, the formalin-inactivated vaccine failed to induce an IgA response. This can be explained by the variation in the route of vaccine administration: the gamma-irradiated vaccines were administered via mucosal routes, while the formalin-inactivated vaccine was injected intramuscularly. Mucosal vaccination, unlike parenteral vaccination, is known to induce both systemic and local immunity through the stimulation of B and T cells that migrate to systemic secondary tissue as well as to different mucosal compartments ( ).
In addition to the humoral immune response, this study evaluated the cellular immune response, which also plays an indispensable role in defense against FC. The PBMC compartment was investigated for this purpose; that is, PBMC proliferation and cytokine gene expression in response to vaccination were the endpoints for cellular immunity. PBMCs were isolated and cultured in the presence of P. multocida to mimic a repeat exposure, and only post-vaccination PBMCs proliferated notably in response to the re-stimulation. This response was evident from the observed increase in the size, granularity, and overall number of PBMCs compared with unvaccinated PBMCs. It can be ascribed to the already primed population of PBMCs in vaccinated chickens having a lower threshold for re-activation by the same antigen (P. multocida), thereby exhibiting a prompt response. The fact that PBMCs from vaccinated chickens respond to re-stimulation with P. multocida antigen is indicative of the activation of a specific immune response due to vaccination. Cytokines are crucial in orchestrating the defense against infection and vaccination ( ), and thus the dynamics of their expression levels can be used as a probe to study the immune responses generated in the context of infection and vaccination. This study therefore assessed the cytokine response of chickens to vaccination at the mRNA transcript level. The gene expression profile of a panel of cytokines, namely IFN-γ, IL-12p40, IL-4, IL-22, IL-1β, and IL-6, was studied. IFN-γ and IL-12p40 transcripts were upregulated by hundreds to thousands of folds in all vaccinated chickens compared with the unvaccinated group. On the other hand, IL-4 expression was not affected by vaccination except for the subtle (compared with IFN-γ and IL-12p40) spikes observed at day-21 post-vaccination, which subsequently declined back to baseline in all the vaccinated groups. The local cytokine milieu is an important factor in governing the type of T cell effector response that is induced ( ). IFN-γ and IL-4 are known for their mutually antagonistic functions ( , ). IL-12 and IFN-γ potently induce type 1 immune responses, and IL-4 is important for the induction of type 2 immune responses ( ). IL-12 induces IFN-γ synthesis and has a proliferative effect on chicken splenocytes. A variety of immune cells, such as NK cells and Th1 cells, produce IFN-γ in response to IL-12 from macrophages. IFN-γ in turn activates macrophages and boosts cytotoxic T cells, allowing them to eliminate intracellular parasites and infected cells ( , ). Based on our data, it can be stated that the gamma-irradiated vaccines induced a predominantly Th1 response, which is beneficial against intracellular pathogens ( , ). As mentioned before, there was a slight upregulation of mRNA transcripts of the Th2 cytokine IL-4 in vaccinated chickens. It has been reported that the Th2 immune response helps to counterbalance damage induced by elevated Th1-mediated inflammation ( ). Other studies have also highlighted Th1/Th2 imbalance as a mechanism for pathologies observed following infection or chemical damage ( , ). In our study, cytokine mRNA expression, and thus the cellular response, to the formalin-inactivated vaccine was of a lesser magnitude relative to the gamma-irradiated vaccines, as observed from the fold changes. A similar finding was reported by Sedeh et al. ( ) for a gamma-irradiated avian influenza vaccine. Nevertheless, the fold change for IL-22 was relatively higher in the group that received the formalin-treated vaccine.
Being secreted by a wide range of immune cells, IL-22 has been reported to limit Th1 responses and to promote regulatory T cells that inhibit the immune system and cytokine production ( ). This is in accordance with our observation of the relatively lower IL-12 and IFN-γ responses in this group. The other cytokines examined in this study were IL-1β and IL-6. The level of IL-1β transcript increased at day-21 but did not persist in any vaccinated group, whereas IL-6 expression persisted throughout the study period in the Montanide Gel 01 PR, Carbigen, and Emulsigen-D+Alum groups. Both cytokines are highly proinflammatory and are critical for initiating an acute-phase immune response against invading pathogens and for triggering a variety of immune cells, such as T cells and macrophages ( , ). Lastly, the efficacy (protection from death) of the formulated vaccines ranged from 50% (Montanide Gel 01 PR and Carbigen groups) to 100% (Emulsigen-D+Alum group); the vaccines used in G4 (Emulsigen-P) and G5 (formalin-inactivated) showed an efficacy of 66.7%. In light of the data generated in this study, we conclude that gamma irradiation offers an effective alternative for producing safe and efficacious mucosal vaccines against fowl cholera, with the potential to induce a broad range of humoral (systemic and local) as well as cellular immunity. Emulsigen-D+Alum performed best in terms of immunogenicity and efficacy, and this finding needs to be backed up by further studies involving larger sample sizes and various routes, doses, and formulations. As a limitation, the sample size used for assessing mucosal immunity was small and thus might not enable the detection of small effect sizes. In addition, vaccine efficacy was tested against only one bacterial strain. Furthermore, we could not investigate gamma-irradiated formulations adjuvanted with alum alone or a formalin-inactivated preparation adjuvanted with emulsions, owing to budget and time constraints; these could be addressed in future studies.
Applications of Artificial Intelligence to Electronic Health Record Data in Ophthalmology
The rapid adoption of electronic health records (EHRs) in recent decades has generated large volumes of clinical data with potential to support secondary use in research. Indeed, a recurring justification for EHR adoption has been to support the collection and analysis of "big data" to gain meaningful insights. The clinical research community has expressed growing interest in developing effective techniques to reuse clinical data from EHRs, in part because of the benefits of secondary data reuse over primary data collection. Researchers reusing EHR data may not need to recruit patients or collect new data, potentially reducing cost compared with traditional clinical research. Moreover, EHR data often contain valuable longitudinal data regarding a patient's status, medical care, and disease progression, which have previously been shown to support clinical decision support, medical concept extraction, diagnosis, and risk assessment. However, there are challenges associated with reusing EHR data, particularly because of their complexity and heterogeneity. For example, in ophthalmology, patient data contained in EHRs may include fields as diverse as demographic information, diagnoses, laboratory tests, prescriptions, eye examinations, imaging, and surgical records. Interpreting these heterogeneous data requires strategies such as information extraction, dimension reduction, and predictive modeling typical of machine learning and, more broadly, artificial intelligence (AI) techniques.
Applying AI to EHR data has been productive in a variety of domains. For instance, studies in cardiology have broadly used AI techniques with EHR data for the early detection of heart failure, to predict the onset of congestive heart failure, and to improve risk assessment in patients with suspected coronary artery disease. Likewise, in ophthalmology, machine learning classifiers with EHR data have been used to predict risks of cataract surgery complications, improve diagnosis of glaucoma and age-related macular degeneration (AMD), and perform risk assessment of diabetic retinopathy (DR). Although the application of AI to EHR data related to ocular diseases has increased during the past decade, there have been no published reviews of this literature. One literature review of machine learning techniques applied in ophthalmology was published in 2017 ; however, the included studies mainly focused on the application of machine learning techniques to imaging data rather than EHR data. This manuscript addresses this knowledge gap by reviewing the literature applying AI techniques to EHR data for ocular disease diagnosis and monitoring. With this review, we explore the types of AI techniques used, the performance of these techniques, and how AI has been applied to specific ocular diseases, providing future directions for clinical practice and research.
An exhaustive search was performed in the PubMed database using search terms related to "Artificial intelligence," "Electronic health records," and "Eye" in any field of the articles. See the for the full query. The results were then examined and narrowed according to the following criteria:
1. Duplicates were removed.
2. Studies were eliminated for lack of relevance after review of the title and abstract; studies that used only imaging data without any EHR data were excluded.
3. Studies without direct clinical application or not related to the topic were excluded.
The review process is summarized in .
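As an illustration of how such a search can be reproduced programmatically, the sketch below uses Biopython's Entrez utilities; the search term shown is a simplified stand-in for the authors' full query, and the email address is a placeholder.

```python
# Minimal sketch of a PubMed search via Biopython's Entrez utilities.
# The term below is a simplified stand-in, not the authors' full query.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # required by NCBI; placeholder
term = ('("artificial intelligence" OR "machine learning") '
        'AND "electronic health records" AND eye')

handle = Entrez.esearch(db="pubmed", term=term, retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} articles found")
print(record["IdList"][:10])  # first 10 PubMed IDs
```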
One author (WL) identified articles for inclusion through manual review of titles, abstracts, and content. Two authors (WL and JSC) extracted data for each study: the aim, disease, algorithm, specific techniques, performance assessment, and conclusions of the articles that met the inclusion criteria, as summarized in the . The PubMed query returned 164 articles published through August 2019. In total, 161 articles were reviewed after removing 3 duplicates. Then 118 articles were excluded because of lack of relevance based on the title and abstract. A total of 13 articles met the inclusion criteria ( ).
AI Techniques
Three major techniques were used in these studies: 11 studies used supervised machine learning, of which 3 specifically used a deep learning technique; 2 studies also used natural language processing (NLP) to generate structured data suitable for analysis from unstructured text. Only 1 study used deep learning by itself, and another study used NLP independently of other techniques ( ). illustrates a simplified machine learning process and the relationship among these 3 techniques. In short, NLP can be used to extract useful information from text-based data and process it into a format suitable for machine learning. Supervised machine learning techniques, some of which use deep learning algorithms, can then be applied to these and other structured data sets to develop predictive models or classifiers.
Machine Learning
Machine learning techniques are computational methods that learn patterns or classifications within data without being explicitly programmed to do so. Machine learning can be divided into 2 methods based on the use of "ground truth" data: supervised learning and unsupervised learning. In supervised learning, a model learns from "ground truth" data in a training data set that contains labeled output data and can then predict the output for new cases. The algorithm is typically a classifier with categorical output or a regression algorithm with continuous output. In unsupervised learning, the model learns from a training data set without labeled output and identifies underlying patterns or structures within its input data. In medicine, machine learning has been widely used in specialties such as radiology, cardiology, oncology, and ophthalmology to improve diagnostic accuracy and early disease detection. In this review, most studies used supervised machine learning techniques such as random forest, logistic regression, support vector machines (SVMs), gradient boosting, least absolute shrinkage and selection operator (LASSO), AdaBoost, and classification and regression tree (CART). As shown in B, logistic regression is an extension of linear regression ( A). In linear regression, the data are modeled as a linear relationship that can be used to predict a value for a given input. In logistic regression, a non-linear function, called the logistic function, converts prediction values into binary categories based on a threshold. Some methods, such as LASSO, can be used to improve the prediction accuracy of logistic regression. LASSO is a statistical method that selects a smaller subset of the predictor variables most related to the outcome variable and shrinks regression coefficients to improve accuracy and generalizability.
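To make these descriptions concrete, the sketch below fits a logistic regression classifier with an L1 (LASSO-style) penalty on synthetic tabular data of the kind EHR studies use; the feature matrix and labels are fabricated for illustration, and scikit-learn is assumed to be available.

```python
# Minimal sketch: logistic regression with an L1 (LASSO-style) penalty
# on synthetic EHR-like tabular data (scikit-learn assumed).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # e.g., labs, demographics, exam fields
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# L1 regularization shrinks uninformative coefficients toward zero,
# performing the variable selection described for LASSO above.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X_tr, y_tr)

print("nonzero coefficients:", np.sum(clf.coef_ != 0))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```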
SVM is another popular machine learning model used for classification analysis. As shown in C, a boundary is created that splits the input data into two distinct groups and can be used to classify new data into the same distinct categories. A decision tree is another important supervised machine learning algorithm. D illustrates a decision tree with a root node followed by branch nodes and terminal nodes. The root node is the first decision node and represents the best predictor variable. Each branch node represents the output for a given input variable. As more input variables are added at subsequent branch nodes, the decision tree becomes more sophisticated in predicting the outcome variable at the terminal nodes. Ensemble methods combine multiple machine learning models and are commonly used to improve the performance of prediction models. The two most common methods, bootstrap aggregation (bagging) and boosting, are shown in E. In a bagging method, multiple subsets of data are randomly selected from the original dataset, and each subset is used to train a separate prediction model. The final predictions are aggregated across all prediction models. Random forest algorithms are an example of an ensemble machine learning method that combines bagging and decision trees. Boosting is another technique that combines multiple models to create a more accurate one. AdaBoost and gradient boosting are widely used boosting algorithms. As shown in the , random forest was used by Lin et al. to predict myopia onset and by Chaganti et al. to improve the diagnostic accuracy of glaucoma. In addition, Baxter et al. used random forest and logistic regression to identify patients with open-angle glaucoma at risk of progression to surgical intervention. Fraccaro et al. used logistic regression, decision trees, SVMs, random forests, and AdaBoost to improve the diagnostic accuracy of AMD. In addition, fuzzy random forest (FRF) and the dominance-based rough set approach (DRSA) were used by Saleh et al. for DR risk assessment, and Gaskin et al. used random forest and bootstrapped LASSO to identify and predict risks of cataract surgery complications. Moreover, Yoo and Park used elastic net and LASSO to predict DR risk among diabetic patients.
Deep Learning
Deep learning is a subset of machine learning techniques based on artificial neural networks (ANNs) that mimic human brain processing. As shown in F, multiple layers of computation are constructed in a deep learning model, and each layer performs computations on data from the previous layer. The layers between the input layer and the output layer are called hidden layers. While information may flow from the input toward subsequent output layers (feedforward), information can also flow backward from hidden layers toward input layers (backpropagation). The inputs and outputs of the hidden layers are not reported; deep learning algorithms present only the final outcome of the output layer. Deep learning does not require structured input features as machine learning does; it is therefore useful for raw images, because they do not have to be prefiltered as they do for machine learning algorithms. After processing the raw input through the multiple layers of a deep neural network, the algorithm finds appropriate features for classifying the output. In this review, several articles used deep learning algorithms such as ANNs, convolutional neural networks (CNNs), multilayer neural network ensemble models (MLNN-EMs), and feed forward neural networks (FNNs).
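A single forward pass through such a network can be written in a few lines. The sketch below (plain NumPy, with invented layer sizes and random weights, and training omitted) shows the layered structure just described, with one hidden layer between the input and output layers.

```python
# Minimal sketch of a feedforward pass through one hidden layer (NumPy only).
# Sizes and weights are invented; training (backpropagation) is omitted.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.2, 0.7, 0.1])                        # input layer: 3 features
W1 = np.random.default_rng(0).normal(size=(4, 3))    # input -> hidden (4 units)
b1 = np.zeros(4)
W2 = np.random.default_rng(1).normal(size=(1, 4))    # hidden -> output
b2 = np.zeros(1)

hidden = sigmoid(W1 @ x + b1)    # hidden-layer activations (not reported)
output = sigmoid(W2 @ hidden + b2)   # final prediction from the output layer
print(f"predicted probability: {output[0]:.3f}")
```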
CNN is a subtype of deep neural network commonly used in image classification. In a CNN model, special convolution and pooling layers are used to reduce a raw image to the essential features necessary for the model to classify or label the image. In other words, these techniques use machine learning to determine the model's input features from the raw image data, rather than a human or a separate image processing program. MLNN-EM is a technique that integrates several neural networks into an aggregated outcome. FNN is another subtype of neural network in which information moves forward, in one direction, from the input nodes and never moves backward; the nodes between the input and output layers do not form a cycle. As shown in the , Lee et al. used CNNs to distinguish AMD from normal OCT images, and Baxter et al. used ANNs to identify open-angle glaucoma patients at risk of progression to surgery. Also, Sramka et al. used MLNN-EMs and support vector machine regression models (SVM-RMs) to improve clinical intraocular lens (IOL) calculations, and Skevofilakas et al. used a feed forward neural network (FNN) and improved hybrid wavelet neural networks to develop a hybrid decision support system for predicting DR risk among diabetic patients.
NLP
NLP is a branch of AI in which computers attempt to interpret human language in written or spoken form. By using NLP, researchers can extract information from text; uses in medicine include separating progress notes into sections, determining diagnoses from notes, and identifying the documentation of adverse events. As shown in the , Apostolova et al., Peissig et al., and Gaskin et al. describe the use of NLP to extract cataract information from free-form text clinical notes.
Outcome Metrics for Evaluation of Performance of AI Techniques
Performance evaluation of the different AI techniques depends on the chosen algorithm, the purpose of the study, and the input data set. In supervised machine learning, classifiers are evaluated by comparing the known categorical output with the predicted categorical output. For outputs with 2 categories, the accuracy, sensitivity, specificity, positive predictive value, and negative predictive value can be computed. Another important evaluation metric is the AUC-ROC (area under the receiver operating characteristic curve), which is used to evaluate the performance of classifiers across different thresholds. The ROC is a probability curve that visualizes how the true positive rate (sensitivity) changes with respect to the false positive rate (1 - specificity) for the different threshold values used in the model. The AUC represents the ability of a model to distinguish between different outcome values. An AUC equal to 1 is ideal and represents the model's ability to perfectly distinguish between two outcomes. On the other hand, an AUC of approximately 0.5 is the worst case, because it means that the model is no better than chance at distinguishing between the two outcomes. As shown in the , 8 studies used AUC-ROC to evaluate the performance of classifiers. The range of AUC-ROC was from 65% to 98.5%, and the median AUC across all included studies was 90%. In addition, precision and recall were used to evaluate the performance of text-mining algorithms: Apostolova et al. and Peissig et al. used precision and recall to evaluate the performance of text classification.
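The threshold-based quantities listed above follow directly from confusion-matrix counts; the short sketch below makes the relationships explicit, using invented counts for illustration.

```python
# Minimal sketch: classification metrics from confusion-matrix counts.
# The counts below are invented for illustration.
tp, fp, fn, tn = 80, 15, 10, 95

sensitivity = tp / (tp + fn)   # recall / true positive rate
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)           # precision / positive predictive value
npv = tn / (tn + fn)
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} accuracy={accuracy:.2f}")
```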
For regression models, two evaluation metrics, mean absolute error (MAE) and root mean squared error (RMSE), are commonly used to measure accuracy for continuous variables. Both measure the average difference between actual observations and predictions. MAE reports the absolute differences, with equal weight for each difference; in contrast, RMSE penalizes larger errors by squaring each difference before averaging. In the study by Rohm et al., MAE and RMSE were used to evaluate visual acuity prediction.
Application of AI to Clinical Ophthalmology
AI techniques have been applied clinically to improve ocular disease diagnosis, predict disease progression, and perform risk assessment ( ). Several diseases were studied in the articles included in this review, including glaucoma, cataracts, AMD, and DR. We present the benefits of AI techniques with EHR data for each of these diseases below.
Glaucoma
Two studies in this review focused on the field of glaucoma and used supervised machine learning techniques to improve diagnosis and predict progression. , In the study by Chaganti et al., good performance was obtained (AUC for glaucoma diagnosis, 88%), and the results showed that the addition of an EMR phenotype could improve the classification accuracy of a random forest classifier built on imaging biomarkers. On the other hand, Baxter et al. reported moderate performance (AUC 67%) in a study that used EHR data alone to predict the risk of progression to surgical intervention in patients with open-angle glaucoma. In addition to model performance, it is important to know which factors can be used to improve disease diagnosis. The work by Chaganti et al. began to explore this problem by comparing the performance of classifiers using EMR phenotypes, visual disability scores, and imaging metrics.
Cataracts
Three studies applying different AI techniques to cataract diagnosis and management were reviewed. In the study by Peissig et al., NLP was used to extract cataract information from free-text documents. An EHR-based cataract phenotyping algorithm, consisting of structured data, information from free-text notes, and optical character recognition on scanned clinical images, was developed to identify cataract subjects; the study showed good performance (positive predictive value > 95%). Additionally, Gaskin et al. used supervised machine learning algorithms to identify risk factors for, and to predict, intraoperative and postoperative complications of cataract surgery. The investigators used data mining via NLP to extract cataract information from the EHR system. The risk factors associated with surgical complications included younger age, refractive surgery history, AMD history, and complex cataract surgery, and the predictive model showed moderate performance (AUC 65%). Supervised machine learning (SVM-RM) and deep learning (MLNN-EM) algorithms were used by Sramka et al. to improve the IOL power calculation; both models provided better IOL calculations than the Barrett Universal II formula.
AMD
Three studies used AI in AMD. Lee et al. used deep learning techniques to improve the diagnosis of AMD. Optical coherence tomography (OCT) images were linked to EMR clinical end points extracted from EPIC (Verona, WI) for each patient to predict a diagnosis of AMD.
The model had high accuracy, with an AUC of 97% in distinguishing AMD from normal OCT images. Another study, conducted by Rohm et al., used supervised regression models to accurately predict visual acuity in response to anti-vascular endothelial growth factor injections in patients with neovascular AMD. Models predicting treatment response may have implications for encouraging patients to adhere to intravitreal therapy. Also, as demonstrated by Fraccaro et al., supervised machine learning techniques can be incorporated into EHR systems to provide real-time support for AMD diagnosis.
DR
DR is one of the most common comorbidities of diabetes, and frequent screening examinations for diabetic patients are resource consuming. Three studies explored this problem by using AI techniques with EHR data to determine patient risk for the development of DR. Saleh et al. used two kinds of ensemble classifiers, FRF and DRSA, to predict DR risk using EHRs; the FRF model showed good performance (accuracy 80%). Similarly, Yoo and Park compared learning models (ridge, elastic net, and LASSO) with the traditional indicators of DR and showed that the performance of LASSO (AUC 81%) was significantly better than that of the traditional indicators (AUC of glycated hemoglobin, 69%; AUC of fasting plasma glucose, 54%) in diagnosing DR. In addition, a hybrid DSS was developed by Skevofilakas et al. to estimate the risk that a patient with type 1 diabetes will develop DR; this hybrid DSS showed excellent performance, with an AUC of 98%. Overall, these studies show that integrating such techniques with an EHR system has promise for improving the early detection of diabetic patients at risk of DR progression.
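As a closing illustration of the regression metrics introduced in the outcome-metrics section, the sketch below computes MAE and RMSE for invented visual-acuity predictions of the kind modeled by Rohm et al.; the logMAR values are fabricated for illustration, and NumPy is assumed.

```python
# Minimal sketch: MAE and RMSE for a regression model's predictions (NumPy).
# The logMAR visual-acuity values below are invented for illustration.
import numpy as np

actual    = np.array([0.30, 0.18, 0.48, 0.10, 0.40])
predicted = np.array([0.25, 0.20, 0.60, 0.12, 0.35])

errors = predicted - actual
mae  = np.mean(np.abs(errors))        # equal weight for each error
rmse = np.sqrt(np.mean(errors ** 2))  # penalizes larger errors more
print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}")
```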
In this review, most studies used supervised machine learning techniques such as random forest, logistic regression, support vector machines (SVMs), gradient boosting, least absolute shrinkage and selection operator (LASSO), AdaBoost, and classification and regression tree (CART). As shown in B, logistic regression is an extension of linear regression ( A). In linear regression, the data is modeled as a linear relationship that can be used to predict a value for a given input. In logistic regression, a non-linear function, called the logistic function, converts prediction values into binary categories based on a threshold. Some methods can be used to improve the prediction accuracy of logistic regression, such as least absolute shrinkage and selection operator (LASSO). LASSO is a statistical method that selects a smaller subset of predictor variables most related to the outcome variable and shrinks regression coefficients to improve accuracy and generalizability. SVM is another popular machine learning model used for classification analysis. As shown in C, a boundary is created to split input data into two distinct groups and can be used to classify new data into similar distinct categories. A decision tree is an important supervised machine learning algorithm. D illustrates a decision tree with a root node as a start followed by the branched nodes and terminal nodes. The root node is the first decision node representing the best predictor variable. Each branched node represents the output of a given input variable. As more input variables are added to subsequent branching nodes, the decision tree becomes more sophisticated in predicting the outcome variable at the terminal nodes. Ensemble methods combine multiple machine learning models and are commonly used to improve the performance of prediction models. The two most common methods: bootstrapping aggregation (bagging) and boosting were shown in E. In a bagging method, multiple subsets of data are randomly selected from the original dataset and each subset data are used to train a separate prediction model. The final predictions will be aggregated from all prediction models. Random forest algorithms are examples of an ensemble machine learning method that combine bagging and decision trees. Boosting is another technique that combines multiple models to create a more accurate one. Adaboost and gradient boosting are widely used boosting machine learning algorithms. As shown in the , random forest was used by Lin et al. to predict myopia onset and by Chaganti et al. to improve the diagnostic accuracy of glaucoma. In addition, Baxter et al. used random forest and logistic regression to identify patients with open-angle glaucoma who had a risk of progression to surgical intervention. Fraccaro et al. used logistic regression, decision trees, SVMs, random forests, and AdaBoost to improve diagnostic accuracy of AMD. In addition, fuzzy random forest (FRF) and dominance-based rough set approach (DRSA) were used by Saleh et al. for DR risk assessment. And Gaskin et al. used random forest and bootstrapped LASSO to identify and predict risks of cataract surgery complications. Moreover, Yoo and Park used elastic net and LASSO to predict DR risk among diabetic patients. Deep Learning Deep learning is a subset of machine learning techniques based on artificial neural networks (ANNs) that mimic human brain processing. 
As shown in F, multiple layers of computation are constructed in a deep learning model, and each layer is used to perform computations on data from the previous layer. The layers between the input layer and the output layer are called hidden layers. While the information may flow from the input to subsequent output layers (feedforward), information can also flow backward from hidden layers to input layers (backpropagation). The inputs and outputs of hidden layers are not reported; deep learning algorithms present only the final outcome of the output layer. Deep learning does not use structured features for input as machine learning does; therefore, deep learning is useful for raw images because they do not have to be prefiltered as they do for machine learning algorithms. After processing raw input through multiple layers within deep neural networks, the algorithms find appropriate features for classifying output. In this review, several articles used deep learning algorithms such as ANNs, convolutional neural networks (CNNs), multilayer neural network ensemble models (MLNN-EMs), and feed forward neural networks (FNNs). CNN is a subtype of deep neural network commonly used in image classification. In a CNN model, special convolution and pooling layers are used to reduce a raw image to essential features necessary for the model to classify or label the image. In other words, these techniques use machine learning to determine model input features from the raw image data, rather than a human or a separate image processing program. MLNN-EM is a learning technique that integrates several neural networks to aggregated outcome. In addition, FNN is another subtype of neural network where the information moves forward in (one direction) from root nodes; information never moves backwards. The nodes between input and out layers do not form a cycle of information. As shown in the , Lee et al. used CNNs to distinguish AMD from normal OCT images, Baxter et al. used ANNs to identify open-angle glaucoma patients at risk of progression to surgery. Also, Sramka et al. used models MLNN-EMs and support vector machine regression models (SVM-RM) to improve clinical intraocular lens (IOL) calculations, and Skevofilakas et al. used feed forward neural network (FNN) and improved hybrid wavelet neural networks to develop hybrid decision support system for predicting DR risk among diabetic patients. NLP NLP is a branch of AI in which computers attempt to interpret human language in written or spoken form. By using NLP, researchers can extract information from text; some uses in medicine include separating progress notes into sections, determining diagnoses from notes, and identifying the documentation of adverse events. As shown in the , Apostolova et al., Peissig et al., and Gaskin et al. describe the use of NLP in extracting cataract information from free-form text clinical notes. Machine learning techniques are computational methods that learn patterns or classifications within data without being explicitly programmed to do so. Machine learning can be divided into 2 methods based on the use of “ground truth” data: supervised learning and unsupervised learning. In supervised learning, a model learns from “ground truth” data in a training data set that contains labeled output data and then can predict the output for new cases. The algorithm is typically a classifier with categorical output or a regression algorithm with continuous output. 
In unsupervised learning, the model learns from a training data set without labeled output and identifies underlying patterns or structures within its input data. In medicine, machine learning has been widely used in several specialties such as radiology, cardiology, oncology, and ophthalmology to improve diagnostic accuracy and early disease detection. In this review, most studies used supervised machine learning techniques such as random forest, logistic regression, support vector machines (SVMs), gradient boosting, least absolute shrinkage and selection operator (LASSO), AdaBoost, and classification and regression tree (CART). As shown in B, logistic regression is an extension of linear regression ( A). In linear regression, the data is modeled as a linear relationship that can be used to predict a value for a given input. In logistic regression, a non-linear function, called the logistic function, converts prediction values into binary categories based on a threshold. Some methods can be used to improve the prediction accuracy of logistic regression, such as least absolute shrinkage and selection operator (LASSO). LASSO is a statistical method that selects a smaller subset of predictor variables most related to the outcome variable and shrinks regression coefficients to improve accuracy and generalizability. SVM is another popular machine learning model used for classification analysis. As shown in C, a boundary is created to split input data into two distinct groups and can be used to classify new data into similar distinct categories. A decision tree is an important supervised machine learning algorithm. D illustrates a decision tree with a root node as a start followed by the branched nodes and terminal nodes. The root node is the first decision node representing the best predictor variable. Each branched node represents the output of a given input variable. As more input variables are added to subsequent branching nodes, the decision tree becomes more sophisticated in predicting the outcome variable at the terminal nodes. Ensemble methods combine multiple machine learning models and are commonly used to improve the performance of prediction models. The two most common methods: bootstrapping aggregation (bagging) and boosting were shown in E. In a bagging method, multiple subsets of data are randomly selected from the original dataset and each subset data are used to train a separate prediction model. The final predictions will be aggregated from all prediction models. Random forest algorithms are examples of an ensemble machine learning method that combine bagging and decision trees. Boosting is another technique that combines multiple models to create a more accurate one. Adaboost and gradient boosting are widely used boosting machine learning algorithms. As shown in the , random forest was used by Lin et al. to predict myopia onset and by Chaganti et al. to improve the diagnostic accuracy of glaucoma. In addition, Baxter et al. used random forest and logistic regression to identify patients with open-angle glaucoma who had a risk of progression to surgical intervention. Fraccaro et al. used logistic regression, decision trees, SVMs, random forests, and AdaBoost to improve diagnostic accuracy of AMD. In addition, fuzzy random forest (FRF) and dominance-based rough set approach (DRSA) were used by Saleh et al. for DR risk assessment. And Gaskin et al. used random forest and bootstrapped LASSO to identify and predict risks of cataract surgery complications. 
Deep learning is a subset of machine learning techniques based on artificial neural networks (ANNs) that mimic human brain processing. As shown in F, multiple layers of computation are constructed in a deep learning model, and each layer is used to perform computations on data from the previous layer. The layers between the input layer and the output layer are called hidden layers. While information may flow from the input to subsequent output layers (feedforward), information can also flow backward from hidden layers to input layers (backpropagation). The inputs and outputs of hidden layers are not reported; deep learning algorithms present only the final outcome of the output layer. Deep learning does not use structured features for input as machine learning does; therefore, deep learning is useful for raw images because they do not have to be prefiltered as they do for machine learning algorithms. After processing raw input through multiple layers within deep neural networks, the algorithms find appropriate features for classifying output. In this review, several articles used deep learning algorithms such as ANNs, convolutional neural networks (CNNs), multilayer neural network ensemble models (MLNN-EMs), and feed-forward neural networks (FNNs). CNN is a subtype of deep neural network commonly used in image classification. In a CNN model, special convolution and pooling layers are used to reduce a raw image to the essential features necessary for the model to classify or label the image. In other words, these techniques use machine learning to determine model input features from the raw image data, rather than a human or a separate image processing program. MLNN-EM is a learning technique that integrates several neural networks to produce an aggregated outcome. In addition, FNN is another subtype of neural network in which information moves forward in one direction from the input nodes and never moves backward; the nodes between the input and output layers do not form a cycle. As shown in the , Lee et al. used CNNs to distinguish AMD from normal OCT images, and Baxter et al. used ANNs to identify open-angle glaucoma patients at risk of progression to surgery. Also, Sramka et al. used MLNN-EMs and support vector machine regression models (SVM-RM) to improve clinical intraocular lens (IOL) calculations, and Skevofilakas et al. used an FNN and improved hybrid wavelet neural networks to develop a hybrid decision support system for predicting DR risk among diabetic patients.
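As a rough illustration of the convolution-and-pooling idea described above, the toy Keras model below reduces a small grayscale image to learned features before a dense output layer performs a binary classification, in the spirit of the AMD-versus-normal OCT task mentioned above. The input size, layer widths, and labels here are assumptions for illustration only, not the architecture used in any reviewed study.

```python
# Toy CNN sketch: convolution and pooling layers reduce a raw image to
# features, then a dense output layer classifies it (binary task here).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),           # 64x64 grayscale image (assumed size)
    layers.Conv2D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),           # pooling shrinks the feature map
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),                           # hidden layers end here
    layers.Dense(1, activation="sigmoid"),      # output layer: e.g. disease vs normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

The convolution and pooling layers play the role the text assigns them: they learn the input features directly from pixels, so no human-engineered features are required.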
NLP
NLP is a branch of AI in which computers attempt to interpret human language in written or spoken form. By using NLP, researchers can extract information from text; some uses in medicine include separating progress notes into sections, determining diagnoses from notes, and identifying the documentation of adverse events. As shown in the , Apostolova et al., Peissig et al., and Gaskin et al. describe the use of NLP in extracting cataract information from free-form text clinical notes.
Performance evaluation of different AI techniques depends on the chosen algorithm, the purpose of the study, and the input data set. In supervised machine learning algorithms, classifiers are evaluated based on a comparison between the known categorical output and the predicted categorical output. For outputs with 2 categories, the accuracy, sensitivity, specificity, positive predictive value, and negative predictive value can be computed. Another important evaluation metric is the AUC-ROC (area under the curve–receiver operating characteristic), which is used to evaluate the performance of classifiers across different thresholds. The ROC is a probability curve that visualizes how the true positive rate (sensitivity) changes with respect to the false positive rate (1–specificity) for different threshold values used in the model. The AUC represents the ability of a model to distinguish between different outcome values. An AUC equal to 1 is ideal and represents the model's ability to perfectly distinguish between two outcomes. On the other hand, an AUC of approximately 0.5 is the worst case because it means that the model is no better than chance at distinguishing between two outcomes. As shown in the , 8 studies used AUC-ROC to evaluate the performance of classifiers. The range of AUC-ROC was from 65% to 98.5%, and the median AUC in all included studies was 90%. In addition, precision and recall were used to evaluate the performance of text-mining algorithms; Apostolova et al. and Peissig et al. used precision and recall to evaluate the performance of text classification. For regression models, 2 evaluation metrics, mean absolute error (MAE) and root mean squared error (RMSE), are commonly used to measure accuracy for continuous variables. They measure the average difference between actual observations and predictions. MAE shows the absolute differences, with equal weight for each difference. In contrast, RMSE penalizes larger errors by taking the square of the difference before averaging. In the study by Rohm et al., MAE and RMSE were used to evaluate visual acuity prediction.
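The evaluation metrics above take only a few lines to compute; the sketch below scores a hypothetical classifier's predicted probabilities against known labels and a hypothetical regression against continuous targets. The arrays are made-up toy values, not data from the reviewed studies.

```python
# Toy evaluation sketch: classification metric (AUC-ROC) and
# regression metrics (MAE, RMSE) on made-up predictions.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, roc_auc_score

# Classification: known binary labels vs. predicted probabilities.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_prob = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])
print(f"AUC-ROC: {roc_auc_score(y_true, y_prob):.2f}")  # 1.0 = perfect, ~0.5 = chance

# Regression: actual vs. predicted continuous values (e.g., visual acuity).
va_true = np.array([0.30, 0.50, 0.80, 0.20])
va_pred = np.array([0.35, 0.45, 0.70, 0.30])
print(f"MAE:  {mean_absolute_error(va_true, va_pred):.3f}")  # equal weight per error
print(f"RMSE: {np.sqrt(mean_squared_error(va_true, va_pred)):.3f}")  # penalizes large errors
```

The same calls generalize to real model outputs; sensitivity, specificity, and the predictive values follow similarly from the confusion matrix at a chosen threshold.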
AI techniques have been applied clinically to improve ocular disease diagnosis, predict disease progression, and assess risk ( ). Several diseases were studied in the articles included in this review, including glaucoma, cataracts, AMD, and DR. We will present the benefits of AI techniques with EHR data in these diseases.
Glaucoma
Two studies in this review focused on the field of glaucoma and used supervised machine learning techniques to improve diagnosis and predict progression. In the study by Chaganti et al., good performance was obtained (AUC of glaucoma diagnosis 88%), and the results showed that the addition of an EMR phenotype could improve the classification accuracy of a random forest classifier with imaging biomarkers. On the other hand, Baxter et al. reported moderate performance (AUC 67%) in a study that used EHR data alone to predict the risk of progression to surgical intervention in patients with open-angle glaucoma. In addition to model performance, it is important to know which factors can be used to improve disease diagnosis. The work performed by Chaganti et al. began to explore this problem by comparing the performance of classifiers using EMR phenotypes, visual disability scores, and imaging metrics.
Cataracts
Three studies applying different AI techniques to cataract diagnosis and management were reviewed. In the study by Peissig et al., NLP was used to extract cataract information from free-text documents. An EHR-based cataract phenotyping algorithm, which consisted of structured data, information from free-text notes, and optical character recognition on scanned clinical images, was developed to identify cataract subjects. The result of the study showed good performance (positive predictive value >95%). Additionally, Gaskin et al. used supervised machine learning algorithms to identify risk factors and to predict intraoperative and postoperative complications of cataract surgery. The investigators used data mining via NLP to extract cataract information from the EHR system. The predictive model showed moderate performance (AUC 65%), and the risk factors associated with surgical complications included younger age, refractive surgery history, AMD history, and complex cataract surgery. Supervised machine learning (SVM-RM) and deep learning (MLNN-EM) algorithms were used to improve the IOL power calculation by Sramka et al. Both the SVM-RM and MLNN-EM models provided better IOL calculations than the Barrett Universal II formula.
AMD
Three studies used AI in AMD. Lee et al. used deep learning techniques to improve the diagnosis of AMD. Optical coherence tomography (OCT) images of each patient were linked to EMR clinical end points extracted from EPIC (Verona, WI) to predict a diagnosis of AMD. The model had high accuracy, with an AUC of 97% in distinguishing AMD from normal OCT images. Another study, conducted by Rohm et al., used supervised regression models to accurately predict visual acuity in response to anti–vascular endothelial growth factor injections in patients with neovascular AMD. Models predicting treatment response may have implications in encouraging patients to adhere to intravitreal therapy. Also, as demonstrated by Fraccaro et al., supervised machine learning techniques can be incorporated into EHR systems to provide real-time support for AMD diagnosis.
DR
DR is one of the most common comorbidities of diabetes, and frequent screening examinations for diabetic patients are resource consuming. Three studies explored this problem by using AI techniques with EHR data to determine patient risk for the development of DR. Saleh et al. used 2 kinds of ensemble classifiers, FRF and DRSA, to predict DR risk using EHRs. Good performance (accuracy 80%) of the FRF model was shown in this study. Similarly, Yoo and Park proposed a comparison between the learning models (ridge, elastic net, and LASSO) using the traditional indicators of DR. They showed that the performance of LASSO (AUC 81%) was significantly better than the traditional indicators (AUC of glycated hemoglobin 69%; AUC of fasting plasma glucose 54%) in diagnosing DR. In addition, a hybrid DSS was developed by Skevofilakas et al. to estimate the risk of a patient with type 1 diabetes developing DR. The hybrid DSS showed an excellent performance with an AUC of 98%. Overall, these studies show that integrating these techniques with an EHR system has promise in improving early detection of diabetic patients at risk of DR progression.
This article reviews the literature applying AI techniques to EHR data to aid in ocular disease diagnosis and risk assessment. We focus the discussion on 3 areas: the AI techniques used to analyze EHR data, the performance of those techniques, and the ocular diseases most commonly analyzed. First, secondary use of EHR data via AI techniques can improve ocular disease diagnosis, risk assessment, and prediction of disease progression. The predictive models across the 8 classifiers showed good performance, with a median AUC of 90%. One study, on prediction of postoperative complications of cataract surgery, reported moderate accuracy (AUC 65%), perhaps because of insufficient predictors, such as a lack of surgeon-relevant information. Also, the prevalence of various complications may affect the reliability of prediction outcomes. For example, a rare complication may not be handled well by standard classification techniques because of imbalanced data. When a dataset contains very few cases of a disease or complication, there is not enough data about those cases for the model to accurately learn how to predict them. On the other hand, excellent performance of classifiers trained on combined EHR and image data was reported by Skevofilakas et al. and Lee et al. For future studies, a feasible direction might be to develop a hybrid model that uses both routine EHR data and image data sets to obtain a more complete picture of the patient variables associated with the outcome of interest.
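One common mitigation for the imbalanced-data problem noted above is to reweight the minority class during training; the snippet below shows this with scikit-learn's class_weight option on a deliberately skewed synthetic data set. The 5% positive rate, standing in for a rare complication, is an assumption chosen purely for illustration.

```python
# Sketch: handling an imbalanced outcome (e.g., a rare surgical complication)
# by weighting classes inversely to their frequency during training.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Skewed outcome: ~5% positives, standing in for a rare complication.
X, y = make_classification(n_samples=2000, n_features=15,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

# "balanced" weights each class inversely to its frequency,
# so the rare class is not ignored by the fitted model.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_train, y_train)

y_prob = clf.predict_proba(X_test)[:, 1]
print(f"AUC-ROC on held-out data: {roc_auc_score(y_test, y_prob):.2f}")
```

Oversampling the minority class or tuning the decision threshold are alternative strategies; which works best depends on the data at hand.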
Second, supervised machine learning was the most common technique used with EHR data to analyze ocular diseases. These studies focused on improving diagnosis, predicting progression, or assessing risk for early detection. The predictors were defined based on the risk factors of disease, demographic features found in the literature, and clinical experience. None of the studies reviewed used unsupervised machine learning techniques, in which the desired output and the relationship between the outcome variable and the predictors are unknown. These methods are used to identify clusters of similar data and can help discover hidden factors that are useful for improving diagnosis. However, unsupervised learning has been successfully applied in other fields. For example, Marlin et al. demonstrated that a probabilistic clustering model for time-series data from real-world EHRs was able to capture patterns of physiology and could be used to construct mortality prediction models. For future studies, unsupervised machine learning techniques might be used to find hidden patterns in EHR data to improve clinical predictions of ocular diseases. Finally, in this review, studies that analyzed EHR data with AI techniques mainly focused on 4 diseases: glaucoma, DR, AMD, and cataracts. The focus on glaucoma, AMD, and DR is likely due to their prevalence as major causes of irreversible blindness in the world. Early detection or treatment can delay or halt the progression of such diseases, reduce visual morbidity, and preserve a patient's quality of life. These studies suggest that AI techniques can be used to achieve this goal. Furthermore, cataract surgery is the most common refractive surgical procedure and is one of the most common surgeries performed in ophthalmology. Risk assessment of postoperative complications and decreasing the risk of reoperation are crucial to patient outcomes, and AI techniques can help approach these issues. This review presents the AI techniques used in vision science based on EHR data. However, several problems still need to be addressed in future studies. One of the major problems is data quality. EHR data required for research are essentially different from data collected during a traditional clinical research study. EHR data collected from clinical practice may have incomplete information due to incorrect data entry, nonanswers, and recording errors. Consequently, the performance of machine learning models will depend on data quality, which is a key issue when using AI techniques with EHR data. Additionally, except for the work reported by Lin et al., all reviewed studies were single-center studies. Thus, the results of the studies may not be generalizable to other healthcare systems. Although imaging data do not suffer from the data quality issues of other clinical data, there is no well-established gold standard for many imaging techniques. For instance, Garvin et al. presented an automated 3-dimensional intraretinal layer segmentation algorithm using OCT image data. The gold standard was determined by 2 retinal experts' recommendations. This requires more time and resources to analyze and cross-validate the outcomes. Also, different preprocessing and postprocessing algorithms, hardware configurations, and image processing steps are intended to improve image quality for easier automated diagnosis. However, these factors often make models difficult to replicate. In addition, using imaging analysis without other prior information, such as medical history, may affect model performance and lead to biased results. Therefore, integration of imaging data and routine EHR data allows us to obtain prior information to input into the predictive model. AI techniques are rapidly being adopted in ophthalmology and have the potential to improve the quality and delivery of ophthalmic care. Moreover, secondary use of EHR data is an emerging approach for clinical research involving AI, particularly given the availability of large-scale data sets and analytic methods. In this review, we describe applications of AI methods to ocular diseases and problems such as diagnostic accuracy, disease progression, and risk assessment, and we find that the number of published studies in this area has been relatively limited due to challenges with the current quality of EHR data. In the future, we expect that AI using EHR data will be applied more widely in ophthalmic care, particularly as techniques improve and EHR data quality issues are resolved.
Pandemics, climate change, and the eye
d6f8e7eb-f2ac-40ee-a4c8-d238941ef62f
7525080
Ophthalmology[mh]
Global climate change is primarily a consequence of humanity's impact on the planet. More than 150 years ago, Marsh recognized and praised the benefits of the interaction between man and nature. However, he was also the first to severely criticize their relationship, suggesting that further abuse by humans could result in the extinction of the species. This exploitation of Earth's resources led Nobel Prize laureate Paul Crutzen to coin a new term: "Anthropocene" or "The Age of Man" . Humanity's disruptive behavior could have started with the Industrial Revolution in the mid-eighteenth century and has continued for the past three centuries. In 1997, Vitousek et al. estimated that 39–50% of the world's land surface has been transformed or degraded by human activity . Furthermore, a persistent increase in fossil-fuel use has released abundant greenhouse gases (GHG), contributing to a global crisis of air pollution. The energy imbalance resulting from pollution induces an accumulation of heat, with subsequent warming of the planet. In fact, the Intergovernmental Panel on Climate Change (IPCC) calculated that the Earth will warm by 1.5 °C during this century , causing a massive climate-induced change in the migration patterns of wildlife and bringing animals into greater contact with humans. Currently, the COVID-19 pandemic has exposed our vulnerability. In a couple of weeks, it brought normal life to an almost complete halt. The primary aim of this review is to describe the impact of the environment on the spread of zoonoses and how climate can influence the development of eye diseases. Some of the possible outcomes of COVID-19 will also be delineated. The climate system is interactive and evolves in time under the influence of several factors:
Little Ice Age
The Little Ice Age is a period between 1300 and 1850 known for its colder temperatures, with an average drop of 0.5 °C. Although most of the cooling may have been caused by a decrease in sunspot activity or a surge in volcanic eruptions, evidence suggests that it could be to some extent man-made . A vicious cycle had started: several plagues had claimed the lives of millions, leaving extensive land to reforest and lowering the levels of carbon dioxide (CO 2 ), with a subsequent decrease in temperature. On a global scale, extreme climate with freezing conditions increased harvest failures, famine, and malnutrition, resulting in prominent outbreaks of new and old epidemics .
Global warming
Since 1751 the world has emitted over 1.5 trillion tons of CO 2 , contributing to an increase in global temperatures of over 1 °C, with unprecedented changes since the mid-twentieth century. Prosperity and advances in technology, the hallmarks of the second half of the century, are primary drivers of CO 2 emissions. Therefore, the sustained human release of greenhouse gases is the main reason attributed to the rise in temperatures . To address climate change, the United Nations (UN) established the Paris Agreement , setting a target limiting average warming to 2 °C and urging the world to urgently reduce emissions. Regarding health effects, several studies have estimated that global warming will induce deadly hurricanes through intensified rainfall and stronger winds. Furthermore, rising sea levels will acidify the ocean, and drought and fires will worsen pollution.
Such conditions have a lasting impact on sea temperatures and air quality, direly affecting ecosystems, leading to waterborne and airborne diseases , aggravating chronic conditions, and ultimately anticipating higher rates of premature death. An unstable climate may also threaten pathogen hosts, inducing relocation of microorganisms and hosts . These shifts have been suggested as the reason for emerging infectious diseases (Fig. ). In December of 2019, Dr. Li Wenliang, a Chinese ophthalmologist, warned the world about the potential danger of a cluster of pneumonia cases in Wuhan, China . The outbreak was later traced back to the seafood and wet animal wholesale market in Wuhan. Moreover, samples from the live animal section of the market were positive for the virus. A novel coronavirus was identified as the etiologic agent of this deadly disease. COVID-19 rapidly proved to have human-to-human transmission, spreading through the world at a fast pace and jeopardizing the modern world. As of September 4, 2020, COVID-19 had infected more than 6 million people in the USA, killing over 190,000 . However, these statistics might not be entirely accurate. Several factors prevented the containment of the epidemic and enabled exponential growth of cases, revealing our country's flaws in the management of an outbreak of this magnitude. History has shown that the worst epidemics ravaged nearly entire civilizations, but the impact of an epidemic goes beyond the death toll, involving global economic collapse and climate change.
COVID-19 possible outcomes
We can postulate that the aftermath of closing borders, bars, and schools, in addition to travel bans and shelter-in-place orders around the world, has brought some benefits to the planet. At least momentarily, Mother Earth seems to be healing.
Benefit: reduction of air pollutants
NASA (National Aeronautics and Space Administration) and ESA (European Space Agency) have announced that China and Italy are showing cleaner air amid the quarantine. The dramatic drop-off of nitrogen dioxide (NO 2 ) pollution has lifted the fog of densely populated cities around the world, in particular Los Angeles and Mumbai. In a similar manner, satellites from the Copernicus Atmosphere Monitoring Service measured a decline of 20–30% in surface PM 2.5 over large parts of China in February 2020 . At large, carbon emissions have sharply fallen across continents. Similarly, the United States Energy Information Administration has reported a predicted 11.5% reduction in emissions during 2020 as a ramification of the pandemic .
Disadvantage: increase of medical and hazardous waste
A few months ago, the head of the UN Environment Programme (UNEP) cautioned the world: "COVID-19 is not a silver lining for the climate" . Despite the favorable changes in carbon footprint, the mitigation might be temporary. Patients and healthcare workers are now producing significant medical and hazardous waste. Of more concern, Americans have been hoarding supplies such as masks, gloves, and cleaning materials since the beginning of the epidemic. The proper disposal of these provisions is questionable. In Wuhan, the first epicenter of the pandemic, hospitals' daily waste reached 240 metric tons, a sixfold increase compared with the amount before the epidemic started . Thus far, hospitals in the USA have not been burdened this way; however, it is uncertain how much additional waste will be produced as a result of the outbreak.
The intriguing chain reaction of global warming in ocular health is ominous.
The spectrum of eye diseases can be categorized by exposure to specific environmental factors, and their severity appears to be directly linked to the duration of that exposure.
Ozone depletion and ultraviolet radiation
Most atmospheric ozone is concentrated in the stratosphere. With a program launched in 1970, NASA has continuously monitored the status of the ozone layer, partaking in the discovery of its depletion in the early 1980s . The main function of the ozone layer is to protect life on Earth by absorbing harmful ultraviolet (UV) light. Among the radiation reaching Earth's surface, 10% belongs to the medium-wavelength band (280–315 nm), or UVB; the long-wavelength band (315–390 nm), or UVA, accounts for the remaining 90%. The eye is one of the two organs susceptible to solar irradiance; hence radiation from direct sunlight, sky scattering, and reflection from clouds, the ground, and other surfaces has deleterious consequences , and strong epidemiological evidence associates it with the development of photochemical damage to ocular tissues . The photochemical injury is predominantly due to photo-oxidative damage, in which the creation of reactive oxygen species plays a central role. The length of exposure, the wavelength of the UV rays, and tissue irradiance determine the severity of the lesion: acute phototoxic lesions are seen on the ocular surface as photokeratitis and conjunctivitis, and in the retina as solar retinopathy. Chronic exposure to solar energy may induce damage to the eyelids (keratoacanthoma, actinic keratosis, and neoplasias); conjunctiva (pterygium, pinguecula, metaplasia, or carcinoma of the conjunctiva); cornea (climatic droplet keratopathy (Labrador), keratoconus, endothelial cell damage, and dry eye); lens (cataract and early presbyopia); and trabecular meshwork (glaucoma). Regarding the retina, studies have failed to conclusively support a relationship between UV light and disorders such as choroidal melanoma and macular degeneration [ – ]. Exogenous agents may also contribute to photochemical injury by acting as photosensitizers . These components include tetracyclines, chloroquine, nonsteroidal anti-inflammatory drugs, and psoralen, among others, reaching the ocular tissues directly or indirectly via the circulation.
Thermal damage
High ambient temperature attributable to global warming may influence thermal damage in ocular structures. Bacterial keratitis caused by Staphylococcus aureus and Pseudomonas has been found to be more prevalent in warmer climates . Fungal keratitis is more common in certain geographic areas with hotter weather. A hot environment may thus potentially worsen the burden of trachoma and may trigger the formation of cataracts and the occurrence of central retinal artery occlusion (CRAO) [ , , ].
Air pollution
Air pollution is a mixture of harmful substances in the air we breathe. Besides pollutant gases, airborne suspensions of "particulate matter" (PM) are particularly detrimental to human health. Several studies have shown evidence that PM 2.5 alters microvascular endothelium-dependent dilation. Moreover, Adar et al. demonstrated significantly narrower retinal arteriolar diameters in people living in areas of elevated pollution , and Cheng et al. established a possible link between pollutants and CRAO . A large report from the UK also found a considerable association between higher PM 2.5 exposure and the risk of ganglion cell loss and glaucoma .
The USA currently ranks first in healthcare spending among the developed nations of the world. It is widely recognized that waste in the healthcare system contributes to the prominent cost of medical care. Several strategies have been evaluated to assist with cost reduction, and it has been estimated that the best approach is the minimization of waste, with future savings that could represent >20% of total healthcare expenses . Ophthalmology, as a surgical specialty, plays a part in generating one third of all hospital-regulated medical waste . Likewise, healthcare services are responsible for nearly 10% of the USA's carbon footprint. A study compared the greenhouse gas generation and expenditures of a single cataract surgery between the Aravind Eye Care System in India and the UK. The results were eye-opening: an eco-friendly resource with comparable-to-better patient outcomes and substantially less spending. The same study reported that up to 60% of drugs used during elective phacoemulsification were discarded, resulting in environmental impact . The Aravind Eye Care System and other low-income settings routinely reuse surgical materials (after proper sterilization) with a minimal rate of endophthalmitis .
COVID-19 possible outcomes
Benefit: reduction of air pollutants through medical planning
The American Academy of Ophthalmology (AAO) provided recommendations and guidelines for ophthalmologists around the world during the COVID-19 pandemic. The initial reduction of elective procedures and less-crowded offices were inherently safer for the patient, the ophthalmologist, the staff, and the environment.
Disadvantage: aftermath of recession
The US Treasury forecasted that unemployment in the USA could reach 20% due to COVID-19. At the end of March 2020, the Department of Labor published data showing that the unemployment rate had reached 4.4% among all major worker groups. Several comprehensive and multi-specialty ophthalmology practices closed their offices and laid off some of their staff (personal communications). Others remained open for urgent visits and procedures or cared for patients through telemedicine services. Meanwhile, most retina practices continued seeing patients at high risk of blindness, taking all the precautions needed to prevent the spread of the disease and limiting their regularly high volume of patients. On average, retina practice volumes declined between 40 and 70%. The job market seems uncertain in the near future too. An article published in 2009 concluded that the job market in ophthalmology is affected for 2–3 years following a recession .
The healthcare industry is the second-largest greenhouse gas polluter after the food industry. Although greenhouse emissions may drop after the COVID-19 pandemic, their effect on air temperatures would take 40 years to centuries to become perceptible, considering how long the gases persist in the air. If emissions return to pre-pandemic levels after the resolution of the crisis, the progress made will have been undermined.
Anthropologically, societies are transformed if a pandemic kills a considerable proportion of the population, unleashing the economic pressures of lower productivity and higher consumer prices. At the moment, the full repercussions of the COVID-19 pandemic are yet to be determined. In the meantime, digitalization has been accelerated to optimize services and to mitigate the intrinsic difficulties of social distancing. For doctors, including ophthalmologists, the aftermath might require drastic changes in how we practice medicine. Nonetheless, if fundamental lessons are learned, we will start taking responsibility for climate change, with promising positive outcomes for all of humanity. In the end, resilience in the midst of a catastrophic event is the only way forward.
Publication trend of COVID-19 and non-COVID-19 articles in the Indian Journal of Ophthalmology during the pandemic
1022876b-f6df-4236-b55f-fe33b7e455a2
8186616
Ophthalmology[mh]
A retrospective analysis of all COVID-19- and non-COVID-19-related articles published in the issues of IJO from January 2020 to March 2021 was done. The study complied with the tenets of the Declaration of Helsinki. The study did not involve human participants; hence, approval was not obtained from the Institutional Review Board (IRB) or the Institutional Ethics Committee (IEC). The data were obtained from the website of IJO and were reconfirmed with the official monthly mail sent by the MedKnow team, the personal mail sent by the respected Editor of IJO, and the PubMed search engine online. The articles were segregated based on the type of manuscript: Original Article, Review Article, Case Report/Short Case Series, Letter to the Editor/Letter in Response, Guest Editorial, Research Methodology, Point-Counterpoint, Consensus Criteria, Ophthalmic Images, Photo Essay, Surgical Techniques, Tales of Yore, and AIOS Meeting Papers. All manuscripts with a tag of clinical study, clinical trial, comparative study, brief communication, controlled clinical trial, journal article, or randomized controlled trial were categorized as original articles. The total data were tabulated in the form of tables. contains COVID-19-related articles published month-wise starting from the lockdown phase, i.e., from the April issue onward. contains non-COVID-related articles published from January 2020 onward. describes a comparison of articles published in the prelockdown, lockdown, and postlockdown phases 1, 2, and 3. As considered by the authors, the prelockdown phase covers January–March 2020; the lockdown phase, April–June 2020; postlockdown phase 1, July–September 2020; postlockdown phase 2, October–December 2020; and postlockdown phase 3, January–March 2021. also depicts the total number of articles published during the year 2020 (COVID-19- and non-COVID-19-related articles). In addition, depicts the total number of articles published in IJO year-wise (2010–2020) with the growth rate for every year. A total of 1343 articles were published in IJO during the COVID-19 pandemic. These included 182 (13.55%) COVID-19-related articles and 1161 (86.45%) non-COVID-19 articles . Among the COVID-19 articles , the largest proportion was formed by letters to the editor 66 (36.26%), followed by original articles 39 (21.42%), commentaries 24 (13.18%), editorials 18 (9.89%), and preferred practices 13 (7.14%). The maximum number of publications was in July 44 (24.17%), followed by March 2021 23 (12.63%), March and November 2020 with 22 (12.08%) each, May and October 2020 with 14 (7.69%) each, August 11 (6.04%), and April 1 (0.54%). A detailed analysis of this has been described in . Among the non-COVID-19 articles , there were a total of 276 (23.77%) original articles, 179 (15.41%) case reports, 157 (13.52%) photo essays, 141 (12.14%) commentaries, and 107 (9.21%) ophthalmic images. The maximum publications were in October 107 (9.21%), January 105 (9.04%), and September 101 (8.69%). The least were in March 42 (3.61%) and April 43 (3.70%) . In the comparison of the prelockdown, lockdown, and postlockdown phases 1, 2, and 3, the maximum articles were published in January 105 (9.04%), followed by June 98 (8.44%) and July 93 (8.01%).
The lockdown issues, with 223 (19.20%) articles, postlockdown phase 1, with 267 (22.99%), postlockdown phase 2, with 321 (27.64%) (811 articles in total across these three phases), and postlockdown phase 3, with 316 (27.21%), showed a rising trend in the number of articles published in IJO compared with the prelockdown issues, with 216 (18.60%) articles . The detailed analysis has been described in and . Among the total 1343 articles published in IJO during the COVID-19 pandemic, the maximum were original articles 315 (23.45%), followed by case reports 184 (13.70%), letters to the editor 181 (13.47%), commentaries 165 (12.28%), and photo essays 158 (11.76%), and the maximum articles were published in October 121 (9%), November 117 (8.71%), January 2021 112 (8.33%), January 105 (7.81%), and September 101 (7.53%). The detailed analysis has been described in . A steady increase in the growth rate of publications was noted in IJO from the year 2016 onward, with 66.47% in 2018, followed by 44.74% in 2020 and 33.45% in 2017. A sudden spike was noted in 2013 with 60.62%. The detailed analysis has been depicted in and . Clinical and medical research has always been the cornerstone of discovering better ways to prevent and treat diseases. However, time and again, medical research has remained demanding and ever challenging. Several hurdles, such as developing a hypothesis, locating funding, involving clinical trial units, developing agreements with sponsors, obtaining ethics committee approval, attaining patient consent for participation, and carrying out a sizeable amount of paperwork (obtaining data), need to be overcome. In addition to the above, publishing a scientific article requires data analysis, writing, submission, critical reviewing, multiple revisions, and finally publication. At each step, human interaction is required, and individuals need time and focus to achieve the goal. The current worldwide COVID-19 pandemic has certainly stretched the available human resources to meet those needs. The same trend was observed for publications in IJO during this COVID-19 pandemic. A total of 1343 articles were published in IJO during the COVID-19 pandemic, including 182 (13.55%) COVID-19-related articles and 1161 (86.45%) non-COVID-19 articles. A ratio of 1:6 of COVID-19 vs. non-COVID-19 articles was maintained, despite the sprint for publishing COVID-19-related articles in every field. The first three issues had only non-COVID articles, since at that time the pandemic was picking up and most journals were in a transition phase with regard to COVID-19-related publications. The first COVID-19 article was an editorial by the respected editor of IJO and was published in the April issue, catering to the changing needs of the research community. This probably ignited the spirit for quality submissions related to COVID-19, which was evident in the subsequent issues of IJO. Moreover, an overall increased rate of submissions was noticed after the announcement of the lockdown, as clinicians got more time for research and academics. From April onward there was a continuous upsurge in COVID-19-related publications, except in the special issues on Uvea (September) and Refractive Surgery (December).
At the same time, the editors of IJO faced many challenges: hard copy dissemination, timely online updating of publications, expanding the journal's capacity for COVID-19 articles, an expedited review process, fast processing and publication of COVID-19 articles, and simultaneously maintaining the quality and quantity of non-COVID-19 publications. The quality and quantity kept increasing with every issue during the pandemic. The parameters used to assess the quality of articles were the number of views, the number of reads in a short time, the number of PDFs downloaded from the IJO website, the number of manuscripts printed, the number of online accesses, citations of the article, and the rapid growth in the impact factor of IJO. The impact factor of IJO increased remarkably from 0.961 in 2017 to 0.977 in 2018 and 1.25 in 2019. This was only possible due to quality review by the reviewers, expedited work by the editorial team, merit-based publication within a short period, and a high number of citations.

Among the COVID-19 articles, the majority were letters to the editor, 66 (36.26%), followed by original articles, 39 (21.42%), commentaries, 24 (13.18%), editorials, 18 (9.89%), and preferred practices, 13 (7.14%). The large number of letters to the editor could be attributed to researchers' urge to share their clinics' limited COVID-19-related experiences with the ophthalmology fraternity. Moreover, this short submission format comes with the advantage of a smaller word limit, enabling fast write-up, an expedited review process, and faster spread of the message. The original articles mainly focused on knowledge, attitude, and practice (KAP) analyses of COVID-19, preferred practice patterns during lockdown compared with the previous year, or varied presentations of patients with COVID-19, such as conjunctivitis. The editorials focused on COVID-19-related literature by experts. The preferred practice patterns formed the core of the COVID-19 publications, as it was important to set national guidelines for clinical ophthalmology practice during the upsurge of the pandemic.

Apart from these, there were 6 (3.29%) review articles, covering teleophthalmology, COVID-19 prophylaxis, long-term corneal preservation techniques, therapeutic opportunities to manage COVID-19, lessons learned during the COVID-19 pandemic, and ophthalmic manifestations of COVID-19. Surprisingly, there were only five (2.74%) case reports: one (0.54%) on follicular conjunctivitis, two (1.09%) on central retinal vein occlusion (CRVO), one (0.54%) on COVID-19-associated papilledema, and one (0.54%) on Adie-Holmes syndrome associated with COVID-19, probably owing to the lack of documented evidence and of COVID-19 testing facilities at the majority of centers in India. In addition, there were three (1.64%) articles on current ophthalmology, which added flavor and value to the COVID-19 research: sanitizer aerosol-driven ocular surface disease (SADOSD), differential diagnosis of acute ocular pain (teleophthalmology), and the impact of COVID-19 on the visually disabled. There were also two (1.09%) articles on surgical techniques, which embarked on surgical innovations in the COVID-19 era: U-shaped tools for follow-up of corneal ulcers during the COVID-19 pandemic and four-in-one keratoplasty during the COVID-19 pandemic.
There were a few interesting innovations, such as the safe slit-lamp shield (SSS) and a virus and aerosol containment box for retinopathy of prematurity, which served as protective barriers against COVID-19 transmission and were readily adopted by many ophthalmologists across the country. The maximum COVID-19-related publications came in July 2020 with 44 (24.17%), followed by 23 (12.63%) in March 2021, 22 (12.08%) each in March and November 2020, 14 (7.69%) each in May and October 2020, 11 (6.04%) in August 2020, and one (0.54%) in April 2020. This can be attributed to maximum manuscript submissions during the lockdown and expedited processing of articles by the editorial team. The high number in March 2021 shows the growing interest of researchers in COVID-19 work. The editorial by the Editor in the April issue of IJO probably ignited the spark for COVID-19 publications. Despite barriers and hurdles, the journal stood high and matched the utmost standards and quality of COVID-19-related publications throughout 2020 and into 2021.

Among the non-COVID-19 articles, there were a total of 276 (23.77%) original articles. The quality, quantity, and trend for original articles were constant throughout the year except in the April and July issues, owing to the transition during lockdown and the focus on accommodating more COVID-19-related publications. Throughout the year, 179 (15.41%) case reports were published. These were very few during the initial 6 months, with gradually increasing numbers over the last 9 months (until March 2021). A landmark achievement was the birth of a separate sibling journal for case reports: Indian Journal of Ophthalmology Case Reports, a quarterly publication inaugurated on January 1, 2021. Its first issue had a total of 30 case reports, 30 photo essays, and 20 ophthalmic images, which we divided equally (10 each) across the first 3 months of 2021 for comparative analysis. February 2020 was a special issue on community ophthalmology with supplements but had no case reports.

The whole year saw 157 (13.52%) photo essays in all. The trend was similar throughout the year except during the initial half, owing to the lockdown and COVID-19-related publications. In addition, there were 141 (12.14%) commentaries, which reflected high peer-review standards and quality manuscripts by experts in their fields. There were 107 (9.21%) ophthalmic images, with a maximum of 20 (1.72%) in the latter half of the year (in October). The increased number of case reports and images in the later months could probably be explained by clearing the backlog of manuscripts accepted in those categories until June 2020. Moreover, a hybrid spread of publication types was seen: 115 (9.90%) letters to the editor, 51 (4.39%) editorials, 52 (4.47%) review articles, 25 (2.15%) surgical techniques, 21 (1.80%) perspectives, 14 (1.20%) one-minute ophthalmology articles, 7 (0.60%) preferred practices, 3 (0.25%) consensus criteria, and 2 (0.17%) each of current ophthalmology and innovations. Despite COVID-19-related challenges, IJO's standards were kept high by invited review articles, guest editorials, and one-minute ophthalmology articles. A new section, Tales of Yore, was introduced from January 2021. Another important observation is that there were three special issues: Community Ophthalmology (February), Uvea (September), and Refractive Surgery (December).
The special issues opened the opportunity for specialty publications and expert intellectual content by stalwarts in the field. The hallmark of IJO has always been its review articles by esteemed national and international experts. The section on innovations in IJO definitely cannot be missed; IJO opened a gateway for innovations, making it arguably the best-fit journal for them, with a high impact factor. Remarkably, all seven (0.60%) articles focused on preferred practices appeared in the December refractive surgery issue, making it extra special for the readers. These innovative and collaborative efforts by the editorial board were probably the main reason for the rise in impact factor during the year, as evidenced by the increased growth rate of published articles since 2016 and the steadily increasing number of citations.

Overall, the most articles were published in October with 107 (9.21%), January with 105 (9.04%), and September with 101 (8.69%). The fewest were in March with 42 (3.61%) and April with 43 (3.70%), probably due to COVID-19-related challenges such as lockdown, hard copy postal issues, the expedited publication process, social and mental issues related to the new virus, and the stress of securing families. Comparing the different types of publications across the prelockdown, lockdown, and postlockdown phases 1, 2, and 3, June with 22 (12.08%) and July with 44 (24.17%) had the most COVID-19-related articles, followed by a sudden dip, probably due to the focus on the special uvea issue in September. November 2020 and March 2021 again had spikes, with 22 (12.08%) and 23 (12.63%), respectively. A U-shaped curve was seen from January to July, with the most non-COVID-19 articles in January, 105 (10.22%), and the fewest in April, 42 (4.08%), rising again in October, 121 (11.78%), and November, 117 (11.39%). This can be attributed to COVID-19-related challenges and the transition phase during the lockdown. Another important observation is that there was again an upsurge in COVID-19-related articles during 2021, probably because researchers had by then collected a good amount of data with evidence, amid reduced restrictions and COVID-19-related challenges.

To conclude, from a research point of view, the COVID-19 pandemic will be remembered as an era that created gateways and opened doors to innumerable research opportunities for researchers in all fields, including ophthalmology. There was an unprecedented increase in the number of publications on COVID-19 since the disease started. The most benefited were early-career researchers.
The various reasons for the increased submissions and the high number of quality publications on COVID-19 in IJO were: (a) with lockdowns in India and the rest of the world and clinical activities suspended, researchers got ample time to write manuscripts and convert them into expedited publications; (b) the prestigious IJO was inviting COVID-19-related articles, which made it much easier for authors to publish, as the editorial process was also fast-tracked; (c) many experienced researchers wanted to share their experience of patient profiles and rare findings in patients with COVID-19 during the pandemic; and (d) stalwarts in their respective fields shared their experience in preparing the AIOS preferred practice guidelines for examining and operating on patients safely during the COVID-19 pandemic.

The COVID-19 pandemic, an infectious disease caused by SARS-CoV-2, motivated the scientific community to work together to gather, organize, process, and share experiences on this novel biomedical hazard. IJO did full justice to COVID-19 research in Indian ophthalmology and the world by publishing quality content. The substantial number of publications on COVID-19 in IJO indicates the seriousness and widespread consequences of this disease and the inquisitiveness to find solutions to combat it through sharing personal experiences and research on this topic. Despite the adverse circumstances of the pandemic, a balance was maintained between the publication trends of COVID-19- and non-COVID-19-related articles. Through its varied spectrum of publications during this pandemic, IJO has continuously reached new heights and set a benchmark for sister journals in ophthalmology.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
Oral Health Care Among Women in Perimenopause or Menopause: An Integrative Review
e92d21f8-7a2d-4224-8855-a1f3dd8376f4
11803492
Dentistry[mh]
Menopause is a significant event in a woman's life, marking the end of her reproductive years. By 2030, it is estimated that the global population of women in menopause will reach 1.2 billion. During this transitional period, women often experience various adverse symptoms, including hot flushes, night sweats, urinary disturbances, mood changes, depression, and declines in cognition, as well as bone and joint pain. Additionally, menopause has been associated with unpleasant oral health changes such as burning sensations of the mouth, dryness of the mouth, alterations in taste, inflammation of the supporting tissues of the teeth, osteoporosis of the jaws, and an increase in tooth decay. These oral health conditions can significantly affect quality of life.

However, there is currently limited evidence on the knowledge, attitudes, and practices of women in perimenopause or menopause regarding oral health. The few studies focusing on this area have mainly explored the systemic aspects of perimenopause or menopause and oral health. The only comprehensive review conducted to date has focused on oral health changes and their management in menopause. Providing appropriate assistance during the early period of the menopausal transition may help to reduce the prevalence of oral health problems, improve quality of life, and support a healthy menopausal life. Studies have identified that primary and women's health care providers, such as midwives, could play a key role in promoting oral health among women in perimenopause and menopause. However, this aspect of women's health care has also not been extensively reviewed in the literature.

QUICK POINTS

✦ There is very limited research addressing the oral health needs of women in perimenopause or menopause and the practice of their health care providers.
✦ Women in perimenopause or menopause have limited awareness and practices regarding oral health care.
✦ Health care providers have not provided sufficient guidance to women in managing oral health during this period.
✦ There are insufficient practice guidelines to support health care providers in promoting oral health among women in perimenopause or menopause.
✦ High‐quality studies and supportive strategies are needed to improve the oral health of women and guide health care providers in their practice.

Background

Menopause is a physiologic process characterized by the cessation of the menstrual cycle and is caused by the progressive depletion of ovarian follicles. It usually occurs in women between 45 and 55 years of age. Menopause can also be surgically induced by ovarian surgery (oophorectomy) or treatment induced with radiation therapy, chemotherapy, and other medications. Menopause involves 3 stages: perimenopause, menopause, and postmenopause. Perimenopause describes the menopausal transition phase characterized by continuous changes in the menstrual cycle over the past 11 months, whereas menopause refers to those women who have had a final menstrual period. Postmenopause is the period when a woman has not had a menstrual cycle for 12 months or more.

The symptoms of menopause can negatively impact various aspects of women's lives, including systemic, physical, and psychological domains. Interviews of Emirati women found that vasomotor symptoms and weight gain during menopause led to anxiety, depression, insomnia, and memory loss. For most, symptoms begin mildly and increase in intensity in the later years of the menopausal transition, impacting women's quality of life.
As their life expectancy has increased, women may experience poorer physical and mental health for an extended period of time. The estrogen variations leading to menopause can also lead to oral health problems. A cross‐sectional study found that women who have periodontal problems during postmenopause have an impaired quality of life compared with women in postmenopause with a healthy periodontium. The mouth is an organ that supports the processes of eating, chewing, and swallowing but also contributes to an individual's appearance and socialization. Therefore, any imbalance in oral health functioning can affect the daily activities of individuals, leading to poor quality of life.

Studies have highlighted numerous barriers women face in managing oral health problems in various phases of their life. These primarily include affordability and accessibility of dental services, previous negative experiences, time constraints, lack of priority, lack of knowledge, lack of policies, and cultural barriers. There is limited evidence, however, describing women's experiences in accessing dental care during menopause. In a 2020 cross‐sectional survey of 1115 women in menopause and postmenopause, Singh and Jamwal identified safety concerns, lack of awareness, time constraints, and lack of priority as barriers to managing oral health.

Studies have shown that health care providers such as primary care providers, gynecologists, nurses, and midwives play an essential role in addressing the unique health care needs of women. This includes promoting oral health among women across their life span. Evidence has shown that health care providers can be effective in improving the oral health outcomes of pregnant women. Additionally, these models of care are suitable and cost‐effective to implement into practice. Numerous challenges have been cited, however, such as time constraints, high workload, and limited interest and knowledge in oral health practices. Most literature has focused on pregnant women, with very few studies investigating the importance of managing the oral health of women in menopause. It is evident that women in menopause are at high risk of developing oral health problems, yet no comprehensive review has been undertaken to further explore this area. Currently, it is unclear whether additional barriers exist for women in menopause to maintain oral health and for health care providers to play a role in promoting oral health in this population. Gathering this information will aid in identifying gaps in service delivery and provide a road map to develop tailored preventive strategies to promote good oral health for women approaching menopause, thereby contributing to their overall health and well‐being.

Aim

The aim of this integrative review was to synthesize current evidence regarding the oral health knowledge, attitudes, and practices of women in perimenopause or menopause and their health care providers. It also aimed to explore the current guidelines and recommendations for oral health promotion among women during this period. The following research questions further guided the review: (1) What are the knowledge, attitudes, and practices of women in perimenopause or menopause toward oral health care? (2) What are the knowledge, practices, and perceptions of health care providers toward oral health care for women in perimenopause or menopause? (3) What are the current guidelines and recommendations to promote oral health for women in perimenopause or menopause?
Definition of Terms

There are inconsistencies in the literature regarding the various terminologies used to describe the menopausal period when compared with the Stages of Reproductive Aging Workshop staging system, which is widely considered the gold standard for defining the various stages of menopause. In this study, the term women in perimenopause or menopause has been used to include any women who exhibit hormonal fluctuation, anovulatory cycles, and the onset of cycle irregularity and symptoms, or those who have experienced a complete cessation of their menstrual cycles. Likewise, the term menopause includes the postmenopausal period. Woman or women is defined based on biological sex rather than gender identity. This includes individuals who were born with ovaries, fallopian tubes, and a uterus as anatomical reproductive structures. The term health care providers refers to the various health care staff, other than oral health professionals, that women come into contact with during perimenopause or menopause, including (but not limited to) obstetric and gynecology or primary care physicians, midwives, nurse practitioners, and nurses.

Knowledge, Attitudes, and Practices

Knowledge includes awareness of the association between perimenopause or menopause and oral health, complications, and the impact of prescribed medication on oral health, health risks of poor oral health, and knowledge on seeking out oral health resources and services for the management of oral health problems during this period. Attitudes refer to a person's perception toward oral health, perceived barriers to accessing oral health services, and perceptions toward health care providers engaging in oral health promotion activities. It also refers to the attitudes of health care providers toward promoting oral health among women in perimenopause or menopause, including the acceptability and feasibility of this role. Practices include the actions that a person engages in to maintain oral health, such as tooth brushing frequency, type of aid used, and dental visits. It also refers to oral health promotion activities engaged in by health care providers.
Due to the limited research in this area, it was important to review both quantitative and qualitative studies to explore current evidence addressing the research questions. An integrative review methodology was therefore chosen, because it enables the integration of diverse study designs, assesses the quality of the evidence, and identifies knowledge gaps. This review followed the integrative review methodology suggested by Whittemore and Knafl, a staged process that includes problem identification, literature search, data evaluation and analysis, and finally reporting of findings. The Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) framework was used to report the findings in this integrative review (see Figure ). The review protocol was registered in PROSPERO (CRD42023416503). Institutional review board approval was not necessary for this review as it did not involve direct data collection from human participants.

Eligibility Criteria

Inclusion criteria were articles published in English that assessed at least one study outcome (knowledge, attitudes, or practices) of women in perimenopause or menopause, their health care providers' knowledge, attitudes, or practices toward oral health, or guidelines or recommendations regarding oral health for women in perimenopause or menopause. All qualitative, quantitative, and mixed‐methods studies were included, along with any experimental studies that had a presurvey component. No restrictions were placed on the time of publication, quality, or location of the study.

Data Sources, Search Strategy, and Study Selection

A preliminary search was undertaken using one database (MEDLINE) to identify keywords and develop search strategies in consultation with a university librarian from the relevant fields of interest. Following this, a search was undertaken across 5 databases (MEDLINE, CINAHL, Cochrane, ProQuest, and Scopus) using various search strategies. Some of the keywords used included menopause, perimenopause, post menopause, menopausal complaints, oral health, dental care, oral hygiene, health care*, health care providers, doctors, nurs*, interprofessional, knowledge, perceptions, attitudes, awareness, practices, barriers, facilitators, guidelines, recommendations, management, and suggestions. Subject headings, Boolean modifiers, and Boolean operators (AND, OR) were used to assist with the search strategy and to combine the search terms; an illustrative query is sketched below. The reference lists of the key articles were also hand searched. Articles that matched the inclusion criteria were organized using the EndNote referencing software and then imported into Covidence for screening. Duplicates were removed, abstract‐title screening was performed by 3 investigators (N.T., K.O.R., A.G.), and review of the selected full texts was performed by 2 investigators independently (N.T., A.G.). Any discrepancies related to screening were resolved by discussion with a third investigator (K.O.R.). A total of 12 studies met the inclusion criteria and were included in this review.
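To show how such keywords and Boolean operators combine into a database query, the sketch below assembles one possible search string. This is our own illustrative reconstruction, not the authors' registered search strategy, and the exact syntax varies by database:

```python
# Illustrative reconstruction of a Boolean search string from the listed
# keywords. Not the authors' registered strategy; syntax varies by database.

population = ["menopause", "perimenopause", "post menopause", "menopausal complaints"]
exposure = ["oral health", "dental care", "oral hygiene"]
providers = ["health care*", "health care providers", "doctors", "nurs*", "interprofessional"]
outcomes = ["knowledge", "perceptions", "attitudes", "awareness", "practices",
            "barriers", "facilitators", "guidelines", "recommendations"]

def or_block(terms):
    """Join synonyms with OR inside parentheses, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

# Concepts are intersected with AND; provider and outcome terms are
# combined into one OR block so either aspect can satisfy the query.
query = " AND ".join([or_block(population), or_block(exposure),
                      or_block(providers + outcomes)])
print(query)
```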
Data Extraction and Data Synthesis

Relevant information extracted from the selected studies included the author, year of publication, the country where the study was conducted, study setting, sample size, and age group (Table ). A thematic synthesis approach was used for synthesizing the study findings. This 3‐staged approach initially involved reading the findings and coding them line by line according to meaning and content. This was undertaken by one reviewer (N.T.) using both inductive and deductive approaches to develop the initial codes. These codes were then grouped, based on similarities and dissimilarities, into themes that were reviewed by a second reviewer (A.G.) and revised accordingly. The final step involved a consensus meeting with the team to explore interpretations and finalize the themes. The themes were generated in alignment with the research questions. Quantitative data presented in narrative format, along with any direct quotes, were used to support the themes (Table ).

Quality Assessment

The Joanna Briggs Institute (JBI) critical appraisal checklist and the AGREE II checklist were used to assess the methodological quality of the articles and guidelines, respectively. The JBI checklist varied according to the type of studies assessed. The quality of the studies was assessed using a scoring system (one point for each applicable item) carried out by 2 authors (N.T. and K.P.). Two authors (M.S., A.G.) were consulted to resolve any discrepancies in the quality assessment scoring. Once consensus was achieved, the overall quality was rated as high (80%‐100%), moderate (50%‐79%), or low (<50%) using cutoff values (Table ).
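The score-to-rating mapping is simple arithmetic; a minimal sketch follows (the helper function is our own, assuming each applicable checklist item scores one point when met):

```python
# Minimal sketch of the quality rating used here: percentage of applicable
# JBI checklist items met, mapped onto the stated cutoffs. Helper is ours.

def jbi_rating(items_met: int, items_applicable: int) -> str:
    """Map a JBI checklist score to a high/moderate/low quality rating."""
    score = 100.0 * items_met / items_applicable
    if score >= 80.0:
        return "high"
    if score >= 50.0:
        return "moderate"
    return "low"

# e.g., a cross-sectional study meeting 5 of 8 applicable items:
print(jbi_rating(5, 8))  # 62.5% -> "moderate"
```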
Twelve studies were identified that addressed the study research questions. The studies were published between 2011 and 2023, with the majority conducted in India (n = 4), Iraq (n = 2), and the United States (n = 2). For the first research question, 8 articles were identified involving women in perimenopause, menopause, and postmenopause (n = 1610). Three articles were identified for the second research question involving health care providers (n = 113); one study did not report the number of health care professionals and instead covered 380 medical institutions. All studies involved cross‐sectional surveys. Most studies focused on oral health practices (n = 9) and knowledge (n = 8), and fewer focused on attitudes (n = 5); there was some overlap, with 7 studies covering more than one area. Only one guideline was identified for the third research question, which was from the United States. The majority of the studies were considered low quality (n = 8), and only one was of high quality. The main themes identified were knowledge regarding oral health, attitudes related to oral health, oral health practices, and current guidelines and recommendations (see Supporting Information: Table ).

Knowledge Regarding Oral Health

Five studies discussed the oral health knowledge of women in menopause across various areas, including risk factors, preventive oral hygiene measures, and the importance of oral health.

Risk Factors and Preventive Oral Hygiene Measures Influencing Oral Health

Overall, there was a low level of awareness among women regarding maintaining oral health during menopause. Only some women were aware of menopause being a risk factor for oral health problems (ranging from 1% to 53%).
For instance, a study conducted on 1115 women in menopause in India revealed that merely 22% were conscious of the fact that menopause may be responsible for causing gum issues. Similarly, only a few participants were aware that appropriate oral hygiene measures such as periodic teeth cleaning (19%) and regular brushing habits (32%) could prevent the loosening of teeth and periodontal problems.

Attitudes Related to Oral Health

Among the 8 studies identified, 3 reported women's attitudes toward oral health. These studies explored women's perceptions of the importance of oral health and of consulting dentists.

Importance of Oral Health and Consulting Dentists

There were mixed opinions among women regarding the importance of oral health. In one study by Hameed and Radhi, all women (n = 90) felt that it was their responsibility to maintain a healthy mouth. Conversely, another cross‐sectional study reported that the majority (90%) of participants felt oral health could be ignored. Despite these varied attitudes, the majority of participants (69%‐100%) across 3 studies perceived regular dental visits as important, and more than half were willing to consult the dentist again. In one study involving 1115 women in menopause and postmenopause, approximately 80% demonstrated their willingness to attend subsequent dental visits.

Practices Relating to Oral Health

Most of the studies (n = 6) explored the practices of women in menopause in managing oral health. These practices included oral hygiene habits, the use of oral hygiene aids, dental visits, and barriers to accessing dental services.

Oral Hygiene Habits and Aids

There were limited practices in maintaining oral health among participants. Among the 5 studies that reported on oral hygiene habits and aids, most participants brushed only once a day (30%‐84%). Within the cohort of women (n = 136) exhibiting symptoms of menopause in one study, only 16% were found to brush twice a day. The most common oral hygiene aids were toothbrushes and toothpaste (76%‐98%). One study observed that only 14% of individuals adhered to the practice of changing their toothbrush every 3 months. Another study reported that only 3% used more than one oral hygiene aid for maintaining oral health.

Dental Visits

There were wide variations (range 14%‐86%) among the 6 studies that reported the frequency of dental visits. Only 14% of postmenopausal women in Iraq (n = 61) maintained regular visits to a dentist, compared with 86% among a similar demographic in the United States (n = 20). The main reasons for visits were pain (53%) and tooth concerns (71%).

Barriers to Accessing Dental Services

Only one cross‐sectional study, from India, highlighted barriers to accessing dental services for women in menopause. These included time constraints (46%), limited priority for oral health (33%), misleading advice from others about periodontal treatment (15%), and safety concerns in seeking dental care (7%).

Oral Health Knowledge of Health Care Providers

Three studies reported on the oral health knowledge of health care providers. These studies explored their understanding of oral health in menopause, with a primary focus on risk factors.

Oral Health Risk Factors

There were large variations (35%‐98%) in the knowledge of various health care providers regarding menopause influencing oral health.
The findings from a cross‐sectional study conducted across 380 medical institutions in Japan reported that nurses and primary care providers were fully aware (90%‐100%) of menopause being a risk factor for oral health. However, in another study involving gynecologists (n = 40), only 35% were knowledgeable about the link between menopause and oral health.

Symptoms of Poor Oral Health

Only one study, conducted by Rashidi Maybodi et al (2018), assessed knowledge about oral symptoms associated with menopause. A majority of the health care providers were knowledgeable in this area (90%), particularly around xerostomia (dry mouth) and thinning of the mucosa as symptoms (nearly 70%). Only a few, however, were knowledgeable about reduced salivary flow (3%) and taste alterations (18%) associated with menopause.

Oral Health Attitudes of Health Care Providers

Treatment of Oral Symptoms and Periodic Dental Check‐ups

Health care providers were not seen as proactive in promoting oral health in one cross‐sectional study of gynecologists (n = 73) in India. More than half of the respondents (53%) did not have a positive attitude toward treating gingival symptoms associated with menopause and believed this condition would subside automatically. Most (83%) also did not feel they should recommend that patients seek periodic dental check‐ups.

The Need for Interprofessional Collaboration

Health care providers, including physicians (63%) and nurses (57%), across various medical institutions (n = 380) in Japan did agree that cooperation with dentists was necessary for managing the oral health of women in menopause. However, 76% of physicians reported that dentists never referred patients to them for medical treatment of systemic diseases, menopausal symptoms, or tooth extraction consultations.

Oral Health Practices of Health Care Providers

Prevalence of Poor Oral Health and Reporting Frequency

The majority of health care providers (79%) from outpatient clinics for women in Japan highlighted that oral symptoms were reported by patients. The most commonly reported symptom was dry mouth (80%), followed by taste alterations (60%), burning sensation in the mouth (40%), and temporomandibular joint pain (20%).

Treatment Strategies

Various treatment strategies were employed by health care providers to promote oral health. The most common included prescribing mouthwashes, gels, antibiotics, and analgesics for the treatment of symptoms (61%); referring women to a specialist such as a dentist, otolaryngologist, or internal medicine specialist for treatment of oral symptoms (59%); documenting the referrals in health care records (48%); and informing patients about oral health changes related to menopause (44%). Less common strategies included prescribing medications (15%), conducting tests for Sjögren syndrome (7%), discontinuing medication (0.7%), and providing lifestyle guidance (0.7%). Only 3% of health care providers in this study provided any guidance on the treatment of oral symptoms.

Guidelines and Recommendations

Only one guideline was identified that provided recommendations for oral health care management during menopause. Although published in the United States, this guideline was developed to improve the quality of care for women worldwide.
These recommendations specifically target clinical care for women in midlife and address the bodily changes that occur during this period, the potential impact of hormonal fluctuations on the gums, inflammation in tooth‐supporting tissues, and increased susceptibility to oral lesions. Also emphasized were recommendations for providers to advise on the need for periodic dental examinations, the use of oral hygiene aids containing fluoride, maintaining good oral hygiene, and informing dental care providers about the findings of various screening tests and medication use.
Prevalence of Poor Oral Health and Reporting Frequency
The majority of health care providers (79%) from outpatient clinics for women in Japan highlighted that oral symptoms were reported by patients. The most commonly reported symptom was dry mouth (80%), followed by taste alterations (60%), burning sensation in the mouth (40%), and temporomandibular joint pain (20%).
Treatment Strategies
Various treatment strategies were employed by health care providers to promote oral health. The most common included prescribing mouthwashes, gels, antibiotics, and analgesics for the treatment of symptoms (61%); referring women to a specialist such as a dentist, otolaryngologist, or internal medicine specialist for treatment of oral symptoms (59%); documenting the referrals in health care records (48%); and informing patients about oral health changes related to menopause (44%). The less common strategies employed included prescribing medications (15%), conducting tests for Sjögren syndrome (7%), discontinuing medication (0.7%), and providing lifestyle guidance (0.7%). Only 3% of health care providers in this study provided any guidance in the treatment of oral symptoms.
Only one guideline was identified that provided recommendations for oral health care management during menopause. Although published in the United States, this guideline was developed to improve the quality of care for women worldwide. These recommendations specifically target clinical care for women in midlife and address the bodily changes that occur during this period, the potential impact of hormonal fluctuations on the gums, inflammation in tooth‐supporting tissues, and increased susceptibility to oral lesions. Also emphasized were recommendations for providers to advise the need for periodic dental examinations, the use of oral hygiene aids containing fluoride, maintaining good oral hygiene, and informing dental care providers about findings of various screening tests and medication use.
The main focus of this integrative review was to assess current evidence around the oral health knowledge, attitudes, and practices among women in perimenopause and menopause, their health care providers, and the current guidelines and recommendations in this area of practice. The preliminary findings of this study indicate there is limited research undertaken in this area, particularly from the perspective of health care providers.
Another interesting finding is that although high‐income countries have better health care services and targeted strategies for health care providers to support women in menopause, there was minimal research specifically addressing oral health needs. Women experience various hormonal changes across their life course such as puberty, menses, reproduction, and menopause. Although hormonal activity increases and decreases during these periods, similar oral health issues prevail. Studies pertaining to oral health have mainly focused on pregnant women and are thus the most comparable to the findings in this review. Overall findings from the first focus area (women's knowledge, attitudes, and practices) indicated a low level of awareness during perimenopause or menopause around oral health care. These findings are similar to the results of a systematic review conducted on pregnant women and a cross‐sectional study conducted on menstruating women in India, which also indicated a low level of oral health knowledge among women. Additionally, there were mixed opinions regarding the importance of oral health and dental visits among women in this review, which was reflected by suboptimal oral hygiene practices and lower uptake of dental services. These findings are consistent with studies from both developing and developed countries indicating women placed limited importance on oral health during pregnancy. The lack of priority for oral health among women in perimenopause or menopause was highlighted in only one study. During this period, women undergo hormonal variations that bring about physiologic changes in their body, putting them at risk for various systemic diseases. Systemic health issues may have a greater impact on quality of life, thus taking greater priority over oral health. Time was another barrier cited. A study on the general population reported that getting time off from work or insufficient leave entitlements (especially for individuals with low income) prevented access to dental services. Similar findings were also reported in a qualitative study of pregnant women, which highlighted that caring for other children and active engagement in work prevented them from accessing dental services. Apart from the usual challenges faced by the general population and pregnant women, it is unclear why time constraints would be an issue for older women, an area that needs further exploration. Two other contributing factors cited were safety concerns and misleading advice from others to not seek dental treatment. Although it has been well‐established that dental treatment during menopause and postmenopause is completely safe, similar concerns have been highlighted among pregnant women questioning the safety and effects of dental treatment on the fetus. Equally concerning were reports of receiving misleading advice from others regarding the safety of dental care or procedures. Conflicting advice has been documented among prenatal care providers during interactions with pregnant women. Research exploring the perceptions and practices of prenatal care providers regarding oral health care found that some primary care providers believed dental procedures were unsafe during pregnancy, particularly the use of anesthesia and radiographs. Further research is needed to better understand if women in perimenopause and menopause are being provided the correct oral health advice from their health care providers.
It is evident from this integrative review that only some health care providers are informing women about oral health changes associated with menopause, and even fewer are providing guidance on treating oral symptoms. Likewise, there has been a limited focus on preventive strategies such as screening and oral health promotion. Studies have also suggested that women require comprehensive care during this period, but often, health care providers are unable to spend more time due to barriers such as underfunding and staff shortages. Furthermore, reviews around women's health issues worldwide found that primary care providers and midwives were poorly informed about the impact of poor maternal oral health and rarely initiated this topic during prenatal care. Only 22% of prenatal care providers in a study conducted in Turkey discussed the importance of oral health with pregnant women. There are numerous factors that could be contributing to the limited focus on oral health during perimenopause and menopause by health professionals. For example, only one clinical practice guideline, developed by the North American Menopause Society (NAMS), was identified. In addition to providing valuable information on the impact of hormonal changes on oral health, the NAMS guideline recommends health professionals provide oral health education, screening, and referrals through collaborations with dental practitioners. In a survey of 393 antenatal care providers in Australia involving primary care providers, midwives, and obstetricians, 81% cited lack of practice guidelines as the main barrier to promoting oral health during pregnancy. More work is clearly needed to develop global evidence‐based oral health guidelines for health professionals supporting women in perimenopause or menopause. It is important to note that there may be other contributing factors limiting health professionals from promoting oral health among women in perimenopause and menopause. Previous studies involving primary care providers, nurses, midwives, and allied health professionals have highlighted a lack of oral health knowledge and confidence, time constraints, and limited oral health screening tools. Additionally, issues around the cost and accessibility of dental care are often cited as barriers to oral health care by various population groups at risk for poor oral health, such as those with diabetes, mental illness, and cardiovascular disease. Larger studies and more in‐depth, high‐quality research are needed to explore these aspects further.
Implications for Practice
A main finding from this review is the need for governments and professional organizations to develop appropriate clinical practice guidelines for health care providers that promote oral health among women in perimenopause or menopause. Additional oral health training could also be provided to health care providers via professional development training programs and undergraduate modules to improve awareness of the importance of oral health in this population group. Lastly, interprofessional collaboration between health care providers and dental practitioners should be encouraged to improve the oral health literacy of women in perimenopause and menopause with appropriate dental referral pathways. A number of health professionals who participated in studies included in this review felt that interprofessional collaboration between health care and dental providers was important for promoting oral health care of women.
Numerous studies have shown that adopting such an approach can deliver improved patient outcomes and sustainable integrated models of care. One such example is the Midwifery Initiated Oral Health (MIOH) program developed in Australia to provide knowledge and skills to midwives to promote oral health in pregnant women. Through this program, midwives are trained to work collaboratively with dental practitioners to provide oral health education, screening, and referrals with the help of a continuing professional development training program (endorsed by the Australian College of Midwives) and a simple validated screening tool. The MIOH program has been shown to be effective in improving the oral health knowledge and confidence of midwives as well as the oral health status and quality of life of pregnant women, and is cost‐effective for health services. The program is recognized by the World Health Organization and has been implemented across Australia and internationally.
Strengths and Limitations
To our knowledge, this review is the first to evaluate current evidence on oral health knowledge, attitudes, practices, and guidelines. These results have provided valuable insight into this underresearched area as well as a roadmap for future research. It is also important to acknowledge the study limitations. Because only a limited number of studies were identified for this review, and the majority were of poor quality, the study findings and conclusion must be interpreted with caution. Furthermore, most of the studies were from low‐income countries, and thus the findings may not be applicable in all geographical areas. Moreover, standard practice patterns and guidelines in this area may differ in higher resource countries.
This integrative review has provided valuable insight into the oral health care of women in perimenopause or menopause. Overall, the findings suggest that there is a lack of oral health awareness and poor oral hygiene practices among women during this life transition. Furthermore, oral health needs are not adequately addressed by health care providers due to various barriers. Practice guidelines are needed to address oral health needs and improve oral health services. These results point to an urgent need for further high‐quality research to confirm findings from both the women's and health care providers' perspectives and inform supportive strategies across policy and practice in this area.
The authors have no conflicts of interest to disclose.
Table S1. Themes and Subthemes
Pericytes as mediators of infiltration of macrophages in multiple sclerosis
5a82180d-82de-4828-bb90-ad31906765fb
8705458
Anatomy[mh]
MS is a chronic inflammatory-neurodegenerative condition that affects the central nervous system (CNS). Blood–brain barrier (BBB) dysregulation and leukocyte migration into the CNS, followed by demyelination and neuroaxonal loss, are recognized as some of the hallmarks of MS pathology. However, the regulation of the transmigration of leukocytes, the majority of which are monocyte-derived macrophages, into the CNS, and the specific mechanisms by which they traverse the BBB, remain to be fully defined. Pericytes are contractile cells that contact the endothelial layer of capillaries and post-capillary venules throughout the body, and they are encased within the basement membranes that line vessels. Pericytes, along with the endothelium, astrocytes, and neurons, constitute the neurovascular unit (NVU) and help maintain BBB integrity. In this regard, it is proposed that pericyte dysfunction along the vasculature in the CNS is correlated with BBB permeability. Notably, the brain has the highest ratio of pericytes to endothelial cells, which highlights the significance of pericytes in contributing to CNS homeostasis. Indeed, in disease states including Alzheimer's disease, stroke, spinal cord injury, and diabetic retinopathy, pericyte dysfunction is associated with an increase in vascular permeability, scar formation after acute injury, tight junction degradation, and BBB disruption. A recent study shows that pericyte-deficient (Pdgfb ret/ret) adult mice have increased transmigration of leukocytes into the brain, resulting in enhanced disease severity of experimental autoimmune encephalomyelitis (EAE), an inflammatory model of MS. While that study infers the functions of pericytes in a genetically deficient model, it does not address the role of intact pericytes in the event of EAE/MS pathology, or how wild-type pericytes in EAE may mediate neuroinflammation. Another study reports an increase in the number of pericytes in active MS lesions over chronic lesions, but the clinical relevance of this is not understood. Other studies suggest that extracellular matrix (ECM) components including fibronectin and collagen-I may influence pericyte morphology, migration, and proliferation, while heparan sulfate proteoglycans have inhibitory effects. However, the interactions of another major ECM component, chondroitin sulphate proteoglycans (CSPGs), with pericytes, and their influence on BBB integrity, inflammatory responses, and leukocyte migration, have not been explored. This is an important question since CSPGs are highly expressed in MS lesions and they inhibit oligodendrocyte precursor cell (OPC) differentiation. CSPGs also stimulate the production of pro-inflammatory chemokines/cytokines in macrophages, thereby facilitating their migration. Thus, in this study, we sought to examine the activity of pericytes in EAE and their response to CSPGs. We focused on inflammatory perivascular cuffs of post-capillary venules, a CSPG-enriched space where leukocytes, particularly monocytes, gather prior to entering the CNS parenchyma. Our results highlight the contribution of CSPG–pericyte interactions in facilitating macrophage infiltration into the parenchyma in EAE/MS.
EAE induction
Animal experiments were conducted in accordance with Canadian Council on Animal Care guidelines and with ethics approval from the University Animal Care Committee.
Briefly, 8- to 10-week-old female C57BL/6 mice (Charles River) were immunized subcutaneously with 50 μg/100 μL of myelin oligodendrocyte glycoprotein (MOG) 35–55 peptide (Protein and Nucleic Acid Facility, Stanford, CA) in complete Freund's adjuvant supplemented with 4 mg/mL heat-inactivated Mycobacterium tuberculosis H37Ra (Fisher Scientific, Toronto, Canada). On days 0 and 2 post-MOG immunization, pertussis toxin (300 ng) was injected intraperitoneally. Animals were monitored daily for clinical signs of EAE on the 15-point scale, as described by Weaver et al. Cerebellar and spinal cord tissues were collected on days 10–13 (pre-peak of EAE), day 16 (peak clinical severity), and days 21 and 35 (post-peak EAE) of the EAE time course for immunohistochemistry. Cerebellum from naïve animals served as experimental controls.
MS brain tissues
Frozen brain tissues from chronic cases of MS were obtained from the UK MS Tissue Bank at Imperial College, London ( www.ukmstissuebank.imperial.ac.uk ; provided by Richard Reynolds and Djordje Gveric) and Dr. Alex Prat (University of Montreal). Two of the samples were diagnosed as secondary progressive MS: a 60-year-old female (Fig. A) and a 61-year-old male (Fig. B), and one was a 26-year-old male diagnosed with relapsing–remitting MS (Fig. C). Brain tissue sections from cortical areas were analyzed for this study.
Tissue processing, immunohistochemistry and confocal microscopy
Mice were perfused with phosphate-buffered saline (PBS), and tissues were harvested, frozen in optimal cutting temperature (OCT) medium (VWR, 95057-838), and stored at − 80 °C until cryosectioning. 20-μm-thick sections were cut using a cryostat, followed by immunohistochemistry. For this, EAE cerebellum tissues and MS brain sections were fixed with ice-cold methanol, or with 4% PFA followed by 0.2% Triton X-100, and blocked with 3% BSA before staining with the following primary antibodies: neural/glial antigen 2 (NG2) (Millipore, AB5320, 1:200) and platelet-derived growth factor receptor beta (PDGFRβ) (Invitrogen, 16-1420-82, 1:100; R&D Systems, AF385, 1:100), which were used as the primary markers of pericytes. Anti-pan-laminin (a kind gift from Dr. L. Sorokin, Westfälische Wilhelms-Universität, Münster, Germany; 1:1000), which stains the basement membranes of post-capillary venules, and anti-CD31, an endothelial marker (Abcam, 28364, 1:50), were used to characterize post-capillary venules and perivascular cuffs. Anti-CD45 (BD Pharmingen, 550539, 1:75) and anti-F4/80 (Biorad, MCA497RT, 1:100) were used for pan-leukocytes and myeloid cells, respectively. Nuclei were visualized with nuclear yellow (Hoechst). Confocal images were acquired in Z-stacks with 'confocal-in-a-box' (Olympus Fluoview FV10i confocal microscope) using a 60× oil-immersion objective.
Quantification: pericyte coverage ratio and density
The pericyte coverage ratio along the blood vasculature in cerebellar sections was quantified at three time periods in the EAE model (day 10–13, day 16, and day 21) and in naïve animals. Confocal images from cerebellar tissues stained to visualize pericytes, using the markers NG2 and PDGFRβ, and blood vessels through CD31, were assessed. Three naïve and three EAE animals were examined at each time point, and three fields of view were quantified per animal. The pericyte coverage ratio was quantified by measuring the total length of the blood vessels in each field of view and the total length of the blood vessel covered by pericytes in each field of view.
The pericyte coverage ratio in 60× confocal images having an area of 215 μm × 215 μm was calculated as follows:

$$\text{Pericyte coverage ratio} = \frac{\text{Length of blood vessels covered by pericytes } (\mu\text{m})}{\text{Total length of blood vessels } (\mu\text{m})}$$

To quantify pericyte density, the number of pericytes along the blood vasculature was counted in each 60× confocal image, having an area of 215 μm × 215 μm (a computational sketch of this quantification is given after the Statistics subsection below).
Primary pericyte cultures in vitro
Mouse brain vascular pericytes (iXCells Biotechnologies, 10MU-014) were grown in cell culture flasks coated with 0.01% poly-L-lysine (PLL, Trevigen, 3438-100-01). Cells were grown in Supplemented Mouse Pericyte Growth Media (SMPGM), i.e., 2% fetal bovine serum (FBS), 1% penicillin/streptomycin, and 1% Pericyte Growth Supplement (iXCells Biotechnologies, MD-0092), in an incubator at 37 °C and 5% CO₂. Pericytes were plated in PLL-coated black 96-well plates (BD Falcon, 353219) at a density of 7500 cells per well in 200 μL of SMPGM. 48 h later, the medium was replaced with SMPGM containing 0.2% FBS. For stimulation, pericytes were treated either with an inflammatory cytokine cocktail of recombinant mouse interferon gamma (IFN-γ) (PeproTech, 345-05, 10 ng/mL) and recombinant mouse IL-1β (R&D, 401-ML/CF, 10 ng/mL), or with CSPGs (Millipore, CC117, 10 μg/mL). After 48 h of treatment, the medium was discarded, and the cells were then overlaid with fresh serum-free DMEM for another 24 h to eliminate the presence of inflammatory stimuli from the culture medium. Each condition was carried out with four technical replicates. Post-24 h, the conditioned media were collected and centrifuged at 2000 rpm for 3 min to pellet any floating cells. The conditioned media were then stored at − 80 °C for analysis.
Bone marrow-derived macrophage isolation
Bone marrows from euthanized C57BL/6 mice were flushed out using Dulbecco's modified Eagle medium (DMEM) (Sigma, D5671) and centrifuged at 1200 rpm for 10 min. The pellet was resuspended in complete high-glucose bone marrow growth medium (DMEM, 10% FBS, 2% penicillin/streptomycin, and 10% supernatant from the L929 cell line) and seeded at a density of 10⁷ on 10 cm petri dishes as previously described. Cells were grown at 37 °C in 8.5% CO₂ for 5 days, after which half the media was replaced with fresh media. At day 7, the full media was changed. Cells were used after 7 days of culture.
TNFα and MMP9 enzyme-linked immunosorbent assay (ELISA) and Luminex assay
Conditioned media collected from control and treated pericytes were assessed for TNFα concentrations using an ELISA as per the manufacturer's instructions (Invitrogen, BMS607-3). The remaining conditioned media from the control and treated pericytes were assessed using a mouse 31-plex chemokine/cytokine Luminex assay (Eve Technologies, MD31). MMP9 ELISA was carried out using the pro-MMP9 ELISA kit (Invitrogen) on supernatants from untreated and CSPG-treated bone marrow-derived macrophages (BMDMs).
Boyden chamber transmigration assay
Pericytes were plated in 6-well culture plates at a density of 5 × 10⁵. Twenty-four hours later, the medium was replaced with SMPGM containing 0.2% FBS. Pericytes were either treated with an inflammatory cytokine cocktail of recombinant mouse IFN-γ and recombinant mouse IL-1β, a mixture of CSPGs, or lipopolysaccharide (LPS) (E. coli, 055:B5, Sigma, L5418, 10 ng/mL) as a positive control. Three days later, the treated medium was replaced with SMPGM containing 0.2% FBS.
Forty-eight hours later, BMDMs were seeded at a density of 2 × 10⁵ cells per transwell filter insert with 5.0 μm pores (Corning Costar, 3421) in serum-free DMEM. The conditioned medium from the control or treated pericytes was placed in the bottom compartment of the Boyden chamber, to potentially serve as a chemotactic stimulus for the BMDMs. Twenty-two hours later, the filters were washed with PBS to remove any remaining cells. The filters were then fixed and stained with hematoxylin, Gill #2 (Sigma, GHS216). The number of BMDMs that migrated was assessed using a 20× brightfield microscope (Olympus BX51). The number of BMDMs that migrated was averaged for each filter by assessing 4 fields of view around the center of the filter, while the edges of the filter were excluded. Images were blinded before quantification.
Real-time PCR
Quantitative PCR was performed on RNA isolated from treated pericytes to study the changes in pro-MMP9 transcripts. For this, the cells were treated with an inflammatory cocktail or CSPGs for 6 h and then lysed for RNA isolation by the Trizol method. Pro-MMP9 primers were purchased from Qiagen.
Statistics
Datasets were tested for normal distribution using the Shapiro–Wilk normality test (P > 0.05). Multiple groups were compared using a one-way ANOVA with Tukey's multiple comparison post hoc test, where P < 0.05 was considered significant. All quantified results are stated in the form of mean ± SD. All statistical analyses were performed with Prism 8.0 software (GraphPad).
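The analyses above were performed in Prism 8.0; purely as an illustrative sketch of the same workflow (the coverage-ratio formula, Shapiro–Wilk normality testing, one-way ANOVA, and Tukey's post hoc test), the steps could be scripted in Python with numpy, scipy, and statsmodels as below. This is not the authors' pipeline, and every numeric value is a hypothetical placeholder rather than a measurement from this study.

# Minimal sketch (assumed workflow, not the authors' actual pipeline):
# compute the pericyte coverage ratio per field of view, then test group
# differences with Shapiro-Wilk, one-way ANOVA, and Tukey HSD.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical (covered, total) vessel lengths in micrometres;
# three fields of view per group, mirroring the Methods.
fields = {
    "naive":    [(70.0, 210.0), (65.0, 200.0), (72.0, 215.0)],
    "pre_peak": [(140.0, 220.0), (135.0, 210.0), (150.0, 230.0)],
    "peak":     [(138.0, 215.0), (145.0, 225.0), (140.0, 212.0)],
}

# Coverage ratio = covered length / total length, per field of view.
coverage = {
    group: np.array([covered / total for covered, total in vals])
    for group, vals in fields.items()
}

for group, ratios in coverage.items():
    # Shapiro-Wilk: P > 0.05 is consistent with a normal distribution.
    w, p = stats.shapiro(ratios)
    print(f"{group}: mean ± SD = {ratios.mean():.3f} ± {ratios.std(ddof=1):.3f}, "
          f"Shapiro-Wilk P = {p:.3f}")

# One-way ANOVA across groups, then Tukey's multiple comparison post hoc.
f_stat, p_anova = stats.f_oneway(*coverage.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

values = np.concatenate(list(coverage.values()))
labels = np.concatenate([[g] * len(v) for g, v in coverage.items()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

Pericyte density would be handled the same way, substituting per-field pericyte counts for the coverage ratios.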
Pericyte dynamics during EAE pathology
EAE pathology affects white matter tracts in spinal cord and cerebellum tissues. Due to enlarged post-capillary venules and less diffuse lesions in the cerebellum, we investigated this CNS region for EAE-associated pathology. To study pericytes, we stained the sagittal cerebellar tissues from EAE (Fig. A) and naïve mice for PDGFRβ and NG2 expression (Figs. and ).
The location of PDGFRβ+ and NG2+ cells within the vessels was evaluated by staining for pan-laminin, which delineates the basement membranes of vessels, and by endothelial CD31 staining, respectively. Figure B shows laminin-delineated inflamed vasculature, used to examine the location of pericytes across pre-peak (days 10–13), peak (day 16), and post-peak (day 21) EAE tissue. At all the time points examined, PDGFRβ+ cells were found to be either encased in or in close proximity to the pan-laminin+ basement membrane and were infrequently noted in the parenchyma (Fig. A). Certain fields of view did appear to exhibit PDGFRβ+ cells away from the endothelial cells (post-peak; day 21), but close examination revealed this observation to be due to weaker laminin staining in those images. Further, PDGFRβ+ cells were noted to be within the confines of the NVU, as confirmed by GFAP+ reactive astrocytes in the peak EAE white matter (Additional file : Fig. S1). These findings were corroborated by NG2+ pericytes, which were also found abutting the CD31+ endothelium (Fig. A). Even in chronic (day 35) EAE (Fig. B), pericytes detected by PDGFRβ or NG2 were vessel-associated and not encountered in the parenchyma. 3D reconstruction of NG2+ cells in day 35 EAE white matter confirmed these cells to wrap around the CD31+ endothelium (Fig. B). It is noteworthy that NG2 is also a marker for oligodendrocyte precursor cells (OPCs), and the NG2+ cells away from CD31, as observed in some panels, could be representative of OPCs. Interestingly, NG2+ cerebellar pericytes that wrapped around CD31+ vasculature in the white matter of EAE appeared to be more elongated and interconnected compared to pericytes in naïve cerebellum (Fig. ). These findings were consistent across all the analyzed time points within EAE cerebellar tissues. This morphological difference between EAE and naïve cerebellar tissues was quantified by calculating a pericyte coverage ratio, which is the ratio of pericyte-covered distance over the total length of vasculature (described in Methods; Fig. C). The pericyte coverage ratios of EAE capillaries were 0.635 ± 0.023, 0.646 ± 0.036, and 0.686 ± 0.017 during pre-peak, peak, and post-peak EAE, respectively, which were significantly higher than the 0.334 ± 0.011 coverage ratio for naïve vasculature (Fig. C). We then assessed pericyte density to determine if the difference in pericyte morphology was reflective of a change in the number of pericytes along the blood vasculature in EAE. Indeed, the pericyte densities of 7.33 ± 0.58, 8.78 ± 0.49, and 9.86 ± 1.16 during pre-peak, peak, and post-peak EAE, respectively, were significantly lower than the 19.70 ± 1.3 pericyte density observed in the naïve condition (Fig. D). It is unclear whether pericytes underwent cell death that resulted in the loss of pericyte density, as we did not observe any active caspase-3+ pericytes along the vasculature across all the time points (data not shown). Thus, despite a lower number of pericytes in EAE brain vasculature, these pericytes were elongated, seemingly to extend coverage of the vessel to maintain its integrity.
CSPGs as potential pericyte activators
Due to their critical location along the BBB, pericytes are one of the first cellular components of the NVU to interact with leukocytes attempting to cross into the CNS from the circulation. To study this, we focused on perivascular cuffs, marked by laminin+ endothelial and parenchymal basement membranes separated by infiltrating CD45+ leukocytes (Fig. A).
We noted that NG2+ pericytes in EAE were proximal to F4/80+ macrophages infiltrating brain regions (Fig. B). Moreover, in line with previous observations that CSPGs are elevated within and outside of EAE perivascular cuffs, we found widespread CSPG staining in EAE tissue, including within the perivascular cuffs demarcated by laminin+ staining and PDGFRβ+ cells, when compared with the naïve cerebellum tissues (Fig. C, D). This suggests potential interactions between CSPGs and pericytes. Pericytes produce matrix metalloproteinases-2 and -9 (MMP2 and MMP9), which cleave basement membrane constituents such as collagen IV, laminin, and fibronectin. First, we addressed whether pericytes respond to different pro-inflammatory stimuli, including CSPGs, in culture. We treated murine pericytes in culture, which stained for both NG2 and PDGFRβ (Additional file : Fig. S3), with either CSPGs or a mixture of IL-1β and IFNγ (Fig. E, F). We observed that while the inflammatory cocktail of IL-1β and IFNγ elevated MMP9 levels in culture supernatants, CSPGs failed to enhance MMP9 production (Fig. E). However, CSPGs potentially upregulated the pro-MMP9 transcripts 6 h post-treatment (Fig. F), suggesting that CSPGs may influence pericyte activity. Similar to pericytes in EAE tissues, we did not observe any significant change in the number of Ki67+ PDGFRβ+ cells in culture upon CSPG or cytokine cocktail treatment (Additional file : Fig. S2). This suggests that murine pericytes in culture do not undergo a proliferative cell phase to support their inflammatory program.
Pericytes as mediators of inflammation
Pericytes secrete several adhesion molecules and chemokines/cytokines that assist in the recruitment and migration of monocytes, T cells, eosinophils, and neutrophils. Pericytes also express pro-inflammatory factors such as IL-1β and TNFα, which can induce pro-inflammatory states in astrocytes, microglia, and endothelial cells, and help recruit leukocytes. Therefore, we studied the chemokine/cytokine profile of the pericyte secretome in response to inflammatory cytokines (IFN-γ + IL-1β) and CSPGs. Stimulation of pericytes with IFN-γ + IL-1β significantly upregulated the production of several inflammatory cytokines, including granulocyte colony-stimulating factor (G-CSF; Fig. A), granulocyte–macrophage colony-stimulating factor (GM-CSF; Fig. B), IL-5 (Fig. C), IL-6 (Fig. I), leukemia inhibitory factor (LIF; Fig. J), and vascular endothelial growth factor (VEGF; Fig. L). Chemokines including CCL11 (eotaxin; Fig. D) and the chemokine (C-X-C) motif ligands CXCL1 (Fig. H) and CXCL9 (Fig. K) were also upregulated in response to treatment with IFN-γ + IL-1β (Fig. ). Treatment with CSPGs upregulated the pro-inflammatory cytokines TNFα (Fig. E) and IL-6 (Fig. I), as well as CCL2 (Fig. M), a prominent chemotactic stimulus for peripheral monocytes to transmigrate into the inflamed CNS. Notably, the upregulation of CCL3 (macrophage inflammatory protein 1a; MIP-1a; Fig. O) and CCL4 (macrophage inflammatory protein 1b; MIP-1b; Fig. P) was significantly higher in response to treatment with CSPGs as compared to treatment with IFN-γ + IL-1β. This is critical since both these chemokines have been found to be elevated in the CSF of relapsing–remitting MS, an acute inflammatory disease phase, suggesting a potential involvement of pericytes during inflammation in MS.
Finally, to investigate whether the pericyte secretome in inflammatory conditions indeed has the potential to facilitate macrophage migration into the CNS parenchyma, the transmigration of BMDMs was assessed in vitro using a Boyden chamber assay (Fig. A). When BMDMs were treated with conditioned medium from pericytes that had been treated with the inflammatory cocktail (IL-1β + IFNγ), CSPGs (Fig. B), or LPS (Additional file : Fig. S4), there was significantly higher macrophage migration across the chamber as compared with controls stimulated with conditioned medium from untreated pericytes (Fig. B and Additional file : Fig. S4; see the quantification sketch at the end of this section).
Pericytes in MS lesions
To identify if pericytes exhibit different morphologies and dynamics in MS brains, we investigated the location and morphology of PDGFRβ+ pericytes in lesions of 3 MS cases (Fig. A, D, G). We found PDGFRβ+ cells within the perivascular spaces as well as in close proximity to the endothelial barriers in all the MS lesions (Fig. B, C, E, F, H, I; shown with arrows). In all the fields, pericytes were found to be closely associated with laminin+ basement membranes. Notably, PDGFRβ+ cells were not detected within the parenchyma of MS brains even in proximity to highly inflamed vessels.
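As a minimal illustrative sketch of the Boyden chamber bookkeeping referenced above (4 blinded fields of view averaged per filter, edges excluded, per the Methods), the reduction could be expressed in Python as below; the counts and condition labels are hypothetical placeholders, not data from this study.

# Minimal sketch (assumptions: 4 blinded fields of view counted per filter,
# as stated in the Methods; all counts below are hypothetical).
import numpy as np

# Migrated-BMDM counts per field of view, one inner list per filter.
filters = {
    "untreated_CM": [[12, 15, 11, 14], [13, 10, 12, 15]],
    "CSPG_CM":      [[34, 40, 37, 31], [36, 33, 39, 35]],
}

for condition, per_filter_counts in filters.items():
    # Average the 4 central fields of view for each filter,
    # then summarize across filters as mean ± SD.
    filter_means = [np.mean(counts) for counts in per_filter_counts]
    mean, sd = np.mean(filter_means), np.std(filter_means, ddof=1)
    print(f"{condition}: {mean:.1f} ± {sd:.1f} migrated BMDMs per field")

# Fold change of CSPG-conditioned medium over the untreated control.
ctrl = np.mean([np.mean(c) for c in filters["untreated_CM"]])
cspg = np.mean([np.mean(c) for c in filters["CSPG_CM"]])
print(f"Fold change (CSPG CM / untreated CM): {cspg / ctrl:.2f}")

Group comparisons would then follow the same Shapiro–Wilk/ANOVA/Tukey scheme sketched after the Statistics subsection.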
In normal physiological states, pericytes are known to play a role in BBB integrity by maintaining tight junctions and ensuring specific endothelial vesicular transport. During disease states, pericytes have been shown to migrate to the site of injury. For instance, in traumatic brain and spinal cord injury, pericytes proliferate and ultimately outnumber astrocytes, and contribute to scar formation by depositing ECM proteins. In contrast to these findings, cerebellar pericytes in EAE appear to retain their microvascular location and encasement within the basement membrane.
It is noteworthy that some PDGFRβ+ cells in the post-peak EAE appeared to be outside the basement membranes, but those regions have weaker laminin staining, which likely gives that impression. Also, some NG2+ cells appear outside the CD31+ endothelial lining (Fig. ), and it is highly likely that some of these cells represent OPCs or reflect non-specific staining. We corroborated our observations using MS brains. In our study, we noted PDGFRβ+ cells to be localized within the perivascular spaces in MS brain sections, where they appear not to be migrating to the parenchyma. If the pericytes had migrated, it is likely that we might not have captured those cells, either due to differences in lesion chronicity or due to a limited field of view in MS brains. However, based on observations from multiple lesions from 3 different MS brains, we believe that our findings in MS are in contrast to the published reports suggesting migratory behavior of pericytes in different CNS conditions. In light of these observations, the immune-mediated component of EAE and MS needs to be taken into consideration to better understand the observation of non-migratory pericytes and its implication for immune cell extravasation into the CNS parenchyma. Further work is needed to investigate whether pericyte migration is elicited at a later time point in EAE, or if there is pericyte migration into the parenchyma within the spinal cord or other areas of the brain besides the cerebellum in EAE, as pericytes might have different functions based on their anatomical location within the CNS. This is one of the earlier studies to characterize morphological differences in pericytes within the EAE model as compared to naïve tissue, with the changes starting as early as the onset of EAE and persisting into the later phases of the EAE disease course. In line with our findings on pericyte density, Berthiaume et al. observed dynamic pericyte remodeling along the vasculature in response to the targeted ablation of neighboring pericytes. Our findings of a decrease in pericyte density and an increase in coverage ratio in EAE suggest that the morphological change observed in EAE may be a compensatory mechanism in response to the loss of pericyte density. Decreased pericyte density along the vasculature has been reported in other CNS disease states such as Alzheimer's disease and stroke. Since pericytes that are lost along the vasculature do not appear to migrate into the parenchyma, the activation of death pathways in these pericytes needs to be investigated to understand the mechanisms behind the loss in pericyte density in the EAE cerebellum. In this regard, the activation of apoptotic pathways has been observed in a mouse model of stroke, where capillary pericyte loss was also observed in ischemic conditions. However, our initial findings did not show caspase-3 activation in pericytes in vivo; nevertheless, other modes of death cannot be ruled out. One of the key highlights of our studies is the upregulation of several cytokines and chemokines by pericytes in response to the inflammatory molecules IFN-γ + IL-1β, indicating their ability to actively respond to and propagate inflammation. A previous study in our lab demonstrated that the ECM components CSPGs are novel mediators of leukocyte migration into the CNS through their capacity to upregulate motility and the secretion of a number of pro-inflammatory cytokines in macrophages.
CSPGs also induce the generation of pro-inflammatory chemokines in pericytes, which suggests a role for pericytes in facilitating the chemoattraction of monocytes/macrophages into the CNS. These findings are significant because they highlight the potential for dynamic interactions between ECM components such as CSPGs and pericytes in mediating neuroinflammation. The secretions from pericytes stimulated with molecular mediators of inflammation such as IFN-γ + IL-1β and CSPGs upregulated the transmigration of BMDMs. It should also be kept in mind that pericytes cultured in the absence of endothelial cells may have behaviors that are not representative of their functions in vivo; however, we contend that our findings still shed light on the inflammatory potential of pericytes. While it appears that pericytes have the capacity to indirectly induce leukocyte migration through their inflammatory secretome, it is yet to be determined whether pericytes can directly facilitate leukocyte extravasation into the CNS. Pericytes have been observed to actively recruit and facilitate neutrophil migration in studies of inflammation in the cremaster muscle. Live imaging showed preferential migration of neutrophils through gaps between adjacent pericytes, which was further facilitated by the expression of adhesion molecules on pericytes. The extent to which changes in pericyte morphology and secretome can directly facilitate immune cell recruitment and extravasation into the CNS parenchyma needs to be better understood. The ability of CSPGs to elicit pro-inflammatory responses in pericytes opens a new avenue for understanding the immunomodulatory functions of the ECM in MS. The findings of this study reinforce the potential role of two novel players in neuroinflammation, pericytes and CSPGs. Further investigation of the dynamic interactions between pericytes, CSPGs, and macrophages can help elucidate the mechanisms of immune cell extravasation into the CNS in the context of inflammatory conditions such as MS. Additional file 1: Figure S1. Pericytes localized within GFAP+ reactive astrocytes in EAE. D16 EAE cerebellum (cbm) tissues (lower panel) were analyzed for GFAP+ astrocyte (green) and PDGFRβ+ pericyte (red) staining to investigate differences in these cells as part of the neurovascular unit (NVU). When compared with naïve cerebellum (upper panel), reactive astrocytes in the EAE cerebellum appear to wrap around the PDGFRβ+ cells in an inflamed capillary venule. Scale bar, 50 µm. Figure S2. Pericyte proliferation is not altered upon IL-1β and IFNγ or CSPG treatments. A. Primary mouse pericytes were seeded at 7500 cells in a 96-well plate, treated with either 10 ng/mL IL-1β + IFNγ or 10 µg/mL CSPG, and stained for DAPI and Ki67 to identify proliferating cells. B. Graphs denote %Ki67+ DAPI+ cells in response to treatment. Data are represented as mean ± SD. Figure S3. Characterization of mouse pericytes in vitro. Primary murine pericytes (passage 6) were seeded at 7500 cells in 96-well plates and stained for PDGFRβ (red) and NG2 (green) after 24 h in culture. These cells were found to express both markers. Figure S4. Pericyte-induced macrophage migration in vitro. Using the Boyden chamber assay, we investigated the migration of bone marrow derived macrophages (BMDMs) in response to supernatants from LPS-treated pericytes. Data points represent technical replicates in untreated and LPS-treated conditions. Data are represented as mean ± SD. **p < 0.01.
Comparative analysis of endoscopic discectomy for demanding lumbar disc herniation
Lumbar disc herniation (LDH) is a clinical condition manifested by symptoms such as low back pain, radiating pain along the sciatic nerve, and, in severe cases, cauda equina syndrome. It affects an estimated 1–3% of the general population annually. When the nucleus pulposus protrudes through the posterior longitudinal ligament, the likelihood of displacement significantly increases, resulting in intervertebral disc fragment displacement in 35–72% of cases. High-grade down-migrated lumbar disc herniation (HDM-LDH) is a rare type of lumbar disc herniation in which the intervertebral disc is displaced downward to below the midpoint of the pedicle. Studies have reported that HDM-LDH accounts for about 23% of migrated LDH. Previous studies have demonstrated that the direction of disc migration is influenced by both the disc level and patient age. Discs located at the upper lumbar levels exhibit a higher tendency for upward migration, while those at the lower lumbar levels show a greater propensity for downward migration. Additionally, as age advances, the likelihood of lumbar disc displacement increases, although the incidence of downward displacement decreases. When the intervertebral disc protrudes downward, it can compress the nerve roots or cauda equina, leading to symptoms such as pain, numbness, and sensory and motor dysfunction, or even paralysis. This condition is more harmful than typical LDH. When herniated disc fragments are proximal to the originating disc, therapeutic intervention is relatively straightforward. However, as the extent of displacement escalates, the complexity of treatment correspondingly increases. When patients with HDM-LDH do not respond to conservative treatment or when the effect of such treatment is unsatisfactory, surgical intervention can significantly improve symptoms. Previous studies have demonstrated that fully endoscopic lumbar discectomy has a high failure rate in cases of severe disc displacement; thus, open lumbar discectomy or lumbar fusion with internal fixation is often recommended. However, open surgery necessitates the dissection of paraspinal muscles and the removal of lamina and facet joints, which may result in spinal motion segment instability and persistent lower back pain. With the advancement of minimally invasive surgical techniques, unilateral biportal endoscopic (UBE) discectomy and percutaneous interlaminar endoscopic lumbar discectomy (IELD) have emerged as new options for clinicians. Both procedures offer advantages in terms of postoperative recovery, skin incision length, muscle damage, infection rate, surgical pain, and hospitalization time. Notably, the success rates of these operations have improved. Several studies have compared the efficacy of the two surgical procedures for treating LDH, but their effectiveness in treating HDM-LDH remains undetermined. This study analyzed 39 patients with HDM-LDH, comparing the clinical efficacy of UBE and IELD; the findings offer novel insights into the treatment of HDM-LDH. Demographic data A retrospective analysis was conducted on patients who underwent UBE or IELD surgery for HDM-LDH at the Department of Minimally Invasive Spine Surgery, First Affiliated Hospital of Guangzhou University of Chinese Medicine, from January 2020 to February 2023. Based on the inclusion and exclusion criteria of the study, the UBE group comprised 18 cases, while the IELD group consisted of 21 cases, totaling 39 cases (see Fig.
for the screening process). The preoperative basic data of both groups were compared, and the indicators were found to be comparable (Table ). This study received approval from the Ethics Committee of the First Affiliated Hospital of Guangzhou University of Chinese Medicine (approval number: JY2024-120). Informed consent from patients was not required due to the retrospective nature of the study design. All research procedures adhered to pertinent guidelines and regulatory standards. The surgeons in both groups held equivalent qualifications. Inclusion and exclusion criteria Inclusion criteria : (1) preoperative imaging revealed a single-segment LDH with significant downward displacement, and there was no prior history of LDH; (2) clinical manifestations were consistent with radiological findings, presenting with pronounced low back pain and radiating pain in the lower extremities. Conservative treatment for three months proved ineffective; (3) follow-up extended for at least three months post-treatment. Exclusion criteria : (1) concurrent lumbar spondylolisthesis, instability, or other conditions that could cause low back and leg pain; (2) more than one responsible segment or a history of lumbar surgery on the same segment; (3) concurrent tumors, infections, and other lesions; (4) a history of severe underlying diseases; (5) patients unfit for anesthesia. Surgical method UBE group The patients were positioned prone under general anesthesia. C-arm fluoroscopy was employed preoperatively to identify the targeted intervertebral space and determine the needle insertion site. The target intervertebral space was located 1.5 cm lateral to the spinous process, approximately at the medial edge of the vertebral arch. A puncture needle was introduced along the paramedian line on the affected side, 1 to 1.5 cm above and below the midline of the target intervertebral space, serving as both the observation channel incision and the operational channel incision, with a distance of 2.5 to 3.0 cm between the two incisions. For the right-sided approach, the cranial end served as the operational port, while the caudal end functioned as the endoscopic observation port; the left-sided approach followed the reverse configuration. Under fluoroscopic guidance, two guide rods were inserted into the entry points and intersected at the junction of the spinous process and lamina of the superior vertebra. The soft tissue was gradually expanded using a dilator, and both the endoscopic cannula and working cannula were positioned. The endoscope system was then connected. Once the field of view under water perfusion was clear, radiofrequency ablation and forceps were employed to clean the soft tissue within the field of view. This process exposed the lower edge of the superior lamina and the upper edge of the inferior lamina. A drill and rongeurs were used to separate the yellow ligament at the cephalic end and remove part of the vertebral lamina bone from the caudal end until the nucleus pulposus prolapsed. Subsequently, the yellow ligament was separated from the dura mater. The yellow ligament was gradually removed to expose the dura mater sac and nerve roots, and the free nucleus pulposus was separated from the nerve roots while protecting the dura mater sac and nerve roots. The prolapsed nucleus pulposus tissue was progressively removed. Throughout the entire operation, vigilant attention was paid to bleeding control and hemostasis. 
After complete decompression of the nerve root, the working cannula was removed, a drainage tube was placed, and the incision was sutured and bandaged. IELD group The patient was positioned prone under general anesthesia. C-arm fluoroscopy was utilized to identify the responsible intervertebral space and determine the needle insertion site preoperatively. A 1.0 cm incision was made lateral to the spinous process, and the puncture needle was used to penetrate the outer edge of the interlaminar space layer by layer. Once the position was confirmed via fluoroscopy, a guide wire was inserted. A 0.8 cm skin incision was then made, and the soft tissue was progressively dilated using a dilator before inserting a working cannula. The entire surgical procedure was conducted under real-time visualization and continuous irrigation with isotonic saline solution. The bone at the superior and inferior edges of the interlaminar space was meticulously exposed, followed by the removal of the yellow ligament situated between the vertebral laminae. A grinding drill and bone rongeurs were employed to excise the caudal portion of the vertebral lamina at the distal end, thereby exposing the traversing nerve roots and the prolapsed intervertebral disc. Subsequently, the extruded nucleus pulposus was carefully dissected away from both the nerve roots and the dural sac. This herniated material was progressively extracted in a controlled manner to alleviate neural compression. Throughout the operation, vigilance was maintained to address any bleeding sites promptly, ensuring adequate hemostasis. Upon completion, the working cannula was removed, the incision was sutured, and a bandage was applied. Perioperative management Post-surgery, conventional symptomatic treatments, including analgesia, were administered. In the UBE group, if the drainage volume was less than 50 mL within 24 h, the drainage tube could be removed. Absent any significant discomfort, both the IELD and UBE groups were permitted to ambulate with protective gear the day following surgery. To assess postoperative recovery, computed tomography (CT) and magnetic resonance imaging (MRI) scans were conducted within 3 days after the procedure. Observation indicators The two patient groups were compared based on hemoglobin (Hb) and C-reactive protein (CRP) levels preoperatively and postoperatively, intraoperative blood loss, duration of surgery, length of hospital stay, and incidence of complications. Pain severity in the back and legs, as well as limb dysfunction, were assessed using the visual analog scale (VAS) and the Oswestry disability index (ODI) at four time points: prior to surgery, one day post-surgery, one month post-surgery, and three months post-surgery. Patient satisfaction with clinical outcomes was gauged through the modified MacNab criteria, which categorized results into four tiers: excellent, good, fair, and poor, with ‘excellent’ denoting complete satisfaction. This assessment was conducted at a follow-up interval of three months post-surgery. Statistical methods The statistical analysis was conducted using SPSS version 26.0 software. For normally distributed measurement data, results were presented as the mean ± standard deviation ($\bar{x} \pm s$). Inter-group comparisons of such data employed an independent samples t-test, while within-group comparisons at each time point utilized analysis of variance.
Non-normally distributed measurement data were expressed using the median and interquartile range (M [P25, P75]), with inter-group comparisons conducted via the two-sample rank-sum test. Categorical data were analyzed using the chi-square (χ²) test. Statistical significance was set at P < 0.05.
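To illustrate the analysis plan concretely, the following minimal Python sketch maps each stated test onto a SciPy routine; the group values are hypothetical placeholders (the actual analysis was run in SPSS 26.0), and the Shapiro–Wilk normality check is an assumed, conventional way to choose between the parametric and nonparametric branches.

```python
# Minimal sketch of the stated analysis plan (SciPy stand-in for SPSS 26.0).
# All numbers below are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

ube  = np.array([92.0, 105.0, 88.0, 110.0, 97.0])   # e.g., operative time (min), UBE group
ield = np.array([70.0, 82.0, 75.0, 68.0, 79.0])     # e.g., operative time (min), IELD group

# Check normality before choosing the between-group test (assumed procedure).
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (ube, ield))

if normal:
    # Normally distributed data: independent-samples t-test, report mean ± SD.
    res = stats.ttest_ind(ube, ield)
else:
    # Non-normal data: two-sample rank-sum (Mann-Whitney U), report M [P25, P75].
    res = stats.mannwhitneyu(ube, ield)

print(f"P = {res.pvalue:.3f} -> {'significant' if res.pvalue < 0.05 else 'not significant'} at 0.05")

# Categorical outcomes (e.g., complication counts) compared with the chi-square test.
table = np.array([[0, 18], [1, 20]])                # [complications, no complications] per group
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square P = {p:.3f}")
```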
Perioperative outcomes and complications Compared to the UBE group, the IELD group exhibited a shorter operative duration, reduced hospital stay, and less intraoperative blood loss, with statistically significant differences (P < 0.05). However, there were no statistically significant differences in the reduction of Hb levels or the increase in CRP levels (P > 0.05). No complications such as nerve root injury, cerebrospinal fluid leakage, epidural hematoma, or dural tear occurred in the UBE group, nor did any patients experience worsening conditions or require reoperation. Conversely, one patient in the IELD group experienced postoperative neurological symptoms, aggravated limb numbness, and decreased dorsiflexor muscle strength. See Table for details. Clinical indicators Comparison of VAS scores between the two groups Postoperatively, both groups showed significant reductions in VAS scores for low back and leg pain compared to preoperative levels. One day after surgery, the VAS score for low back pain was higher in the UBE group than in the IELD group (P < 0.05). However, there were no statistically significant differences in the VAS scores for low back pain and lower limb pain at one and three months postoperatively between the two groups (P > 0.05), as shown in Table . Comparison of ODI scores between the two groups Postoperatively, both groups exhibited a significant reduction in ODI scores compared to their preoperative levels (P < 0.05). However, no statistically significant differences were observed in ODI scores between the two groups at one day, one month, and three months following surgery (P > 0.05), as illustrated in Table . Modified MacNab criteria: excellent and good rates Follow-up was conducted three months post-surgery. In the IELD group, there were 20 cases classified as excellent and good, representing 95.24% of the total.
In the UBE group, 17 cases were classified as excellent and good, accounting for 94.44%. The difference between the two groups was not statistically significant (P > 0.05), as shown in Table . Representative cases are illustrated in Figs. (UBE) and (IELD).
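As a quick arithmetic check on these outcome figures, the sketch below reproduces the reported excellent/good rates and runs a between-group test; Fisher's exact test is an assumption made here because of the small cell counts, while the paper reports only that the comparison was nonsignificant.

```python
# Verifying the reported modified MacNab excellent/good rates and comparing
# the two groups. Fisher's exact test is an illustrative assumption for this
# small 2x2 table; the study reports a nonsignificant difference (P > 0.05).
from scipy import stats

ield_good, ield_n = 20, 21
ube_good,  ube_n  = 17, 18

print(f"IELD excellent/good rate: {ield_good / ield_n:.2%}")  # 95.24%
print(f"UBE  excellent/good rate: {ube_good / ube_n:.2%}")    # 94.44%

table = [[ield_good, ield_n - ield_good],
         [ube_good,  ube_n - ube_good]]
_, p = stats.fisher_exact(table)
print(f"P = {p:.3f}  ->  not significant at the 0.05 level")
```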
LDH can induce radiating pain and paresthesia as a consequence of nerve compression and inflammatory responses. This condition may weaken the muscles innervated by the affected nerves, thereby impairing motor function. Consequently, the reduced physical activity stemming from LDH adversely impacts both the patient's employability and overall quality of life. Beyond the direct psychological toll on the individual, prolonged treatment necessitates substantial financial expenditure, imposing an economic strain on the patient's household. When non-surgical interventions prove ineffective, common surgical approaches for LDH encompass open fixation, open simple nucleotomy, microscopic nucleotomy, and endoscopic nucleotomy. With the advancement of various surgical techniques, common surgical options for treating LDH have evolved accordingly. Early scholars advocated for open discectomy as the primary treatment for highly displaced LDH. However, since Kambin introduced the concept of the “safe triangle”, percutaneous endoscopic lumbar discectomy has emerged as a prevalent surgical option among physicians. This minimally invasive procedure has an efficacy rate comparable to that of open surgery, reaching up to 90%, and encompasses three distinct approaches: transforaminal, interlaminar, and contralateral transforaminal. Previous research has established the viability of percutaneous endoscopic transforaminal lumbar discectomy in addressing far-migrated LDH; however, this procedure is associated with a notably high failure rate, ranging from 5–22%. Ruetten initially introduced the IELD technique in 2006. The IELD procedure combines the benefits of conventional fenestration discectomy with advanced endoscopic technologies, leveraging the sublaminar corridor for comprehensive visualization of the extruded nucleus pulposus while minimizing excessive removal of the lamina and facet joints. This approach not only conserves the integrity of the paraspinal musculature but also mitigates harm to osseous structures, thereby preserving the functional stability of the spinal motion segment. Nonetheless, the integration of the operating channel within the single-port endoscope necessitates instruments that are more slender and elongated than their conventional counterparts. This deviation in instrumentation profoundly alters the surgical approach, demanding heightened technical proficiency from the surgeon and entailing an extended period of skill acquisition. From an anatomical standpoint, direct visualization of the distal free nucleus pulposus is challenging due to obstructions posed by the facet joints and pedicles. This issue is exacerbated when severely displaced intervertebral discs fragment into multiple pieces. Consequently, traditional percutaneous endoscopic lumbar discectomy often falls short in treating HDM-LDH because of a limited field of view, inadequate exposure, and the technical difficulty of grasping these remote fragments. As a result, clinicians have shifted their focus toward the burgeoning UBE technology. The biportal endoscopic technique was initially introduced by Antoni in 1996 and has undergone continuous enhancements to evolve into the contemporary UBE technique. From an instrumentation perspective, UBE can utilize conventional arthroscopes and microsurgical tools to perform the procedure, eliminating the need for a specialized single-port endoscope system and its accompanying surgical instruments.
Since the distal operating channel of UBE is not constrained by a rigid working cannula, traditional larger surgical instruments can also be employed. Additionally, the endoscope is separate from the surgical instrument channel, which enhances mobility and provides a more comprehensive visual field, thereby facilitating easier grasping of the distal free nucleus pulposus. For novice surgeons, the surgical pathway and decompression process are analogous to those of conventional microdiscectomy, resulting in a relatively short learning curve. This investigation revealed that patients undergoing IELD experienced reduced intraoperative hemorrhage, abbreviated surgical durations, and expedited hospital discharges. This advantage can be attributed to the minimally invasive nature of IELD, wherein the establishment of the operative corridor involves less aggressive muscle dissection compared to UBE, thereby minimizing tissue trauma. Specifically, UBE necessitates blunt muscle spreading via dual channel creation for access, leading to heightened tissue disruption. Moreover, during the exposure of the herniated nucleus pulposus in a caudal direction, partial resection of the lamina is undertaken, inducing minor osseous bleeding. Postoperatively, the insertion of a drainage tube mandates monitoring of wound exudate, prolonging both the surgical duration and the convalescence period. The IELD procedure involves inserting a cannula into the interlaminar space through a single access point. This approach allows for a direct incision of the yellow ligament, thereby exposing the nerve root with minimal tissue damage and reduced bleeding. Additionally, this technique facilitates shorter hospital stays as it obviates the need for a drainage tube to monitor effluent. In Wang's study, postoperative serum creatine kinase levels and their ratio to preoperative levels were higher in the UBE group than in the IELD group. Additionally, postoperative MRI examinations revealed that the cross-sectional area of high-signal lesions in the paraspinal muscles was smaller in the IELD group than in the UBE group, suggesting that IELD is less invasive and causes less muscle damage than UBE. Perioperative anemia not only elevates the risk of postoperative infections and prolongs hospitalization but also significantly impairs postoperative activity levels and functional recovery. In our study, however, the comparison of Hb decrease values did not reveal statistical significance (P > 0.05), and no adverse reactions associated with anemia were observed post-surgery, suggesting that the total blood loss in both groups was minimal. In the intra-group comparison, the VAS scores for low back pain and leg pain demonstrated a downward trend. In the inter-group comparison, the IELD group exhibited lower VAS scores for low back pain at one day post-surgery; however, the differences in VAS scores for low back pain and leg pain at the other time points were not statistically significant (P > 0.05). The ODI scores within each group were compared, revealing that both groups had significantly lower ODI scores post-surgery compared to baseline (P < 0.05). There was no significant difference in ODI scores between the two groups at one day, one month, and three months post-surgery (P > 0.05). Additionally, the difference in excellent and good rates according to the modified MacNab criteria between the two groups at three months post-surgery was not statistically significant (P > 0.05).
These results suggest that the two groups had similar clinical efficacy in terms of functional improvement. Since the inflammatory response is closely related to pain, the lack of a significant difference in CRP increase values between the two groups (P > 0.05) indicates a similar degree of inflammatory response. This analysis may be influenced by factors such as the surgeon's proficiency, the precision of the operation, and postoperative medication. In this study, one patient in the IELD group developed neurological symptoms postoperatively, which required rehabilitation and outpatient follow-up. Although the incidence of complications was low, preventive measures should still be implemented. Common intraoperative complications included dural tears (0.18%) and nerve root injuries (0.55%). Postoperative complications comprised weakness (0.92%), limb numbness (3.3%), and incomplete disc removal (0.18%). Maintaining optimal water pressure during surgery not only helps to control bleeding at the surgical site but also effectively minimizes dural expansion, thereby creating a safe distance between the dura mater and the yellow ligament. This reduces the risk of inadvertent dura mater injuries. Should a tear in the dura mater be detected postoperatively, it must be repaired via suturing or the use of a fibrin patch, depending on the specific circumstances. Furthermore, persistent irrigation with isotonic saline serves to eliminate locally secreted pro-inflammatory mediators during surgical procedures, thereby mitigating the likelihood of regional inflammatory responses and alleviating postoperative pain. Postoperative neurological symptoms, such as paresthesia, may be attributed to irritation of the spinal ganglia or nerve roots during the surgical procedure. The use of bipolar coagulation and repeated compression within the working channel can also contribute to postoperative dysesthesia. When the intraoperative exploration range is insufficient, residual disc fragments may remain. Therefore, while expanding the exploration range as much as possible, care should be taken to minimize stimulation of the nerve roots. In terms of procedural application, IELD typically involves first staining the protruding nucleus pulposus and then removing it. During ligament sectioning, using basket forceps and lamina rongeurs to create a transverse incision directed outward facilitates clear exposure of critical anatomical structures, including the dural sac, nerve roots, and the protruding nucleus pulposus. The lingual surface of the working sleeve is used, with rotation and forward pressure, to shield the nerve root; the sleeve is typically rotated from the distal end of the nerve root toward its proximal end. In cases of inferiorly displaced intervertebral disc herniation, UBE adopts the corner-angle technique to detach and partially excise the caudal end of the yellow ligament at the superior margin of the lamina, without needing to expose the cranial aspect of the ligament. Subsequent steps entail using a burr and rongeurs to excise a segment of the caudal lamina, thereby revealing the distal extremity of the down-migrated nucleus pulposus. After removal of the entire protrusion, decompression of the intervertebral space is completed.
This study has several limitations: (1) the data were sourced from a single center, resulting in a small sample size; (2) the surgeons exhibited varying levels of proficiency in the two procedures; (3) the follow-up period was short, and no long-term postoperative effects were investigated; (4) both surgical techniques are destructive to the lumbar vertebral bone structure, necessitating further comparison and evaluation of postoperative lumbar segment stability. These issues will be addressed in future research. In conclusion, both UBE and IELD demonstrate efficacy in treating HDM-LDH, markedly alleviating lower back pain and leg discomfort and enhancing patient mobility. However, the IELD cohort exhibited advantages such as reduced intraoperative hemorrhage, abbreviated hospitalization, and a less invasive approach, whereas UBE offers enhanced flexibility in surgical maneuverability alongside a distinct decompressive impact. Given these findings, clinicians are advised to tailor their therapeutic selection to individual patient circumstances. Nonetheless, given the constraints imposed by the limited sample size and brief follow-up period inherent to this retrospective investigation, comprehensive, multi-institutional, large-scale prospective trials remain necessary to definitively assess and compare the long-term outcomes and overall efficacy of UBE versus IELD in managing HDM-LDH. Below is the link to the electronic supplementary material. Supplementary Material 1
Retina and the tubercle
Central serous chorioretinopathy (CSCR) was first described by von Graefe as an inflammatory pathology. In 1918, Masuda strongly implicated tuberculosis as the etiology of this disorder, having found the tuberculin reaction to be strongly positive in many cases, and considered the small yellow spots seen in this condition to be sites of exudation. There were other reports claiming a definite response to antitubercular treatment in the 1980s. Owing to the popularity of antitubercular treatment in cases of CSCR despite the lack of clinching evidence, a study on a reproducible animal model was carried out. An earlier animal study using rabbit eyes conducted at our center was not carried forward, as rabbit eyes were reported to be markedly different from the human retina, one major difference being the absence of a macula. In addition, rabbits have been found to be relatively more resistant to M. tuberculosis infection than monkeys and guinea pigs. Experiments on rhesus monkey eyes are considered ideal for the study of macular lesions as, like humans, they have a pure cone fovea and a central avascular zone. The macula in monkey eyes is identified as an ill-defined, yellow, capillary-free zone located temporal to and slightly below the center of the optic nerve head. On histopathology, the rhesus monkey retina is multilayered with similar architecture. In 1975, Hayreh reported that the end-arterial nature of the choroidal vessels and the lobular pattern of the choriocapillaris made the choroid particularly vulnerable to inflammatory, metastatic, and degenerative lesions. Watershed zones, prone to ischemic changes, were also believed to run through the macula or close to it. Before the study undertaken by Tewari HK (HKT) et al. at the center, reports of fluorescein angiographic (FA) studies on rabbit eyes and monkey eyes existed. Various techniques of producing experimental lesions in the fundus had also been described. Vogel described the suprachoroidal approach, which he used to study injections of India ink, beryllium particle suspensions, tubercle bacilli, other bacterial suspensions, and malignant cells. For producing tubercular lesions, suspensions of 200 bacilli per high-power field prepared by the turbidimetric method were injected. In 1968, Nozik and O'Connor used a similar approach to produce experimental toxoplasma retinochoroiditis. In 1973, Mohan et al. from our center used a modified suprachoroidal technique to study presumptive amoebic uveitis. In 1982, Culbertson et al. described producing experimental toxoplasma retinochoroiditis using the nasal transvitreal approach. This technique was simple to accomplish but suffered from a higher risk of direct retinal trauma and postinoculation vitreous haze. The animal model used by Tewari et al. (henceforth termed the Hem Kumar Tewari–Rajendra Prasad Centre [HKT-RPC] model) to study experimental tuberculosis is now described in depth. The results and the relevance of this landmark study in improving the current understanding of the still enigmatic association between the tubercle bacillus and certain retinochoroidal pathologies are highlighted. As mentioned earlier, the rhesus monkey (Macaca mulatta) was chosen as the experimental animal because of the similarity in microanatomy of the human and monkey macula and past descriptions of the successful production of tubercular lesions identical to those seen in humans. Rhesus monkeys of average weight (3.5 kg), with no obvious systemic infection and a normal fundus on dilated examination, were studied.
A tuberculin test was also performed before the start of the study. An injection of 0.1 ml of purified protein derivative (100 tuberculin units/ml) was administered into the upper lid. The site was then observed for 48 h for any reaction. Interpretation was made according to guidelines set by the primate facility at the All India Institute of Medical Sciences (AIIMS): no reaction (Grade 0), erythema with or without edema (Grade +), edema with ptosis (Grade ++), complete ptosis (Grade +++), and complete ptosis with marked edema (Grade ++++). Only monkeys with a reaction below Grade + were taken up for the study. A pilot study was first undertaken in three monkey eyes to standardize the surgical technique of injection and the dose of the inoculum. Injection using the nasal transvitreal route resulted in direct retinal trauma and endophthalmitis, so this technique was not considered. Instead, a modified trans-scleral, submacular suprachoroidal injection was tried and found to be satisfactory for the study. Intraperitoneal (IP) injection of paraldehyde (1 ml/kg) was used for anesthesia. The total calculated dose was injected at two sites (gluteal area and upper arm) to prevent tissue necrosis. As paraldehyde reacts with plastics, only glass syringes were used for giving the injection. Asepsis of the surgical site was achieved using mercurochrome paint. Lateral canthotomy was done after placing a lid speculum. A limited conjunctival peritomy was made to enable isolation and temporary disinsertion of the lateral rectus muscle. Then, the inferior oblique muscle was identified and its insertion was carefully traced. The anterior end of its insertion was found to be about 9 mm behind the midpoint of the lateral rectus insertion. Then, a point just behind the insertion of the inferior oblique muscle was marked on the sclera. At this point, under the operating microscope, the sclera was punctured carefully using a 27-gauge needle mounted on a tuberculin syringe. The needle was carefully advanced until the scleral resistance gave way, whereupon it was withdrawn. Through the same opening, a 30-gauge needle (with a blunted tip) was inserted first vertically, then tangentially into the globe for about 0.5 mm. Then 0.075 cc of saline or suspension was injected slowly without altering the orientation of the needle. The needle was withdrawn and the site was compressed for 2 min using a cotton applicator. The globe was then brought back to the primary position and ophthalmoscopy was performed to confirm the creation of a dark gray elevated area in the macular region. Reinsertion of the lateral rectus muscle was followed by final closure of the peritomy. By this method, a lesion primarily in the macular region was produced. Since the vitreous was not disturbed, unhindered documentation of the retinochoroidal changes was possible using both ophthalmoscopy and FA. The H37Rv strain of live tubercle bacilli, grown on Lowenstein–Jensen medium and sensitive to streptomycin, was obtained from the Department of Microbiology (AIIMS) and used in the study. The desired focal lesion at the macula could be created using a dose of 0.3 mg/ml. For the dead inoculum, the suspension of organisms was kept in a boiling water bath for 30 min. A smear was made and stained by the Ziehl–Neelsen (ZN) method to confirm the absence of any live bacilli. FA was performed by injecting 1 ml of sodium fluorescein dye (20%) through the cannulated femoral vein. The eyelids were kept open using a lid speculum, and fundus images during the arterial, arteriovenous, and venous phases were captured using a Zeiss fundus camera.
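A quick arithmetic sketch, under the stated suspension concentration and injection volume, of the bacillary load delivered per eye; the per-eye mass is computed here and is not a figure quoted in the study.

```python
# Back-of-envelope check of the inoculum delivered per suprachoroidal
# injection, from the stated suspension concentration and injected volume.
# The resulting per-eye mass is computed here, not quoted from the study.
concentration_mg_per_ml = 0.3    # live or dead bacilli suspension
injected_volume_ml      = 0.075  # volume injected into the suprachoroidal space

delivered_mg = concentration_mg_per_ml * injected_volume_ml
print(f"Bacillary load per eye: {delivered_mg:.4f} mg")  # 0.0225 mg

# Palpebral tuberculin reaction grades used for screening (per the AIIMS
# primate facility guidelines quoted above); only animals below Grade '+'
# entered the study.
reaction_grades = {
    "0":    "no reaction",
    "+":    "erythema with or without edema",
    "++":   "edema with ptosis",
    "+++":  "complete ptosis",
    "++++": "complete ptosis with marked edema",
}
```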
FA was carried out at 48 h, day 7, day 14, and day 30. Twelve rhesus monkeys were selected for the study and allotted to three groups: Group 1 (control group, n = 3), Group 2 (dead bacilli injection, n = 3), and Group 3 (live bacilli injection, n = 6). The three groups were injected with 0.075 cc of sterile normal saline alone, a suspension of 0.3 mg/ml dead bacilli in 0.075 cc of normal saline, and a suspension of 0.3 mg/ml live bacilli, respectively. Monkeys in Group 3 were further subgrouped into those receiving injection streptomycin (n = 2), injection dexamethasone (n = 2), and no treatment (n = 2). All eyes were enucleated at the end of the planned experiment period and fixed in 10% formalin fixative. After 48 h, sections (involving the macular area) were made and stained with both hematoxylin and eosin (HE) and ZN stain. In the past, obtaining histopathological sections through the macular area in enucleated specimens had been challenging for ocular pathologists. Hence, in this study, a novel modification of leaving behind the insertion stump of the inferior oblique during enucleation was adopted. Tissue sectioning within 1 mm of the inferior oblique insertion helped in obtaining microscopic details through the macula. During the follow-up period, it was found that all three groups had elevation of the macular area immediately after the injection. In Group 1 eyes, there was persisting elevation in the macular area at 48 h, and the rest of the vitreous and fundus appeared unremarkable. By day 7, the macular elevation had regressed. It had completely disappeared by day 14. No abnormality was observed on FA and histopathology. In the dead bacillus group (Group 2), at 48 h, there was persisting macular elevation along with overlying vitreous haze. On FA, multiple, small hyperfluorescent lesions were observed. There was no systemic change in the animal. At day 7, FA showed persistence of the lesions with more pronounced hyperfluorescence. Histopathology showed serous detachment at the macula with clumps of polymorphonuclear cells on HE staining. No acid-fast bacilli (AFB) were seen on ZN staining. By day 14, the media had cleared significantly. Multiple, small lesions with well-defined margins were seen above the fovea. Thirty days after injection, four small, well-defined chorioretinal scars with adjacent mild pigment clumping were noticed, indicating signs of evolving regression. No AFB were seen on ZN staining. In the live bacillus injection group (Group 3), vitreous haze over the region of macular elevation was seen at 48 h. A single irregular hyperfluorescent lesion was noted on FA. By day 7, the haze had worsened but two lesions with fluffy margins were evident. General examination of the animal was normal. One eye was subjected to histopathological examination (HPE) at this stage, and AFB were seen in the retina and choroid. A massive chorioretinal reaction with lymphocytes and giant cells was noted on HPE. Following streptomycin injection (Group 3a), clearing of the media and resolution of the lesion were seen by day 30. Streptomycin was given intramuscularly at a dose of 30 mg/kg daily for 30 days. HPE showed few giant cells and lymphocytes. No caseation was noted and no AFB were seen. Following intravenous injection of dexamethasone phosphate 100 mg (Group 3b) for 20 days, both the vitreous haze and the number of lesions had increased. No worsening of the general condition was seen. AFB were demonstrable on ZN staining of the HPE specimen, which also showed lymphocytes and giant cells.
Forty-five days after injection, a chorioretinal scar was noted and confirmed on HPE. In Subgroup 3c (no treatment), lesions were most intense at day 7. Gradual resolution was observed until the end of the study period, and unlike the dexamethasone group, there was no exacerbation. The rate of lesion resolution was, however, slower than in the streptomycin group. In summary, this study using the HKT-RPC model of experimental tuberculous maculopathy demonstrated that lesions were not related to the trauma of injection; lesions produced by injection of dead bacilli were early in onset and showed early spontaneous healing, whereas lesions produced by injection of live bacilli had a late onset and healed late unless treated with streptomycin. In addition, dexamethasone injection worsened the severity and prolonged the duration of the lesions. Another important observation was the demonstration of lymphocytes, giant cells, and AFB on HE and ZN staining, respectively. In 1998, a study was carried out using polymerase chain reaction (PCR) (serum ribosomal nucleic acid method for IS6110) for M. tuberculosis in patients with anterior uveitis (granulomatous and nongranulomatous) and multifocal choroiditis; uniformly negative results were seen in both cases (n = 30) and controls (n = 10). One of the reasons for the negative result in all samples was the presence of PCR inhibitors in tissue fluid. More recently, it has been reported that IS6110 positivity is significantly lower in the Indian scenario compared to the immunogenic MPT64 protein. A subsequent case–control study in patients with Eales disease (n = 31) used vitreous samples to study the presence of the highly immunogenic M. tuberculosis protein MPT64 by PCR. This study did not reveal a statistically significant difference (P = 0.058). However, 50% of the epiretinal membranes that were also examined during this study were positive for MPT64 protein. As with most PCR-based reports from our country, final interpretation was done using the electrophoretic method. Over a 2-year period, we analyzed patterns of uveitis presenting to our center using a prospectively enrolled database. In this study, we found that 5% of our patients had uveitis in association with a past or current definitive diagnosis of pulmonary or extrapulmonary tuberculosis. We are yet to consider a diagnosis of presumed ocular tuberculosis as a distinct entity and do not treat patients with antitubercular drugs, for several reasons: the lack of Level 1–Level 2 evidence; the several limitations and unanswered questions of the suggested clinical, serological (nucleic acid amplification assays and interferon gamma release assays [IGRAs]), and tissue (aqueous and vitreous) approaches; the known side effects of antitubercular drugs (14.1%); and concerns about promoting the development of drug-resistant strains. In addition, 25% of patients have been reported to have relapses despite taking a full course of antitubercular treatment, and these patients responded well to an increase in the dose of corticosteroids or immunosuppressants. Hence, in terms of recurrences too, these results suggest that long-term remission can be achieved with comparable efficacy using steroids (within a physiological maintenance dose) and immunosuppressants alone. Reports available in the literature show that a statistical reduction in recurrence rates with the concurrent use of anti-tuberculosis treatment (ATT) is largely confined to anterior uveitis.
Whether the benefit of using ATT in anterior uveitis outweighs the other concerns needs a more detailed and independent evaluation. In most studies on presumed ocular tuberculosis, including serpiginous choroidopathy, the tuberculin skin test (TST) and IGRAs have been recommended as important tools for diagnosis. Hence, in the second study, in fifty patients with varied forms of serpiginous choroidopathy in whom results of TST and IGRA were available, we looked at the results retrospectively. IGRA and TST positivity was seen in 60% and 56%, respectively. However, only 38% of patients showed positive results for both TST and IGRA, and the agreement between the two tests was found to be poor (0.2) (manuscript accepted for publication in NMJI, September 2016). Similar discordance has been reported in the literature. Hence, how should one manage a patient with positive TST and negative IGRA, and vice versa? This becomes a clinical dilemma, particularly when there are no facilities for undertaking NAAT-based tests and the guidelines on when to perform an aqueous or vitreous tap in such patients are not clearly laid out. To see whether commonly performed tests for tuberculosis, TST and chest radiography (CXR), gave different results in patients with serpiginous choroidopathy, we conducted another study evaluating the results of these tests in three groups of patients: serpiginous choroidopathy ( n = 40), nonserpiginous, nonpresumed ocular tuberculosis uveitis ( n = 40), and a noninflammatory retinal pathology, CSCR ( n = 40). The percentage TST positivity in the three groups was 58%, 40%, and 43%, respectively ( P = 0.237), and the percentage of patients showing lesions on CXR was 10%, 12.5%, and 7.5%, respectively ( P = 0.727). So again, these observations suggest that TST and CXR cannot be used as evidence for making a diagnosis of presumed ocular tuberculosis. M. tuberculosis is known to disseminate hematogenously and so, theoretically, it can produce retinal and choroidal diseases. However, beyond the presence of choroidal tubercles and tuberculomas, is there unflinching evidence that it produces the more frequently associated conditions like uveitis and vasculitis? The answer is no. Finding an answer has been difficult owing to a difficult gold standard (culture) with which to compare results, the widespread presence of latent and active tuberculosis in endemic countries, the slow-growing and paucibacillary nature of M. tuberculosis , the inability to safely obtain adequate tissue samples, the lack of tests to distinguish active from latent disease, the lack of concordance between available tests, high reliance on commercial PCR and in-house PCR-based results, which are highly prone to contamination, and poor positive results when using the WHO-recommended fully automated, rapid PCR (GeneXpert). The solution may be to go back to using animal models like the HKT-RPC model described herein and try to understand the actual tissue and immune interaction between M. tuberculosis and the inner layers of the eye. Transfection experiments are another option that needs to be explored. Using these methods, it may also be possible to fulfill Koch's postulates for a cause–effect relationship with respect to an infectious etiology or an identical M. tuberculosis immunity-effect relationship. Hopefully, in the coming decade, revisiting experimental studies using appropriate animal models will help finally solve the enigma of uveitis and retinal vasculitis associated with tuberculosis. Financial support and sponsorship Nil.
Conflicts of interest There are no conflicts of interest.
Virtual Case-Based Learning: A New Strategy for Humanized Digital Medical Teaching and Training in Cardiology
Over the years, Problem-Based Learning (PBL) has been a pedagogical practice employed in medical education. This teaching-learning method recommends guided activities built around problematized clinical cases and aims to enable students to discuss diagnoses, therapeutic approaches, and other aspects of the clinical reasoning faced daily in the profession. In line with current challenges, medical education is known to have undergone rapid change worldwide. The greatest challenge faced by teachers lies in providing students with opportunities to develop, and stimulating in them, the essence that goes beyond the clinical reasoning built in classrooms and laboratories: the bond with the patient. Within the university, students are inevitably able to develop cognitive and scientific excellence. However, affection and humanized care are only experienced when immersed in real practice. Traditionally, the context of patient care and physical contact with the patient have been offered only during clerkships or internships. Thus, the consolidation of new educational paradigms demands the implementation of strategies that transform students into competent professionals. This ongoing pursuit has contributed to the emergence of innovative active methodologies for teaching, learning, and assessment. The method and phases that make up clinical simulation have greater educational potential than traditional teaching methods with regard to knowledge development and the training of specific skills, owing to the opportunity to experience simulated clinical scenarios close to reality. However, because it is an in-person teaching-learning proposal that uses mannequins or simulated patients, quantitative and qualitative research programs are needed to substantiate the results achieved in different contexts, so that they can be replicated and synthesized in educational science. Based on these educational paradigms and on needs not yet met, a novel model of simulated learning very close to reality, called "Virtual Case-Based Learning (VCBL)", was developed and tested. VCBL offers a potential solution to the limitations of traditional simulation by adopting hybrid (in-person and remote) teaching for a better in-person experience with the patient, without compromising patient safety. For this purpose, an innovative teaching platform created to humanize the digital learning interaction was used. Therefore, the objective of this study was to evaluate the knowledge and satisfaction of medical students before and after the use of a new humanized model of active medical teaching methodology called VCBL. Study design and population This is an exploratory, descriptive, document-analysis study. The study comprised the collection and recording of data on theoretical knowledge and on a self-confidence and satisfaction instrument applied to 167 eighth-semester medical students at a public university in southern Brazil. The study population was divided into two periods, 2018 and 2019. Students who took the cardiology course in 2018 were trained through the PBL model, used for 20 years at the university evaluated (the first in Brazil to use the method).
In 2019, the learning model applied was VCBL, a model created as a new active teaching methodology tool. The steps of the protocol and of the two learning models used in the study are presented in . For students in the eighth semester of the medical course in 2018, the Cardiology course was offered according to the traditional PBL method, as follows: 1) the teacher presents the clinical case to be studied; 2) the students search the literature for the necessary content and present the resolution of the problem. In this model, the teacher stimulates decision-making and theoretical clinical reasoning among the students in the group through tutorial discussion and lectures. To conduct the problematization, following the steps described above, the class was divided into groups of 10 students per station. The small-group discussions used support material produced in PowerPoint and the clinical case in text form. In 2019, the group of students taking this course underwent the newly proposed active medical teaching methodology, called VCBL. The method relies on an interactive virtual platform of humanized clinical cases, covering the same clinical cases discussed in the PBL method (chronic coronary insufficiency, chronic heart failure, atrial fibrillation, arterial hypertension, and dyslipidemia), but presented in a humanized, interactive, simulated manner using the Paciente 360 platform. VCBL comprises the same steps as PBL, with the addition of interactions with the Paciente 360 platform in synchronous (with teacher support) or asynchronous (without teacher support) form, for self-reflection on humanized clinical reasoning. To assess students' cognitive knowledge in both periods, the same theoretical assessment of 25 multiple-choice questions was applied. The questions covered all the content presented in the cardiology course throughout the module, namely: chronic and acute coronary insufficiency, chronic and acute heart failure, arrhythmias, arterial hypertension, and dyslipidemia. Accordingly, the topic, time to completion, degree of difficulty, and question-discussion stage were similar between the periods studied. In addition, after the theoretical assessment, the 2019 students completed a satisfaction instrument about the VCBL teaching method and the use of the Paciente 360 platform. Active medical teaching methodology tool VCBL was applied through a digital platform for active medical teaching with realistic simulation of clinical cases. The platform presents clinical cases with real people and allows the student to interact and make decisions at every stage of a medical consultation across different topics and specialties. The tool thus provides, in a humanized, interactive, and innovative way, the empathy and affectivity needed for medical learning. The platform, called Paciente 360, was developed to help improve the academic quality of medical education and to allow a better academic connection with new generations of students. It has been used since 2019 in universities in and outside Brazil.
In the asynchronous module, the student, from home or any other location and without the help of a teacher or tutor, can see patients with different simulated diseases, take the history, perform the complete physical examination, order and analyze the results of laboratory and imaging tests, make the diagnosis and, at the end, choose the management that best applies to the case ( ). The tutor provides feedback on correct and incorrect decisions and can also, through the synchronous module, present the clinical case and conduct the discussion of all stages with groups of students. Data collection The theoretical assessment consisted of 25 multiple-choice questions and evaluated students' cognitive knowledge in 2018 and 2019. The satisfaction and self-confidence instrument regarding the current learning, applied in 2019, consisted of five Likert questions constructed by the cardiology faculty of the same university. Satisfaction with the current learning was assessed through two questions scored from 0 to 10: 1) "On a scale of 0 to 10, how likely are you to recommend Paciente 360 to a friend?"; and 2) "On a scale of 0 to 10, how do you rate the VCBL humanized interactive clinical-case methodology used in the current Cardiology module compared with the traditional PBL clinical-case methodology used in the previous modules of the same period (nephrology and pulmonology)?" In addition, three questions rated the gain in self-confidence as "poor, satisfactory, good, very good, or excellent": 3) "How do you rate your learning after using Paciente 360?"; 4) "Do you feel better prepared for outpatient care?"; and 5) "How do you rate the content discussed?". For data collection, an instrument was built to identify, organize, and record the individual scores on the theoretical assessment applied in 2018 and 2019 and on the satisfaction instrument applied only in 2019. The steps proposed in the literature were used, such as gathering and organizing the available material, interpreting the data, and critically analyzing the documents. Statistical analysis Descriptive analysis was performed using absolute and relative frequencies for categorical variables; for continuous variables, means and standard deviations were calculated. Comparisons between means of continuous variables were analyzed using Student's t test after confirmation of normal distribution by the Kolmogorov-Smirnov test. Data were analyzed using the Statistical Package for the Social Sciences (IBM SPSS Statistics for Windows, Version 20.0. Armonk, NY: IBM Corp.). For all analyses, a statistical significance level of p<0.05 was adopted. Ethical considerations The research ethics committee for human subjects of the Universidade Estadual de Londrina was consulted for this study, which was cleared without the need for informed consent, since all participants were informed about the purpose of the research and were guaranteed anonymity.
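For illustration, a minimal sketch (in Python with SciPy, not the authors' SPSS workflow) of the statistical procedure described above: a Kolmogorov-Smirnov check for normality followed by Student's t test at p<0.05. The score arrays are hypothetical placeholders, not study data.

```python
import numpy as np
from scipy import stats

# Hypothetical percentage scores on the 25-question theoretical assessment
scores_2018 = np.array([44.0, 40.0, 52.0, 36.0, 48.0, 40.0])  # PBL cohort
scores_2019 = np.array([72.0, 80.0, 68.0, 76.0, 70.0, 74.0])  # VCBL cohort

for label, scores in (("2018 (PBL)", scores_2018), ("2019 (VCBL)", scores_2019)):
    # Standardize the sample, then test it against the standard normal distribution
    z = (scores - scores.mean()) / scores.std(ddof=1)
    _, ks_p = stats.kstest(z, "norm")
    print(f"{label}: Kolmogorov-Smirnov p = {ks_p:.3f} (assume normality if p >= 0.05)")

# Student's t test for independent samples (equal variances assumed)
t_stat, p_value = stats.ttest_ind(scores_2018, scores_2019)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Statistically significant" if p_value < 0.05 else "Not statistically significant")
```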
Eighty-seven mandatory formative theoretical assessments from the 2018 cardiology class were analyzed. In 2019, 80 theoretical assessments were analyzed and, of these students, 17.5% missed the seven-day deadline to complete the instrument on satisfaction with the VCBL model as an active teaching methodology ( ). The comparison including non-responding students is presented in the supplementary material ( Table S1 ). The comparison of the mean percentage scores on the theoretical knowledge assessment is presented in .
Students in 2018 obtained a mean score of 41.7%, ranging from 20.0% to 60.0%, while students in 2019 achieved a mean of 73.3%, ranging from 44.0% to 92.0% (p<0.001). Regarding satisfaction with the current learning, 76.0% of students gave the maximum rating (9-10) to question one and 83.0% to question two, as shown in . About 70.0% of students rated the learning acquired after using the Paciente 360 platform as "very good"; 78.0% judged their feeling of being prepared for outpatient care as "good" or "very good"; and 94.0% rated the approach to the content through the new learning proposal as "very good" or "excellent" ( ). Upon entering the clinical field, medical students face numerous conditions that require the integrated application of theoretical knowledge and practical skills, combined with the development of humanization and empathy with the patient to ensure comprehensive care. Studies corroborate that traditional teaching-learning models have not met the requirements of the contemporary environment of medical practice, in which there is a gap between training and comprehensive, humanized clinical practice. Currently, realistic simulation has been used by several universities with the aim of training professionals who meet the demands of the job market, most of them using non-humanized simulation mannequins or avatars. Authors of a recent study revealed a limitation of this method, concluding that the phases of realistic simulation do not allow the student to develop empathy and socialization with a real patient, and proposed that new methods be created for this purpose. The present study demonstrated the feasibility and efficacy of the newly proposed simulated learning model so that other medical schools can replicate it. This method proved effective in the formative assessment of theoretical knowledge in the cardiology course. The mean score obtained by the 2019 students exceeded that of the 2018 students by more than 30 percentage points, showing that the teaching-learning process was enhanced after the experience with the steps proposed by the VCBL method. Integrated simulation technology is undergoing rapid development, and digital medical education is playing an increasingly important role in training medical students' knowledge and clinical skills. Currently, no simulation realistically portrays all the physiological, mental, and behavioral components of patient care. Therefore, recognizing students' self-confidence and satisfaction when participating in new strategies contributes to their improvement. All students in this research would recommend the Paciente 360 platform to a friend; of these, 76.0% chose the highest-rating options (9-10) on the satisfaction instrument. Approximately 90.0% of individuals rated the learning acquired after using the platform as "very good" or "excellent", reflecting an improvement in students' self-confidence. Steps three and five ( ) of the VCBL methodology are considered the "heart" of the new methodological proposal.
It uses the new platform as an active teaching methodology tool, focusing on a discussion of interactive humanized clinical cases, initially tutored by the teacher (synchronous) and subsequently carried out by the student as reinforcement in a flipped-classroom format (asynchronous), ensuring deeper, multi-stage, realistic learning. This interactive learning software enables virtual, in-person, or remote contact with a simulated patient during history-taking, physical examination, complementary tests, and management. The virtual physical examination allows simulation of inspection, palpation, percussion, and auscultation of all systems of the human body. In addition, during the simulated medical consultation, the student can propose diagnostic hypotheses, order and obtain test results, and plan the appropriate management to resolve the case. The teacher can likewise use the tool synchronously for the group tutorial discussion stages. Self-confidence is considered an indicator of proactivity in clinical situations leading to successful outcomes. The professional must therefore feel capable of acting appropriately; otherwise, unnecessary delays in care, increased anxiety levels, and more errors may occur. More than 80.0% of students gave the highest ratings (9-10) to the teaching methodology used in the cardiology module studied compared with the methodology used in previous modules. Approximately 80.0% of students rated their feeling of being prepared for outpatient care as "good" or "very good", and 94.0% indicated that the approach to the content in this format was very good or excellent. The results of this research corroborate scientific work that used the VCBL proposal. Use of the strategy provides immersion and brings the audience closer to the topic, and it broadens access to health education through real, humanized interactions. Moreover, after a pilot clinical-care practice activity evaluating a virtual patient with a cardiology complaint, students showed 70.0% positive reactions on the Net Promoter Score. Both studies state that the Paciente 360 platform is a suitable teaching model for continuing, humanized medical education in cardiology, as it promoted a high degree of participant satisfaction, perceived knowledge acquisition, and preference for the digital model of clinical case discussion. Some methodological limitations must be addressed for the correct interpretation of the results of this study. Data from 2018 were collected retrospectively, and in that period the only assessment instrument available was the theoretical assessment. In 2019, the same assessment method was used, with the addition of satisfaction and self-confidence instruments. It was therefore possible to perform important comparative analyses and add distinguishing data to the novel VCBL method. Furthermore, although there is no direct measure of how much the method contributed to knowledge, since the higher scores may stem from other institutional processes, use of the platform gave students a high degree of satisfaction and the opportunity for simulated, realistic, and humanized immersion in clinical cases, possibly accounting for the increased engagement and interest of students in the cardiology course.
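For reference, a minimal sketch of the Net Promoter Score computation cited above, in which 0-10 ratings are split into promoters (9-10), passives (7-8), and detractors (0-6); the ratings list is a hypothetical placeholder, not study data.

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6) among 0-10 ratings."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / n

# Hypothetical 0-10 responses to "How likely are you to recommend Paciente 360?"
ratings = [10, 9, 9, 8, 10, 7, 9, 10, 6, 9]
print(f"NPS = {net_promoter_score(ratings):+.0f}")  # +60 for this sample: 70% promoters, 10% detractors
```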
Even so, it is important to note that the term humanization is treated polysemously in the scientific literature, and this new strategic pedagogical proposal can be used to promote humanization in Brazilian medical education. The present study indicated an improvement in the teaching-learning process of medical students after use of the VCBL model compared with the traditional PBL method, even with the limitations presented in the study. In addition, students showed great satisfaction when using the new active medical teaching methodology tool, the Paciente 360 platform. The software provided humanized, immersive, and realistic learning. Although further research is needed to establish the efficacy of the teaching strategy and the tool used, it is hoped that this model, grounded in active medical teaching methodology aimed at generations X, Y, and Z, will encourage different universities to implement the method and create similar ones. Therefore, to help build better and more up-to-date medical curricula, students should have expanded opportunities to experience simulated, interactive, digital, and humanized teaching.
Molecular Underpinnings of Brain Metastases
Brain metastases (BMs) are the most commonly diagnosed central nervous system (CNS) tumors in adults in the United States, estimated to occur up to ten times more frequently than primary brain tumors . Advancements in the treatment of primary tumors have prolonged patient survival, increasing the pool of prevalent cancer patients at risk for BMs . Coupled with advancements in neuroimaging and increased physician and patient awareness, diagnoses of BM can be made earlier, helping to improve patient outcomes. Lung cancer, breast cancer, and melanoma are widely recognized as the three most prevalent causes of BMs , with renal and gastrointestinal primaries accounting for a substantial fraction of BMs in certain populations . The challenge remains in characterizing the mechanisms by which BMs are initiated and how they progress. This encompasses exploring the molecular and genetic underpinnings of tumors linked to BMs, the factors influencing brain tropism, the dynamics between tumor cells and the brain's microenvironment, and the key mechanisms driving therapy resistance. Developments in molecular science in recent decades have allowed researchers to obtain more information on the intrinsic metastatic progression of tumors, but the brain remains an organ in which investigation of these fundamental molecular underpinnings has proved limited. In this review, we compile the most up-to-date information and recent research on metastatic mechanisms in the brain, focusing on the specifics of breast cancer, lung cancer, and melanoma, in the hopes of uncovering research gaps that can be further investigated to improve targeted therapies and patient outcomes. Despite their high frequency, there is a lack of systematic, nationwide reporting of BMs . Although precise figures are unknown, more than 100,000 people are diagnosed annually , and it has been estimated that around 20% of all patients with cancer will develop BMs . However, only estimates can be made, since incidence data on BMs from all cancer sites are not widely available, and most studies determine incidence, prevalence, and prognosis using data from the National Cancer Institute's (NCI) Surveillance, Epidemiology, and End Results (SEER) system , which takes the presence of synchronous BM at the time of primary tumor diagnosis as its only metric . Breast cancer, lung cancer, and melanoma are among the tumors most commonly associated with BMs . Autopsy studies report the incidence of BMs from lung tumors to be as high as 52%, but this varies depending on the histology of the tumor, the patient's sex, and the stage at the time of diagnosis . For BMs from breast cancer, autopsy studies have reported an incidence of around 18–30%, while studies utilizing the SEER database usually report lower numbers [ , , ]. This is largely because BMs are rarely present at the time of primary tumor diagnosis, excluding them from the database. Melanoma, ranked as the third leading cause of BMs, has shown incidence rates varying from 6% up to 28% in some population-based studies . Research has demonstrated that BMs are more prevalent in melanomas located in the head and neck region compared to those on the extremities or trunk. This increased incidence is especially notable among male patients, younger individuals, and cases with greater Breslow thickness . Further explorations of patient characteristics, such as age, sex, and ethnicity, and their role in BM incidence can be found in the work of Parker et al. .
The presence of synchronous BM at the time of primary tumor diagnosis is associated with poorer survival than finding extracranial metastases only, with a median survival time of only 5 months . A study comparing median overall survival (OS) in BM patients from different solid tumors showed that breast cancer patients had the shortest OS, at 9.9 months, in contrast to 10.3 and 13.7 months in melanoma and non-small cell lung cancer (NSCLC), respectively . In contrast, another study suggested that, with an OS of 10 months, breast cancer had longer survival than melanoma and NSCLC . The molecular subtype of the primary tumor significantly influences both the risk of BM development and the associated prognosis . In breast cancer, patients with human epidermal growth factor receptor 2 (HER2)-positive, hormone receptor-negative subtypes and those with triple-negative subtypes (negative for estrogen receptor (ER), progesterone receptor (PR), and normal HER2 expression) exhibit a higher incidence of BMs . Notably, the triple-negative subgroup has the shortest median survival, at just 6.0 months . In NSCLC, BMs are more frequently observed in cases with epidermal growth factor receptor (EGFR) mutations or anaplastic lymphoma receptor tyrosine kinase (ALK) rearrangements, with over 45% of patients developing CNS involvement within the first three years of survival . Melanoma is the tumor with the strongest affinity for the CNS, with some autopsy studies reporting CNS involvement in up to 75% of cases . Jakob et al. found that CNS involvement was significantly higher in patients with mutations in the v-raf murine sarcoma viral oncogene homolog B1 ( BRAF ) and neuroblastoma RAS viral (v-ras) oncogene homolog ( NRAS ), occurring in 24% and 23% of cases, respectively . In comparison, patients with wild-type genes exhibited a CNS involvement prevalence of only 12% . Knowledge of the epidemiology of BMs has improved screening, diagnostic, and treatment standards, and these data have been used to create prognostication tools such as the recursive partitioning analysis (RPA) score and the newer Graded Prognostic Assessment (GPA) . GPA, in particular, integrates patient data such as age, Karnofsky Performance Score (KPS), number of BMs, and presence of extracranial metastases with histologic and molecular data from the primary tumor in a diagnosis-specific index (DS-GPA) . An updated, user-friendly GPA calculator can be accessed at BrainMetGPA.com. There is an imperative need for improvement in the reporting of BMs to help identify patients at the highest risk for BMs. Moreover, this could promote research into the mechanisms that make these patients develop BMs in the first place and the development of targeted therapies. Metastatic lesions have a different molecular and genomic landscape than their primary tumors . Phylogenetic analyses of tumor variations and migration histories have shown that clonal branching can occur within the primary tumor as often as it occurs after egressing the primary tumor , making circulating tumor cells (CTCs) and circulating tumor DNA better resources for capturing the tumor heterogeneity of metastatic progression . Genome-wide sequencing analyses of CTCs have identified some mutations that can be considered crucial for determining organ tropism. Brastianos et al. sequenced matched BMs, primary tumors, and normal tissue from 86 patients and observed branched evolution from a common ancestor in the metastatic lesions .
They found that extracranial and lymph node metastases diverged from BMs but that even spatially and temporally separated BMs were genetically homogeneous to each other, indicating that the genetic alterations acquired by brain-tropic tumor cells are different from those in other metastatic sites and probably grant an advantage for survival in the brain . To identify metastatic signatures, researchers developed a MetMap using 500 cell lines from 21 different solid tumor types in mouse xenograft models. This map revealed patterns of metastasis that are specific to certain organs and linked these patterns to various clinical and genomic characteristics . The MetMap can help determine common molecular and genetic alterations that enhance metastasis to certain organs and potentially find new therapeutic approaches. The complex evolution tumor cells must endure to reach the brain and survive in it requires the cooperation of genetic, epigenetic, transcriptomic, metabolic, and immunologic factors and will only occur in brain-tropic cells, prepared to go through all those changes. offers a summary of specific mutations associated with brain tropism in different primary cancer types. 3.1. Genetic Features of Breast Cancer Brain Metastases Advancements in gene expression profiling have greatly enhanced our understanding of breast cancer. The expression levels of three pivotal receptors in breast cancer—estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2/neu)—help categorize it into four primary molecular subtypes. These include luminal A (ER-positive, PR variable, HER2-negative), luminal B (ER-positive, PR variable, HER2-positive or negative), HER2-enriched (ER-negative, PR-negative, HER2-positive), and basal-like (ER-negative, PR-negative, HER2-negative). Notably, the basal-like subtype constitutes the majority of “triple-negative” breast cancers . Breast cancer molecular subtypes have preferential sites for metastases and possess a protein profile associated with homing of the metastatic site. A large, registry-based, single-institution study showed that patients with the HER2-positive and triple-negative subtypes had the highest incidence of BMs . This follows the line of previous reports showing that basal-like/triple-negative tumors pose a higher risk for BMs, with HER2-enriched in second place . However, HER2 and hormonal receptor statuses are hypothesized to shift upon reaching the brain . Similarly, triple-negative brain-metastatic cells exhibit elevated β2-adrenergic receptor mRNA and protein levels compared to their primary tumor counterparts, enhancing their proliferative capacity . Triple-negative and basal-like breast cancer subtypes have been found to compromise the blood–brain barrier (BBB), whereas BMs from HER2/neu-positive breast cancers generally preserve BBB integrity . This prompts the need for HER2/neu-positive cells to find alternative pathways to penetrate the BBB. HER2-HER3 dimers can form in breast cancer BMs and preferentially link to their ligand heregulin (also known as neuregulin-1) in the endothelial cells of the BBB . Heregulin and HER2 signaling induces activation of extracellular cathepsin B and matrix metalloproteinase (MMP)-9, on which transmigration through the BBB is dependent . MMP-9 enhances extracellular proteolysis and is upregulated by the metalloprotease-disintegrin ADAM8 , which is highly expressed in all BMs but particularly breast cancer cells . 
Cathepsins, on the other hand, are a family of proteases involved in protein degradation and processing, and their increased expression and activity could promote angiogenesis, invasion, and cell proliferation in some cancers . In BMs, cathepsin S is produced by macrophages and breast cancer tumor cells, and it mediates BBB transmigration via proteolytic cleavage of the junctional adhesion molecule (JAM)-B . Heregulin also upregulates intercellular adhesion molecule 1 (ICAM1), which is linked to increased invasion, motility, and metastasis in breast cancer . Gene expression profiling of paired primary breast carcinomas and their corresponding BMs identified the upregulation of 1314 genes and the downregulation of 1702 genes in BMs relative to the primary tumors . This study also showed activation of the HER2 pathway and gains in transcript and protein expression of rearranged during transfection ( RET ) gene in BMs, both linked to disease progression . Interestingly, there was no loss in PTEN expression in the analyzed specimens, which has been reported as a driver for BMs induced by astrocytes . A nationwide cohort study in Finland determined that basal-like subtypes tended to first metastasize to the brain and had a protein profile with high expression of neural cell stemness-linked proteins nestin and prominin-1, which could potentially help breast tumor cells adapt to the brain . Jin et al. trialed their MetMap with breast cancer cells that metastasized to the brain and demonstrated these cells present an altered lipid metabolism, which is necessary for tumor cell survival within the brain microenvironment . Increased fatty acid synthase ( FASN ) gene expression in breast cancer cells has been identified as a way to overcome low lipid availability in the brain . Enzymes associated with glycolysis, the tricarboxylic acid cycle, and oxidative phosphorylation pathways show elevated expression and could further promote efficient energy production via glucose oxidation. Furthermore, the pentose phosphate pathway and the glutathione system demonstrate heightened activity, contributing to the reduction in reactive oxygen species . Whether reprogramming occurs before seeding the brain or is induced by the lipid-poor environment of the brain is not known, but it is suggested that high FASN expression increases cell propensity to colonize the brain . Increased levels of fatty acid binding protein 7 (FABP7) are also seen in HER2-positive breast cancer BMs, and besides its role in metabolic reprogramming, FABP7 upregulates metastatic genes and pathways, such as Integrins-Src, MEK/ERK, Wnt/β-catenin, and vascular endothelial growth factor (VEGF)-A . Overexpression of proteins involved in fatty acid synthesis and degradation, as well as glucose-regulated protein 94 (GRP94), help cells compensate for the hypoglycemic stress they are subject to in the brain . Other gene signatures expressed in breast cancer cells that metastasize to the brain include COX2 , HBEGF , and ST6GALNAC5 , which mediate BBB migration ; PCDH7 , involved in linkage and interaction of tumor cells with astrocytes ; and GRIN2B, particularly increased in triple-negative breast cancers and involved in coding the GluN2 subunit of the NMDAR . The glutamate-stimulated GluN2B-NMDAR signaling axis activation in cancer cells promotes colonization and metastatic tumor growth in the brain by forming pseudo-tripartite synapses in which tumor cells act as an astrocyte . 
The MYC oncogene is highly expressed in CTCs from metastatic variants of breast cancer, and it regulates the adaptation of CTCs to the brain environment by reducing the oxidative stress produced by activated microglia via gene upregulation of glutathione peroxidase 1 (GPX1) . A specific protein signature in CTCs consisting of HER2+/EGFR+/HPSE+/Notch1+ was termed “brain metastasis selected markers” by Zhang et al. and proved to increase CTC BMs compared with the parental CTC lines . SERPINA5 is significantly upregulated in breast cancer BMs, which induces the production of anti-PA serpins . Also, the GBP1 gene is upregulated in ER-negative breast cancer cells that develop BMs . GBP1 codes for the Guanylate-Binding Protein 1 (GBP1), binding activated T lymphocytes and enabling tumor cells to cross the BBB . 3.2. Genetic Features of Lung Cancer Brain Metastases Lung cancer is primarily categorized into two major histological types: non-small cell lung cancer (NSCLC) and small cell lung cancer (SCLC) . Within the NSCLC group, further classifications include adenocarcinoma, squamous cell carcinoma, and large cell carcinoma, among other subtypes . Whole exome sequencing of samples from NSCLC and SCLC patients showed that NSCLC had a higher percentage of seemingly metastases-specific mutations, suggestive of branched evolution. In contrast, SCLC samples showed low heterogeneity, which suggests these tumors spread using a parallel and linear model of evolution . In a systematic review of 72 studies comprising data from 2346 patients, the most common genetic alterations seen in BMs from NSCLC were EGFR , TP53 , KRAS (Kirsten rat sarcoma viral oncogene), CDKN2A (cyclin-dependent kinase inhibitor 2A), and STK11 . Although not considered a driver gene, mutations in the tumor suppressor TP53 are highly prevalent . These mutations are linked to the development of new distant metastases and exhibit strong concordance between primary NSCLC tumors and their corresponding BMs . Mutations in the p53 protein disrupt cell cycle control, allowing the replication of damaged DNA and resulting in uncontrolled cell proliferation . The EGFR family consists of four distinct members, all belonging to the ErbB/HER protein family: ErbB1, ErbB2, ErbB3, and ErbB4 . Mutations in some tumors can continuously activate EGFR, enhancing tumor growth, invasion, and metastasis . A meta-analysis of 26 studies demonstrated a positive association between EGFR -mutated NSCLC tumors and BMs, with an odds ratio (OR) of 1.58 (95% CI: 1.36–1.84), which confirms that EGFR mutation is a significant risk factor for BMs in NSCLC . ALK gene fusion and RET gene fusion are also positive driver genes for NSCLC BMs . ALK gene rearrangements frequently involve translocation or fusion with another partner gene, including echinoderm microtubule-associated protein-like 4 ( EML4 ), the most prevalent in NSCLC . ALK gene rearrangements result in the formation of an oncogenic ALK tyrosine kinase that persistently activates various downstream signaling pathways, including PI3K-AKT, MEKK2/3-MEK5-ERK5, JAK-STAT, and MAPK. This continuous activation promotes increased proliferation and survival of tumor cells . ALK fusions have been found to be constant between primary NSCLCs and their associated BMs . Compared to EGFR -positive groups, ALK -positive patients have a higher incidence of BMs at the time of initial lung cancer diagnosis . However, Rangachari et al. 
found a similar baseline incidence of BMs in EGFR -mutated and ALK -rearranged NSCLCs and an evolutionary increase in CNS involvement over time, with >45% of patients in both groups showing BMs after three years of survival . Larger BM tumor size has also been reported in the EML4-ALK fusion groups compared to groups without fusion . The RET protooncogene codes for a receptor tyrosine kinase and has been identified in NSCLC rearranged or fused with over a dozen partner genes, with the kinesin family member 5B gene ( KIF5B ) being the most common . RET fusion-positive NSCLCs have BMs in 25% to 50% of cases [ , , ]. Although the exact mechanism of how RET fusion promotes brain organotropism for tumor cells is not known, recent trial results in patients receiving the selective RET inhibitor selpercatinib demonstrated decreased CNS metastatic progression of RET fusion-positive NSCLC, with no CNS involvement at all in patients with no previous BMs . These results suggest RET plays a fundamental role in promoting tumor cell growth and survival in the brain. The C-ros oncogene 1 ( ROS1 ) encodes a receptor tyrosine kinase that is structurally analogous to ALK . NSCLC harboring ROS1 rearrangements exhibits a cumulative incidence of CNS metastasis comparable to that of ALK fusion-positive tumors . The incidence of BMs in ROS1 -rearranged NSCLC patients at the time of diagnosis is approximately 20–30%, while it is as high as 50% in patients post-crizotinib therapy . Crizotinib, an ALK/MET kinase inhibitor developed for ALK -rearranged NSCLC, is also effective in treating ROS1 -rearranged tumors . However, it has low BBB penetration, and even with therapy, ROS1 -positive patients commonly progress to CNS metastasis . Recent clinical trials have demonstrated the promising efficacy of novel tyrosine kinase inhibitors (TKIs) in overcoming crizotinib resistance in BMs of ROS1 -rearranged NSCLC . Additionally, MET amplification in NSCLC leads to the heightened expression and continuous activation of the Met receptor, also known as the hepatocyte growth factor receptor (HGFR) . This, in turn, promotes tumor cell migration and epithelial-to-mesenchymal transition phenotype . Moreover, MET amplification has been found enriched in NSCLC BMs compared to paired primary tumors . KRAS oncogene mutations are recognized as prevalent drivers in BMs from NSCLC, though their exact incidence varies across different studies . The RAS genes encode a family of proteins that play critical roles in regulating cell growth, differentiation, and apoptosis , and KRAS mutation has been shown to upregulate PD-L1 expression in NSCLC through p-ERK signaling . Activation of the PD-1/PD-L1 axis suppresses T-cell activity within the tumor microenvironment, allowing tumor cells to escape immune detection . This immune regulation function of KRAS -mutations may improve tumor cell survival in the brain, but it also makes NSCLC BMs with KRAS -mutations more susceptible to treatment with immune checkpoint inhibitors (ICIs) . Other less frequent mutations related to BMs from NSCLC include BRAF mutations , Cav-1 , AKT-1 , NRAS , and PTEN . Gene expression signatures able to activate the WNT/TCF pathway are associated with lung adenocarcinoma metastases to the brain and lung . The target genes HOXB9 and LEF1 within the WNT/TCF pathway play pivotal roles in facilitating chemotactic migration and promoting colony expansion in lung adenocarcinoma . 
Furthermore, overexpression of the hyaluronan receptor by lung adenocarcinoma tumor cells increases inflammation and binding to hyaluronan-rich microenvironments such as the extracellular matrix of brain metastatic niches . Aljohani et al. performed whole-genome sequencing on normal lung tissues, primary NSCLC tumors, their corresponding BMs, and CTCs. The study revealed that primary tumors contained mutations in genes associated with cell adhesion and motility. In contrast, BMs and CTCs exhibited mutations in genes responsible for adaptive and cytoprotective functions related to cellular stress responses, including Keap-1 , Nrf2 , and P300 . Several other adaptations can be seen in lung cancer CTCs. Analysis of tissue from lung adenocarcinoma and its matched CTCs and BMs using scRNA-seq has demonstrated that CTCs occupied an intermediate state between BM tumor cells, which leaned towards the epithelial phenotype, and primary tumor cells, mostly found in a mesenchymal state . Furthermore, RAC1 , highly expressed in metastatic tumor tissue, was involved in adhesion, degradation, and VEGF signaling pathways . CTCs overexpressing CD44v6 exhibited increased expression of the mesenchymal marker vimentin and reduced expression of the epithelial marker E-cadherin, thereby facilitating cell invasion and BMs through the activation of epithelial-to-mesenchymal transition . 3.3. Genetic Features of Melanoma Brain Metastases The BRAF oncogene encodes a protein that is essential for the functioning of the mitogen-activated protein kinase/extracellular signal-regulated kinase (MAPK/ERK) signaling pathway . BRAF gene mutations cause continuous MAPK/ERK activation and signal transduction, increasing cell growth, migration, and proliferation . Approximately half of advanced melanomas harbor mutations in the BRAF gene , which are linked to an increased frequency of BMs . Moreover, the incidence of BRAF mutations is higher in BMs than in primary melanomas or metastases to other organs, suggesting there may be an independent evolution of subclones . V600E is the most frequently occurring BRAF mutation in melanoma, and it also has the highest association with BMs . The loss of the PTEN protein has been shown to decrease the time to melanoma BMs in patients with BRAFV600 mutations . However, BRAF mutations alone are not sufficient for BMs to occur, and there is proof that PTEN gene silencing cooperates with BRAFV600E mutations in melanoma progression via activation of the phosphoinositide 3-kinase (PI3K)/AKT pathway . Although AKT1 activation can independently drive BM progression, it is augmented by PTEN silencing . The bidirectional communication between the PI3K/AKT/mTOR and the MAPK/ERK pathways is critical for abnormal proliferation and therapy resistance in cancer . BRAF-mutant melanomas have a significantly higher activation of AKT than NRAS -mutant melanomas . However, NRAS mutations are also a significant risk factor for BMs, conferring a higher risk of developing BMs compared to wild-type NRAS [ , , ]. NRAS forms part of the Ras gene family, which also includes HRAS (Harvey Rat Sarcoma Virus) and KRAS (Kirsten Rat sarcoma virus) . NRAS mutations are present in approximately 20% of human melanomas, whereas HRAS and KRAS mutations occur in only 1% and 2% of melanomas, respectively . Both NRAS and KRAS mutations are enriched in melanoma BMs .
These mutated genes code for constitutively active Ras proteins that stimulate multiple signaling cascades, including the MAPK/ERK pathway and the PI3K/AKT pathway . These are the same signaling cascades activated in tumors with BRAF mutations and PTEN silencing. However, NRAS -mutant melanomas exhibit normal PTEN levels, suggesting BRAF -mutant and NRAS -mutant tumors differ in their mechanisms of progression towards BMs . Moreover, concurrent mutation of NRAS and BRAF is rare . Evidence has also shown that the microenvironment in NRAS -mutant melanoma BMs is enriched in neutrophils in contrast to the primary melanoma . In myeloproliferative neoplasms, a link between NRAS mutations and neutrophil hyperleukocytosis via activation of the granulocyte colony-stimulating factor (G-CSF) has been elucidated . The exact mechanism by which neutrophils accumulate in NRAS-mutated melanoma BMs remains to be elucidated. Other driver alterations in melanoma include NF1 inactivation and C-KIT mutations . NF1 functions as an inhibitor of Ras signaling, and its loss results in continuous activation of the MAPK and PI3K pathways. Likewise, mutations in KIT , a receptor tyrosine kinase, similarly initiate the activation of these pathways . Patients with "quadruple negative" disease (no BRAF , NRAS , NF1 , or C-KIT mutations) have the lowest risk of developing BMs . This underscores the importance of MAPK/ERK and PI3K/AKT activation in the metastatic progression of melanoma and their potential as therapeutic targets. Lessard et al. identified that metastatic melanoma cells exhibit elevated levels of the long intergenic non-coding RNA CASC15, which was associated with BMs in mouse xenograft models . Additionally, increased expression of miR-301a in melanoma is linked to overall metastatic activity . Although various other non-coding RNAs have been recognized as important regulators of melanoma progression and resistance to therapy, their specific roles in BMs remain unclear . Alterations in the CDKN2A gene or the p16-cyclin D-CDK4/6-retinoblastoma protein pathway (CDK4 pathway) have been found in virtually all melanoma cell lines . CDK4 activation inhibits the retinoblastoma protein, promoting cell cycle progression, and is usually associated with tumor suppressor CDKN2A (p16INK4A) deletion, furthering melanoma cell survival . It has also been found that patients with CDKN2A deletions often display MDM2 and MDM4 amplifications, which is associated with a higher risk for metastasis to the brain . MDM2 and MDM4 are negative regulators of p53; therefore, amplifications in MDM2/4 decrease p53 function. The ubiquitin-specific protease 7 (USP7), a protein that protects MDM2/4 from proteasomal degradation, is increased in metastatic melanoma . PPM1D, another negative regulator of p53, is overexpressed in metastatic melanoma, and gain-of-function mutations in immune cells promote immune escape and proliferation . Even though CDKN2A , MDM2/4 , USP7, and PPM1D alterations are not considered driver mutations, their effects on the CDK4 and p53 pathways are pro-metastatic. The nerve growth factor (NGF) receptor CD271 is a low-affinity receptor for NGF, a member of the neurotrophin family of proteins, and is highly expressed in melanoma cells even before BMs . NGF and neurotrophin 3 (NT-3) are highly expressed in tumor-adjacent tissues in the brain, suggesting brain organotropism between the CD271-positive cells and the brain tumor niche .
CD271 has also been linked to SOX10, a specific marker of the neural crest, and it provides melanoma cells with neural crest stem cell signatures, a common ancestor between melanocytes, glial cells, and neurons . It has been reported that aggressive melanoma cells hijack neural crest-related signaling pathways to increase plasticity and facilitate invasion in the brain . BMP4 and the Wnt target gene AXIN2 are important for neural crest development and are also upregulated in BMs, suggesting melanoma acquires neuronal-like characteristics that make them highly efficient in metastasizing the brain . Neurotrophins further stimulate invasion by producing extracellular matrix degradative enzymes such as heparinase, which destroys the basement membrane of the BBB . AXL, a receptor tyrosine kinase, is involved in promoting epithelial-to-mesenchymal transition, treatment resistance, and metastasis in melanoma BMs . AXL is typically upregulated in CD271-positive BMs and may actively contribute to immune escape. This occurs through mechanisms such as reduced HLA class I expression, increased production of immunosuppressive cytokines and PD-L1, and diminished infiltration of CD8+ lymphocytes . The immune microenvironment of BMs has shown distinct characteristics when compared to their primary tumors or extracranial metastases, such as decreased IFNγ production and activated T-cells . Moreover, there are fewer inflammatory cytokines, immune cell infiltrates, and maturation of dendritic cells, whereas there is enhanced oxidative phosphorylation . Altogether, melanoma cells can invade and proliferate in the brain through a series of genetic, molecular, and immune mechanisms. 3.4. Genetic Signatures in Other Brain Metastases The mechanisms for the metastatic spread of colorectal cancer to the brain are still not completely understood. There is an association between RAS mutations in colorectal tumors, especially KRAS mutations, and increased risk of BMs [ , , ]. High expression of NFAT5 (Nuclear Factor of Activated T Cells 5), AVCR1C (Activin A Receptor Type 1C), and/or SMC3 (Structural Maintenance of Chromosomes 3) is associated with colorectal BMs . Certain gene variants are also associated with increased risk for BMs and BBB penetration, such as ST6GALNAC5 , which encodes for a sialyltransferase involved in cell–cell adhesion, and ITGB3 , which encodes integrin β3, stimulating adhesion, migration, and angiogenesis . While colorectal cancer was once believed to be an extremely rare cause of BMs, some studies have found that colorectal cancer patients can have an incidence of up to 14.6% of BMs . Most of those patients were asymptomatic at the time of diagnosis. Furthermore, synchronous lung metastases increase the risk of BMs . This finding suggests that the genomic or molecular alterations needed to metastasize to the brain are common to other sites and acquired early during metastatic progression. A genomic next-generation sequencing study in renal cell carcinoma BMs found an enrichment of the SMARCA4 gene in BM tumors in contrast with primary tumor and extracranial lesions . SMARCA4 encodes a subunit of the SWI/SNF chromatin-remodeling complex, which functions as an epigenetic regulator of gene expression and plays a critical role in tumor suppression . Renal cell carcinoma that metastasized to the brain also showed more PI3K pathway alterations, primarily PTEN inactivation, than cells that did not metastasize to the brain . 
As previously discussed with melanoma BMs, the PI3K pathway plays an important role in metastatic progression. Wyler et al. demonstrated that the expression of chemokines and their receptors plays a significant role in the propensity of renal cell carcinoma to metastasize to the brain . Specifically, the levels of the monocyte-specific chemokine CCL7 and its receptor CCR2 were found to be elevated in BMs compared to primary tumors . This suggests that the recruitment of monocytes and macrophages is a key factor contributing to the establishment of BMs . Of note, renal cell carcinoma is a highly immunogenic entity: Harter et al. demonstrated that renal cell carcinoma BMs had the highest levels of CD3+ and CD8+ lymphocytes and the strongest PD-1 expression, which correlated with smaller brain tumor sizes . Advancements in gene expression profiling have greatly enhanced our understanding of breast cancer. The expression levels of three pivotal receptors in breast cancer, estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2/neu), help categorize it into four primary molecular subtypes. These include luminal A (ER-positive, PR variable, HER2-negative), luminal B (ER-positive, PR variable, HER2-positive or negative), HER2-enriched (ER-negative, PR-negative, HER2-positive), and basal-like (ER-negative, PR-negative, HER2-negative); a schematic restatement of these definitions is sketched at the end of this passage. Notably, the basal-like subtype constitutes the majority of “triple-negative” breast cancers . Breast cancer molecular subtypes have preferential sites for metastases and possess a protein profile associated with homing to the metastatic site. A large, registry-based, single-institution study showed that patients with the HER2-positive and triple-negative subtypes had the highest incidence of BMs . This is in line with previous reports showing that basal-like/triple-negative tumors pose a higher risk for BMs, with HER2-enriched in second place . However, HER2 and hormonal receptor statuses are hypothesized to shift upon reaching the brain . Similarly, triple-negative brain-metastatic cells exhibit elevated β2-adrenergic receptor mRNA and protein levels compared to their primary tumor counterparts, enhancing their proliferative capacity . Triple-negative and basal-like breast cancer subtypes have been found to compromise the blood–brain barrier (BBB), whereas BMs from HER2/neu-positive breast cancers generally preserve BBB integrity . This prompts the need for HER2/neu-positive cells to find alternative pathways to penetrate the BBB. HER2-HER3 dimers can form in breast cancer BMs and preferentially link to their ligand heregulin (also known as neuregulin-1) in the endothelial cells of the BBB . Heregulin and HER2 signaling induces activation of extracellular cathepsin B and matrix metalloproteinase (MMP)-9, on which transmigration through the BBB is dependent . MMP-9 enhances extracellular proteolysis and is upregulated by the metalloprotease-disintegrin ADAM8, which is highly expressed in all BMs but particularly in breast cancer cells . Cathepsins, on the other hand, are a family of proteases involved in protein degradation and processing, and their increased expression and activity could promote angiogenesis, invasion, and cell proliferation in some cancers . In BMs, cathepsin S is produced by macrophages and breast cancer tumor cells, and it mediates BBB transmigration via proteolytic cleavage of the junctional adhesion molecule (JAM)-B .
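The receptor-based subtype definitions given above amount to a small decision rule, restated here as a minimal Python sketch. This is illustrative only (the function and variable names are ours, not from the cited studies), and it deliberately mirrors an ambiguity in the definitions as stated: ER-positive/HER2-negative tumors cannot be split into luminal A versus luminal B from these three receptors alone, since markers such as Ki-67 commonly used for that distinction are outside the scope of this passage.

```python
def molecular_subtype(er_positive: bool, pr_positive: bool, her2_positive: bool) -> str:
    """Map ER/PR/HER2 status to the four molecular subtypes described above.

    Illustrative only: clinical subtyping also uses markers (e.g., Ki-67)
    that are not part of the receptor-based definitions restated here.
    """
    if er_positive:
        # Luminal A: ER+, PR variable, HER2-; luminal B: ER+, PR variable, HER2+/-.
        # HER2+ disease is unambiguously luminal B; HER2- disease cannot be
        # split into A vs. B from these three receptors alone.
        return "luminal B" if her2_positive else "luminal A or luminal B"
    if her2_positive:
        return "HER2-enriched"                      # ER-, PR-, HER2+
    if not pr_positive:
        return "basal-like (triple-negative)"       # ER-, PR-, HER2-
    return "unclassified by this simplified rule"   # ER-, PR+ is atypical


# Example: an ER-/PR-/HER2+ tumor maps to the HER2-enriched subtype.
print(molecular_subtype(er_positive=False, pr_positive=False, her2_positive=True))
```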
Heregulin also upregulates intercellular adhesion molecule 1 (ICAM1), which is linked to increased invasion, motility, and metastasis in breast cancer . Gene expression profiling of paired primary breast carcinomas and their corresponding BMs identified the upregulation of 1314 genes and the downregulation of 1702 genes in BMs relative to the primary tumors . This study also showed activation of the HER2 pathway and gains in transcript and protein expression of the rearranged during transfection (RET) gene in BMs, both linked to disease progression . Interestingly, there was no loss in PTEN expression in the analyzed specimens, which has been reported as a driver for BMs induced by astrocytes . A nationwide cohort study in Finland determined that basal-like subtypes tended to metastasize first to the brain and had a protein profile with high expression of the neural cell stemness-linked proteins nestin and prominin-1, which could potentially help breast tumor cells adapt to the brain . Jin et al. applied their MetMap to breast cancer cells that metastasized to the brain and demonstrated that these cells present altered lipid metabolism, which is necessary for tumor cell survival within the brain microenvironment . Increased fatty acid synthase (FASN) gene expression in breast cancer cells has been identified as a way to overcome low lipid availability in the brain . Enzymes associated with glycolysis, the tricarboxylic acid cycle, and oxidative phosphorylation pathways show elevated expression and could further promote efficient energy production via glucose oxidation. Furthermore, the pentose phosphate pathway and the glutathione system demonstrate heightened activity, contributing to the reduction in reactive oxygen species . Whether reprogramming occurs before seeding the brain or is induced by the lipid-poor environment of the brain is not known, but it is suggested that high FASN expression increases cell propensity to colonize the brain . Increased levels of fatty acid binding protein 7 (FABP7) are also seen in HER2-positive breast cancer BMs, and besides its role in metabolic reprogramming, FABP7 upregulates metastatic genes and pathways, such as integrin–Src, MEK/ERK, Wnt/β-catenin, and vascular endothelial growth factor (VEGF)-A . Overexpression of proteins involved in fatty acid synthesis and degradation, as well as of glucose-regulated protein 94 (GRP94), helps cells compensate for the hypoglycemic stress they are subject to in the brain . Other gene signatures expressed in breast cancer cells that metastasize to the brain include COX2, HBEGF, and ST6GALNAC5, which mediate BBB migration ; PCDH7, involved in linkage and interaction of tumor cells with astrocytes ; and GRIN2B, particularly increased in triple-negative breast cancers, which encodes the GluN2B subunit of the NMDAR . Activation of glutamate-stimulated GluN2B-NMDAR signaling in cancer cells promotes colonization and metastatic tumor growth in the brain by forming pseudo-tripartite synapses in which the tumor cell takes the place of an astrocyte . The MYC oncogene is highly expressed in CTCs from metastatic variants of breast cancer, and it regulates the adaptation of CTCs to the brain environment by reducing the oxidative stress produced by activated microglia via gene upregulation of glutathione peroxidase 1 (GPX1) . A specific protein signature in CTCs consisting of HER2+/EGFR+/HPSE+/Notch1+ was termed “brain metastasis selected markers” by Zhang et al.
and was shown to increase BM formation compared with the parental CTC lines . SERPINA5, an anti-plasminogen activator (anti-PA) serpin, is significantly upregulated in breast cancer BMs . Also, the GBP1 gene is upregulated in ER-negative breast cancer cells that develop BMs . GBP1 codes for guanylate-binding protein 1 (GBP1); its expression is induced by interaction with activated T lymphocytes and enables tumor cells to cross the BBB . Lung cancer is primarily categorized into two major histological types: non-small cell lung cancer (NSCLC) and small cell lung cancer (SCLC) . Within the NSCLC group, further classifications include adenocarcinoma, squamous cell carcinoma, and large cell carcinoma, among other subtypes . Whole exome sequencing of samples from NSCLC and SCLC patients showed that NSCLC had a higher percentage of seemingly metastasis-specific mutations, suggestive of branched evolution. In contrast, SCLC samples showed low heterogeneity, which suggests these tumors spread using a parallel and linear model of evolution . In a systematic review of 72 studies comprising data from 2346 patients, the most common genetic alterations seen in BMs from NSCLC were EGFR, TP53, KRAS (Kirsten rat sarcoma viral oncogene), CDKN2A (cyclin-dependent kinase inhibitor 2A), and STK11 . Although TP53 is not considered a driver gene, mutations in this tumor suppressor are highly prevalent . These mutations are linked to the development of new distant metastases and exhibit strong concordance between primary NSCLC tumors and their corresponding BMs . Mutations in the p53 protein disrupt cell cycle control, allowing the replication of damaged DNA and resulting in uncontrolled cell proliferation . The EGFR family consists of four distinct members, all belonging to the ErbB/HER protein family: ErbB1, ErbB2, ErbB3, and ErbB4 . Mutations in some tumors can continuously activate EGFR, enhancing tumor growth, invasion, and metastasis . A meta-analysis of 26 studies demonstrated a positive association between EGFR-mutated NSCLC tumors and BMs, with an odds ratio (OR) of 1.58 (95% CI: 1.36–1.84), which confirms that EGFR mutation is a significant risk factor for BMs in NSCLC ; a back-of-the-envelope reading of this confidence interval appears in the sketch below. ALK and RET gene fusions are also driver alterations in NSCLC BMs . ALK gene rearrangements frequently involve translocation or fusion with another partner gene, including echinoderm microtubule-associated protein-like 4 (EML4), the most prevalent in NSCLC . ALK gene rearrangements result in the formation of an oncogenic ALK tyrosine kinase that persistently activates various downstream signaling pathways, including PI3K-AKT, MEKK2/3-MEK5-ERK5, JAK-STAT, and MAPK. This continuous activation promotes increased proliferation and survival of tumor cells . ALK fusions have been found to be concordant between primary NSCLCs and their associated BMs . Compared to EGFR-positive groups, ALK-positive patients have a higher incidence of BMs at the time of initial lung cancer diagnosis . However, Rangachari et al. found a similar baseline incidence of BMs in EGFR-mutated and ALK-rearranged NSCLCs and a progressive increase in CNS involvement over time, with >45% of patients in both groups showing BMs after three years of survival . Larger BM tumor size has also been reported in the EML4-ALK fusion groups compared to groups without fusion .
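The pooled estimate quoted above (OR 1.58, 95% CI 1.36–1.84) can be sanity-checked with standard meta-analysis arithmetic, assuming the interval is symmetric on the log-odds scale. The snippet below recovers the approximate standard error and Wald z-statistic from the published confidence limits; it does not recompute anything from the underlying studies.

```python
import math

# Pooled OR and 95% CI reported by the meta-analysis.
or_hat, lo, hi = 1.58, 1.36, 1.84

log_or = math.log(or_hat)                         # ~0.457
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # CI width on log scale -> SE (~0.077)
z = log_or / se                                   # Wald z-statistic (~5.9)
print(f"log(OR)={log_or:.3f}, SE={se:.3f}, z={z:.1f}")
# z >> 1.96, consistent with EGFR mutation being a significant risk factor for BMs.
```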
The RET protooncogene codes for a receptor tyrosine kinase and has been identified in NSCLC rearranged or fused with over a dozen partner genes, with the kinesin family member 5B gene (KIF5B) being the most common . RET fusion-positive NSCLCs have BMs in 25% to 50% of cases . Although the exact mechanism by which RET fusion promotes brain organotropism of tumor cells is not known, recent trial results in patients receiving the selective RET inhibitor selpercatinib demonstrated decreased CNS metastatic progression of RET fusion-positive NSCLC, with no CNS involvement at all in patients with no previous BMs . These results suggest RET plays a fundamental role in promoting tumor cell growth and survival in the brain. The C-ros oncogene 1 (ROS1) encodes a receptor tyrosine kinase that is structurally analogous to ALK . NSCLC harboring ROS1 rearrangements exhibits a cumulative incidence of CNS metastasis comparable to that of ALK fusion-positive tumors . The incidence of BMs in ROS1-rearranged NSCLC patients at the time of diagnosis is approximately 20–30%, while it is as high as 50% in patients after crizotinib therapy . Crizotinib, an ALK/MET kinase inhibitor developed for ALK-rearranged NSCLC, is also effective in treating ROS1-rearranged tumors . However, it has low BBB penetration, and even with therapy, ROS1-positive patients commonly progress to CNS metastasis . Recent clinical trials have demonstrated the promising efficacy of novel tyrosine kinase inhibitors (TKIs) in overcoming crizotinib resistance in BMs of ROS1-rearranged NSCLC . Additionally, MET amplification in NSCLC leads to heightened expression and continuous activation of the Met receptor, also known as the hepatocyte growth factor receptor (HGFR) . This, in turn, promotes tumor cell migration and an epithelial-to-mesenchymal transition phenotype . Moreover, MET amplification has been found enriched in NSCLC BMs compared to paired primary tumors . KRAS oncogene mutations are recognized as prevalent drivers in BMs from NSCLC, though their exact incidence varies across different studies . The RAS genes encode a family of proteins that play critical roles in regulating cell growth, differentiation, and apoptosis , and KRAS mutation has been shown to upregulate PD-L1 expression in NSCLC through p-ERK signaling . Activation of the PD-1/PD-L1 axis suppresses T-cell activity within the tumor microenvironment, allowing tumor cells to escape immune detection . This immune-regulatory function of KRAS mutations may improve tumor cell survival in the brain, but it also makes KRAS-mutant NSCLC BMs more susceptible to treatment with immune checkpoint inhibitors (ICIs) . Other less frequent mutations related to BMs from NSCLC include BRAF mutations , Cav-1 , AKT-1 , NRAS , and PTEN . Gene expression signatures able to activate the WNT/TCF pathway are associated with lung adenocarcinoma metastases to the brain and lung . The target genes HOXB9 and LEF1 within the WNT/TCF pathway play pivotal roles in facilitating chemotactic migration and promoting colony expansion in lung adenocarcinoma . Furthermore, overexpression of the hyaluronan receptor by lung adenocarcinoma tumor cells increases inflammation and binding to hyaluronan-rich microenvironments such as the extracellular matrix of brain metastatic niches . Aljohani et al. performed whole-genome sequencing on normal lung tissues, primary NSCLC tumors, their corresponding BMs, and CTCs. The study revealed that primary tumors contained mutations in genes associated with cell adhesion and motility. In contrast, BMs and CTCs exhibited mutations in genes responsible for adaptive and cytoprotective functions related to cellular stress responses, including Keap-1, Nrf2, and P300 .
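Enrichment claims such as "MET amplification is enriched in NSCLC BMs compared to paired primary tumors" reduce, in the simplest case, to comparing alteration frequencies between two cohorts. The sketch below shows one conventional way to test such a comparison with a Fisher exact test; the counts are invented purely for illustration and are not taken from any of the cited studies.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = cohort (BM vs. primary), columns = altered yes/no.
# These counts are invented for illustration only.
bm_alt, bm_wt = 12, 38            # 12/50 BMs carry the alteration
primary_alt, primary_wt = 4, 46   # 4/50 primary tumors do

odds_ratio, p_value = fisher_exact(
    [[bm_alt, bm_wt], [primary_alt, primary_wt]],
    alternative="greater",        # one-sided: enrichment in BMs
)
print(f"OR={odds_ratio:.2f}, one-sided p={p_value:.4f}")
# A small p-value would support enrichment of the alteration in BMs; in
# practice one would also correct for testing many genes (e.g., Benjamini-Hochberg).
```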
Several other adaptations can be seen in lung cancer CTCs. Analysis of tissue from lung adenocarcinoma and its matched CTCs and BMs using scRNA-seq has demonstrated that CTCs occupy an intermediate position between BM tumor cells, which lean towards an epithelial phenotype, and primary tumor cells, which are mostly found in a mesenchymal state . Furthermore, RAC1, highly expressed in metastatic tumor tissue, was involved in adhesion, degradation, and VEGF signaling pathways . CTCs overexpressing CD44v6 exhibited increased expression of the mesenchymal marker vimentin and reduced expression of the epithelial marker E-cadherin, thereby facilitating cell invasion and BMs through the activation of epithelial-to-mesenchymal transition .
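The epithelial-versus-mesenchymal positioning that the scRNA-seq study describes is typically summarized per cell with a signature score, for example the mean standardized expression of mesenchymal markers minus that of epithelial markers. A minimal sketch under those assumptions follows; the two-gene marker lists (echoing the vimentin and E-cadherin example above) and the random expression matrix are placeholders, far smaller than real signatures.

```python
import numpy as np
import pandas as pd

# Toy log-normalized expression matrix: rows = cells, columns = genes.
rng = np.random.default_rng(0)
genes = ["CDH1", "EPCAM", "VIM", "FN1"]
expr = pd.DataFrame(rng.gamma(2.0, 1.0, size=(5, 4)), columns=genes,
                    index=[f"cell{i}" for i in range(5)])

epithelial = ["CDH1", "EPCAM"]   # e.g., E-cadherin
mesenchymal = ["VIM", "FN1"]     # e.g., vimentin

z = (expr - expr.mean()) / expr.std()                     # z-score each gene across cells
emt_score = z[mesenchymal].mean(axis=1) - z[epithelial].mean(axis=1)
print(emt_score.sort_values())
# Higher scores lean mesenchymal (primary-tumor-like per the study above),
# lower scores lean epithelial (BM-like); CTCs would fall in between.
```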
As early as 1889, Stephen Paget described the pattern in which breast cancers metastasized predominantly to certain organs and compared it to seeds that can only grow in suitable soil . Despite being over a hundred years old, this “Seed and Soil” theory continues to be valid today. The potential of tumor cells to metastasize depends on interactions between selected metastatic cells and mechanisms unique to some organ microenvironments in which chemotaxis and growth can occur . Multiphoton laser scanning microscopy studies have allowed researchers to follow individual tumor cells that metastasize to the brain and to delineate the multi-step process they go through during metastasis . However, this metastatic cascade is an inherently inefficient process, prompting tumor cells to develop adaptations that increase their chances of survival. In this section, we focus on brain and CNS metastases and review the key steps and players involved in metastatic progression. The accompanying figure offers a visual summary of this metastatic process and the primary elements playing a role in it.
4.1. The Pre-Metastatic Niche
First of all, tumor cells confirm their destination and prepare it for their arrival before leaving the primary tumor . This co-evolution of tumor and target-organ microenvironment forms the “pre-metastatic niche” (PMN) , and it involves the interaction between tumor-derived factors, tumor-recruited cells, and local stroma . The specific modifications that occur at the PMN are needed to favor the survival of tumor cells once they arrive, and a wide array of molecules and cells participate in this process. Inflammation plays a crucial role in cancer progression and migration . Studies have shown that hematopoietic progenitor cells from the bone marrow migrate to pre-metastatic sites and form clusters prior to tumor cell arrival, facilitated by receptor–ligand interactions . At these sites, they establish an inflammatory chemokine gradient that attracts additional bone marrow-derived cells and tumor cells to the PMN , a process further supported by increased VEGF-induced vascular density . Many inflammatory chemokines have been identified that help recruit bone marrow-derived myeloid cells to the PMN and favor metastasis in several organs via suppression of host immunity, angiogenesis, and tissue remodeling . In the brain, granulocyte-derived molecules like lipocalin-2 (LCN2) trigger inflammatory activation of astrocytes, which in turn recruit myeloid cells to the brain . Glycoprotein nonmetastatic melanoma B (GPNMB) expressed by macrophages and microglia is also linked to neuroinflammation, as it is upregulated in microglia, producing higher levels of inflammatory cytokines, including interleukin-1β (IL-1β) and tumor necrosis factor-α (TNF-α) . Increased expression of GPNMB reduces T-cell activation by interacting with syndecan-4, enabling melanoma cells to escape immune detection and destruction . There is also a contrasting anti-inflammatory role for GPNMB when bound to CD44 in astrocytes, with reduction of nitric oxide synthase, reactive oxygen species, nitric oxide, and IL-6, but its exact effect on BMs has not been elucidated . Cells in the PMN can produce pro-inflammatory cytokines and other pro-tumoral factors, but these can also come from distant cells either in the primary tumor or the bone marrow and reach their target organs through extracellular vesicles.
Extracellular vesicles, including exosomes, microvesicles, and oncosomes, are membrane-enclosed particles released by various cell types into the extracellular space . They selectively fuse with resident cells at their intended destinations, displaying unique integrin expression profiles linked to their metastatic organotropism . Extracellular vesicles can cross the intact BBB via transcytosis, facilitated by the upregulation of the endocytic pathway in brain endothelial cells, making them efficient carriers for tumor-derived factors . These extracellular vesicle-derived factors interact with the soluble matrix, influencing the formation of a PMN by activating and recruiting inflammatory and resident cells . Ruan et al. demonstrated how brain cells are reprogrammed by breast cancer-derived extracellular vesicles carrying high levels of miR-199b-5p, which targets solute carrier transporters in astrocytes and neurons, leading to the retention of metabolites in the extracellular space for tumor cells to use . Exosomes carrying the miR-122 microRNA can suppress glucose uptake by non-tumor cells in the PMN, which increases nutrient availability for tumor cells . Proteomic analysis of exosomes derived from brain metastatic cells showed increased expression of the hyaluronan-binding protein CEMIP . Endothelial and microglial cells internalizing CEMIP-positive exosomes triggered angiogenesis and inflammation in the perivascular niche, facilitating brain vascular remodeling and metastasis . BMs and PMN formation primarily take place in the brain parenchyma, traditionally regarded as an immune-privileged site . The metastatic brain tumor microenvironment exhibits a unique cellular and non-cellular composition even when compared with that of primary tumors such as glioblastoma . The extracellular matrix, an integral component of the PMN, has distinct protein arrangements regulated both by tumor and stromal cells, with organ-specific features . For example, the extracellular matrix in metastatic brain tumors shows a highly organized appearance, with thick, dense collagen bundles clustering the tumor cell regions inside well-defined borders . A proteomic analysis of NSCLC BMs has also shown an increased expression of invasion-related molecules such as integrin-α7, integrin-β1, and syndecan-4 in the extracellular matrix . Expression of certain extracellular matrix proteins such as tenascin-C in the brain restricts T-cell migration, decreasing T-cell concentration in the PMN and the ability of T cells to lyse tumor cells , promoting an immunosuppressive landscape. A long non-coding (lnc)RNA increases the release of CCL2 to recruit macrophages in breast cancer BMs , while tumor cell-derived ANXA1 promotes microglial migration . Later, both macrophages and microglia are activated via the PI3K signaling pathway to act as metastasis promoters . The interaction between melanoma cells and microglia supports BM progression through melanoma-derived IL-6, which enhances STAT3 phosphorylation and SOCS3 expression in microglia, thereby aiding melanoma cell survival . Moreover, increased expression of MMP-3 in microglia facilitated melanoma cell growth . Both brain-resident microglia and bone marrow-derived macrophages are present in brain tumors and are actively recruited to the tumor microenvironment . Microglia are regarded as the main contributors to CNS-resident tumor-associated macrophages (TAMs) . TAMs represent the predominant type of immune cells detected in BMs, irrespective of primary tumor origin .
Karimi et al. used mass cytometry to analyze the tumor microenvironment of 139 high-grade gliomas and 46 BMs, achieving single-cell spatial resolution of immune lineages and activation states . They found that monocyte-derived TAMs constituted about 30.5% of the tumor microenvironment, compared to 9.2% for resident microglia . Collectively, TAMs in the microenvironment of BMs do not fall into the classic polarization of M1 (inflammatory, IFNγ-activated) and M2 (immune-modulating, IL4-activated) phenotypes , and their phenotype can remain highly plastic in response to different signals and interactions within the microenvironment . Both TAMs and tumor cells can interact with lymphocytes in the tumor microenvironment of BMs to produce defective T-cell states . In an integrated analysis of brain tumor microenvironments, CD4+ T cells showed a hyporesponsive, anergic phenotype, while CD8+ T cells exhibited an exhaustion signature such as the one seen in chronic activation . The chemokine interferon-γ inducible protein 10 (Cxcl10) is upregulated in myeloid-derived microglia, and despite its role as a chemotactic factor for lymphocytes, it also attracts more CNS-myeloid cells carrying immunosuppressive proteins that reduce T-cell activation and promote tumor growth . Multiomics analyses have shown that in lung cancer BMs, the separation of myeloid and lymphoid cells into specific compartments is driven by unique cytokine networks . Other cells, such as NK cells, neutrophils, and T cells, can also be found in BM microenvironments, with distinct densities depending on the primary tumor type . For example, in melanoma BMs there is higher neutrophil infiltration and an increased number of CD8+ T cells at the tumor margins . CD8+ T cells and other leukocytes have also been found in the CSF, matching the immune microenvironment of BMs and indicating that cell exchange occurs between these compartments . PMN formation in the brain, therefore, relies on intricate and diverse processes involving factors derived from tumor cells and brain-resident cells. The combination of inflammation, angiogenesis, immunosuppression, and selective brain-tropism helps shape the PMN, subsequently promoting tumor cell invasion and growth.
4.2. Lymphatic Spread of Tumor Cells
Given the genetic heterogeneity observed between primary tumor cells and their metastatic counterparts, it is suggested that tumor cells can arrive at their destination both through direct hematogenous spread from the primary tumor and through sequential progression from lymph nodes . In solid tumors, intravasation into lymphatic vessels and lymph nodes is more common and usually precedes metastasis to the vascular system ; tumor cells reach the lymphatic vessels singly or as clusters . It has been demonstrated that once in the lymph node parenchyma, tumor cells can directly invade the lymph node vessels to enter the blood circulation, bypassing the thoracic duct . Initially, the release of inflammatory chemokines from tumor and inflammatory cells in the tumor microenvironment induces lymphangiogenesis and enhances lymphatic intravasation . Tumor-derived cytokines, soluble factors, and extracellular vesicles can prime the stromal cells in lymph nodes to create a lymphatic metastatic niche similar to the PMN . VEGF-C overexpression in tumor cells has been shown to induce hyperplasia of the peritumoral lymphatic vessels, enhancing flow rate and delivery to lymph nodes .
Transport to the lymph nodes is actively regulated by signaling pathways, including the SDF-1/CXCR4 axis , CCL1–CCR8 interactions , and the VEGF-C-induced upregulation of CCR7-CCL21 signaling . VEGF-C enhances immune tolerance in murine melanoma models by inducing the deletion of antigen-specific CD8+ T cells through lymphatic endothelial cells . Chronic IFN signaling during the initial anti-tumor response induces epigenetic rewiring of tumor cells in the lymph nodes, upregulating PD-L1 and promoting immune tolerance . Together with increased MHC-I expression, tumor cells are able to evade NK cells and resist T cell-mediated cytotoxicity . Cancer cells in the lymph nodes also exhibit elevated expression of MHC-II, increasing regulatory T cells while decreasing CD4+ T cells . Tumor cells in the lymph nodes also undergo metabolic adaptations. Metastasis-initiating cells show elevated expression of the fatty acid receptor CD36 and lipid metabolism genes. Studies have shown that palmitic acid or a high-fat diet can enhance CD36 expression, increasing the metastatic potential of tumor cells . Genes involved in the fatty acid oxidation pathway are upregulated in lymph node-metastatic tumor cells through bile acid-induced activation of the yes-associated protein (YAP) . Moreover, lipid metabolism is also important for tumor cells to overcome ferroptosis in the lymphatic environment, since the hallmark of ferroptosis is the lethal accumulation of lipid peroxidation products . In the lymph, however, there are low iron levels and high levels of oleic acid, glutathione, and other antioxidants, decreasing exposure to oxidative stress and making tumor cells more resistant to ferroptosis when they enter the blood . The reprogramming that occurs in the lymph nodes ultimately leads to a survival and metastatic advantage for tumor cells. Phylogenetic studies of metastatic breast cancer tumors have discerned that the genetic heterogeneity between primary tumors and their distant metastases comes from either a monoclonal metastatic precursor that evolves outside the primary tumor or polyclonal precursors originating from the primary tumor from the outset . Phylogenetic studies for other types of cancer have yielded similar results . Mohammed et al. discovered that tumor cells reaching the lymphatic vessels exhibit a gene and protein profile indicative of a hybrid epithelial/mesenchymal phenotype and stem cell-like characteristics, which contribute to their high metastatic potential . In mouse melanoma models, key driver mutations such as BRAF alterations and changes in genes like MET or CDKN2A (either gain or loss) are observed to occur within lymph nodes . When driver mutations are combined with the loss of tumor suppressor genes that would normally help eliminate mutated cells, such cells can undergo transformation freely toward the development of melanoma . The chromatin modifier histone deacetylase 11 (HDAC11) serves as a dynamic epigenetic regulator in the lymph node microenvironment, as demonstrated in breast cancer cell models: increased HDAC11 expression inhibits the cell cycle suppressors E2F7 and E2F8, promoting tumorigenesis and growth in the lymph nodes, while downregulation of HDAC11 upregulates RRM2, promoting migration and egress from lymph nodes to distant sites, predominantly through the draining blood vessels of the lymph nodes .
4.3. Hematogenous Spread of Tumor Cells
Tumor cells can also enter the bloodstream directly from the primary tumor.
A clonal subpopulation of breast cancer cells reprogrammed to overexpress the proteins Serpine2 and Slpi showed vascular mimicry and efficient blood intravasation . However, only a small percentage of tumor cells that enter the circulation will survive the environmental pressures of the bloodstream to successfully metastasize . CTCs initially enter the bloodstream as single cells but rapidly aggregate into clusters, or tumor microemboli. This clustering, occurring after detachment from the primary tumor, is facilitated by the cancer stem cell marker CD44 through intercellular CD44-CD44 homophilic interactions . Stemness of tumor cells is induced by epithelial-to-mesenchymal transition, and it supports migration . Both mesenchymal and epithelial tumor cells enter the bloodstream and contribute to CTC clusters, with evidence indicating a dynamic plasticity between epithelial and mesenchymal states . Hydrodynamic shear stress in the systemic circulation promotes epithelial-to-mesenchymal transition in CTCs. This process is driven by the generation of reactive oxygen species and nitric oxide and the suppression of the extracellular signal-regulated kinase and glycogen synthase kinase 3β signaling pathways . CTC clusters are more resistant to cell death than individual CTCs, increasing their potential to metastasize . Their larger size allows them to overcome fluid shear stress and collisions with other cells in the circulation, promoting margination to the endothelium wall, which increases their probability of arrest and adhesion to wall receptors . Despite their larger size, CTC clusters can navigate capillary-sized vessels by rearranging into single-file chains , enabling them to reach distant organs. Additionally, capillary beds with slower flow rates promote CTC arrest and enhance active cell adhesion . CTCs associated with neutrophils showed a transcriptomic profile that supported cell cycle progression and cell migration . In later cancer stages, neutrophils show increased immunosuppressive functions, suggesting they have a dynamic role in metastatic progression . Inflamed neutrophils form aggregates with CTCs in the intraluminal space, but once CTC arrest occurs in a vessel, they lose contact with the tumor cells. However, they remain in close proximity to the clusters and endothelium due to chemokine signaling mediated by self-secreted IL-8, tumor-derived CXCL-1, and the endothelial cell glycocalyx . IL-8 also causes endothelial barrier disruption and extravasation of nearby tumor cells . Neutrophil extracellular traps (NETs) are neutrophil-derived DNA webs released in inflammatory states to trap and kill pathogens, but they can also trap CTCs . They capture CTCs via β1-integrin, which is upregulated in inflammation both in CTCs and NETs . Studies have shown that the interaction between β2 integrin on neutrophils and ICAM-1 on melanoma cells facilitates the anchoring of melanoma cells to the vascular endothelium . ICAM-1 on triple-negative breast cancer cells promotes tumor cell secretion of suPAR, a chemoattractant for neutrophils, and attaches to CD11b molecules on neutrophils to form CTC-neutrophil bonds . Platelets also interact with and travel along CTCs, and a bidirectional exchange of lipids, proteins, and RNA occurs between them . Tumor cells can transfer mutant RNA into blood platelets to produce “tumor-educated platelets” .
Platelets efficiently transfer structural components to tumor cells through extracellular vesicles, internalization, or direct contact, effectively “educating” tumor cells in the process . Platelet-derived TGFβ and direct contact with CTCs activate the TGFβ/Smad and NF-κB pathways in tumor cells, driving their transition to a mesenchymal-like phenotype . Platelet-derived RGS18 promotes the expression of the immune checkpoint molecule HLA-E in CTCs. As a result, CTCs can escape NK-mediated immune surveillance and killing . Direct contact with platelets can upregulate the inhibitory checkpoint molecule CD155 in CTCs and inhibit NK-cell cytotoxicity when CD155 is engaged with the immune receptor TIGIT . The cross-talk between CTCs and platelets, therefore, creates highly dynamic and aggressive phenotypes that help preserve CTC integrity during transit in the bloodstream, enhance invasiveness and proliferation, perpetuate epithelial-to-mesenchymal transition and stem-like phenotypes, and evade death . Platelets also stimulate YAP1 dephosphorylation and its nuclear translocation in CTCs, triggering a pro-survival gene expression profile that prevents anoikis in detached conditions . CTC clusters are often accompanied by myeloid-derived suppressor cells (MDSCs), a group of immature myeloid cells that promote both systemic and local immunosuppression, forming a protective barrier around CTCs to aid in metastasis . Aside from their immunosuppressive roles, CTC-MDSC interactions increase the production of reactive oxygen species in MDSCs, which induces pro-tumorigenic differentiation and proliferation of tumor cells via upregulation of Notch1 receptor expression and activation in CTCs . There have been attempts to counter the immunosuppressive effects of MDSCs by directing therapies against them. Some drugs have been approved by the FDA, some are undergoing clinical trials, and others are being investigated in preclinical models. However, there is still no consensus on their use . Macrophages can play a dual role in immunity against CTCs. On the one hand, CD24, a cancer stemness marker, can be expressed in tumor cells and play a suppressive role in tumor immunity as a phagocytic inhibitor when bound to macrophages via Siglec-10 . On the other hand, Zhang et al. reported that macrophages engulfing apoptotic tumor cells integrate tumor DNA into their nuclei, transforming into tumor stem cells while maintaining macrophage surface markers, which enables them to evade immune detection . Therefore, some macrophages can promote metastasis while others interfere with it. More recently, Fu et al. discovered that microbiota from the primary tumor can be carried by CTCs as intracellular bacteria capable of reorganizing the actin cytoskeleton of tumor cells and enhancing resistance to mechanical stress . Tumor cells thrive in hypoxic conditions due to metabolic reprogramming , and CTC clusters offer protection against the toxic oxygen concentrations in the bloodstream . Hypoxic CTC clusters promote a cancer stem-like phenotype in CTCs and the acquisition of a reactive oxygen species-resistant phenotype that enhances CTC survival upon reoxygenation . Oxidative stress and hypoxia favor the development of resistance mechanisms in CTCs against anoikis, a form of apoptosis induced upon cell detachment from the native environment .
Several other adaptations have been linked to anoikis resistance, such as increased epithelial-to-mesenchymal transition, changes in integrin profiles, oncogene activation, and overexpression of key metabolic enzymes or receptors . The inclusion of carcinoma-associated fibroblasts in CTC clusters provides a metastatic advantage to tumor cells , likely favoring anoikis resistance, transportation of nutrients, and epithelial-to-mesenchymal transition .
4.4. Vascular Cooption
Vascular cooption describes the mechanism by which metastatic cells preferentially grow alongside the outer surfaces of existing blood vessels. This strategy is present in more than 95% of early micrometastases within the CNS . Adhesion to vessels depends upon tumor cell β1 integrin adhesion to the vascular basement membrane and the establishment of microcolonies . In some organs, the presence of mesenchymal stem cells acting as pericytes at the perivascular space of the PMN mediates the extravasation of tumor cells . It has been proposed that pericytic mimicry, or angiotropism, is a process closely related to vascular cooption and can be seen in melanoma cells that metastasize to the brain . CTCs from lung and breast cancers that metastasize to the brain utilize the cell adhesion molecule L1CAM to move along capillaries. This movement involves the activation of YAP through interactions with β1 integrin and integrin-linked kinase (ILK) . Plasmin suppresses vascular cooption by deactivating L1CAM, an axon guidance molecule utilized by metastatic cells to navigate along brain capillaries. However, cancer cell-produced serpins that inhibit plasminogen activators, including neuroserpin and serpin B2, prevent the generation of plasmin. This inhibition facilitates vascular cooption in BMs associated with lung cancer, breast cancer , and melanoma . Serpins also protect cancer cells by inhibiting the plasmin-generated FasL death signal . A lncRNA associated with breast cancer increased the expression of ICAM1, which mediated vascular cooption by increasing tumor cells’ ability to stretch over brain capillaries and extravasate into the brain parenchyma . Intravascular cell arrest in brain microvessels before extravasation has been demonstrated to create a focal hypoxic microenvironment in the PMN, leading to ischemic changes that upregulate vascular remodeling factors such as Angiopoietin-2 (Ang-2) and VEGF . Ang-2 facilitates tumor cell colonization and transmigration in the PMN and later supports a stable oxygen and nutrient supply for metastatic growth .
4.5. Blood–Brain Barrier Penetration
There is evidence that the BBB, known as the tightest endothelial barrier, can be modified by soluble factors secreted by tumor cells or by dysregulation of the normal brain microenvironment. Tumor-derived heparinase, for instance, can degrade the basement membrane of the BBB, facilitating tumor cell invasion into the brain . Additionally, the absence of normal astrocytes leads to the downregulation of the DHA transporter Mfsd2a, expressed by endothelial cells, causing disruption of the BBB . There are also tumor-derived extracellular vesicles that can be taken up by endothelial cells to increase the permeability of the BBB. Lung cancer cells, under the action of TGF-β, secrete exosomes carrying a lncRNA that increases the expression of MMP-2 in brain microvascular endothelial cells .
MMP-2 destroys tight junctions between endothelial cells both in the lung and the brain, increasing vascular permeability, tumor cell migration, and invasion . Extracellular vesicles from breast cancer carrying miR-181c promote BBB disruption by altering actin dynamics , while extracellular vesicles containing miR-105 target the tight junction protein ZO-1 in endothelial barriers, compromising their integrity . Tumor cells must acquire specialized adaptations before brain colonization, some of which involve specific mediators for BBB crossing. For example, breast cancer cells express COX2, the EGFR ligand HBEGF, and the α2,6-sialyltransferase ST6GALNAC5, which facilitate their traversal across the BBB . ST6GALNAC5, in particular, was found to be specifically expressed only in brain-tropic metastatic cells, enhancing cooption to endothelial cells . COX2 has been linked to the upregulation of MMP-1, which can degrade components of the BBB such as claudin and occludin . In metastatic breast cancer cells, Klotz et al. demonstrated that semaphorin 4D (SEMA4D) regulates tumor cell transmigration through the BBB . When SEMA4D binds to its receptor Plexin-B1 (PLXNB1) in endothelial cells, it makes them switch to a proangiogenic phenotype . This effect may also be enhanced by TAMs . Inactivating PLXNB1 shifts the immune landscape of the tumor microenvironment towards an antitumor response; however, angiogenesis is not affected since SEMA4D can bind to alternative receptors . Endothelial cells in the tumor microenvironment of BMs exhibit elevated Ki67 levels and enhanced microvascular proliferation. In contrast, this proliferation is suppressed in the presence of CD8+ T cells. Additionally, the tight-junction protein claudin-5, essential for BBB integrity, is downregulated in cancer cells located near endothelial cells, especially within the cores of BMs. This supports the hypothesis that vascular cooption plays a role in BM colonization in regions with compromised endothelial junctions . Herrera et al. found that breast-to-brain metastasis cell lines were able to traverse an enhanced blood–cerebrospinal fluid barrier (BCSFB) while primary breast cancer cell lines could not . These findings reflected two things: firstly, cells that have previously colonized the brain must have acquired critical mechanisms that allow them to traverse CNS barriers; secondly, the preferential migration of breast cancer cells through the BCSFB may indicate that it is an often-overlooked potential point of entry for tumor cells . Under normal conditions, the BCSFB in the choroid plexus exhibits greater permeability than the BBB due to its transport and secretory functions . Chemotherapies such as paclitaxel and 5-fluorouracil (5-FU), commonly used in breast cancer, have been demonstrated to increase brain-barrier permeability to tumor cells, especially through the BCSFB, due to upregulation of MMP-9 leading to claudin-6 downregulation in choroid plexus cells . MMP-9 activity in the choroid plexus cells also resulted in the release of Tau from breast cancer cells, which formed neurofibrillary tangles that further destabilized the BCSFB . Studies have also revealed that patients with parenchymal brain metastatic lesions often exhibit tumor cells in the ipsilateral blood–cerebrospinal fluid barrier . Leptomeningeal disease (LMD) occurs when tumor cells invade the leptomeningeal membrane and the CSF .
Intracranial tumor cells spread via three mechanisms: direct perivascular pathways from the brain parenchyma, hematogenous routes from the systemic circulation, or iatrogenic seeding . Extracranial tumor cell dissemination to the CSF occurs via hematogenous spread from the systemic circulation, backward migration along cranial or spinal nerves, invasion from the bone marrow via vascular pathways in the dura or skull, dissemination through meningeal lymphatic vessels, or iatrogenic implantation . Tropism for the meninges involves specific histological, molecular, and genetic alterations in the primary tumor cells . Once in the leptomeninges, tumor cells continue adapting to overcome the intrinsic microenvironmental challenges of the CSF, including inflammation and sparse micronutrients . Cancer cells within the CSF increase their expression of LCN2 when stimulated by inflammatory cytokines produced by CSF macrophages . Besides its role in activating astrocytes in the PMN , LCN2 can also function as an iron-binding molecule. TAMs in the tumor microenvironment can help deliver iron to tumor cells to promote growth . Tumor cell uptake of iron in the CSF outcompetes macrophages, which need iron to generate reactive oxygen species, thereby impairing the respiratory burst and phagocytic functions needed for tumor control . Tumor cells located in the cerebrospinal fluid produce complement component 3 (C3), which activates the C3a receptors on the epithelial cells of the choroid plexus. This activation compromises the BCSFB, permitting plasma elements like amphiregulin to enter the CSF and support the growth of tumor cells .
4.6. Astrocytes in Progression of Brain Metastases
Astrocytes serve as important mediators of BMs, as they can promote neuroinflammation, immunosuppression, angiogenesis, chemotaxis, and tumor cell invasion. Metastatic lung cancer cells release factors, including macrophage migration inhibitory factor, IL-8, and plasminogen activator inhibitor-1 (PAI-1). These factors activate astrocytes, which then produce inflammatory cytokines such as IL-6, TNF-α, and IL-1β, promoting increased tumor cell proliferation . Schwartz et al. demonstrated that melanoma-secreted factors activate astrocytes to upregulate the expression of inflammatory chemokines such as CCL2, CXCL10, and CCL7, instigating astrogliosis, neuroinflammation, and hyperpermeability of the BBB . Astrocyte-secreted CXCL10 has been demonstrated to facilitate the migration of melanoma cells toward astrocytes. This effect is attributed to the elevated expression of CXCR3, the receptor for CXCL10, in melanoma cells with a propensity for brain tropism . Similarly, CCL2 can promote transmigration and extravasation of cancer cells via the CCL2-CCR2 astrocyte–cancer cell axis . COX2 expressed in breast cancer cells increases prostaglandins, activating astrocytes to secrete CCL7, which promotes self-renewal of tumor-initiating cells . Soluble factors from triple-negative breast cancer cells induced upregulation and activation of the NLRP3 inflammasome in peritumoral astrocytes, consequently increasing IL-1β release, inflammation, and proliferation of metastatic cells . There is evidence that in metastatic triple-negative breast tumors, IL-1β enhances the adhesion of cancer and immune cells to the brain endothelium via upregulation of cell adhesion molecules such as ICAM-1, VCAM-1, and E-selectin .
Lung cancer cells produce protocadherin 7 (PCDH7), which facilitates the creation of connexin 43 (Cx43) gap junctions with astrocytes. These connections enable the transfer of the second messenger cGAMP from tumor cells to astrocytes, thereby activating the STING pathway. Activation of this pathway leads to the secretion of inflammatory cytokines such as IFNα and TNFα. These cytokines act as paracrine signals for tumor cells, activating the STAT1 and NF-κB signaling pathways and promoting their own growth and chemoresistance . Astrocytes promote immunosuppression by significantly increasing the levels of neuronal-specific cyclin-dependent kinase 5 (Cdk5). This elevated Cdk5 reduces both the expression and functionality of class I major histocompatibility complexes, thereby disrupting the antigen presentation pathway . Furthermore, reactive astrocytes with signal transducer and activator of transcription 3 (STAT3) activation modify the innate and acquired immune system responses in the metastatic microenvironment . Following early infiltration of tumor cells into the brain, activated astrocytes produce factors such as MMP-9, which promotes angiogenesis and the release of growth factors from the extracellular matrix . These signals persist as long as the astrocyte–tumor cell mutual association remains. Astrocytes also epigenetically upregulate Reelin expression in HER2+ breast cancer cells that migrate to the neural niche, conferring a survival advantage in the brain microenvironment . Peroxisome proliferator-activated receptor γ (PPARγ) in metastatic tumor cells activates astrocytes in the lipid-rich environment around the glial cells, enhancing cell proliferation in advanced BMs but not during early steps . Astrocyte-derived exosomes containing PTEN-targeting microRNAs downregulate PTEN mRNA and protein expression in brain-tropic metastatic tumor cells . PTEN loss in tumor cells facilitates perivascular brain colonization and invasion and later increases secretion of the chemokine CCL2, which attracts myeloid cells, furthering metastatic proliferation .
Glycoprotein nonmetastatic melanoma B (GPNMB) expressed by macrophages and microglia is also linked to neuroinflammation, as it is upregulated in microglia, producing higher levels of inflammatory cytokines, including interleukin-1β (IL-1β) and tumor necrosis factor-α (TNF-α). Increased expression of GPNMB reduces T-cell activation by interacting with syndecan-4, enabling melanoma cells to escape immune detection and destruction. There is also a contrasting anti-inflammatory role for GPNMB when bound to CD44 in astrocytes, showing reduction of nitric oxide synthase, reactive oxygen species, nitric oxide, and IL-6, but its exact effect on BMs has not been elucidated. Cells in the PMN can produce pro-inflammatory cytokines and other pro-tumoral factors, but these can also come from distant cells either in the primary tumor or the bone marrow and reach their target organs through extracellular vesicles. Extracellular vesicles, including exosomes, microvesicles, and oncosomes, are membrane-enclosed particles released by various cell types into the extracellular space. They selectively fuse with resident cells at their intended destinations, displaying unique integrin expression profiles linked to their metastatic organotropism. Extracellular vesicles can cross the intact BBB via transcytosis, facilitated by the upregulation of the endocytic pathway in brain endothelial cells, making them efficient carriers for tumor-derived factors. These extracellular vesicle-derived factors interact with the soluble matrix, influencing the formation of a PMN by activating and recruiting inflammatory and resident cells. Ruan et al. demonstrated how brain cells are reprogrammed by breast cancer-derived extracellular vesicles carrying high levels of miR-199b-5p, which targets solute carrier transporters in astrocytes and neurons, leading to the retention of metabolites in the extracellular space for tumor cells to use. Exosomes carrying the microRNA miR-122 can suppress glucose uptake by non-tumor cells in the PMN, which increases nutrient availability for tumor cells. Proteomic analysis of exosomes derived from brain metastatic cells showed increased expression of hyaluronan-binding protein (CEMIP). Endothelial and microglial cells internalizing CEMIP-positive exosomes triggered angiogenesis and inflammation in the perivascular niche, facilitating brain vascular remodeling and metastasis. BMs and PMN formation primarily take place in the brain parenchyma, traditionally regarded as an immune-privileged site. The metastatic brain tumor microenvironment exhibits a unique cellular and non-cellular composition even when compared with that of primary tumors such as glioblastoma. The extracellular matrix, an integral component of the PMN, has distinct protein arrangements regulated both by tumor and stromal cells, with organ-specific features. For example, the extracellular matrix in metastatic brain tumors shows a highly organized appearance, with thick, dense collagen bundles clustering the tumor cell regions inside well-defined borders. A proteomic analysis of NSCLC BMs has also shown an increased expression of invasion-related molecules such as integrin-α7, integrin-β1, and syndecan-4 in the extracellular matrix. Expression of certain extracellular matrix proteins such as tenascin-C in the brain restricts T-cell migration, decreasing their concentration in the PMN and their ability to lyse tumor cells, promoting an immunosuppressive landscape.
A long non-coding RNA (lncRNA) increases the release of CCL2 to recruit macrophages in breast cancer BMs, while tumor cell-derived ANXA1 promotes microglial migration. Later, both macrophages and microglia are activated via the PI3K signaling pathway to act as metastasis promoters. The interaction between melanoma cells and microglia supports BM progression through melanoma-derived IL-6, which enhances STAT3 phosphorylation and SOCS3 expression in microglia, thereby aiding melanoma cell survival. Moreover, increased expression of MMP-3 in microglia facilitated melanoma cell growth. Both brain-resident microglia and bone marrow-derived macrophages are present in brain tumors and are actively recruited to the tumor microenvironment. Microglia are regarded as the main contributors to CNS-resident tumor-associated macrophages (TAMs). TAMs represent the predominant type of immune cells detected in BMs, irrespective of primary tumor origin. Karimi et al. used mass cytometry to analyze the tumor microenvironment of 139 high-grade gliomas and 46 BMs, achieving single-cell spatial resolution of immune lineages and activation states. They found that monocyte-derived TAMs constituted about 30.5% of the tumor microenvironment, compared to 9.2% for resident microglia. Collectively, TAMs in the microenvironment of BMs do not fall into the classic polarization of M1 (inflammatory, IFNγ-activated) and M2 (immune-modulating, IL4-activated) phenotypes, and their phenotype can remain highly plastic in response to different signals and interactions within the microenvironment. Both TAMs and tumor cells can interact with lymphocytes in the tumor microenvironment of BMs to produce defective T cell states. In an integrated analysis of brain tumor microenvironments, CD4+ T cells showed a hyporesponsive, anergic phenotype, while CD8+ T cells exhibited an exhaustion signature such as the one seen in chronic activation. The chemokine interferon-γ inducible protein 10 (Cxcl10) is upregulated in CNS-myeloid cells, and despite its role as a chemotactic factor for lymphocytes, it also attracts more CNS-myeloid cells carrying immunosuppressive proteins that reduce T cell activation and promote tumor growth. Multiomics analyses have shown that in lung cancer BMs, the separation of myeloid and lymphoid cells into specific compartments is driven by unique cytokine networks. Other cells, such as NK cells, neutrophils, and T cells, can also be found in BMs microenvironments, with distinct densities depending on the primary tumor type. For example, in melanoma BMs there is higher neutrophil infiltration and increased CD8+ T cells in the margins of the tumor. CD8+ T cells and other leukocytes have also been found in the CSF, matching the immune microenvironment of BMs and proving that cell exchange occurs between these compartments. PMN formation in the brain, therefore, relies on intricate and diverse processes involving factors derived from tumor cells and brain-resident cells. The combination of inflammation, angiogenesis, immunosuppression, and selective brain-tropism helps model the PMN, subsequently promoting tumor cell invasion and growth. Given the genetic heterogeneity observed between primary tumor cells and their metastatic counterparts, it is suggested that tumor cells can arrive at their destination both through direct hematogenous spread from the primary tumor and sequential progression from lymph nodes.
In solid tumors, intravasation into lymphatic vessels and lymph nodes is more common and usually precedes metastasis to the vascular system, and tumor cells reach the lymphatic vessels alone or in clusters. It has been demonstrated that once in the lymph node parenchyma, tumor cells can directly invade the lymph node vessels to enter the blood circulation, bypassing the thoracic duct. Initially, the release of inflammatory chemokines from tumor and inflammatory cells in the tumor microenvironment induces lymphangiogenesis and enhances lymphatic intravasation. Tumor-derived cytokines, soluble factors, and extracellular vesicles can prime the stromal cells in lymph nodes to create a lymphatic metastatic niche similar to the PMN. VEGF-C overexpression in tumor cells has been shown to induce hyperplasia of the peritumoral lymphatic vessels, enhancing flow rate and delivery to lymph nodes. Transport to the lymph nodes is actively regulated by signaling pathways, including the SDF-1/CXCR4 axis, the CCL1–CCR8 interactions, and the VEGF-C-induced upregulation of CCR7-CCL21 signaling. VEGF-C enhances immune tolerance in murine melanoma models by inducing the deletion of antigen-specific CD8+ T cells through lymphatic endothelial cells. Chronic IFN signaling during the initial anti-tumor response induces epigenetic rewiring of tumor cells in the lymph nodes, upregulating PD-L1 and promoting immune tolerance. Together with increased MHC-I expression, tumor cells are able to evade NK cells and resist T cell-mediated cytotoxicity. Cancer cells in the lymph nodes also exhibit elevated expression of MHC-II, increasing regulatory T cells while decreasing CD4+ T cells. Tumor cells in the lymph nodes also undergo metabolic adaptations. Metastasis-initiating cells show elevated expression of the fatty acid receptor CD36 and lipid metabolism genes. Studies have shown that palmitic acid or a high-fat diet can enhance CD36 expression, increasing the metastatic potential of tumor cells. Genes involved in the fatty acid oxidation pathway are upregulated in lymph node-metastatic tumor cells through bile acid-induced activation of the yes-associated protein (YAP). Moreover, lipid metabolism is also important for tumor cells to overcome ferroptosis in the lymphatic environment, since the hallmark of ferroptosis is the lethal accumulation of lipid peroxidation products. In the lymph, however, there are low iron levels and high levels of oleic acid, glutathione, and other antioxidants, decreasing exposure to oxidative stress and making tumor cells more resistant to ferroptosis when they enter the blood. The reprogramming that occurs in the lymph nodes ultimately leads to a survival and metastatic advantage for tumor cells. Phylogenetic studies of metastatic breast cancer tumors have discerned that the genetic heterogeneity between primary tumors and their distant metastases comes from either a monoclonal metastatic precursor that evolves outside the primary tumor or polyclonal precursors that originated from the primary tumor from the start. Phylogenetic studies for other types of cancer have yielded similar results. Mohammed et al. discovered that tumor cells reaching the lymphatic vessels exhibit a gene and protein profile indicative of a hybrid epithelial/mesenchymal phenotype and stem cell-like characteristics, which contribute to their high metastatic potential.
In mouse melanoma models, key driver mutations such as BRAF alterations and changes in genes like MET or CDKN2A (either gain or loss) are observed to occur within lymph nodes. Cells with driver mutations combined with a loss of tumor suppressor genes that would normally help eliminate cells with mutations can, therefore, undergo transformation freely toward the development of melanoma. The chromatin modifier histone deacetylase 11 (HDAC11) serves as a dynamic epigenetic regulator in the lymph node microenvironment, as demonstrated in breast cancer cell models. Increased HDAC11 expression inhibits the cell cycle suppressors E2F7 and E2F8, promoting tumorigenesis and growth in the lymph nodes, while downregulation of HDAC11 upregulates RRM2, promoting migration and egress from lymph nodes to distant sites, predominantly through the draining blood vessels of lymph nodes. Tumor cells can also enter directly from the primary tumor into the bloodstream. A clonal subpopulation of breast cancer cells reprogrammed to overexpress the proteins Serpine2 and Slpi showed vascular mimicry and efficient blood intravasation. However, only a small percentage of tumor cells that enter the circulation will survive the environmental pressures of the bloodstream to successfully metastasize. CTCs, or tumor microemboli, initially enter the bloodstream as single cells but rapidly aggregate into clusters. This clustering, occurring after detachment from the primary tumor, is facilitated by the cancer stem cell marker CD44 through intercellular CD44-CD44 homophilic interactions. Stemness of tumor cells is induced by epithelial-to-mesenchymal transition, and it supports migration. Both mesenchymal and epithelial tumor cells enter the bloodstream and contribute to CTC clusters, with evidence indicating a dynamic plasticity between epithelial and mesenchymal states. Hydrodynamic shear stress in the systemic circulation promotes epithelial-to-mesenchymal transition in CTCs. This process is driven by the generation of reactive oxygen species and nitric oxide and the suppression of extracellular signal-regulated kinase and glycogen synthase kinase 3β signaling pathways. CTC clusters are more resistant to cell death than individual CTCs, increasing their potential to metastasize. Their larger size allows them to overcome fluid shear stress and collisions with other cells in the circulation, promoting margination to the endothelium wall, which increases their probability of arrest and adhesion to wall receptors. Despite their larger size, CTC clusters can navigate capillary-sized vessels by rearranging themselves into single-file chains, enabling them to reach distant organs. Additionally, capillary beds with slower flow rates promote CTC arrest and enhance active cell adhesion. CTCs associated with neutrophils showed a transcriptomic profile that supported cell cycle progression and cell migration. In later cancer stages, neutrophils show increased immunosuppressive functions, suggesting they have a dynamic role in metastatic progression. Inflamed neutrophils form aggregates with CTCs in the intraluminal space, but once CTC arrest occurs in a vessel, they lose contact with the tumor cells. However, they remain in close proximity to the clusters and endothelium due to chemokine signaling mediated by self-secreted IL-8, tumor-derived CXCL-1, and the endothelial cell glycocalyx. IL-8 also causes endothelial barrier disruption and extravasation of nearby tumor cells.
Neutrophil extracellular traps (NETs) are neutrophil-derived DNA webs released in inflammatory states to trap and kill pathogens, but they can also trap CTCs. They capture CTCs via β1-integrin, which is upregulated in inflammation both in CTCs and NETs. Studies have shown that the interaction between β2 integrin on neutrophils and ICAM-1 on melanoma cells facilitates the anchoring of melanoma cells to the vascular endothelium. ICAM-1 on triple-negative breast cancer cells promotes tumor cell secretion of suPAR, a chemoattractant for neutrophils, and attaches to CD11b molecules on neutrophils to form CTC-neutrophil bonds. Platelets also interact and travel along CTCs, and a bidirectional exchange of lipids, proteins, and RNA occurs between them. Tumor cells can transfer mutant RNA into blood platelets to produce "tumor-educated platelets". Platelets efficiently transfer structural components to tumor cells through extracellular vesicles, internalization, or direct contact, effectively "educating" tumor cells in the process. Platelet-derived TGFβ and direct contact with CTCs activate the TGFβ/Smad and NF-κB pathways in tumor cells, driving their transition to a mesenchymal-like phenotype. Platelet-derived RGS18 promotes the expression of the immune checkpoint molecule HLA-E in CTCs. As a result, CTCs can escape NK-mediated immune surveillance and killing. Direct contact with platelets can upregulate the inhibitory checkpoint molecule CD155 in CTCs and inhibit NK-cell cytotoxicity when CD155 is engaged with the immune receptor TIGIT. The cross-talk between CTCs and platelets, therefore, creates highly dynamic and aggressive phenotypes that help preserve the integrity of CTCs during their transit in the bloodstream, enhance invasiveness and proliferation, perpetuate epithelial-to-mesenchymal transition and stem-like phenotypes, and evade death. Platelets also stimulate YAP1 dephosphorylation and its nuclear translocation in CTCs, triggering a pro-survival gene expression profile that prevents anoikis in detached conditions. CTC clusters are often accompanied by myeloid-derived suppressor cells (MDSCs), a group of immature myeloid cells that promote both systemic and local immunosuppression, forming a protective barrier around CTCs to aid in metastasis. Aside from their immunosuppressive roles, CTC-MDSC interactions increase the production of reactive oxygen species in MDSCs, which induces pro-tumorigenic differentiation and proliferation of tumor cells by upregulation of Notch1 receptor expression and activation in CTCs. There have been attempts to oppose the immunosuppressive aspects of MDSCs by targeting therapies against them. Some drugs have been approved by the FDA, some are undergoing clinical trials, and others are being investigated in preclinical models. However, there is still no consensus for their use. Macrophages can have a dual role in regard to immunity against CTCs. On the one hand, CD24, a cancer stemness marker, can be expressed in tumor cells and play a suppressive role in tumor immunity as a phagocytic inhibitor when bound to macrophages via Siglec-10. On the other hand, Zhang et al. reported that macrophages that engulf apoptotic tumor cells integrate tumor DNA into their nuclei, transforming into tumor stem cells while maintaining macrophage surface markers, enabling them to evade immune detection. Therefore, some macrophages can promote metastasis while others interfere with it. More recently, Fu et al.
discovered that microbiota from the primary tumor can be carried by CTCs as intracellular bacteria capable of reorganizing the actin cytoskeleton of tumor cells and enhancing resistance to mechanical stress. Tumor cells thrive in hypoxic conditions due to metabolic reprogramming, and CTC clusters offer protection against the toxic oxygen concentrations in the bloodstream. Hypoxic CTC clusters promote a cancer stem-like phenotype in CTCs and the acquisition of a reactive oxygen species-resistant phenotype that enhances CTC survival upon reoxygenation. Oxidative stress and hypoxia drive CTCs to develop resistance mechanisms against anoikis, a form of apoptosis induced upon cell detachment from the native environment. Several other adaptations have been linked to anoikis resistance, such as increased epithelial-to-mesenchymal transition, changes in integrin profiles, oncogene activation, and overexpression of key metabolic enzymes or receptors. The inclusion of carcinoma-associated fibroblasts in CTC clusters provides a metastatic advantage to tumor cells, likely favoring anoikis resistance, transportation of nutrients, and epithelial-to-mesenchymal transition. Vascular cooption describes the mechanism by which metastatic cells preferentially grow alongside the outer surfaces of existing blood vessels. This strategy is present in more than 95% of early micrometastases within the CNS. Adhesion to vessels depends upon the adhesion of tumor cell β1 integrins to the vascular basement membrane and the establishment of microcolonies. In some organs, the presence of mesenchymal stem cells acting as pericytes at the perivascular space of the PMN mediates the extravasation of tumor cells. It has been proposed that pericytic mimicry or angiotropism is a process closely related to vascular cooption and can be seen in melanoma cells that metastasize to the brain. CTCs from lung and breast cancers that metastasize to the brain utilize the cell adhesion molecule L1CAM to move along capillaries. This movement involves the activation of YAP through interactions with β1 integrin and integrin-linked kinase (ILK). Plasmin suppresses vascular cooption by deactivating L1CAM, an axon guidance molecule utilized by metastatic cells to navigate along brain capillaries. However, serpins that inhibit plasminogen activators produced by cancer cells, including neuroserpin and serpin B2, prevent the generation of plasmin. This inhibition facilitates vascular cooption in BMs associated with lung cancer, breast cancer, and melanoma. Serpins also protect cancer cells by inhibiting the plasmin-generated FasL death signal. A lncRNA associated with breast cancer cells increased expression of ICAM1, which mediated vascular cooption by increasing tumor cells' ability to stretch over brain capillaries and extravasate into the brain parenchyma. Intravascular cell arrest in brain microvessels before extravasation has been demonstrated to create a focal hypoxic microenvironment in the PMN, leading to ischemic changes that upregulate vascular remodeling factors such as Angiopoietin-2 (Ang-2) and VEGF. Ang-2 facilitates tumor cell colonization and transmigration in the PMN and later supports stable oxygen and nutrient supply for metastatic growth. There is evidence that the BBB, known as the tightest endothelial barrier, can be modified by soluble factors secreted by tumor cells or dysregulation of the normal brain microenvironment.
Tumor-derived heparinase, for instance, can degrade the basement membrane of the BBB, facilitating tumor cell invasion into the brain. Additionally, the absence of normal astrocytes leads to the downregulation of the DHA transporter Mfsd2a, expressed by endothelial cells, causing disruption of the BBB. There are also tumor-derived extracellular vesicles that can be taken up by endothelial cells to increase the permeability of the BBB. Lung cancer cells, under the action of TGF-β, secrete exosomes carrying a lncRNA that increases the expression of MMP-2 in brain microvascular endothelial cells. MMP-2 destroys tight junctions between endothelial cells both in the lung and the brain, increasing vascular permeability, tumor cell migration, and invasion. Extracellular vesicles from breast cancer carrying miR-181c promote BBB disruption by altering actin dynamics, while extracellular vesicles containing miR-105 target the tight junction protein ZO-1 in endothelial barriers, compromising their integrity. Tumor cells must acquire specialized adaptations before brain colonization, some of which involve specific mediators for BBB crossing. For example, breast cancer cells express COX2, the EGFR ligand HBEGF, and the α2,6-sialyltransferase ST6GALNAC5, which facilitate their traversal across the BBB. ST6GALNAC5, in particular, was found to be specifically expressed only in brain-tropic metastatic cells, enhancing cooption to endothelial cells. COX2 has been linked to the upregulation of MMP-1, which can degrade components of the BBB such as Claudin and Occludin. In metastatic breast cancer cells, Klotz et al. demonstrated that semaphorin 4D (SEMA4D) regulates tumor cells' transmigration through the BBB. When SEMA4D binds to its receptor Plexin-B1 (PLXNB1) in endothelial cells, it makes them switch to a proangiogenic phenotype. This effect may also be enhanced by TAMs. Inactivating PLXNB1 has shown a shift in the immune landscape of tumor microenvironments towards an antitumor response; however, angiogenesis is not affected, since SEMA4D can bind to alternative receptors. Endothelial cells in the tumor microenvironment of BMs exhibit elevated Ki67 levels and enhanced microvascular proliferation. In contrast, the proliferation is suppressed in the presence of CD8+ T cells. Additionally, the tight-junction protein claudin-5, essential for BBB integrity, is downregulated in cancer cells located near endothelial cells, especially within the cores of BMs. This supports the hypothesis that vascular cooption plays a role in BMs colonization in regions with compromised endothelial junctions. Herrera et al. found that breast-to-brain metastasis cell lines were able to traverse an enhanced blood-cerebrospinal fluid barrier (BCSFB) while primary breast cancer cell lines could not. These findings reflected two things: firstly, cells that have previously colonized the brain must have acquired critical mechanisms to allow them to traverse CNS barriers; secondly, the preferential migration of breast cancer cells through the BCSFB may indicate it is an often-overlooked potential point of entry for tumor cells. Under normal conditions, the BCSFB in the choroid plexus exhibits greater permeability than the BBB due to its transport and secretory functions.
Chemotherapies such as paclitaxel and 5-Fluorouracil (5-FU), commonly used in breast cancer, have been demonstrated to increase brain-barrier permeability to tumor cells, especially through the BCSFB, due to upregulation of MMP-9 leading to Claudin-6 downregulation in the choroid plexus cells. MMP-9 activity in the choroid plexus cells also resulted in the release of Tau from breast cancer cells, which formed neurofibrillary tangles that further destabilized the BCSFB. Studies also have revealed that patients with parenchymal brain metastatic lesions often exhibit tumor cells in the ipsilateral blood–cerebrospinal fluid barrier. Leptomeningeal disease (LMD) occurs when tumor cells invade the leptomeningeal membrane and the CSF.
Treatment for BMs includes neurosurgical resection, radiotherapy (either stereotactic radiosurgery or whole-brain radiotherapy), and tumor-specific chemotherapy and targeted therapies. The optimal therapeutic approach for each etiology of BMs will depend on the specific molecular and genetic landscape of the primary tumor. For example, HER2-positive patients can be treated with monoclonal antibodies such as trastuzumab or pertuzumab, whereas triple-negative breast cancer BMs treatment relies on BBB-permeable chemotherapeutics, such as capecitabine, cisplatin, and temozolomide. In NSCLC BMs, tyrosine kinase inhibitors (erlotinib, gefitinib) have demonstrated good BBB penetration, and ALK inhibitors and ICIs are also available options for treatment.
In melanoma, chemotherapeutic agents have limited efficacy, which has prompted investigation of combinations of immunotherapy and targeted therapy. However, the development of resistance mechanisms limits the success of targeted therapies. Traditionally, resistance mechanisms were classified as intrinsic or acquired, but evidence has shown that the pattern of "acquired" resistance might reflect intra-tumor heterogeneity present from the start and expanded under the selective pressure of targeted therapies. Metastases, and BMs in particular, pose an extra challenge in determining targetable patterns and elucidating the development of resistance mechanisms. Over half of BMs harbor genomic alterations not found in primary tumors, prompting the need for direct BMs biopsies, which are not always accessible or feasible due to poor patient conditions. Liquid biopsies have emerged as alternatives to analyzing tumor tissue. CTCs, cell-free tumor DNA (ctDNA), and extracellular vesicles from plasma and CSF can be collected and processed. However, most published research has been retrospective and performed in small, heterogeneous patient cohorts, and methodologic techniques for processing are diverse. Translation into clinical practice must first overcome several technical barriers, including assay optimization and standardization, incorporation of liquid biopsies in more clinical trials, and creation of data biobanks to facilitate translational research. The BBB represents a physical obstacle for chemotherapeutic agents to enter the brain. The integrity of the BBB varies in brain tumors and selectively excludes molecules based on factors such as electric charge, lipid solubility, and molecular weight. Disruption of the BBB using hyperosmolar mannitol has been investigated as a method to improve the delivery of large molecules, including proteins, antibodies, immunoconjugates, and viral vectors. Low-intensity pulsed ultrasound with systemic microbubbles can increase BBB permeability, as demonstrated with several agents in primary brain tumors. Enhanced drug delivery with nanotechnology and nanocarriers has also been extensively researched. Unfortunately, none of these methods has reached clinical feasibility yet. Resistance in melanoma BMs with BRAFV600 mutations is primarily mediated by drug efflux transporters, including P-glycoprotein (P-gp; ABCB1) and breast cancer resistance protein (BCRP; ABCG2), located at the BBB and impeding penetration into BMs. Moreover, the brain microenvironment may also exert an effect on resistance mechanisms that have not yet been fully elucidated. Immunotherapies have become a pillar of cancer therapies, and ICIs targeting the PD-1/PD-L1 pathway, such as pembrolizumab (Keytruda), nivolumab (Opdivo), and atezolizumab (Tecentriq), have been approved by the FDA to treat different primary tumors presenting with BMs. However, some patients exhibit resistance, primarily due to tumor microenvironment mechanisms such as defects in antigen presentation, cytokine signaling, presence of immune inhibitory molecules, and T cell exclusion. There is an urgent need to advance BMs therapies and improve outcomes for patients. The gap in knowledge about mechanisms for metastases to the brain is still wide, and researchers must account for tumor heterogeneity and fast evolution when developing new therapies.
Identifying more patients at risk or in the early stages of metastasis could further help researchers understand how different BMs develop and how to block their progression. Brain metastases continue to be a frequent cause of central nervous system tumors, and research has actively tried to determine the mechanisms that promote their formation. Advances in genetic and molecular sciences have allowed us to define models of the complex interactions between tumor cells, the brain microenvironment, and host adaptations. However, there is still a gap in our knowledge about the molecular underpinnings of these tumors, what makes certain cells brain-tropic, and what makes them better equipped for survival in the hostile brain microenvironment. The heterogeneity of the tumors of origin, and the countless adaptations tumor cells can undergo under different environmental pressures, make this a highly dynamic field of study and hinder the clinical applicability of study results for patients in a real-life setting. We are reaching an era in which molecular studies in medicine require the intervention of other fields of research, such as artificial intelligence, machine learning, and computational simulation, which would allow the processing of greater loads of information. These tools could help create predictive models of tumor behavior, patient prognosis, and response to therapy using easily accessible samples (i.e., circulating tumor cells in the blood and primary tumor tissue), which remains a challenge for brain tumors.
Dissolution Profiles of Immediate Release Products of Various Drugs in Biorelevant Bicarbonate Buffer: Comparison with Compendial Phosphate Buffer
922b846e-3480-4f13-8fdc-a2c6326bf7fd
11116250
Pharmacology[mh]
Dissolution tests have been widely used to assess the bioavailability (BA) and bioequivalence (BE) of orally administered drugs in drug discovery, development, and manufacturing. Recently, biorelevant dissolution media have been intensively investigated to improve the BA/BE predictability of dissolution tests. Biorelevant dissolution media should mimic gastrointestinal fluids as accurately as possible. The intestinal pH value is maintained by bicarbonate buffer (BCB). Therefore, BCB should be used for biorelevant dissolution media. BCB maintains the pH value by the following chemical equilibrium.

$$\mathrm{HCO_3^-} + \mathrm{H^+} \rightleftharpoons \mathrm{H_2CO_3} \rightleftharpoons \mathrm{H_2O} + \mathrm{CO_2} \qquad (1)$$

The reaction rate of CO2 hydration is significantly slower than that of H2CO3 dehydration. This unique property of BCB affects the dissolution rates of drug substances and products. However, phosphate buffer (PPB) has been used for many years for practical reasons. When BCB is exposed to air, the pH value rapidly increases as CO2 volatilizes from the solution. A CO2 gas supply has been used to compensate for the loss of CO2 during dissolution testing. However, this requires specialized equipment such as a CO2 gas cylinder, a gas regulator, and a pH monitor. In addition, the use of surfactants with gas bubbling causes foaming. Furthermore, gas bubbling can have an artificial effect on the precipitation of a drug (manuscript submitted). To overcome these challenges, we recently developed the floating lid method. In this method, a floating lid is placed on the surface of a BCB solution to prevent the loss of CO2. By using a floating lid, the pH increase can be kept below 0.1 pH units for several hours. The floating lid method is simple, low-cost, robust, and easy to operate. It has already been applied to various experimental conditions. Previously, the dissolution rate of several active pharmaceutical ingredients (API) has been investigated in BCB. For example, ibuprofen was reported to show a slower dissolution profile in BCB than in PPB. In the case of salt form APIs, BCB and PPB differently affected the precipitation of the corresponding free forms at the dissolving particle surface. However, the number of tested drugs was limited. In addition, raw drug substances have been used in these studies. Therefore, it has been unclear to what extent BCB affects the dissolution profiles of immediate-release (IR) products of various drugs. The purpose of the present study was to clarify the extent to which the dissolution profiles of IR products of various drugs differ between biorelevant BCB and compendial PPB. In this study, the dissolution profiles of IR products of 15 drugs were determined in compendial PPB and biorelevant BCB (Fig. , Tables and ). This study focused on poorly soluble ionizable drugs because the choice of dissolution media is critically important for such drugs (3 free acids, 3 free bases, 4 acid salts, 3 base salts, and 2 zwitterion salts). The Japanese pharmacopeia second fluid (JP2, phosphate buffer, 25 mM, nominal pH 6.8) was used as a compendial PPB. The pH value of BCB was aligned with the nominal pH of JP2. The bicarbonate concentration and ionic strength (I) were set to be relevant to the physiological condition (10 mM and I = 0.14 M adjusted by NaCl).
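The pH of a bicarbonate buffer in a closed system can be reasoned about with the Henderson–Hasselbalch relation applied to the HCO3-/CO2(aq) pair. The sketch below is purely illustrative, using the apparent pKa of 6.05 quoted later in the Methods: it shows the speciation that would hold pH 6.8 in an idealized closed vessel; the actual HCl charge used in this study was determined empirically, since some CO2 inevitably escapes during preparation.

```python
import math

def buffer_ph(pka, base, acid):
    """Henderson-Hasselbalch: pH = pKa + log10([base]/[acid])."""
    return pka + math.log10(base / acid)

PKA_BCB = 6.05       # apparent pKa of the CO2/HCO3- pair at 37 degC, I ~ 0.15 M (from Methods)
TARGET_PH = 6.8

# [HCO3-]/[CO2(aq)] ratio needed to hold the target pH in a closed system
ratio = 10 ** (TARGET_PH - PKA_BCB)        # ~5.6
total = 10.0                               # mM total bicarbonate (study condition)
hco3 = total * ratio / (1 + ratio)         # ~8.5 mM
co2 = total - hco3                         # ~1.5 mM

print(f"required [HCO3-]/[CO2] = {ratio:.2f}")
print(f"pH check = {buffer_ph(PKA_BCB, hco3, co2):.2f}")   # 6.80
```

Note that this idealized calculation gives a smaller acid charge than the experimentally determined one reported in the Methods (2.26 mM HCl after dilution), which is consistent with the need to titrate the medium empirically rather than by calculation alone.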
Materials

Azilsartan, carvedilol, ciprofloxacin hydrochloride hydrate, dipyridamole, dantrolene sodium hydrate, furosemide, febuxostat, haloperidol, losartan potassium, lurasidone, montelukast sodium hydrate, and pioglitazone hydrochloride were purchased from Tokyo Chemical Industry Co., Ltd. Tamoxifen citrate and tosufloxacin tosylate hydrate were purchased from FUJIFILM Wako Pure Chemical Corporation. Raltegravir was extracted from the tablet. The manufacturers of the IR products are summarized in Table .

Methods

A compendial paddle dissolution apparatus (NTR-6200A; Toyama Sangyo Co., Ltd., Osaka, Japan) was used for the dissolution test. The pH value was measured using the 9615S-10D Standard ToupH electrode (HORIBA Advanced Techno, Co., Ltd., Kyoto, Japan). The floating lid method was used to maintain the pH value of BCB (pH 6.8, 10 mM bicarbonate, I = 0.14 M (adjusted by NaCl)). The floating lid (foamed styrol, thickness: 5 mm) was designed to cover more than 95% of the surface area of a buffer solution. A NaHCO3 solution (490 ml, 10.2 mM, NaCl 0.13 M, prewarmed at 37°C in a container for at least 30 min) was added to each vessel. The temperature was maintained at 37°C. The paddle rotation speed was set to 50 rpm. An HCl solution (10 ml, 0.113 M) was added to adjust the pH value to pH 6.8 (this HCl concentration (0.00226 M after dilution) was experimentally determined to give pH 6.8 after adding to the NaHCO3 solution). The solution surface was covered by a floating lid. As a compendial phosphate buffer solution, the Japanese pharmacopeia second fluid (JP2, 25 mM, phosphate buffer, nominally pH 6.8) was used. The actual pH value of JP2 was reported to be pH 6.9. The other conditions were the same as for the BCB buffer, including the use of the floating lid. One tablet or capsule was added to each vessel (except for tosufloxacin (two tablets)). At specified time intervals, a small volume of sample (1.0 ml) was withdrawn and immediately filtered (hydrophilic PVDF, φ = 4 mm, pore size: 0.22 µm, Merck). The first few droplets were discarded to avoid filter adsorption. The filtrate was diluted with an appropriate medium, and the concentrations of the drugs were measured by UV absorbance (except for lurasidone) (UV-1850, Shimadzu Corporation, Kyoto, Japan, or SH-9500lab, CORONA ELECTRIC, Ibaraki, Japan). The detection wavelength, the concentration range, the number of data points, and the determination coefficient (r²) of standard curves are summarized in Supplemental Material Table . The absence of UV interference from the excipients was confirmed by comparing the UV spectrum of a pure API and its product. The concentration of lurasidone was quantified by HPLC (Shimadzu Prominence LC-20 series; column: Zorbax Eclipse Plus C18, 2.1 × 50 mm, 3.5 μm; mobile phase: acetonitrile/0.1% trifluoroacetic acid (40:60); flow rate: 0.6 mL/min; temperature: 40°C; detection wavelength: 320 nm; injection volume: 10 μL). The dissolution test was performed in triplicate. The area under the dissolution curve (AUDC) was calculated by the trapezoidal method. The β values before adding a drug were calculated using the pKa values at a similar ionic strength reported in the literature (BCB: pKa = 6.05 (for I = 0.15, at 37°C); PPB: pKa = 6.9 (for I = 0.05, at 37°C)).
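Two quantities named here, the AUDC and the buffer capacity β, are straightforward to compute. The sketch below is a minimal illustration, assuming the Van Slyke expression for a monoprotic buffer with the water terms neglected near neutral pH; the authors' exact computation is not shown, but with the pKa values quoted above this expression reproduces the β values reported later (approximately 3.0 mM/pH for BCB and 14 mM/pH for JP2).

```python
def audc(t, c):
    """Area under the dissolution curve by the trapezoidal method."""
    return sum((c[i] + c[i + 1]) / 2.0 * (t[i + 1] - t[i]) for i in range(len(t) - 1))

def buffer_capacity(c_total_mm, pka, ph):
    """Van Slyke buffer capacity (mM/pH) of a monoprotic buffer,
    neglecting the small [H+] and [OH-] terms near neutral pH (an assumption)."""
    h, ka = 10.0 ** -ph, 10.0 ** -pka
    return 2.303 * c_total_mm * ka * h / (ka + h) ** 2

print(f"BCB: {buffer_capacity(10, 6.05, 6.8):.1f} mM/pH")   # ~3.0
print(f"JP2: {buffer_capacity(25, 6.9, 6.9):.1f} mM/pH")    # ~14 (actual pH 6.9)

# AUDC of a hypothetical profile: % dissolved at 0, 10, 20, 30, 45, 60 min
print(audc([0, 10, 20, 30, 45, 60], [0, 35, 60, 78, 90, 95]))
```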
The dissolution profiles of the IR products are shown in Figs. (free acids), (free bases), (acid salts), (base salts), and (others). The drug concentration was reported as a free form.
The initial pH values of BCB and PPB were pH 6.80 ± 0.05 (mean ± S.D., N = 45) and 6.96 ± 0.02 (mean ± S.D., N = 45) (Supplemental Information Table ). The final pH values of BCB were in the range of 6.80 to 7.00, except for dantrolene Na H2O (pH 7.10 ± 0.00) (mean ± S.D., N = 3 (the same hereinafter)), tamoxifen citrate (pH 7.17 ± 0.03), lurasidone HCl (pH 7.22 ± 0.05), raltegravir K (pH 7.11 ± 0.01), ciprofloxacin HCl (pH 6.76 ± 0.03), and tosufloxacin tosylate H2O (pH 6.60 ± 0.05). The final pH values of PPB were in the range of 6.90 to 7.10, except for tosufloxacin tosylate H2O (pH 6.81 ± 0.02). The ratio of AUDC (AUDCr), the f2 values, and the maximum or minimum ratio of dissolved% at a time point (D%ratio) are summarized in Table . In 4/15 cases, AUDC was not equivalent (AUDCr < 0.8 or > 1.25-fold). When comparing D%ratio at each time point, a difference of < 0.8 or > 1.25-fold was observed in 11/15 cases. The f2 value was not used because it is usually valid only for complete dissolution cases. The dissolution rate of free-form drugs tended to be slower in BCB than in PPB, except for azilsartan. In the case of salt-form drugs, marked differences were observed in the initial dissolution and supersaturation profiles in many cases, such as dantrolene Na H2O, montelukast Na H2O, tamoxifen citrate, and tosufloxacin tosylate H2O. However, no trend was observed in the differences. The difference in the dissolved drug concentrations became smaller after 60 min, except for lurasidone HCl. This study compared for the first time the dissolution profiles of IR products of a wide range of poorly soluble ionizable drugs in biorelevant BCB and compendial PPB (JP2). The results demonstrated that a significant portion of IR products showed marked differences in the initial dissolution and supersaturation profiles between biorelevant BCB and JP2. Theoretically, the equilibrium pH and solubility of a drug should be similar for BCB and PPB when the buffer capacity is sufficient. In this study, the pH value of the bulk phase after dissolution testing and the dissolved drug concentration after 60 min were similar between BCB and JP2. These experimental results were in good agreement with the theory. However, due to the slow hydration rate of CO2, the neutralization rate of BCB is much slower than that of PPB. The differences in the initial dissolution and supersaturation profiles would be attributed to the pH value at the surface of drug particles, which can be affected by the neutralization rate of buffer species. In the case of free-form drugs, theoretically, the dissolution rate should be slower in BCB than in JP2 because the particle surface pH is more slowly neutralized by BCB. The results of this study were qualitatively in good agreement with the theory. Theoretical quantitative prediction of the particle surface pH of BCB requires information on the particle size of the drug substance, which is often not available for commercial products. Agitation conditions may also affect the particle surface pH because the effective pKa of BCB is a function of the hydrodynamics. The difference in particle surface pH between BCB and PPB can be more than 0.5 pH units, resulting in a threefold or greater difference in dissolution rates. Theoretical quantitative predictions of surface pH and dissolution rate are important and should be investigated further in the future.
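The comparison metrics used above are easy to reproduce. The sketch below implements the standard f2 similarity factor (reported in the table but, as noted, not relied upon here) together with the simple ratio metrics, AUDCr and D%ratio. The profiles are hypothetical placeholders, not data from this study.

```python
import math

def f2(ref, test):
    """Similarity factor f2 between reference and test dissolution profiles
    (percent dissolved at matched time points); f2 >= 50 is conventionally 'similar'."""
    n = len(ref)
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / n
    return 50 * math.log10(100 / math.sqrt(1 + msd))

def audc_ratio(audc_a, audc_b):
    """AUDCr; values outside 0.8-1.25 were treated as non-equivalent here."""
    return audc_a / audc_b

def d_ratios(a, b):
    """Per-time-point dissolved% ratios (D%ratio); differences <0.8 or >1.25 are flagged."""
    return [x / y for x, y in zip(a, b)]

# Hypothetical profiles (% dissolved at 10, 20, 30, 45, 60 min)
bcb = [25, 48, 70, 86, 94]
ppb = [35, 60, 78, 90, 95]
print(f"f2 = {f2(ppb, bcb):.1f}")
print([f"{r:.2f}" for r in d_ratios(bcb, ppb)])
```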
In the case of salt-form drugs, free-form precipitation can occur either on the particle surface during particle dissolution or in the bulk phase after particle dissolution, both of which are affected by the neutralization rate of buffer species. Theoretically, the dissolution of salt particles should be faster in BCB than in JP2 because pH neutralization at the particle surface should be slower in BCB than in JP2. In addition, the bulk-phase precipitation of a free form should be slower in BCB than in JP2 (manuscript submitted). Therefore, in theory, more significant supersaturation should be observed in BCB than in JP2. However, the results of this study were not simply predicted by the above-mentioned theory, suggesting that more complex mechanisms exist for the dissolution and precipitation of a salt-form drug. In addition, an IR product contains various excipients that potentially affect the precipitation of a drug. In this study, the buffer capacity (β) was different between BCB and PPB (BCB: β = 3.0 mM/pH; PPB (JP2): β = 14 mM/pH). In addition, the ionic strength (I) was also different. In our previous study, even when β and I were aligned between BCB and PPB, the dissolution profiles of salt drugs were markedly different. The buffer capacity of the compendial PPB of USP is twice that of JP2. Therefore, a more significant difference could be observed between biorelevant BCB and USP PPB. In the case of salt-form APIs, supersaturation was observed except for raltegravir K and ciprofloxacin HCl H2O. In the case of raltegravir K, visual observation of the dissolution test suggested that tablet disintegration was likely to be the rate-limiting process. Komasaka et al. previously reported that the dissolution rate of the raltegravir K tablet was affected by pre-exposure to an acidic pH environment due to its conversion to an insoluble free acid form. Therefore, a pH shift process should be coupled with BCB for further investigation. For ciprofloxacin HCl H2O, it was not clear why no supersaturation was observed in the dissolution profile. In conclusion, a significant portion of IR products showed differences in the dissolution profiles between biorelevant BCB and compendial PPB, especially for salt-form drugs. With the floating lid method, BCB is as simple and easy to use as PPB. The advantages of BCB have been demonstrated for the evaluation of enteric-coated products, sustained-release products, and amorphous solid dispersions. The results of this study suggest that BCB is recommended as a first choice for biorelevant dissolution tests of IR products.
Relationship between the expressions of DLL3, ASC1, TTF-1 and Ki-67: First steps of precision medicine at SCLC
eefa4199-d5f6-4825-b1d3-30ce8ce53688
11468345
Anatomy[mh]
Small cell lung cancer (SCLC) is an aggressive type of lung cancer that accounts for approximately 15% of lung cancer cases annually. Patients with SCLC have a poor prognosis, with a 5-year survival rate ranging from 3 to 27%, depending on the stage of the disease. SCLC is a highly proliferative lung cancer that is not amenable to surgery in most cases due to rapid growth, early spread, and a tendency to develop drug resistance and relapse. Genes and genomic/proteomic modifications related to the development, plasticity, and progression of SCLC, which could serve as possible biomarkers for targeted therapy of this deadly disease, have already been described: TP53/RB1 (98%/91%), TP73 (13%), PI3K3CA (15%), PTEN (9%), FGFR1 (8%), Hedgehog signaling pathway (80%), MYC (20%), KMT2D (13%), and NOTCH1 signaling (25%). By July 19, 2022, 107 patients had received tarlatamab in dose exploration (0.003 to 100 mg; n = 73) and expansion (100 mg; n = 34) cohorts. The median progression-free and overall survival were 3.7 months (95% CI, 2.1 to 5.4) and 13.2 months (95% CI, 10.5 to not reached), respectively. Exploratory analysis suggests that selecting for increased DLL3 expression can increase clinical benefit. On May 16, 2024, the US Food and Drug Administration (FDA) granted accelerated approval to tarlatamab-dlle for extensive-stage small cell lung cancer (ES-SCLC) with disease progression on or after platinum-based chemotherapy. A phase 2 study was conducted on subjects with relapsed/refractory SCLC after two or more prior lines of treatment. Efficacy, safety, tolerability, and pharmacokinetics of tarlatamab were evaluated in 99 patients enrolled in DeLLphi-301, an open-label, multicenter, multi-cohort study. Tarlatamab, administered at a 10-mg dose every two weeks, showed antitumor activity with durable objective responses and promising survival outcomes in patients with previously treated SCLC. No new safety signals were identified. Tarlatamab (AMG 757) is the first DLL3-targeting bispecific T-cell engager therapy: a molecule that binds both DLL3 and CD3, activating a patient's T cells to attack DLL3-expressing tumor cells and leading to T-cell-mediated tumor lysis. DLL3 is a protein that plays a critical role in the Notch signaling pathway, which is involved in cell differentiation, proliferation, and apoptosis. In humans, DLL3 is predominantly expressed in neuroendocrine tissues. It is aberrantly expressed on the surface of up to 80–85% of SCLC cells and minimally expressed in normal tissues, making it a compelling therapeutic target, as in other neuroendocrine carcinomas. It is expressed both in the cytoplasm and in the membrane of SCLC cells. Despite the growing body of knowledge on the role of DLL3 in lung cancer, there remains a significant gap in our understanding of the actual expression rate of DLL3 when assessed by immunohistochemistry (IHC) in routine clinical laboratories. In a real-world study of DLL3 as an SCLC therapeutic target, positive DLL3 expression (defined as ≥25% of tumor cells) was identified in 895/1050 (85%) patients with one specimen and evaluable DLL3 expression; 719/1050 (68%) patients had high DLL3 expression (defined as ≥75% of tumor cells).
There was no significant difference in median overall survival from SCLC diagnosis for evaluable patients with non-missing data based on DLL3 expression (negative DLL3 expression (n = 139), 9.5 months; positive DLL3 expression (n = 747), 9.5 months; all evaluable patients (n = 893), 9.5 months). With the advent of anti-DLL3 therapies, studies of the interrelationships between different molecules are still needed, such as thyroid transcription factor-1 (TTF-1), which is involved in the differentiation of lung epithelial cells and is commonly expressed in high-grade lung and neuroendocrine adenocarcinomas, or the Ki-67 protein (MKI67), a cellular marker for proliferation found in the nucleus of cancer cells that are actively growing and dividing. These relationships could provide insights into the tumor biology of SCLC and rare tumors such as large-cell neuroendocrine carcinomas (LCNEC), which represent 1–3% of all primary lung cancers, and potentially guide treatment decisions and prognostication in a clinical setting. In this study, the qualitative and quantitative protein expression of DLL3, ASCL1, TTF-1, and Ki-67 was retrospectively analyzed by digital pathology in patients with SCLC, and this expression was linked to median overall survival using a multivariate mathematical model.

Patients' characteristics

Sixty-four cases were included (mean age 71 ± 10), with a balanced gender distribution (32 females and 32 males, ). The mean age for males was 72 ± 10 years, and for females, 70 ± 10 years (p = 0.460). Most patients were older than 60 (54 patients, 84.4%), as depicted in the population pyramid . The majority of cases were biopsied from lung parenchyma, either by transbronchial/endobronchial biopsies or transthoracic CT-guided procurement (56 cases, 90.3%). Four cases were pleural biopsies, and two were metastases in lymph nodes. Chromogranin was positive in 70.3% of cases, with 15.4% showing 1+ intensity, 19.2% 2+ intensity, and 23.1% 3+ intensity. Synaptophysin was positive in 83.8% of cases, with 24.0% showing 1+ intensity, 20.0% 2+ intensity, and 32.0% 3+ intensity. CD56 was positive in 94.4% of cases, and its intensity was not evaluated. All cases had at least one classical neuroendocrine marker positive and conventional small-cell carcinoma morphology. Fifteen patients (18%) were followed by palliative care and did not receive chemotherapy. All remaining patients included in the study received standard chemotherapy for small-cell neuroendocrine carcinoma. The follow-up was complete until the patients died from the disease. The mean overall survival was 77.5 days, with a 95% confidence interval of 36 to 116 days and a maximum of 557 days.

TTF-1 expression

While TTF-1 is not usually considered a conventional marker for diagnosing small cell carcinoma in most centers, it is positive in most of them. In the current cohort, it was positive in 33 cases (52%) and negative in 31 cases (48%). The percentage of tumor cells with TTF-1 averaged 39.6% (SD 43.4). Eleven (11, 18.3%) had 100% TTF-1 positivity. When assigned a histologic score of percentage versus intensity of positivity, cases had a median H-score of 37.30 (SD 110.08). Twenty-one cases (21, 33%) had an H-score of 150 or higher .

Ki67 expression

Ki67 was positively expressed in all cases diagnosed with small cell carcinoma due to its high proliferation rate. In the cohort, Ki67 showed positive expression in 100% of the cases, with an average percentage of positive cells of 73.73% (SD: 15.80).
The case with the highest expression exhibited an immunohistochemical positivity of 97.20%, while the case with the lowest expression showed positivity in 40% of the cells. ASCL1 expression Tissue was available for the study of ASCL1 in 64 cases. The H-score had a median of 57.08 (SD 54.55). Only two cases (3%) were completely negative for this antibody, while the majority (55 cases, 86%) had an H-score of 10–150 and were considered low expressors. Seven cases (11%) were considered high expressors. Only one case (1.4%) had an H-score of more than 250. DLL3 expression DLL3-positive SCLC tissue was used as a positive control, and DLL3-negative lung adenocarcinoma tissue was used as a negative control. As per previously published data, the staining pattern was cytoplasmic and membranous. Forty-six cases (72%) had some expression of DLL3, while 18 (28%) were negative. Nineteen cases (30%) expressed DLL3 in less than 50% of tumor cells, while 27 (42%) expressed it in more than 50% of cells. When the H-score was calculated, only five cases (8%) scored above 150. Association between DLL3, ASCL1, TTF-1 and Ki-67 immunoexpression Both TTF-1 and DLL3 were evaluated by the percentage of positive cells and by H-score. ASCL1 was evaluated by H-score. As expected, ASCL1 expression was strongly associated with synaptophysin positivity (p = 0.003). ASCL1 expression did not differ with regard to age, Ki-67 positivity, chromogranin, or TTF-1 expression. DLL3 expression was strongly associated with TTF-1 positivity ( , and ). Tumors that were positive for TTF-1 had higher DLL3 expression, both by percentage of positive cells and by H-score (p < 0.001). The positive correlation between the biomarkers TTF-1 and DLL3 is demonstrated in . Survival and multivariate analyses The mean overall survival of all patients included in the study was 77.5 days. Age, sex, and all conventional neuroendocrine markers did not correlate with overall survival. Using Cox regression, epidemiological variables, as well as TTF-1 and DLL3 expression, were tested. Negative TTF-1 expression was a marker of worse prognosis in patients with SCLC compared to positive expression (p = 0.014). DLL3 and ASCL1 did not show any correlation with overall survival.
Precision medicine is an innovative approach to disease prevention and treatment that considers differences in people's genes, environments, and lifestyles to target the right therapies to the right patients at the right time. In oncology, precision medicine uses genetic and molecular information, tailoring treatment to a single patient's profile, optimizing efficacy, and minimizing toxicities. This approach is revolutionizing lung cancer diagnosis and treatment.
However, despite being widely adopted, its benefit in clinical practice remains to be fully elucidated. SCLC continues to carry a poor prognosis, with a five-year survival rate of 3.5% and a 10-year survival rate of 1.8%. The pathogenesis remains unclear, and no known predictive or diagnostic biomarkers exist. Delta-like ligand 3 (DLL3) is an inhibitory Notch ligand that is highly expressed in SCLC and has been identified as a potential therapeutic target. DLL3 expression is not commonly found in normal adult tissues, which makes it an attractive target for anti-cancer therapies. High DLL3 expression has been associated with poor prognosis in SCLC patients, suggesting its potential role as a prognostic biomarker. However, the prognostic significance of DLL3 expression in SCLC remains controversial, with some conflicting studies indicating a potential association between high DLL3 expression and overall survival. Therapeutic strategies targeting DLL3, such as antibody-drug conjugates (ADCs), bispecific T-cell engagers, and chimeric antigen receptor (CAR) T-cell therapies, are under development. Rovalpituzumab tesirine (Rova-T), an ADC targeting DLL3, has been evaluated in clinical trials, although it did not meet the expected outcomes in phase III trials. Other investigational therapies, including bispecific T-cell engagers like tarlatamab (AMG 757) and CAR T-cell therapies targeting DLL3, have shown promise in preclinical models and early clinical trials. The study conducted by Furuta et al. provides critical insights into the expression of these proteins in surgically resected SCLC samples. The study reveals a high prevalence of DLL3 and ASCL1 expression in SCLC patients, with ASCL1 expression detected in 83% of the evaluated samples. These findings agree with our paper, which showed 90% positivity of ASCL1. This high expression rate aligns with DLL3's potential role in the disease's pathology and supports the development of DLL3-targeted therapies. The positive correlation between DLL3 and ASCL1 expression further underscores their interconnected roles in SCLC's molecular landscape, suggesting that interventions targeting these pathways could offer new avenues for treatment. Their study also explores the prognostic implications of DLL3 and ASCL1 expression, finding no direct association with patient survival. Similarly to their findings, in our cohort we did not find any direct association of ASCL1 or DLL3 with overall survival, although we found a relation between positive TTF-1 expression and a better survival rate (quantified by percentage of positive cells). These findings may be important in establishing practical protocols for scoring these immunohistochemical studies and selecting patients who may benefit from targeted therapies. Similarly, another recent study demonstrated that high DLL3 and ASCL1 expression was associated with certain morphological features in LCNECs and SCLCs, and in early-stage patients without metastasis who underwent chemotherapy, high expression of both DLL3 and ASCL1 was linked to a better prognosis and a lower risk of death. Furthermore, DLL3 expression in LCNEC was associated with the expression of ASCL1 and neuroendocrine markers, suggesting a relationship between DLL3 expression and the neuroendocrine profile of these tumors.
These findings suggest that DLL3 and ASCL1 are not only correlated in their expression but may also be involved in the neuroendocrine phenotype of lung neuroendocrine tumors and could serve as potential therapeutic targets or prognostic indicators in these diseases. Specifically, ASCL1-positive/DLL3-high tumors may represent a subgroup of SCLC with unique vulnerabilities to DLL3-targeted therapies. Further research is warranted to validate these findings and explore the clinical utility of ASCL1/DLL3 co-expression as a predictive biomarker for therapeutic response. In adenocarcinomas, TTF-1 has been shown to play a significant role in the pathogenesis of lung cancer, being expressed in 69–80% of lung adenocarcinoma cases. Clinically, TTF-1 expression is a diagnostic tool for identifying the histological type of lung cancer, distinguishing primary lung adenocarcinomas from metastatic forms, and acting as a prognostic indicator. Studies have shown that patients with positive TTF-1 expression exhibit longer overall survival (OS) in stage I lung adenocarcinoma. Small cell lung cancer (SCLC), typically characterized as an undifferentiated cancer, exhibits TTF-1 positivity in 80–90% of cases, indicating a function beyond epithelial cell differentiation. Evidence of TTF-1 expression in non-pulmonary small cell cancers, such as aggressive small cell prostate cancer, supports its association with neuroendocrine differentiation and aggressive tumor behavior rather than with characteristics of terminal respiratory unit cells. Of interest in our samples was the association of TTF-1 score with DLL3 expression, suggesting a potential role for TTF-1 as a differentiation and mechanistic marker, much more than only a diagnostic one. The significant prevalence of DLL3 and ASCL1 expression in early-stage SCLC, as highlighted by Furuta et al. and corroborated by our findings, underscores their potential as therapeutic targets and prognostic biomarkers. Our study further expands upon this, revealing a correlation between positive TTF-1 expression and improved survival outcomes, emphasizing the importance of standardized scoring protocols for these immunohistochemical markers. This may enable the identification of patient subgroups that could particularly benefit from DLL3-targeted therapies, potentially personalizing treatment approaches for SCLC. Additionally, the intriguing association between TTF-1 expression and DLL3, as observed in our study, suggests a multifaceted role for TTF-1 beyond its established diagnostic utility. This finding may have implications for understanding the molecular underpinnings of SCLC and could inform the development of novel therapeutic strategies. Further investigations into the mechanistic link between TTF-1 and DLL3 could uncover new avenues for intervention in this aggressive disease. Despite the promising insights and potential therapeutic implications highlighted in our study, there are several limitations that should be acknowledged. First, our study's retrospective design may introduce selection bias, as it relies on previously collected data and samples, which may not be representative of the broader SCLC patient population. Additionally, the relatively small sample size limits the generalizability of our findings and may impact the statistical power to detect significant associations or differences in survival outcomes.
Furthermore, our study primarily focuses on the expression of DLL3 and ASCL1 in small SCLC samples, which may not fully capture the heterogeneity of SCLC, especially given that most cases are inoperable or treated with different modalities. The lack of longitudinal data to track changes in marker expression over time and in response to treatment is another limitation. Finally, the interpretation of immunohistochemical scoring can be subjective, and inter-observer variability might affect the consistency of the results, even with the scoring protocols attempted here. Future studies should aim to include larger, more diverse cohorts and incorporate prospective designs to validate these findings and enhance their clinical applicability. In summary, our findings and corroborative studies present a compelling case for the significance of TTF-1 in the clinical landscape of small-cell lung cancer. The evidence of a better survival rate in patients with high expression of these proteins, despite the generally poor prognosis associated with SCLC, indicates their potential utility as biomarkers and as focal points for targeted therapy. Future research should continue to explore the mechanistic pathways influenced by these proteins, emphasizing the development of therapeutic strategies that can effectively exploit these targets. By advancing our understanding of DLL3 and ASCL1 within the broader context of lung cancer pathology, we can hope to refine diagnostic criteria and enhance the specificity and efficacy of treatment protocols, ultimately leading to improved survival rates and quality of life for patients afflicted by this formidable disease. Cohort description This observational, cross-sectional, and analytical study had a cohort of sixty-four sequential patients recruited between May 2018 and November 2022. Biopsies were analyzed in a reference thoracic pathology laboratory. Data were collected from electronic medical records in the respective hospital units where each patient was diagnosed and followed up. Inclusion criteria were defined as adults over 18 years of age with a transbronchial biopsy of a primary SCLC tumor confirmed by histological analysis, sufficient material for the study of HE, DLL3, ASCL1, TTF-1, and Ki-67, and clinical follow-up to death. Exclusion criteria were age under 18 years, insufficient material for IHC analysis, lack of clinical data, or loss of clinical follow-up. This protocol was reviewed and approved by the Research Ethics Committee at the Federal University of Ceará (Protocol CAAE 59399322.9.0000.5049). The study was conducted under the Good Clinical Practice Guidelines and the Helsinki Declaration. Immunohistochemistry Each formalin-fixed, paraffin-embedded tumor tissue block was sectioned at 2 µm. Hematoxylin and eosin (HE) staining was performed. Slides were stained with an anti-DLL3 specific monoclonal antibody (dilution 1:100; clone EPR22592-18; cat. no. ab229902; Abcam, Cambridge, UK); an anti-ASCL1 polyclonal antibody (dilution 1:200; cat. no. PA5-77868; Invitrogen, Massachusetts, USA); an anti-TTF-1 specific monoclonal antibody (prediluted; clone 8G7G3/1; cat. no. 790-4398; Ventana Medical Systems, Inc.); and an anti-Ki-67 specific monoclonal antibody (prediluted; clone 30-9; cat. no. 790-4286; Ventana Medical Systems, Inc.). We used the Ultraview DAB IHC Detection Kit (cat. no. 760–500; Ventana Medical Systems, Inc.), which includes a blocking reagent and a secondary antibody conjugated with polymer.
Staining was performed using standard automated immunostaining equipment (Ultraview Benchmark Ventana; Ventana Medical Systems, Inc., Tucson, AZ, USA) according to the manufacturer's protocol. Chromogranin, synaptophysin, and Ki-67 had been previously performed for the diagnosis and were retrieved from the pathology files. IHC slides had a positive control tissue: glioblastoma for DLL3, neuroendocrine tumor for ASCL1, thyroid tissue for TTF-1, and tonsil tissue for Ki-67. Positive and negative control slides were included in each assay. The slides were analyzed by optical microscopy to evaluate the positive and negative controls. Digital pathology analysis ASCL1, DLL3, TTF-1, and Ki-67 slides were scanned using the KFBIO scanner equipment at 40x magnification. The SVS files were then imported into QuPath® software v. 0.5.0 as "DAB Brightfield," which allowed sample analysis. The files were loaded onto a project in QuPath software (QuPath source code, documentation, and links to the software download are available at https://qupath.github.io ). QuPath's segmentation feature can detect thousands of cells, identify them as objects in a hierarchical manner below the annotation or cases, and measure cell morphology and biomarker expression simultaneously. QuPath has recently been used as annotation software in deep learning to distinguish small-cell from large-cell neuroendocrine lung cancer. For each slide, the stain vectors were recalibrated with "Estimate stain vectors" using automatic calibration. Positive cell detection was performed by nucleus evaluation according to default parameters; the nucleus staining intensity threshold was set at 0.1, and the cell expansion was left at the default of 5 micrometers, which is the default measurement for cell cytoplasm expansion from the nucleus until it meets the neighboring cell. The DAB intensity threshold was standardized according to each marker. For DLL3, the "thresholdCompartment" was set to "Cytoplasm: DAB OD mean," and for ASCL1, Ki-67, and TTF-1 the "thresholdCompartment" was set to "Nucleus: DAB OD mean." For H-score analysis, the intensity threshold parameters were set with three threshold points: "thresholdPositive1" was set to 0.2, "thresholdPositive2" to 0.4, and "thresholdPositive3" to 0.6. The analysis was performed for each marker, and the results were obtained as positive/negative, percentage of positive cells, and H-score. depicts an example of DLL3 expression in a tumor showing the deployment of the QuPath algorithm to assess cells with zero, low, moderate, and high expression, which is color coded and curated by an experienced pathologist. Snapshots of representative images were exported to ImageJ for storage and illustrations ( and ), exported in high quality using TIFF extensions with 300 dpi and at least 5 inches in the shortest axis. Scoring criteria for biomarkers For DLL3, ASCL1, and TTF-1, IHC scoring was performed in two ways. First, the staining was semi-quantitatively evaluated using the immunohistochemical H-score (HS) method by an experienced thoracic pathologist and also by using a freely available algorithm in QuPath. The H-score method was applied based on the extent and intensity of staining (1, 2, or 3) multiplied by the percentage of positive cells (proportion score), with a potential score ranging from 0 to 300. The H-score is a classic semi-quantitative method used in pathology to assess the intensity and distribution of immunohistochemical staining in tissue samples.
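To make this pipeline concrete, the following is a minimal Python sketch of how an H-score and the percentage of positive cells can be derived from a per-cell measurement table exported from QuPath. The 0.2/0.4/0.6 intensity cut points and the measurement column names mirror the settings described above; the export file name and table layout are assumptions for illustration, not the exact script used in this study.

```python
import pandas as pd

# Intensity cut points mirroring the QuPath settings above:
# DAB OD >= 0.2 / 0.4 / 0.6 for 1+ / 2+ / 3+ cells.
THRESHOLDS = (0.2, 0.4, 0.6)

def h_score(dab_od, thresholds=THRESHOLDS):
    """Return (H-score 0-300, % positive cells) from per-cell DAB ODs."""
    s = pd.Series(dab_od, dtype=float).dropna()
    if s.empty:
        raise ValueError("no detected cells")
    t1, t2, t3 = thresholds
    pct_1 = ((s >= t1) & (s < t2)).mean() * 100  # weak (1+)
    pct_2 = ((s >= t2) & (s < t3)).mean() * 100  # moderate (2+)
    pct_3 = (s >= t3).mean() * 100               # strong (3+)
    return pct_1 + 2 * pct_2 + 3 * pct_3, pct_1 + pct_2 + pct_3

# Hypothetical usage with a per-cell detection table exported from QuPath:
# cells = pd.read_csv("case_001_detections.tsv", sep="\t")
# hs, pct = h_score(cells["Cytoplasm: DAB OD mean"])   # DLL3
# hs, pct = h_score(cells["Nucleus: DAB OD mean"])     # ASCL1 / TTF-1 / Ki-67
```

The banding logic is the same one QuPath's positive cell detection applies internally; recomputing it from the exported detections simply makes the calculation auditable, and the cut-offs described below can then be applied to the returned values to reproduce the categorical calls.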
It is particularly valuable in research for evaluating the expression levels of various proteins within specific cells or tissue regions, which can be crucial for diagnosing and determining the prognosis of diseases, especially cancer. It has been used in several organ systems and cancer types, including oral squamous cancer, kidney cancer, breast cancer and lung cancer . Over the past decade, several studies have developed automated algorithms for the quantitative assessment of IHC images. However, significant efforts are still needed to improve quantification accuracy and efficiency . More recently, several articles have automated the use of H-scoring to increase accuracy and reproducibility, using the QuPath software, as in the current study . The second way was the analysis of the percentage of positive cells (0–100%). The cut-off of negative and positive, low and high, was according to each protein expression profile and was used as described in previous studies . DLL3 and TTF-1 were considered positive if at least 1% of tumor cells had cytoplasmic and/or membranous on DLL3 and nuclear staining on TTF-1. Both proteins were considered low expression if positive in less than 50% of tumor cells, while high expression was assumed if the protein was positive in more than 50% of tumor cells. ASCL1 was considered positive if at least 10% of tumor cells had nuclear staining. ASCL1 – H-score patients ≤10 were considered negative, H-scores of 11–149 were considered low expressed, and 150–300 were considered high expressed. Chromogranin and synaptophysin were considered positive if at least 5% of tumor cells had cytoplasmic and/or membranous staining. In addition, a semi-quantitative scoring of 1, 2, and 3 intensity of staining was estimated by at least one pathologist. CD56 staining was considered only as positive when shown a membranous staining, or negative . The most recent 2021 WHO classification identifies the three markers indicative of neuroendocrine (NE) differentiation: chromogranin A, synaptophysin, and CD56. In addition, it mentions INSM1 as a potential new marker . Determining positivity for these markers lacks defined thresholds, necessitating consideration of morphological features. Chromogranin and synaptophysin are genuine indicators of NE differentiation, as they bind to epitopes present in neurosecretory granules or synaptic vesicles. In SCLC, focal positivity for chromogranin A in some tumor cells is diagnosed . Statistical analysis Univariate descriptive statistics were performed on the recollected data. Normal variables were reported by their mean and standard deviation, and non-normal counterparts by median and interquartile range; count data were reported by absolute frequency and percentage. Overall survival analysis included univariate Kaplan-Meier curves using different biomarker strata according to DLL3, ASCL1, and TTF-1 presence, expression levels, and gender. Multivariate analysis included a correlation plot over the numerical variables and Cox regression analysis using a backstep variable selection strategy. This observational, cross-sectional, and analytical study had a cohort of sixty-four sequential patients recruited between May 2018 and November 2022. Biopsies were analyzed in a reference thoracic pathology laboratory. Data were collected from electronic medical records in the respective hospital units where each patient was diagnosed and followed up. 
E‐cadherin staining in the diagnosis of lobular versus ductal neoplasms of the breast: the emperor has no clothes
860bca4f-38a4-4c4c-88d4-7097ed237ee1
11707503
Anatomy[mh]
Invasive ductal carcinoma (IDC) and invasive lobular carcinoma (ILC) account for the vast majority of breast cancer cases seen in clinical practice. Although therapeutic approaches to both tumour types have yet to be segregated, classic ductal and lobular carcinomas are distinct in their biologic, histologic, and clinical characteristics. ILC is notoriously subtle, multifocal, and shows unusual patterns of metastatic spread. It is also associated with a higher incidence of positive margins, more advanced stage at diagnosis, and poorer survival outcomes, especially in long‐term follow‐up compared to IDC. , , The preinvasive lesions of ILC and IDC, namely, lobular carcinoma in situ (LCIS) and ductal carcinoma in situ (DCIS), stand even more evidently apart, as they incur significantly different therapeutic approaches. It follows that the need for accurate discrimination between lobular and ductal differentiation is critical, especially as our understanding of the molecular aspects and disease progression of lobular neoplasia deepens and treatment methods advance. Historically, lobular carcinoma was introduced to the lexicon of breast pathology by the pioneers Fred Stewart and Frank Foote, who described LCIS as a subtype of breast neoplasia in The American Journal of Pathology in 1941, during their tenure at Memorial Hospital in New York. They defined LCIS by emphasizing the distinguishing histological findings of uniformity and diminished cell cohesion characteristic of both LCIS and ILC. Their detailed and remarkably accurate histological survey of lobular neoplasia's microscopic features survives intact to this day. During the early 1990s, over half a century later, significant breakthroughs were made as researchers identified, for the first time, distinctive differences in the expression of E‐cadherin in lobular versus ductal carcinomas, cementing Stewart and Foote's seminal observations. , , Since this remarkable and foreshadowing marriage of histology and molecular biology, the absence of E‐cadherin has become a hallmark of lobular breast cancer, with many pathologists adopting it as a definitive diagnostic criterion in their daily practice. Although our understanding of the different histomorphological variants of ILC has continued to improve, confusion in the diagnosis of lobular neoplasia persists. It has been the authors' experience that much of this confusion is due to an overreliance on E‐cadherin as a sole, categorical diagnostic marker, and on difficulties pathologists face when interpreting aberrant/discordant patterns of E‐cadherin staining such as attenuated, focal, fragmented, granular, and beaded membrane staining, or displacement of staining to the cytoplasm with a diffuse or perinuclear Golgi‐type pattern (Figure ). Undoubtedly, E‐cadherin and the cadherin–catenin complex play a pivotal role in the biology and morphology of most lobular carcinomas, causing the neoplastic cells to lose cell–cell cohesion, primarily—though not exclusively—through the absence of E‐cadherin protein expression. , However, concordance between morphology and immunohistochemistry (IHC) is far from perfect. For example, some tubule‐forming tumours can be negative for E‐cadherin expression, while classic lobular tumours can retain E‐cadherin immunoreactivity. , , Also, many tumours show variable staining intensity and different staining patterns within the same tumour, raising questions about criteria and thresholds for interpretation and diagnosis , (Figures and ). 
The lack of robust guidelines and the prevailing binary interpretation of E‐cadherin by pathologists are nicely demonstrated by Choi et al., who showed improved concordance among pathologists when E‐cadherin staining is straightforward, and increased interobserver variability when E‐cadherin shows unconventional staining patterns. Even when E‐cadherin staining improves agreement among pathologists in cases with unequivocal staining patterns, sole reliance on the stain completely discounts the contribution of morphology to the diagnosis, therefore potentially compromising accuracy. In this review we delve deeper into the role of E‐cadherin and the underlying mechanisms of its altered expression, and attempt to provide both a theoretical framework and a practical approach for addressing the challenges associated with E‐cadherin staining in the diagnosis of ductal and lobular carcinomas of the breast. Finally, we hope that E‐cadherin's flaws will be revealed to the reader, just like Hans Christian Andersen's famed folktale emperor who stood naked among the crowds, displaying a magnificent new attire that was not actually there. E‐cadherin is the product of the CDH1 gene located on the long arm of chromosome 16 (16q22.1). It is a crucial component of cellular adhesion in breast and other tissues, and consists of five extracellular domains along with a transmembrane domain and a cytoplasmic domain. Each extracellular (N‐terminal) domain features binding sites for calcium, which are essential for the stability of the cell–cell adhesion. The intracellular (C‐terminal) domain forms a connection to the actin cytoskeleton through a variety of catenins, including α‐, β‐, γ‐, and p120 catenin, collectively forming the intricate catenin–cadherin complex (Figure ). At least three members of this transmembrane complex (E‐cadherin, β‐catenin, and p120) are commercially available as IHC stains, E‐cadherin and β‐catenin being the most widely offered and p120 being the least commonly utilized by pathologists. Both extra‐ and intracellular domains of the members of this complex lie in close proximity to the cell membrane, and therefore exhibit a strong membranous positivity by IHC in normal mammary epithelial cells. E‐cadherin interacts with various signalling pathways, such as the Wnt pathway, and indirectly serves as a tumour suppressor, as its loss disrupts cell–cell adhesion, facilitating invasion, metastasis, and the progression of malignancy. In normal and neoplastic breast epithelium with a ductal phenotype (including DCIS and IDC), components of the cadherin–catenin complex typically demonstrate strong expression along the cell membrane. Conversely, in LCIS and ILC, loss of E‐cadherin is characteristic, and believed to be an early oncogenic event causing the buildup of p120 catenin in the cytoplasm. This leads to interaction with diverse effector molecules and pathways, including the Rho/ROCK signalling pathway, contributing to resistance against anoikis and promoting tumour progression. Anoikis, a specialized form of programmed cell death, is activated by the disruption of cell contact with the extracellular matrix and other cells, particularly through cadherin‐mediated adhesion. Breast cancer employs stromal alterations to counter anoikis, evading this critical mechanism of cell death. Below, we briefly discuss the various mechanisms of abnormal expression and function of the E‐cadherin molecule and its relationship to tumour histomorphology.
CDH1 gene mutations Inactivation of the CDH1 gene through—mostly somatic—mutations is a commonly reported occurrence, offering a simple and satisfactory explanation for E‐cadherin loss in a significant number of ILC cases. In most instances, mutations in E‐cadherin have been found to co‐occur with loss of heterozygosity (LOH) at the chromosomal locus of E‐cadherin, emphasizing the role of E‐cadherin as a tumour suppressor gene. In a study of 127 ILC cases, 80 had CDH1 mutations, and among these, 89% exhibited co‐occurring LOH. In addition, germline mutations in the CDH1 gene have been observed in some cases of ILC and hereditary gastric cancer. Although CDH1 mutations are more commonly associated with lobular breast cancer, nonlobular tumours can also harbour loss‐of‐function (LOF) mutations in the CDH1 gene. This highlights the need to consider CDH1 mutation as a potential contributor to tumour development in breast cancer subtypes beyond just lobular cancer. Hypermethylation of the CDH1 promoter Promoter hypermethylation in specific tissues primarily impacts gene expression, particularly in regions with a high density of CpG sites. Due to this, it has long been suggested that hypermethylation of the CDH1 promoter could potentially explain the lack of E‐cadherin expression in ILC. While numerous studies indicated the presence of CDH1 methylation in ILC with wide variations in percentages, other research has contradicted these findings by demonstrating an absence of methylation. Ciriello et al. reported that no DNA hypermethylation was detected in the CDH1 promoter in ILCs, suggesting that the loss of E‐cadherin may not be influenced by epigenetic mechanisms. Similarly, Alexander et al. observed no evidence of CDH1 hypermethylation in ILC. Bücker et al. suggested that this discrepancy may be due to the extensive application of the nonquantitative, highly sensitive methylation‐specific PCR (MSP) method in studies that showed the presence of methylation, potentially leading to false‐positive outcomes. In contrast, Ciriello et al. and Alexander et al. utilized high‐resolution quantitative techniques, employing the 450 and 850k arrays, respectively, for DNA methylation profiling. However, in a recent study by Dopeso et al., promoter hypermethylation was found in 62.5% of ILC cases without CDH1 genetic alteration. Most of these cases exhibited concomitant 16q loss along with negative E‐cadherin staining.
This suggests the hypothesis that aberrant E‐cadherin expression may be linked to epigenetic changes such as methylation. However, further studies are required to precisely elucidate this association. In IDC, it has been described that hypermethylation of CDH1 can occur concurrently with reduced or negative expression of E‐cadherin. Reported frequencies of this mechanism are, however, very high (up to 94% of cases), and the reliability and significance of this finding are unclear. Transcriptional repression and stimulation of epithelial to mesenchymal transition (EMT) EMT is a dynamic cellular process that enables tumour cells to shed their epithelial characteristics and adopt mesenchymal traits, giving them the ability to infiltrate surrounding tissues and establish distant metastases. This transformation is mediated by the interplay of molecular regulators that orchestrate the dissolution of cell–cell adhesions, the remodelling of the extracellular matrix, and the acquisition of migratory and invasive potential. The process is driven by a network of transcription factors, such as Snail, Zeb, Slug, and Twist, which coordinate the expression of genes that promote EMT and suppress epithelial characteristics. Snail, Zeb, and Twist collaborate to repress E‐cadherin by attaching to specific DNA motifs within the E‐boxes of the CDH1 promoter and activating mesenchymal genes, including vimentin and N‐cadherin, enabling tumour cells to detach from their primary tumour and embark on their metastatic path. Such mechanisms are likely in part responsible for the unfavourable prognosis linked to loss of E‐cadherin in IDC, and for the greater likelihood that aggressive triple‐negative breast cancers (TNBC) lack E‐cadherin. In contrast, loss of E‐cadherin is observed in the early stages of tumour development in ILC, and its relationship to EMT is uncertain. In a murine model of ILC, created through conditional CDH1 mutation and p53 knockout, the sole absence of E‐cadherin did not induce EMT, and the tumour cells remained epithelial in their appearance. However, cases of ILC may exhibit a partial EMT phenotype, as evidenced by the downregulation of E‐cadherin, activation of Twist with nuclear localization, and activation of multiple mesenchymal markers, albeit with cells retaining epithelial features and expressing epithelial markers, while lacking N‐cadherin expression. Aberrant glycosylation Glycosylation is a posttranslational modification mechanism that is crucial to the folding, trafficking, and stability of E‐cadherin at the cell membrane. E‐cadherin can undergo posttranslational modifications through oxygen (O)‐ and nitrogen (N)‐glycosylation. Impaired O‐glycosylation of E‐cadherin within its cytoplasmic domain hinders surface exocytosis, impeding cell adhesion, and contributes to enhanced cell migration and metastasis in breast cancer by diminishing surface E‐cadherin levels. In contrast, N‐glycosylation occurs in the extracellular domains, and abnormal N‐glycosylation leads to instability of the E‐cadherin protein by altering the composition of adherens junctions in breast cancer. Zhang et al. found that in breast cancer the E‐cadherin protein level was downregulated when N‐glycosylation was altered and GCNT2, the gene encoding glucosaminyl (N‐acetyl) transferase 2 (I‐branching enzyme), was upregulated. Also, studies have revealed an association between glycosylation and EMT. Wen et al.
demonstrated that, in breast cancer, N‐glycosylation of epithelial cell adhesion molecules could regulate EMT through MAPK and PI3K/Akt pathways. While aberrant glycosylation is suggested as a potential mechanism for E‐cadherin downregulation in breast cancer, further investigation is required to elucidate the role of this mechanism in E‐cadherin loss, particularly in lobular breast cancers. Enzymatic cleavage Numerous enzymes known to cleave the E‐cadherin protein, encompassing matrix metalloproteinases (MMPs), a disintegrin and metalloproteinases (ADAMs), and neutrophil elastase, have been documented as part of another posttranslational modification mechanism. These enzymes specifically act on the extracellular (N‐terminal) domain of E‐cadherin, contributing to the regulatory network governing its proteolytic processing. These enzymatic actions may potentially alter E‐cadherin staining patterns, leading to aberrant or lost expression. Table summarizes the mechanisms of alteration or loss of E‐cadherin in both ductal and lobular carcinoma by frequency.
Despite well‐known biological and prognostic differences, invasive ductal and invasive lobular carcinomas tend to be lumped into the same oncologic and therapeutic baskets. A handful of potential management‐related differences remain, however. These include ILC's greater likelihood of bilateral and clinically occult tumours potentially prompting additional imaging modalities and procedures, its more aggressive clinical behaviour despite a higher percentage of oestrogen receptor‐positive disease and significantly lower overall Oncotype DX scores, and finally, unusual patterns of metastatic spread to uncommon sites such as the genital tract, the gastrointestinal system, and the bone marrow, among others, requiring greater alertness to detect unconventional metastatic deposits. It is worth noting that the unique characteristics of ILC have recently gained more clinical attention through numerous ongoing clinical trials ( ClinicalTrials.gov ) highlighting the need for more appropriately tailored management protocols than the ones currently available. Similarly, DCIS and LCIS, although associated with a comparable relative risk for the future development of breast cancer, differ significantly as far as biology and clinical management are concerned. Classic LCIS (not including florid and pleomorphic LCIS, the treatment of which may be more in line with DCIS) is notoriously multifocal and bilateral and is managed with hormonal therapy without the need for surgical intervention, at least by standard of care guidelines.
Conversely, a diagnosis of DCIS implies an increased local risk for the development of invasive carcinoma, which leads to surgical excision to negative margins and adjuvant radiation therapy with or without hormonal therapy. Differentiating ductal from lobular in situ neoplasia therefore carries major implications for the medical management of the affected patient. The presumed dichotomous nature of ductal versus lobular proliferations is not, in our opinion, as straightforward as much of the prevalent teaching suggests. Morphologic and immunohistochemical overlap is arguably significantly more common in everyday practice than in theory and may have molecular roots in the long arm of chromosome 16, where deletions are documented in the entire spectrum of low‐grade mammary neoplasia spanning columnar cell change, lobular neoplasia, atypical ductal hyperplasia, and low‐grade DCIS. It therefore follows that many of the difficulties and controversies associated with distinguishing invasive ductal from invasive lobular carcinoma reside in the frequent and probably underreported discrepancy between morphologic and IHC findings, and the wide and overlapping morphologic spectrum combining both ductal and lobular features (tubule formation and single infiltrating tumour cells) within the same tumour. In cases where a tumour exhibits classic lobular patterns of infiltration or a classic ductal phenotype, IHC is not typically requested. However, in our experience reflex testing for E‐cadherin has become much more commonplace in many practice settings, and the resulting discordant findings between morphology and IHC have led to inconsistent and likely erroneous interpretations of the nature of the neoplasm. It is well established that the majority of ductal and lobular tumours will exhibit expected patterns of staining for E‐cadherin (strongly positive membranous staining in ductal tumours and completely negative staining in lobular tumours), with E‐cadherin showing a good discriminatory ability (AUC of 0.85) according to a study by Sivadas et al. However, a small yet significant percentage of cases will show one of the four unconventional scenarios depicted in Table, namely, lobular tumours with aberrant or positive staining and ductal tumours with aberrant or negative staining. When such scenarios arise, pathologists may ignore a tumour's histology and classify it solely based on their interpretation of the E‐cadherin stain. The interpretation itself may also fall prey to considering an attenuated/fragmented pattern of staining as evidence of convincing positivity. This was demonstrated in a cohort study by Grabenstetter et al., who revealed that misdiagnosis predominantly occurred in cases showing positive E‐cadherin expression. Among the 47 cases exhibiting either strong or aberrant E‐cadherin positivity, a significant 51% (24 cases) underwent a diagnostic change upon re‐review of the slides based on morphological features. These cases were initially classified as IDC or invasive mammary carcinoma with mixed ductal and lobular features, but were later reclassified as ILC by the expert breast pathologists conducting the study. Complicating matters, E‐cadherin staining can be variable within the same tumour, sometimes in conjunction with morphology and often independently of it. Although CDH1 mutation is the most common cause of E‐cadherin dysfunction and lobular differentiation, CDH1 mutation and E‐cadherin expression are not always concordant (Table ).
In a study conducted by Grote et al., it was found that among 128 breast cancer cases with CDH1 mutation, seven tumours (5.5%) were ILC exhibiting positive albeit fragmented E‐cadherin immunoreactivity, and four tumours (3.1%) were nonlobular tumours with completely negative E‐cadherin expression. In another study, conducted by Derakhshan et al., rare instances of nonlobular breast cancers were identified, exhibiting biallelic genetic alterations coupled with either LOH or homozygous deletions in CDH1. The researchers performed IHC on five out of seven such cases. Among these, three out of five exhibited absent E‐cadherin, and two out of five showed aberrant E‐cadherin staining. Despite the LOF mutations in CDH1 and the subsequent loss of E‐cadherin, along with cytoplasmic localization of p120 and reduced expression of β‐catenin, the distinctive histological features characteristic of classic ILC were not evident in these nonlobular breast cancers with CDH1 biallelic genetic alterations. Although the underlying reasons for this absence remain unknown, it is plausible that compensatory mechanisms involving upregulation of other epithelial adhesion molecules mitigated the effects of CDH1 LOF in these tumours. This unexpected observation suggests that there might be additional morphologically ductal tumours characterized by undetected negative or aberrant E‐cadherin staining, which have not been tested due to their histologically evident ductal differentiation. Moreover, some cases of otherwise classic ILC exhibit tubular elements, prompting questions about how rigid architectural structures can form in the absence of E‐cadherin. Interestingly, Christgen et al. demonstrated that the switch from E‐cadherin to P‐cadherin, a marker encoded by CDH3 and expressed in mammary myoepithelial cells, offers a molecular explanation for tubule formation in CDH1‐deficient ILC. Also, an in vitro study of human breast cancer cells revealed that in the presence of P‐cadherin and the absence of E‐cadherin, cells can aggregate, demonstrating preserved cohesiveness. This may be one of several compensatory mechanisms providing the tumour cells with alternative means of adherence and aggregation. In terms of ILC cases without CDH1 genetic alteration, all scenarios are possible, with negative, aberrant, or positive E‐cadherin expression. This brings up two key issues: first, how can E‐cadherin staining be lost in the absence of a CDH1 genetic alteration, as discussed above, and second, what explains lobular histology when E‐cadherin is positive and CDH1 is intact? This apparent paradox is partly amplified by the overreliance on E‐cadherin as definitional in the lobular versus ductal distinction. It is well known that lobular morphology can be caused by disruptions in the cadherin–catenin complex without abnormalities in E‐cadherin itself. For example, studies have shown that inactivation of the α‐catenin gene ( CTNNA1 ) can cause a lobular phenotype in the presence of E‐cadherin. Dopeso et al. described new deleterious fusions affecting CTNND1 (p120) as a cause of a lobular phenotype. The study also found that deleterious fusion genes and a truncating mutation affecting AXIN2, which plays a crucial role in various cellular pathways ( Wnt, TGFβ, and Hippo ) that regulate cell adhesion, survival, and differentiation, can result in a lobular phenotype with enhanced cell migration and resistance to anoikis.
Another potential source of misdiagnosis when relying on E‐cadherin is the variability in the E‐cadherin antibodies used for pathological diagnosis across different laboratories. Data sheets for most antibodies lack detailed information on epitopes, leading to the hypothetical possibility that some antibodies target the N‐terminus while others interact with the C‐terminus. If alterations occur specifically in either the N‐terminus or the C‐terminus, the antibody may still produce a positive E‐cadherin result, even if the whole molecule is dysfunctional, leading to a lobular phenotype. For instance, Yasui et al. identified N‐terminal deficiency as a novel mechanism for the loss of E‐cadherin function in ILC. Although the mechanism behind this finding is unknown, E‐cadherin may exhibit positive staining in these cases because the antibody utilized for E‐cadherin detection may target the preserved C‐terminal region. Additionally, α‐, β‐, γ‐, and p120 catenins may appear normal, potentially leading to a misdiagnosis of IDC (Figure ). In cases of mixed ductal and lobular cancers, the complexity escalates further. If only a portion of E‐cadherin, be it the N‐terminus or the C‐terminus, is altered, E‐cadherin immunoreactivity may hypothetically be either preserved or lost in both the ductal and lobular components. An additional potentially confounding factor is the frequently reported diminished intensity of E‐cadherin staining in myoepithelial cells. In ductal proliferations with a prominent myoepithelial component, an E‐cadherin stain may be misinterpreted as evidence of a neoplastic lobular proliferation. The duality between lobular and ductal neoplasia is ambiguous, as is the role of E‐cadherin in this context. Loss of E‐cadherin is neither necessarily equivalent to loss of cellular cohesion, nor is it the only mechanism by which loss of cell cohesion can occur. This, however, does not change the prevalent tendency for pathologists to default to a dichotomous interpretation of both histology and IHC in breast neoplasms. We agree with the WHO guidelines that the definition of lobular carcinoma is fundamentally morphologic, meaning that E‐cadherin expression cannot serve as a definitive marker for distinguishing between lobular and ductal differentiation. However, E‐cadherin staining has become ubiquitous, and is unlikely to be abandoned by practicing pathologists who would have to revert to haematoxylin and eosin as the gold standard of diagnostic classification, especially in the common instances of overlapping morphologic features. With that in mind, we propose that, in cases exhibiting classic morphology, be it ductal or lobular, reliance on E‐cadherin should be minimized if not avoided, while cases presenting with questionable morphology may be stained with a battery of E‐cadherin, β‐catenin, and p120, and interpreted descriptively rather than categorically as belonging to one type or another. This would serve as a means to offer more information to clinicians as these tumours are further studied and better understood. Discordant results would therefore be described as is, providing additional evidence for the complexity and nonclassical nature of the stained neoplasm. This approach may open the door to a deeper understanding and a more robust definition of these tumours, which would potentially lead to improvements in patient management, particularly as treatment options become more tailored to specific molecular subtypes.
Additionally, it may facilitate further research by highlighting the complexity of these breast cancer subtypes, all the while addressing the current inconsistency in nomenclature within the medical community. As far as in situ lesions are concerned, the assessment of E‐cadherin expression is more justifiable due to the distinct management approaches for patients with classical LCIS versus DCIS. However, this context is also subject to certain caveats similar to the ones described in invasive carcinoma. These include discrepancies between histology and IHC (LCIS with positive E‐cadherin staining and DCIS with absent E‐cadherin staining), misinterpretation of lobular cells as being positive for E‐cadherin when they are admixed with benign E‐cadherin‐positive ductal cells, and cases of florid and pleomorphic LCIS, which may better be managed like DCIS but could be diagnosed as classic LCIS based on E‐cadherin loss. In other words, in both in situ and invasive proliferations, we believe that the distinction between ductal and lobular proliferations should largely rely on histologic features, and the use of IHC, if deemed necessary by the pathologist, should be approached with caution, and would probably benefit from a combination of all three available members of the cadherin–catenin complex rather than a strict reliance on E‐cadherin, with the potential future inclusion of P‐cadherin and other potential contributing members of this complex interaction. In conclusion, the value of E‐cadherin staining is questionable, and its practice potentially misleading. The link between negative staining and loss of function is less robust than conventional wisdom seems to suggest. Awareness of the subtleties in the morphologic and IHC patterns of these breast tumours and lower thresholds for descriptive rather than absolute diagnoses are, in our opinion, the path towards fewer disagreements and confusion among pathologists, clinicians, and patients. This is especially true as the many forthcoming efforts at more tailored therapeutic approaches for lobular carcinoma would almost certainly suffer from faulty and poorly reproducible classification schemes.
European expert consensus on a structured approach to circular stapling anastomosis in minimally invasive left‐sided colorectal resection
The performance of intestinal anastomosis is one of the most critical steps in colorectal surgery, and complications associated with anastomosis can have devastating consequences for the patient's clinical, functional and oncological outcomes. Complications also create a significant burden on the healthcare system. Circular stapling is commonly performed in left‐sided colorectal anastomosis (sigmoid colectomy, high and low anterior resection) for benign and malignant conditions. It is used in open, laparoscopic and robotic surgeries. A recent review of a healthcare database with 13 167 patients who underwent left‐sided colorectal resection showed that 22.7% of patients had circular anastomotic complications. Another study reported knowledge gaps in many surgeons' understanding of the safe use of various commonly used medical devices, including staplers. A high incidence of technical errors involving the use of circular staplers has also been reported. Consequently, there is a need for surgical strategies and technologies to standardize and quality assure anastomotic techniques to lower the risk of anastomotic complications. Emerging evidence has shown a strong relationship between the intraoperative performance of the surgeon operator and patient outcomes. Our endeavour as a surgical community is to improve intraoperative performance, which we believe will have a considerable impact on patient safety and operative outcomes. One scientific approach to improving intraoperative performance is proficiency‐based progression (PBP) simulation training. PBP begins by deconstructing the procedure or skill being focused on into explicitly defined (binary) performance metrics, which are then validated. The PBP approach to training makes skill acquisition more objective, transparent and fair. During training, trainees are given metric‐based feedback on their performance, which is explicit, constructive and formative. In a recent systematic review of 12 prospective randomized and blinded clinical studies (PBP‐trained versus traditionally trained surgeons), PBP‐trained surgeons demonstrated significantly fewer performance errors (a 60% reduction). Our overarching goal was to improve training in the use of circular stapling devices in minimally invasive left‐sided colorectal anastomosis using PBP methodology. This first part of our project was to develop and objectively define performance metrics that characterize a reference approach to the application of circular stapling devices in left‐sided colorectal anastomosis during minimally invasive operations (i.e. laparoscopic and robotic), and to obtain face and content validity through a consensus meeting (i.e. a Delphi panel) of very experienced and expert colorectal surgeons (senior consultants with >10 years' colorectal practice). The principle of metric development and stress testing (face and content validation) for PBP training has been described in detail previously. This approach was applied when developing the circular stapling anastomosis metrics for minimally invasive left‐sided colorectal anastomosis and is described below. Metrics Group The Metrics Group consists of three experienced colorectal surgeons (AW, GB, ST) with a special interest in minimally invasive surgery, a senior behavioural scientist and an education–training expert (AGG), and a research fellow who is specialized in metrics development for surgical procedures (RF). Input was sought from device engineers who specialize in circular stapling devices.
Circular stapling anastomosis metrics development A detailed task analysis and deconstruction process was used to deconstruct a reference approach to the use of circular stapling anastomosis for minimally invasive left‐sided colorectal procedures into small, nonoverlapping performance units. Published written guidelines, video teaching materials, the manufacturer's instructions for use and access to 10 anonymized unedited minimally invasive left‐sided colorectal operations using circular stapling anastomosis performed by surgeons with different levels of experience supported the metrics development and procedure characterization process. The goal was to characterize a 'reference' approach to circular stapling anastomosis used in minimally invasive left‐sided colorectal operations. A reference procedure is assumed to be a straightforward and uncomplicated guide for trainees in learning the optimum performance of these procedures. The phases and steps are the same for female and male patients undergoing the anastomosis part of the minimally invasive left‐sided colorectal resection. For the 'reference procedure' there are agreed criteria for patient selection and procedure‐specific factors (Table ). A one‐day preliminary face‐to‐face planning meeting, three face‐to‐face meetings for metrics identification and definition, and the metric stress test were conducted. Videoconferences (a total of 5 h) using Zoom (San Jose, CA, USA) and email exchanges were used to complement face‐to‐face meetings for further clarification and definition of the metrics. At the beginning of the metrics development the Metrics Group agreed on the following definitions:
Performance metrics: units of observable behaviour which together constitute a stepwise description of a reference approach to a procedure.
Procedural phase: a group or series of integrally related events or actions that, when combined with other phases, make up or constitute a complete operative procedure.
Step: a component task, the series aggregate of which forms the completion of a specific procedure.
Error: a deviation from optimal performance.
Critical error: a major deviation from optimal performance, which is likely to cause harm to the patient or compromise the safe completion of the procedure.
The metrics, therefore, consist of procedural phases involved in a minimally invasive left‐sided colorectal anastomosis. Each phase comprises specific steps required for accomplishment. The importance of the metrics approach in defining these phases and steps is that they are explicit and unambiguous. The procedural step either occurred or did not occur and can be scored as such by an external reviewer with high reliability. Similarly, procedural errors and critical errors were defined in association with particular steps within different phases of the procedure. For errors, behaviours exhibited by the operator may not necessarily in and of themselves lead to a bad outcome or an event with more serious consequences, but their enactment sets the stage or increases the probability for a more serious event to occur or detracts from the efficient and possibly safe execution of the desired procedure. In contrast, a 'critical error' is a more serious occurrence and represents operative performance that could either jeopardize the outcome of the procedure or lead to significant iatrogenic damage. Figure illustrates an example of a procedural phase characterized by circular stapling anastomosis in minimally invasive left‐sided colorectal procedures.
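Because each step and error is scored in this binary fashion, agreement between two reviewers can be checked mechanically. The sketch below is purely illustrative (the scores and the number of metrics are invented, not taken from the study): it computes the simple agreement proportion commonly reported in PBP work, plus Cohen's kappa as a familiar chance‐corrected alternative.

```python
# Illustrative sketch (not study code): reliability of binary metric scoring.
# Two hypothetical reviewers each mark, for one video, whether every step
# occurred (1) or not (0); error metrics would be scored the same way.
from typing import List

def percent_agreement(r1: List[int], r2: List[int]) -> float:
    """Proportion of metrics on which the two reviewers give the same score."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1: List[int], r2: List[int]) -> float:
    """Chance-corrected agreement for two raters scoring binary metrics."""
    n = len(r1)
    p_obs = percent_agreement(r1, r2)
    p1, p2 = sum(r1) / n, sum(r2) / n          # each rater's rate of '1' scores
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)      # agreement expected by chance
    return (p_obs - p_exp) / (1 - p_exp)

# Invented scores for 10 metrics on a single video
reviewer_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
reviewer_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]

print(f"agreement = {percent_agreement(reviewer_a, reviewer_b):.2f}")  # 0.90
print(f"kappa     = {cohens_kappa(reviewer_a, reviewer_b):.2f}")       # ~0.78
```

In the PBP literature the reliability figure quoted (the >0.8 gold standard below) is typically a simple agreement proportion; kappa is shown here only as a common chance‐corrected companion measure.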
In addition to the metrics, valuable knowledge and principles of the operation were compiled, such as the mechanics and science of anastomosis, to facilitate the learning process; these formed the didactic component for the learner during the training process. Once the Metrics Group had defined the metrics, they were then used to score five unedited anonymized circular stapler anastomosis parts of the minimally invasive approach for left‐sided colorectal resection performed by different surgeons with various levels of experience. Scoring was performed by the members of the Metrics Group independently. Any difference in the scoring was discussed in order to identify discrepancies in interpretation or ambiguities in the metric definition. Based on this process, and if agreed upon, changes were made in the metrics, which facilitated the scoring agreement. This process was repeated for each video until the Metrics Group was satisfied with the metrics and they could be scored with a high degree of reliability (i.e. inter‐rater reliability >0.8, which is the internationally agreed gold standard). Metrics stress testing (face and content validation) with a modified Delphi approach Once the metrics for the circular stapling anastomosis for minimally invasive left‐sided colorectal resection had been defined and characterized, face and content validity were verified by a group of experienced colorectal surgeons. An international panel of expert colorectal surgeons was invited to join the Delphi panel to provide a more objective and independent assessment of the metrics. Informed consent was obtained from the Delphi panel members. The panel was chosen for their colorectal surgical experience and their demonstrated educational interests and commitment. The equality, diversity and inclusion principle was adhered to when selecting the Delphi panel members. Sixteen expert colorectal surgeons from nine countries, including the Metrics Group members, a nonvoting behavioural scientist and a nonvoting fellow who is familiar with metrics development in surgical procedures attended a consensus meeting in Dublin, Ireland on 23 September 2022 (Table ). A brief overview of the project and meeting objectives was presented. Background information regarding PBP training methodology, prior literature demonstrating the validity of this training approach for procedural specialties and the specific objectives of the current Delphi panel were reviewed. Each phase of the procedure, the procedural steps that were included in that phase, and the potential errors were presented. It was also explained that the associated metrics had been developed by the Metrics Group for a reference approach to circular stapling anastomosis for minimally invasive left‐sided colorectal resections. It was acknowledged that the designated reference procedure might not reflect the exact techniques employed by individual Delphi panellists, but that the operative steps presented accurately embodied the essential and key components of the procedure and 'were not wrong'. To assess the correlation of the procedural steps, errors and critical errors before and after the Delphi process, changes were analysed with the Pearson chi‐square test (IBM SPSS Statistics for Windows, version 26; IBM Corp., Armonk, NY, USA). A p‐value of <0.05 was considered statistically significant.
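Purely as an illustration of this analysis plan (the study itself used SPSS), the sketch below runs the same style of pre‐ versus post‐Delphi comparison in Python with SciPy. The per‐phase counts are hypothetical placeholders chosen only so that the totals match the figures reported in the Results; the printed statistics are therefore not the study's results.

```python
# Illustrative sketch (not study code): pre- vs post-Delphi comparison of
# metric counts. Per-phase values are hypothetical; only the totals
# (steps 32 -> 36, errors 40 -> 42, critical errors 38 -> 39) match the text.
from scipy import stats

before = {
    "steps":           [9, 12, 11],   # sums to 32
    "errors":          [12, 14, 14],  # sums to 40
    "critical_errors": [12, 13, 13],  # sums to 38
}
after = {
    "steps":           [10, 13, 13],  # sums to 36
    "errors":          [13, 14, 15],  # sums to 42
    "critical_errors": [12, 13, 14],  # sums to 39
}

# Nine paired observations: 3 metric types x 3 procedural phases
pre  = [v for counts in before.values() for v in counts]
post = [v for counts in after.values() for v in counts]

r, p = stats.pearsonr(pre, post)   # how closely post-Delphi counts track pre
res = stats.wilcoxon(pre, post)    # paired, two-sided; zero diffs dropped
print(f"Pearson r = {r:.3f} (p = {p:.4g})")
print(f"Wilcoxon statistic = {res.statistic}, p = {res.pvalue:.3f}")
```

With only nine paired values and tied differences, SciPy falls back to approximate p‐values and may warn about the small sample, which is one reason such comparisons are reported descriptively alongside the correlation.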
The ages of the panel members ranged from 34 to 65 years, and there were five female surgeons. Six panel members were heads of their respective departments and four were full professors affiliated with universities. The combined number of colorectal resections performed or supervised by the Delphi panel was more than 1500 per annum. The Metrics Group proposed three phases for the circular stapling anastomosis in minimally invasive left‐sided colorectal resection, each with a defined beginning and end (Table ). Some criteria needed to be fulfilled before the circular stapling anastomosis stage. During the Delphi meeting, the Delphi panel suggested and agreed upon two additional conditions (see section): the rectal stump should be clean and the surgeon should (have) read the instructions for use for the circular stapling device. During the Delphi meeting, four steps were added, making a total of 36 steps for the three phases of the circular stapling anastomosis (Table ). The added steps were 'Surgeons request the correct staple length and height' when using a linear stapler in the transection of the rectum (Phase I), 'Surgeons request for the correct stapler and stapler size' when using a circular stapler in the preparation of the proximal colon for anastomosis (Phase II), and 'Verify verbal communication between the surgical team members before firing the stapler' and 'Surgeon fire the stapler in a standing position (to stabilize during firing) during anastomosis' (Phase III). Modifications were made in four steps (Phases I and II) to make the steps more explicit and instructive. The Metrics Group identified 40 procedural errors in the three phases, and after the Delphi meeting the total number of procedural errors was 42 (Table ). There were 38 procedural critical errors before and 39 after the Delphi meeting (Table ). Furthermore, the numbers of procedural steps, errors and critical errors before and after the Delphi changes were highly correlated [Pearson correlation coefficient r = 0.974 (95% CI r = 0.861–0.994), p < 0.001]. On average, there were more procedural steps [before 10.7 (SD = 5.9); after 12 (SD = 6.2)] at the end of the Delphi meeting. The same was observed for errors [before 13.3 (SD = 9.3); after 14 (SD = 9.2)] and critical errors [before 12.7 (SD = 9); after 13 (SD = 8)]. When we compared these differences with Wilcoxon signed‐rank (two‐tailed) tests, none of the differences were found to be statistically significant (steps, Z = −1.633, p = 0.102; errors, Z = −0.447, p = 0.665; critical errors, Z = 0, p = 1.0). After discussion and changes to the metrics incorporated during the meeting, the metrics for circular stapling anastomosis in minimally invasive left‐sided colorectal resection received 100% consensus from the Delphi panel. Anastomotic complications are common following left‐sided colorectal resection. Among these complications, an anastomotic leak can have devastating consequences for patients' outcomes, including survival rate, cancer recurrence, permanent stoma, negative impact on bowel and sexual function and long‐term quality of life. Complications also increase the length of hospital stay and place a significant extra resource burden on healthcare institutions.
Researchers have been studying the factors associated with anastomotic complications and identifying management strategies to reduce the burden caused by these complications. The circular stapling device is commonly used in left‐sided colorectal anastomosis, in both cancer and benign conditions, but this crucial step of the procedure has not been taught in surgical training. Given that evidence suggests there are gaps in stapling knowledge and a high incidence of technical errors when using a circular stapler, there is an imperative to standardize and define structured training for this critical part of the procedure. More focus is now placed on the surgeon's skill, as evidence shows that it is strongly linked with patient outcomes. The Metrics Group has identified one scientific approach to structured training in circular stapling anastomosis in minimally invasive left‐sided colorectal resection, namely PBP simulation training. This method makes skill acquisition more objective, transparent and fair. Based on Level 1a evidence, use of the PBP method significantly reduced performance errors by 60%. Using the PBP method, we characterized the performance metrics (procedural phases, steps, errors, critical errors) for circular stapling anastomosis for minimally invasive left‐sided colorectal resection. A minimally invasive approach for left‐sided resection is widely practised, but practitioners may also find the metrics useful for the open approach. The performance metrics development process was robust and has been used with success in other disciplines. The Metrics Group consisted of three expert colorectal surgeons and individuals who specialize in the PBP methodology, including a senior behavioural scientist with more than two decades of experience in surgical training. Expert engineers working with the circular stapling device were consulted, specifically in relation to instructions for use and technical device handling. These performance metrics were scrutinized by a panel of expert colorectal surgeons from different European countries and a renowned minimally invasive expert surgeon from an academic centre in Malaysia. During a minimally invasive approach to left‐sided colorectal resection, surgeons have variations of practice when performing circular stapling anastomosis. The performance metrics presented in the Delphi meeting aimed to outline a standardized approach suitable for learners. Minor modifications were made during the Delphi meeting to make the performance metrics more explicit and instructive. Some general principles, for example stapling technologies, will be provided as didactic material to the trainees in addition to the metrics. The pre‐ and post‐Delphi metrics were highly correlated (Tables , , , ). After incorporating the changes suggested by the Delphi panel, voting was conducted at the end of the discussion of each phase. All of the procedural phases received unanimous agreement. Anastomotic complications, particularly leaks, are among the most feared complications in colorectal surgery. The anastomotic part of the procedure is performed towards the end of an operation; potentially, issues of fatigue and concentration may be introduced at this crucial part of the operation. A successful operation also depends on the skills of the operating team, not only the lead surgeon. This is important, as the introduction of the circular stapling device is often performed by more junior surgical team members.
During the Delphi meeting, the panel members recognized the knowledge gap and the training needed in the use of the circular stapling device. Some valuable additional comments were made and incorporated into the performance metrics, such as 'Surgeons request for the correct stapler and stapler size' and 'Verify verbal communication between the surgical team members before firing the stapler'. The PBP approach to characterizing these three phases of circular stapling anastomosis during a crucial part of a minimally invasive approach to left‐sided colorectal resection allows surgeons to learn the steps with explicit performance instructions about what to do and, possibly more importantly, what not to do. The PBP method affords performance assessments where the metrics are used to provide feedback to learners that is objective, transparent, explicit, constructive and formative. The errors and critical errors that were described would further enhance training. The proposed metrics are for a standard and straightforward procedure. The aim is to provide a structured stepwise approach to use of the device during this segment of the procedure. We do, however, appreciate the variety of practices; for example, when making the purse‐string for the proximal end of the colon, a purse‐string applicator can be used instead of a manual purse‐string, as detailed in our metrics. During a minimally invasive approach to left‐sided colorectal resection, circular stapling anastomosis can be broken down into procedural phases and steps, with errors and critical errors, known as performance metrics. Data from a large group of expert colorectal surgeons from Europe provided evidence to support the face and content validity of these metrics. We consider the metrics essential for developing structured training using circular stapling anastomosis in a minimally invasive approach to left‐sided colorectal resection. Further development of these metrics is vital to guide the training curriculum and assessment. Samson Tou: Conceptualization; investigation; funding acquisition; writing – original draft; methodology; validation; writing – review and editing; project administration; data curation; supervision; resources; visualization; formal analysis. Anthony G. Gallagher: Conceptualization; investigation; funding acquisition; methodology; validation; writing – review and editing; visualization; formal analysis; project administration; data curation; supervision; resources. Gabriele Bislenghi: Investigation; writing – review and editing. Rui Farinha: Investigation; methodology; writing – review and editing. Albert Wolthuis: Conceptualization; investigation; funding acquisition; methodology; validation; visualization; writing – review and editing; project administration; supervision; resources. Medtronic (Surgical Division) provided the educational grant for this study but did not influence the selection of the experts, the design and conduct of the research, data collection, analysis or the preparation of the manuscript. ST received education grants from Intuitive Foundation and Medtronic. AGG holds education research grants from Medtronic (Dublin, Ireland), AO Education Institute (Davos, Switzerland), and the Arthroscopic Association of North America (Chicago, USA) to investigate metric‐based education and training. All participants provided informed consent prior to participating in the study, and the study protocol was approved by the institutional review board at the University of Leuven.
Therapeutic experience and key techniques of tubeless percutaneous nephrolithotomy
Percutaneous nephrolithotomy (PCNL) is a key milestone in the development of endoscopic urology. Compared to traditional open stone surgery, PCNL offers advantages such as minimal trauma, reduced bleeding, rapid recovery, and high stone-free rates. PCNL removes stones by creating percutaneous renal access, followed by the placement of a nephrostomy tube and a ureteral stent postoperatively. To ensure a high stone-free rate, early access tracts were often large in diameter (> 26F), leading to a higher rate of complications and increased patient discomfort. In this context, the concepts of miniaturized access and tubeless PCNL have been introduced. However, two cases of severe extravasation in tubeless PCNL have sparked controversy over this technique. With the accumulation of experience and advances in technology, tubeless PCNL is increasingly being accepted and attempted by more urological surgeons. Despite the risks associated with tubeless PCNL, its excellent performance in terms of reducing hospital stay, alleviating patient pain, and promoting quick recovery makes it a hot topic among endourological surgeons. Previous experience has shown that strict patient selection is essential to minimize complications associated with tubeless PCNL. However, there are currently no definitive guidelines or consensus on tubeless PCNL in clinical practice. This study combines insights from previous studies and clinical practice to summarize the criteria for patient selection in tubeless PCNL. Additionally, we summarize several key technical improvements implemented at our institution, aiming to promote the widespread adoption of tubeless PCNL and enhance the treatment experience for patients. Statements, case selection and patient information This retrospective study was approved by the Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology, affiliated Union Hospital. All research was performed in accordance with relevant guidelines/regulations, and we confirmed that informed consent was obtained from all participants and/or their legal guardians. All research was performed in accordance with the Declaration of Helsinki. Patients were divided into the tubeless percutaneous nephrolithotomy (PCNL) group and the conventional PCNL group according to whether or not a nephrostomy tube was ultimately placed. All 40 patients underwent surgery performed by the same surgeon in the Department of Urology at Wuhan Union Hospital between December 2023 and April 2024. During this period, 20 patients underwent tubeless PCNL, while the remaining 20 patients underwent conventional PCNL on the same day or in the same week as the tubeless PCNL cases. We analyzed the clinical data of these patients, including general patient information, complete blood count, blood biochemistry, urine culture, kidney-ureter-bladder (KUB) X-ray, renal computed tomography (CT), and renal ultrasound results. Surgical procedure After anesthesia and preparation of the kidney, the patient was placed in the prone position. A Chinese one-shot dilation technique was employed under color Doppler guidance to establish a working channel (typically 20F/22F), with successful establishment indicated by the ability to aspirate urine. Following channel establishment, a 10.5Fr ureterorenoscope (KARL STORZ) was introduced, and a Holmium:YAG laser (Lumenis) was used for lithotripsy. Upon completion, residual calculi were detected using Hitachi color Doppler ultrasound.
If no residual fragments, or only very few small fragments (diameter < 4 mm) and no calculus likely to cause obstruction, were detected, lithotripsy was considered complete. Subsequently, a 5F double-J stent was placed, followed by a decision regarding whether to place an 18F/20F nephrostomy tube. On the first postoperative day, the urethral catheter was removed and the nephrostomy tube was clamped. If there was no fever or urinary leakage, the nephrostomy tube was removed 12 to 24 h after clamping. Anesthesia method The majority of patients underwent general anesthesia for the surgery, while some patients received regional anesthesia. The specific procedure for regional anesthesia was as follows: patients did not undergo retrograde intubation or urinary catheterization. Half an hour before the surgery, patients received an intramuscular injection of 100 mg tramadol and 25 mg dexmedetomidine. Intraoperatively, a mixture of 1% lidocaine and 0.5% ropivacaine was administered via subcutaneous infiltration (approximately 10 mL) at the puncture site for regional anesthesia. During the surgery, dexmedetomidine (5 mg) was administered intravenously, and ondansetron was given prophylactically to prevent nausea and vomiting. The remaining technical procedures did not differ from the conventional method. Kidney preparation All included patients underwent kidney preparation using stimulated diuresis technology. Upon arrival in the operating room, patients were placed in the prone position and received an intravenous infusion of normal saline (500 to 1000 mL), followed by furosemide (0.5 mg/kg) to facilitate diuresis. The dilation of the renal pelvis was measured using ultrasound and typically reached peak values within 6–15 min. Following this, kidney preparation was complete. Color Doppler-guided puncture and intrarenal fold-line puncture technique All included patients underwent puncture procedures guided by color Doppler imaging. Traditionally, the center of the renal pelvis had been considered the ideal avascular puncture point (Fig. A, Supplementary video 1). However, vascular variations are common, and the center of the renal pelvis does not always indicate an avascular area (Fig. B, Supplementary video 2). Therefore, individualized puncture pathways were designed based on color Doppler ultrasound. The vascular distribution of the target calyx was examined by initially setting the color Doppler blood flow velocity scale at 15–17 cm/s. If vascular density was low or moderate, avascular areas were identified and selected as the puncture pathway (Supplementary video 3). In cases with abundant vessels in the calyx, avascular areas were further investigated by resetting the blood flow velocity scale to 25–27 cm/s. For a few patients with significant renal artery variations, finding an optimal straight puncture pathway was challenging. Therefore, an intrarenal fold-line puncture technique was utilized under color Doppler guidance. Under these conditions, the puncture needle entered the target calyx along a fold line to avoid blood vessels (Supplementary videos 4, 5). Chinese one-shot dilation technique for channel establishment All enrolled patients underwent channel establishment using the Chinese one-shot dilation technique. Initially, an 18 G Chiba needle was inserted into the collecting system under color Doppler guidance. Successful puncture was confirmed by the ability to aspirate urine after removing the stylet. A super-rigid guidewire was then introduced, followed by the withdrawal of the puncture needle.
Before removing the puncture needle, the fascia and skin at the puncture site were incised. In addition, the direction and depth of the needle were recorded as it was withdrawn. Subsequently, a 20F/22F pencil-shaped fascial dilator with a matching oblique sheath was inserted along the super-rigid guidewire (Fig. A). Rotational breakthrough was employed during the passage from the renal cortex into the collecting system. The pencil-shaped fascial dilator, with its slender and pointed tip, has been clinically proven to possess single-step dilation capability. Upon successful aspiration of urine from the dilator, the dilator was removed while maintaining the position of the oblique sheath. Thus, the working channel was completed. Application of oblique sheath Prior to establishing the channel, the sheaths were trimmed: the anterior end of the sheath was trimmed into an oblique opening (Fig. B, C). This modification has demonstrated several advantages in application. First, it facilitated matching with the inner dilator. Second, it provided ease of dilation and entry into narrow calyces and ureters, thereby reducing mucosal injury and calyceal tear (Supplementary videos 6, 7). Third, it enhanced visibility for locating and observing relatively parallel renal calyces (Supplementary video 8). Fourth, it allows for retraction of the sheath to the renal capsule and rotation of the outer sheath to observe any bleeding in the channel, reducing the channel loss rate (Fig. D-I, Supplementary videos 9, 10). Case selection of tubeless PCNL Based on preoperative evaluations and intraoperative performance, patients meeting the following criteria were prioritized for tubeless procedures (an illustrative checklist encoding these criteria is sketched after the statistical analysis below):
Preoperative hemoglobin level higher than 90 g/L and normal coagulation function.
Low risk of sepsis preoperatively and intraoperatively.
No renal upper calyx puncture.
No significant residual stones.
No significant bleeding in the channel.
Early clinical data indicate that the postoperative hemoglobin decrease did not exceed 30 g/L under the strict principle of no bleeding in the channel. Therefore, patients with preoperative hemoglobin greater than 90 g/L are at low risk of requiring postoperative blood transfusion. Sepsis is one of the most common complications, with a high mortality rate ranging from 22 to 76%. The risk of sepsis must be assessed based on both preoperative and intraoperative findings: preoperatively, absence of fever, normal blood leukocyte counts, and normal procalcitonin levels; intraoperatively, clear urine, absence of pus and purulent deposits in the kidney, and absence of fibrinous material on the stone surface indicate a lower risk of sepsis. Tubeless PCNL is not chosen for upper calyx punctures due to the relatively higher risk of pleural injury. If pleural injury occurs and a tubeless procedure is performed, respiratory efforts combined with urinary leakage can easily lead to pleural effusion and hemorrhage. The residual stone status should undergo a comprehensive evaluation to avoid secondary surgeries: preoperatively, confirm the stone locations based on imaging results; intraoperatively, identify the stones in the renal pelvis and calyces according to anatomical features and imaging results, and confirm whether their positions change with ultrasound; postoperatively, check for residual stones using ultrasound. Ensuring there is no bleeding in the channel is important, since one of the functions of the nephrostomy tube is to drain potential blood and effusion.
The presence of bleeding in the channel can be assessed via the oblique sheath: after lithotripsy, retract the anterior end of the oblique sheath to the renal capsule, then rotate the dilator and observe the entire renal puncture channel for significant bleeding (under conditions of 150 mmHg pressure and a 300 mL/min flow rate, clear visibility indicates no bleeding; Supplementary videos 9, 10). Statistical analysis For all patients in both groups, the following parameters were evaluated:
1. Stone clearance rate: kidney-ureter-bladder (KUB) radiographs were routinely performed before removing the double-J tube; no residual stones, or residual stones < 4 mm, were considered stone free.
2. Duration of surgery.
3. Pain level: assessed using the Visual Analog Scale (VAS) on the first postoperative day.
4. Postoperative hospital stay.
5. Postoperative change in hemoglobin (Hb) levels.
6. Postoperative change in serum creatinine (µmol/L) and estimated glomerular filtration rate (eGFR, mL/min/1.73 m²).
7. Postoperative changes in inflammatory markers, including neutrophil count, procalcitonin (PCT), and C-reactive protein (CRP).
Statistical analysis was conducted using SPSS 26.0.2 software. Continuous data were presented as mean ± standard deviation. Student's t-test was employed to analyze intergroup differences in the mean values. P < 0.05 was considered statistically significant.
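To make the selection logic above concrete, here is a minimal illustrative sketch (not clinical software) that encodes the five criteria as an explicit checklist. The thresholds follow the text; the class and field names are hypothetical.

```python
# Illustrative sketch (not clinical software): the five tubeless-PCNL
# selection criteria as an explicit checklist. Thresholds follow the text;
# the class and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class PCNLCase:
    hemoglobin_g_per_L: float          # preoperative hemoglobin
    normal_coagulation: bool
    low_sepsis_risk: bool              # pre- and intraoperative judgement
    upper_calyx_puncture: bool
    significant_residual_stones: bool  # fragments >= 4 mm or obstructing
    channel_bleeding: bool             # seen via the retracted oblique sheath

def eligible_for_tubeless(case: PCNLCase) -> bool:
    """All five criteria must hold before a tubeless procedure is prioritized."""
    return (
        case.hemoglobin_g_per_L > 90
        and case.normal_coagulation
        and case.low_sepsis_risk
        and not case.upper_calyx_puncture
        and not case.significant_residual_stones
        and not case.channel_bleeding
    )

example = PCNLCase(128, True, True, False, False, False)
print(eligible_for_tubeless(example))  # True -> tubeless prioritized
```

Each boolean field corresponds to one criterion elaborated above; in practice these judgements (e.g. sepsis risk, or channel bleeding seen through the oblique sheath) remain clinical assessments rather than computed values.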
If there was no fever or urinary leakage, the nephrostomy tube was removed 12 to 24 h after clamping. Anesthesia method The majority of patients underwent general anesthesia for the surgery, while some patients received regional anesthesia . The specific procedure for regional anesthesia was as follows: patients did not undergo retrograde intubation or urinary catheterization. Half an hour before the surgery, patients received an intramuscular injection of 100 mg tramadol and 25 mg dexmedetomidine. Intraoperatively, a mixture of 1% lidocaine and 0.5% ropivacaine was administered via subcutaneous infiltration (approximately 10 mL) at the puncture site for regional anesthesia. During the surgery, dexmedetomidine (5 mg) was administered intravenously, and ondansetron was given prophylactically to prevent nausea and vomiting. The remaining technical procedures were no difference from the conventional method . Kidney preparation All included patients underwent kidney preparation using stimulated diuresis technology . Upon arrival in the operating room, patients were placed in the prone position and received an intravenous injection of normal saline (500 to 1000 mL), followed by furosemide (0.5 mg/kg) to facilitate diuresis. The dilation of the renal pelvis was measured using ultrasound, which typically reached peak values in 6–15 min . Following this, kidney preparation was completed. Color Doppler-guided puncture and intrarenal fold-line puncture technique All included patients underwent puncture procedures guided by color Doppler imaging. Traditionally, the center of the renal pelvis had been considered the ideal avascular puncture point (Fig. A, Supplementary video 1). But vascular variations were common, the center of the renal pelvis did not always indicate an avascular area (Fig. B, Supplementary video 2). Therefore, individualized puncture pathways were designed based on color Doppler ultrasound. The vascular distribution of the target calyx was examined, by initially setting the color Doppler blood flow velocity at 15–17 cm/s. If vascular density was low or moderate, avascular areas are identified and selected as the puncture pathway (Supplementary video 3). In cases with abundant vessels in the calyx, the avascular areas were further investigated by resetting the blood flow velocity to 25–27 cm/s . For a few patients with significant renal artery variations, finding an optimal straight puncture pathway was challenging. Therefore, an intrarenal fold-line puncture technique was utilized under color Doppler guidance. Under these conditions, the puncture needle entered the target calyx along a fold line to avoid blood vessels (Supplementary video 4, 5). Chinese one-shot dilation technique for channel establishment All enrolled patients underwent channel establishment using the Chinese one-shot dilation technique . Initially, an 18 G Chiba needle was inserted into the collecting system under color Doppler guidance . Successful puncture was confirmed by the ability to aspirate urine after removing the stylet. A super-rigid guidewire was then introduced, followed by the withdrawal of the puncture needle. Before removing the puncture needle, the fascia and skin at the puncture site were incised. Besides, while removing the puncture needle, the direction and depth were recorded. Subsequently, a 20F/22F pencil-shaped fascial dilator with a matching oblique sheath was inserted along the super-rigid guidewire (Fig. A). 
Rotational breakthrough was employed during the passage from the renal cortex into the collecting system. The pencil-shaped fascial dilator, with its slender and pointed tip, has been clinically proven to possess single-step dilation capability . Upon successful aspiration of urine from the dilator, the dilator was removed while maintaining the position of the oblique sheath. Thus, the working channel was completed. Application of oblique sheath Prior to establishing the channel, the sheaths were trimmed: the anterior end of each sheath was trimmed into an oblique opening (Fig. B, C). This modification demonstrated several advantages in application. First, it facilitated matching with the inner dilator. Second, it provided ease of dilation and entry into narrow calyces and ureters, thereby reducing mucosal injury and calyceal tears (Supplementary video 6, 7). Third, it enhanced visibility for locating and observing relatively parallel renal calyces (Supplementary video 8). Fourth, it allowed retraction of the sheath to the renal capsule and rotation of the outer sheath to observe any bleeding in the channel, reducing the channel loss rate (Fig. D–I, Supplementary video 9, 10). 
Based on preoperative evaluations and intraoperative performance, patients meeting the following criteria were prioritized for tubeless procedures: (1) preoperative hemoglobin level higher than 90 g/L and normal coagulation function; (2) low risk of sepsis preoperatively and intraoperatively; (3) no renal upper calyx puncture; (4) no significant residual stones; (5) no significant bleeding in the channel. Early clinical data indicate that the postoperative hemoglobin decrease did not exceed 30 g/L under the strict principle of no bleeding in the channel . Therefore, patients with preoperative hemoglobin greater than 90 g/L are at low risk of requiring postoperative blood transfusion. Sepsis is one of the most common complications, with a high mortality rate ranging from 22 to 76% . The risk of sepsis must be assessed on the basis of both preoperative and intraoperative findings: preoperatively, absence of fever, normal blood leukocyte counts, and normal procalcitonin levels; intraoperatively, clear urine, absence of pus and purulent deposits in the kidney, and absence of fibrinous material on the stone surface indicate a lower risk of sepsis – . Tubeless PCNL is not chosen for upper calyx punctures due to the relatively higher risk of pleural injury , . If pleural injury occurs and a tubeless procedure is performed, respiratory efforts combined with urinary leakage can easily lead to pleural effusion and hemorrhage. 
The residual stone status should undergo comprehensive evaluation to avoid secondary surgery: preoperatively, confirm stone locations based on imaging results; intraoperatively, identify the stones in the renal pelvis and calyces according to anatomical features and imaging results, and confirm with ultrasound whether their positions have changed; postoperatively, check for residual stones using ultrasound. Ensuring that there is no bleeding in the channel is important, since one function of the nephrostomy tube is to drain potential blood and effusion. The presence of bleeding in the channel can be assessed with the oblique sheath: after lithotripsy, retract the anterior end of the oblique sheath to the renal capsule, then rotate the dilator and observe the entire renal puncture channel for significant bleeding (under conditions of 150 mmHg pressure and 300 ml/min flow rate, clear visibility indicates no bleeding; Supplementary video 9, 10). All 40 patients were treated in the Department of Urology at Wuhan Union Hospital by the same surgeon, as described above; their baseline information is presented in Table . Both groups of patients achieved successful stone clearance, with one working channel established (Fig. A, B). The tubeless PCNL group had a shorter hospital stay (P = 0.005). Additionally, the VAS scores were significantly lower in the tubeless PCNL group (P < 0.001). However, there was no significant difference in surgical duration between the two groups. Moreover, there were no significant differences in renal function or inflammatory responses (P > 0.05). None of the included patients developed fever postoperatively (Table ). The first report of PCNL was published in 1976 . Since then, extensive clinical experience has accumulated. Based on institutional experience, clinical guidelines , and expert opinions , we believe that in the following clinical scenarios PCNL is superior to ureteroscopy and flexible ureteroscopy: a. large stone burden (> 2 cm); b. prolonged stone impaction (more than six months) combined with suspected ureteral stricture; c. recent history of ipsilateral ureteroscopy with incomplete stone clearance, potentially due to suspected ureteral stricture; d. 
severe hydronephrosis, thinning of the renal cortex, or atrophic kidney, with poor renal function making stone clearance difficult; e. combination of ureteral stones with a large number of lower pole stones; f. lower pole stones with a ureter–lower calyx distance (ULD) of less than 2.5 cm ; g. intolerance to general anesthesia; h. CT evidence of significant renal swelling with perinephric exudative changes; i. gross hematuria indicating poor visualization during ureteroscopy; j. renal anatomical abnormalities (e.g., duplicated kidneys, horseshoe kidneys). PCNL patients are routinely given internal and external drains, namely a double J stent and a nephrostomy tube. The nephrostomy tube compresses the tract to achieve hemostasis, drains renal fluids to decrease intrarenal pressure, and prevents potential bloodstream infections ; it also preserves a pathway for a subsequent operation if needed. The double J tube ensures patency of the renal–bladder pathway, preventing secondary ureteral strictures due to operative injury, stone impaction, or infection. However, nephrostomy tubes have several limitations: they increase patient discomfort, impose an economic burden, and raise the risk of urinary leakage and secondary infections , , . Thus, tubeless PCNL has become a primary focus for endourologists. Our goal is to balance the pros and cons and select suitable patients for tubeless PCNL. We select patients eligible for tubeless PCNL based on preoperative and intraoperative performance. A hemoglobin level above 90 g/L significantly reduces the need for postoperative transfusion. Normal coagulation function and absence of significant bleeding in the channel effectively prevent increases in intrarenal pressure, a potential adverse consequence of tubeless PCNL. Preoperative absence of fever together with normal blood leukocyte counts and procalcitonin (PCT) levels, along with clear renal urine and absence of pus or debris on stone surfaces intraoperatively, reduce the risk of postoperative renal infection and greatly lower the probability of sepsis. Avoiding upper calyx puncture prevents pleural effusion, a potential complication of tubeless PCNL. The absence of obvious residual stones avoids the possibility of secondary surgery. Furthermore, we incorporate new technologies to enhance the feasibility of tubeless PCNL. Individualized path planning and fold-line puncture guided by color Doppler largely avoid blood vessels, significantly reducing the bleeding rate . The Chinese one-shot dilation technique eliminates the need for repeated dilator changes, further lowering the bleeding rate . Additionally, the use of oblique sheaths allows a larger field of view within a smaller range of sheath movement, further reducing the risks associated with tubeless procedures. Our results also demonstrate the significant advantages of tubeless PCNL: it significantly shortened patients’ hospital stay, reduced postoperative pain, and had no significant impact on renal function (serum creatinine and eGFR). Based on the patient selection and technical improvements mentioned above, both groups exhibited similar postoperative hemoglobin decreases, with no cases requiring postoperative transfusion. Moreover, inflammatory markers, including neutrophil count, PCT, and CRP, showed no significant differences between the two groups, suggesting that, under strict indications, tubeless PCNL might not affect the probability of infection. 
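The selection principles above amount to a simple screening checklist. As an illustration only (this is not part of the study workflow; the function and its input fields are hypothetical, while the thresholds are those stated in the text), the criteria could be encoded as:

```python
# Illustrative sketch only: encodes the stated tubeless-PCNL selection
# criteria. The function and its input fields are hypothetical.

def is_tubeless_candidate(pre_hb_g_per_l: float,
                          coagulation_normal: bool,
                          low_sepsis_risk: bool,
                          upper_calyx_puncture: bool,
                          significant_residual_stones: bool,
                          channel_bleeding: bool) -> bool:
    """True only if all stated criteria for a tubeless procedure are met."""
    return (pre_hb_g_per_l > 90                  # preoperative hemoglobin > 90 g/L
            and coagulation_normal               # normal coagulation function
            and low_sepsis_risk                  # low pre-/intraoperative sepsis risk
            and not upper_calyx_puncture         # no renal upper calyx puncture
            and not significant_residual_stones  # fragments < 4 mm count as stone free
            and not channel_bleeding)            # no significant channel bleeding

# Example: a patient meeting all criteria
print(is_tubeless_candidate(112, True, True, False, False, False))  # True
```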
No patients developed fever postoperatively, including those with preoperative positive urine cultures. This suggests that a positive preoperative urine culture might not be an absolute exclusion criterion for tubeless PCNL, although this finding is limited by the sample size and is not definitive. Tubeless PCNL is being explored by numerous urologists domestically and internationally. Zhang et al. pointed out that a renal cortex thickness of more than 5 mm at the channel site favored tract contraction and reduced urine leakage, which can serve as a screening criterion for tubeless PCNL . Mao and Jian et al. argue that patients with ureteropelvic junction obstruction (UPJO) or ipsilateral ureteral stricture are not suitable for tubeless PCNL, due to potential drainage obstruction and increased intrarenal pressure , . Furthermore, studies by Lei et al. suggest that patients with renal collecting system perforation are also not suitable for tubeless PCNL, as nephrostomy tubes provide better drainage ; however, Jou et al. hold the opposite opinion . Additionally, a single puncture, a single tract, no intraoperative bleeding, no extravasation, and no intraoperative pus all tend to favor tubeless PCNL in experienced hands. Although tubeless PCNL criteria may vary slightly among institutions, all reports show lower pain scores, shorter operation and hospitalization times, reduced surgical costs, and comparable surgical outcomes compared with conventional methods. In conclusion, this study, along with various related studies, highlights the advantages of tubeless PCNL. Additionally, this study details several technical improvements implemented at our institution for both conventional and tubeless PCNL patients. These improvements standardize the inclusion and exclusion criteria for tubeless PCNL, enhancing patient safety while ensuring stone clearance rates. A limitation of this study is the small number of patients included, which leaves some potential risks and complications unknown. Undoubtedly, a prospective randomized study is needed, and we hope to apply these techniques to more eligible patients in future clinical work, further optimizing procedural details. Meanwhile, with advancing technology and accumulated experience, we are also exploring the possibility of a completely tubeless PCNL (without nephrostomy tubes or double J tubes), which would undoubtedly enhance patient experience in terms of both comfort and time. Supplementary Video 1. Supplementary Video 2. Supplementary Video 3. Supplementary Video 4. Supplementary Video 5. Supplementary Video 6. Supplementary Video 7. Supplementary Video 8. Supplementary Video 9. Supplementary Video 10. Supplementary Information 1.
Comparative metabolomic analysis reveals shared and unique features of COVID-19 cytokine storm and surgical sepsis
As of January 14, 2024, the SARS-CoV-2 pandemic had claimed more than 7 million lives . Despite a significant number of publications on COVID-19 (391,381 in PubMed as of January 29, 2024), many questions related to the pathophysiology of the disease remain unresolved. Cytokine storm (CS), observed in a substantial number of COVID-19 patients, is an extensively discussed syndrome linked to the illness. CS, a systemic inflammatory state characterized by immune cell hyperactivation and uncontrolled cytokine release, is not exclusive to COVID-19; it is known to be triggered by various factors such as infections, tumor processes, autoimmune conditions, and others . CS can cause acute respiratory distress syndrome (ARDS) or multiple organ dysfunction, which can be potentially fatal . The clinical manifestations of COVID-19-associated CS and its consequences are similar to those of the acute phase of sepsis . According to the Third International Consensus Definitions Task Force (Sepsis-3), sepsis is a life-threatening organ dysfunction caused by dysregulation of the host response to infection . Traditionally, bacterial infection has been considered the major cause of sepsis . The COVID-19 pandemic has led to a reassessment of the role of viruses in the occurrence of sepsis, because the multi-organ dysfunction caused by CS in COVID-19 largely corresponds to the Sepsis-3 concept and is currently considered viral sepsis . Sepsis caused by CS associated with COVID-19 exhibits distinctive characteristics, even though its clinical symptoms typically resemble those of bacterial sepsis. COVID-19 is distinguished by a less pronounced and more prolonged course of systemic multi-organ inflammation , , an accelerated onset of acute respiratory distress syndrome (ARDS), reduced levels of inflammatory markers such as IL-6 in comparison with sepsis , and a distinct immune signature due to differences in the response to bacterial and viral infection . Currently, metabolomic profiling of human biological fluids is of great interest because of its potential to provide additional insight into disease pathogenesis and potential therapeutic targets. To date, many studies have been published investigating the metabolomic profile of patients with sepsis , – . The COVID-19 pandemic raised interest in the metabolomics of this disease and of CS as its hyperinflammation stage, and since the beginning of the pandemic numerous studies on this topic have been published – . Both series of works reported similar alterations in metabolites associated with cellular energetics and inflammation, including in the levels of both glucogenic and ketogenic amino acids. At the same time, there is a lack of studies comparing the metabolomics of these syndromes. This study focuses on comparing targeted metabolic profiles in the blood serum of patients with surgical sepsis (SS) and those with CS associated with COVID-19. The clinical data of the patients with CS do not fully meet the Sepsis-3 criteria because the patients’ blood was collected at an early stage of disease development (before treatment), when the rate of multiple organ dysfunction in this group is still quite low (SOFA ≤ 2). Variations in SOFA scores between groups are significant due to the late manifestation of multiple organ dysfunction in COVID-19 patients, in contrast to the explosive course characteristic of bacterial sepsis . However, current hypotheses suggest a connection between CS caused by COVID-19 and the occurrence of viral sepsis . 
That is why a comparison of patients with COVID-19-associated CS and patients with SS seems reasonable. Comorbidities can make the course of COVID-19 more severe and enhance CS , . In this regard, it is of interest to determine how comorbidity affects the results of comparing the metabolomic profiles of patients with COVID-19 and surgical septic patients. This study is one of the first metabolomic studies using a biobank and a substantial number of samples from COVID-19 patients with cytokine storm and surgical sepsis from St. Petersburg and the Leningrad region (Russian Federation). The study was carried out on serum collected strictly before the beginning of treatment, which allowed fairly “clean” serum samples to be obtained, without additional drug interference in the patients’ metabolome. The aim of this study was to compare the serum metabolomic profiles of patients with COVID-19-associated CS with those of septic patients after surgery. Participants Frozen blood serum from the collection of the biobank of St. Petersburg State Healthcare Establishment “City Hospital No. 40” was used. The study was conducted within the framework of the research project “Biobanking and biomedical research of human tissue and fluid samples” and was approved by the Expert Council on Ethics of St. Petersburg State Healthcare Establishment “City Hospital No. 40” (session No. 119, February 9, 2017). A total of 234 patients who underwent treatment at City Hospital No. 40 took part in the current retrospective study of the metabolomic profile. Samples were collected during hospital stays from January 2018 to March 2021 for septic patients and from May 2020 to June 2021 for COVID-19 patients. All patients underwent standard examinations according to clinical recommendations and diagnosis. Written informed consent was obtained from all patients for sample collection and storage in a biobank for subsequent use for scientific purposes and for publication of the results. The study was conducted in accordance with the World Medical Association’s Code of Ethics (Declaration of Helsinki) for experiments involving humans. The patients were grouped as follows: 1. COVID1: COVID-19 patients with CS without comorbidity (n = 40). 2. COVID2: COVID-19 patients with CS with comorbidity (n = 43). 3. Sepsis: SS patients (n = 41). 4. Control: healthy volunteers (n = 110). COVID-19 was diagnosed by polymerase chain reaction (PCR) of nasopharyngeal swabs. CS was defined by the following conditions: ferritin > 485 μg/L, C-reactive protein > 50 mg/L, D-dimer > 2.5 μg/mL, interleukin-6 > 25 pg/mL, LDH > 550 U/L. Comorbidities were determined based on the patient’s self-report on admission and confirmed, if necessary, by further examinations. The Charlson Comorbidity Index (CCI) was calculated for each patient with COVID-19 . Patients with CCI ≤ 2 were included in the COVID1 group, and patients with CCI ≥ 5 were included in the COVID2 group. The diagnosis of sepsis was made according to the Sepsis-3 consensus criteria and the SOFA scale . The cause of sepsis in this group was bacterial infection as a complication after abdominal surgery. The Control group was recruited during regular preventive check-ups. The criteria for inclusion in this group were the absence of COVID-19, confirmed by PCR, and the absence of sepsis. The general inclusion criterion was age over 18 years. 
Since our study was conducted in a real-world setting, we did not employ any additional inclusion or exclusion criteria. Study design The objective of this study was to compare the serum metabolomic profiles of patients with CS associated with COVID-19 and patients with sepsis. Metabolomic profiles were obtained using targeted LC–MS/MS analyses. The study addressed the following tasks: 1) t-SNE clustering of patients. 2) Comparison of the serum metabolomes of patients in the COVID1 and COVID2 groups: identification of common and differentially represented metabolites whose levels change relative to the Control group. 3) Identification of metabolites differentially represented in the serum of the Sepsis group compared to the Control group. 4) Comparison of the serum metabolomic profiles of patients in the Sepsis group with the serum metabolomes of patients in the COVID1 and COVID2 groups: identification of common and differentially represented metabolites. Sample collection and storage Blood samples from patients diagnosed with COVID-19 were collected within a day of hospital admission and prior to treatment initiation. Blood samples from septic patients were collected when they were admitted to the intensive care unit (ICU), before starting antibiotic treatment. The blood samples for the Control group were obtained during a routine examination of volunteers. All samples were collected into Vacutest tubes (Gel and clot act., Vacutest Kima S.R.L., Italy). After centrifugation for 10 min at 4 °C and 2200 rpm, the serum was collected and immediately frozen at −80 °C. All samples were annotated, indicating the stage of the disease, gender, age, etc. Before analysis, the frozen samples were slowly warmed to room temperature and thoroughly mixed. Metabolomic profiling of serum samples The preparation of all blood serum samples for targeted metabolome studies was carried out in duplicate. 2-(N-morpholino)ethanesulfonic acid hydrate (MES) (CAS: 4432-31-9; cat. no. M8250, Sigma-Aldrich) and L-methionine sulfone (CAS: 7314-32-1, cat. no. M0876, Sigma-Aldrich) were used as internal standards. The samples were thawed at room temperature, and 100 μl of an ice-cold mixture of the internal standards at a fixed concentration (25 μg/ml) in acetonitrile (cat. no. 9012.2500GL, LC–MS grade, J.T. Baker) was added to 50 μl of serum. The mixture was vortexed and incubated for 10 min at room temperature, then centrifuged (Centrifuge 5810R, Eppendorf) at 12,000 g for 10 min at 4 °C. An 80 μL aliquot of the supernatant was diluted with Milli-Q water acidified with formic acid (cat. no. 533002, for LC–MS, LiChropur, > 99%, Merck) to pH 2 and analyzed using the LC–MS/MS method. The prepared standard solutions and extracts were stored at −20 °C. Targeted metabolite profiling was performed using a triple quadrupole liquid chromatography–mass spectrometer LCMS-8050 (Shimadzu) along with a Nexera X2 (Shimadzu) chromatography system. The analysis was carried out following the established “LC/MS/MS Method Package for Primary Metabolites” by Shimadzu, utilizing the multiple reaction monitoring mode. This method allows simultaneous analysis of 98 analytes of the main chemical classes of clinically significant low-molecular compounds, including amino acids, organic acids, nucleotides, nucleosides, and coenzymes (Supplementary Table S1, S3). An analytical column Discovery HS F5-3 (150 × 2.1 mm, 3 μm) (Supelco, Merck) and a SecurityGuard “SupelGuard Discovery HS F5-3” (20 × 2.1 mm, 3 μm, Supelco) were used to separate the analytes. 
Mass spectrometry parameters and chromatographic conditions were set in accordance with the guidelines provided in the manual for the “LC/MS/MS Method Package for Primary Metabolites” method. Briefly, both ESI (+) and ESI (−) modes used water (LC–MS grade) as mobile phase (A) and ACN as mobile phase (B), with formic acid as the mobile phase modifier. The gradient program ran from 0 to 95% (B). Chromatographic and mass spectrometric parameters, as set out in the manual, are given in Supplementary Table S2. Data collection and processing were performed using LabSolutions software. Metabolites were identified based on chromatographic retention time, m/z values of product ions, and their intensity ratios. Only chromatographic peaks with a signal-to-noise ratio ≥ 10 were considered. The content of the investigated substances was determined using the internal standard method, which relates the response (peak area) of the analyte to the response (peak area) of the internal standard. All level measurements are given in arbitrary units relative to the internal standard (a.u.). The results correspond to the average of two parallel measurements of the same sample. To ensure quality control (QA/QC), control samples were analyzed along with experimental samples to monitor instrument performance and facilitate chromatographic alignment. Control samples consisted of extracts of averaged blood serum samples with internal standards, prepared following the same procedure as the experimental samples. The averaged serum, derived from thoroughly mixed blood serum samples from seven donors, was aliquoted in 200 μl portions and frozen at −80 °C for later analysis. An appropriate volume of Milli-Q water passed through all stages of sample preparation was used as a “blank” sample. Additionally, all solvents used in sample preparation were also analyzed. We utilized quality control samples to assess the reproducibility and stability of the prepared extracts on both intra- and inter-day bases. The internal standard solutions and control extracts were examined over a 10-day period, during which they were stored at −20 °C between analyses and once at +5 °C for 24 h. Overall process variability was determined by calculating the median RSD for the internal standards and all endogenous metabolites (i.e., non-instrumental standards) present in the control samples. The discrepancy between parallel measurements remained within 15% of the average values, with average daily deviations below 20%. Quality control samples were examined after every 50–60 experimental sample injections in order to ensure the consistency of the chromatographic retention times and responses of the studied compounds. Instrument variability was determined by calculating the median relative standard deviation (RSD) of the internal standards added to each sample during extraction. The scatter of the obtained values was evaluated during averaging, ensuring a maximum difference of 20% between the averaged parallel measurements; if the difference exceeded 25%, the sample was reanalyzed and the initial result was disregarded. Statistical analysis To test the hypothesis of normal data distribution, the Shapiro–Wilk test was used. Data were transformed using a Median & Quantile Absolute Deviation based Z-score. The Z-score transformation was applied solely to provide a more visually interpretable representation of the data in tabular form. 
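A robust Z-score of this kind is commonly built from the median and the median absolute deviation (MAD); since the exact variant and any consistency scaling are not specified in the text, the following formulation is an assumption:

$$z_{ij}=\frac{x_{ij}-\mathrm{median}_i\left(x_{ij}\right)}{\mathrm{MAD}_j},\qquad \mathrm{MAD}_j=\mathrm{median}_i\left(\left|x_{ij}-\mathrm{median}_i\left(x_{ij}\right)\right|\right)$$

where x_ij is the level (in a.u.) of metabolite j in sample i, and the medians are taken over samples.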
To identify intergroup differences in the concentration levels of the studied metabolites, assessed in a.u., a nonparametric one-way analysis of variance was performed using the Kruskal–Wallis test; the Mann–Whitney test was used for post-hoc analysis. The data were clustered using the t-distributed stochastic neighbor embedding (t-SNE) method, and the resulting data were visualized in two-dimensional space. The following parameters were used for the t-SNE analysis: Perplexity = 30, Learning Rate = 500, and Number of Iterations = 2000. The log fold change (lfc) was used as a measure reflecting the difference in the range of values between samples; descriptive statistics are presented as the median (Me) and interquartile range [Q1–Q3]. The difference between samples was considered significant at p < 0.05 and |lfc| > 0.5. A volcano plot was used to visualize the test results. Data processing and statistical analysis were performed using the R programming language version 4.3.1 and the Python programming language version 3.12. Spearman’s rank correlation coefficient was used for the correlation analysis. Metabolic pathway analysis was performed using the MetaboAnalyst 6.0 package and the KEGG database – . 
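A minimal sketch of this testing scheme for a single metabolite, assuming SciPy (the container names and the use of log2 of group medians for the lfc are assumptions; the text does not specify the lfc base):

```python
# Minimal sketch of the described scheme: Kruskal-Wallis across groups,
# then pairwise Mann-Whitney post-hoc tests with p < 0.05 and |lfc| > 0.5.
# Assumptions: SciPy/NumPy; `groups` maps group name -> array of levels (a.u.).
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

def test_metabolite(groups: dict, alpha: float = 0.05, lfc_cut: float = 0.5):
    _, p_kw = kruskal(*groups.values())   # omnibus test across all groups
    hits = []
    if p_kw < alpha:
        names = list(groups)
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                a, b = groups[names[i]], groups[names[j]]
                _, p = mannwhitneyu(a, b, alternative="two-sided")
                lfc = np.log2(np.median(b) / np.median(a))  # assumed definition
                if p < alpha and abs(lfc) > lfc_cut:
                    hits.append((names[i], names[j], p, lfc))
    return p_kw, hits
```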
Patient characteristics Table presents the demographic and clinical characteristics of the patient groups included in the study. The average age of the comorbid patients was higher than that in the other groups (mean 74 ± 9 y vs. 51 ± 9.8 y for COVID1, 65.2 ± 16 y for Sepsis, and 48.1 ± 14 y for Control). This imbalance can be explained by the fact that comorbidities usually arise at a later age. The spectrum of diagnoses in the COVID2 group included illnesses related to the cardiovascular system, the digestive system and liver function, renal function, neurology, diabetes mellitus, and oncology. The general characteristics of the changes in the serum metabolome of the COVID-19 and sepsis groups The levels of 100 compounds were studied using the LC–MS/MS procedure (Supplementary Table S1, S3). The 83 compounds whose results differed from zero were taken for further analysis (Supplementary Table S4). The graphical result is presented as a heatmap (Fig. ). There was a significant difference in the age of the patients in the studied groups. To assess the influence of the age factor on metabolite levels, we conducted a correlation analysis (Supplementary Table S5). For most of the metabolites, the correlation with patient age was weak. The metabolites whose levels increased most with age were cytosine, L-acetylcarnitine, and DL-DOPA (ρ = 0.45–0.48). A weak decline with age (ρ = −0.43) was shown for glutathione. t-SNE clustering of patients was performed based on the obtained metabolomic data (Supplementary Table S4). It revealed three clear clusters: Sepsis, Control, and COVID-19 (Fig. ). Interestingly, the COVID1 and COVID2 groups formed approximately one cluster and were practically not separated. The Control group was notably segregated, situated far away from the other clusters. Additionally, the data from sepsis patients were also well separated from the others, while being in proximity to the cluster of COVID-19 patients. 
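A minimal sketch of this clustering step, assuming scikit-learn (only the t-SNE parameters come from the text; the data matrix and labels below are placeholders):

```python
# Minimal sketch of the t-SNE embedding with the stated parameters
# (perplexity 30, learning rate 500, 2000 iterations). Assumes scikit-learn;
# X is an (n_samples x n_metabolites) matrix -- random placeholder here.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(234, 83))  # placeholder for the 234 x 83 data matrix
labels = np.repeat(["COVID1", "COVID2", "Sepsis", "Control"], [40, 43, 41, 110])

# note: max_iter is named n_iter in older scikit-learn versions
emb = TSNE(n_components=2, perplexity=30, learning_rate=500,
           max_iter=2000, random_state=0).fit_transform(X)

for g in ["COVID1", "COVID2", "Sepsis", "Control"]:
    m = labels == g
    plt.scatter(emb[m, 0], emb[m, 1], s=8, label=g)
plt.legend()
plt.show()
```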
We conducted a correlation analysis to investigate how the relationships between metabolites changed across groups and found that the COVID1, COVID2, and Sepsis groups differed significantly in the functioning of the metabolome, which was reflected in changed correlation links between metabolites (Fig. ). This was rather unexpected, because the t-SNE clustering had demonstrated that the COVID1 and COVID2 groups belonged to one cluster. The two results are not contradictory, however, as each method evaluates different aspects of the data. t-SNE is a dimensionality reduction technique used to visualize high-dimensional data in two- or three-dimensional space; applied to the metabolomic profiles, it captures global similarities, and on this level the COVID1 and COVID2 groups formed a single cluster, clearly separated from the Sepsis and Control groups. Correlation analysis, on the other hand, examines the relationships between individual metabolites, showing how they covary within each group, and on this level the correlation patterns differed significantly between the COVID1, COVID2, and Sepsis groups. Thus, despite the overall similarity of the COVID1 and COVID2 profiles indicated by t-SNE, the correlation analysis reveals that the metabolic relationships within these groups are likely influenced by comorbidities. COVID1 vs. COVID2 groups metabolomics We compared the metabolomic profiles of COVID1 and COVID2 patients (Supplementary Table S4, Figs. , ). Compared with the Control group, 62 and 67 metabolites were changed in the COVID1 and COVID2 groups (p < 0.05), respectively. A decline of 38 metabolites along with a rise of 24 compounds was recorded in COVID1 serum; the most prominent changes are presented in Fig. AD. For the COVID2 group, 45 metabolites showed a decrease and 22 metabolites an increase compared to the Control group (Fig. BD). Among the 9 compounds most increased in the COVID1 group, 6 metabolites were also elevated in the COVID2 group: dimethylglycine, L-acetylcarnitine, L-kynurenine, L-phenylalanine, L-cystathionine, and adenosine monophosphate (Fig. AB). Among the 25 metabolites most decreased in the COVID1 group, the levels of 17 compounds also fell in the COVID2 group: L-histidine, citrulline, ornithine, uridine, uric acid, L-arginine, asymmetric dimethylarginine, pantothenic acid, L-threonine, 4-hydroxyproline, choline, allantoin, inosine, glycine, L-leucine, acetylcholine chloride, and L-isoleucine (Fig. AB). In spite of these common features of the metabolomic changes, 29 compounds revealed significant (p < 0.05) differences in levels between the COVID1 and COVID2 groups (Supplementary Table S4, Fig. A). 15 metabolites exhibited higher levels in the COVID1 group than in the COVID2 group; L-kynurenine, L-lactic acid, L-alanine, uric acid, uracil, carnosine, ornithine, and norepinephrine were the most prominent. Conversely, 14 metabolites showed lower levels in the COVID1 group, with L-proline and serine being the most notable (Fig. A). 
Interestingly, some of these compounds changed their levels in series from the Control group to COVID1 and then to COVID2; for example, L-kynurenine and ornithine revealed such dynamics (Supplementary Table S4). To evaluate the potential metabolomic shifts resulting from the changes in the measured compounds, an enriched pathway analysis was performed. We identified the top 8 common pathways from the KEGG database that were most significantly dysregulated in the COVID1 and COVID2 groups compared with the Control group (Fig. AB; Supplementary Table S6 AB). Four of these pathways were uniquely disturbed in each group (Fig. C; Supplementary Table S6 D). The metabolism of cysteine and methionine showed the largest difference, being more disrupted in COVID1 than in COVID2 patients (Fig. A; Supplementary Table S6 AB). These data, consistent with the t-SNE clustering (p. 3.2), demonstrated that the difference between the COVID1 and COVID2 groups was not substantial and was quantitative rather than qualitative in character. Sepsis group metabolomics In comparing the Sepsis and Control groups, we found differences in the levels of 75 metabolites (p < 0.05) (Supplementary Table S4, Fig. CD). Among these metabolites, 35 exhibited elevated levels and 40 displayed decreased levels. Enriched pathway analysis identified the top 10 pathways in the KEGG database that were potentially disturbed in the Sepsis group (Fig. C; Supplementary Table S6 C). These pathways included arginine biosynthesis; cysteine and methionine metabolism; alanine, aspartate, and glutamate metabolism; glycine, serine, and threonine metabolism; the citrate (TCA) cycle; arginine and proline metabolism; pyrimidine metabolism; tyrosine metabolism; histidine metabolism; and glutathione metabolism. Sepsis vs. COVID-19 groups metabolomics Comparison of the Control group with the Sepsis, COVID1, and COVID2 groups indicated significant differences in metabolite abundance (p ≤ 0.05). Among the groups, the greatest number of changes in metabolite levels (74) was observed in the Sepsis vs. Control comparison. In the COVID1 vs. Control comparison, 60 metabolites showed differing levels, while 65 metabolites displayed differences in the COVID2 vs. Control comparison (Supplementary Table S4, Fig. E). The levels of 53 metabolites changed significantly (p ≤ 0.05) in both the COVID1 and Sepsis groups relative to the Control group; within these, the levels of 11 metabolites increased and 22 decreased in both groups. Similarly, the levels of 57 metabolites showed significant changes in both the COVID2 and Sepsis groups relative to the Control group; the levels of 14 metabolites increased and 25 decreased in both groups (Supplementary Table S4). The levels of 47 metabolites changed in all three groups. All three groups exhibited increased levels of 11 metabolites relative to the Control; the most prominent were dimethylglycine, L-acetylcarnitine, L-cystathionine, and adenosine monophosphate. Additionally, 19 metabolites displayed lower levels than in the controls across all groups; the most prominent were L-histidine, citrulline, ornithine, uridine, pantothenic acid, L-threonine, choline, inosine, L-leucine, and L-isoleucine (Fig. ABC). Comparison of the metabolomic profiles of the Sepsis and COVID1 groups showed that the levels of 60 metabolites were significantly different (p < 0.05) (Supplementary Table S4). 
Among these, 38 metabolites showed higher levels in the Sepsis group and 22 metabolites displayed lower levels in the Sepsis group; the most significant changes are displayed in Fig. B. The comparison of the metabolomic profiles between the Sepsis and COVID2 groups showed that the levels of 56 metabolites were significantly different (p < 0.05) (Supplementary Table S4). Of these, the levels of 36 metabolites were higher and the levels of 20 metabolites were lower in the Sepsis group; the most significant changes are displayed in Fig. C. The levels of 36 metabolites were significantly different (p ≤ 0.01) in both COVID-19 groups relative to the Sepsis group; of these, the levels of 28 metabolites were higher in the serum of septic patients (Supplementary Table S4, Fig. BC). The most prominent were acetylcholine chloride, L-histidine, uric acid, allantoin, 4-hydroxyproline, asymmetric dimethylarginine, creatine, creatinine, citric acid, methionine sulfoxide, guanosine, L-carnitine, L-cystathionine, and carnosine. The levels of 12 compounds were lower in the serum of septic patients compared to that of COVID-19 patients; the most prominent were L-proline, L-aspartic acid, L-tryptophan, niacinamide, L-tyrosine, L-phenylalanine, and L-glutamic acid (Fig. BC). Some metabolite levels changed solely in the Sepsis group or solely in the COVID-19 groups. Thus, 4-hydroxyproline, isocitric and pyruvic acids, procollagen-5-hydroxylysine, creatine, creatinine, SAM, acetylcholine chloride, citric acid, serotonin, and symmetric dimethylarginine significantly increased, while L-aspartic acid, L-glutamic acid, L-proline, L-lysine, GABA, and niacinamide significantly decreased, exclusively in the Sepsis group. In contrast to the Sepsis group, both groups of patients with COVID-19 were characterized by a significant increase in citicoline, 5-thymidylic acid, GABA, nicotinic acid, L-phenylalanine, and histamine, and a decrease in allantoin, 4-hydroxyproline, isocitric and pyruvic acids, procollagen-5-hydroxylysine, acetylcholine chloride, symmetric and asymmetric dimethylarginine, L-methionine, uric acid, and L-lactic acid (Supplementary Table S4). Enriched pathway analysis against the KEGG database identified eight metabolic pathways that differed between surgical septic and COVID-19 (COVID1 and COVID2) patients (p ≤ 0.05) (Fig. BC; Supplementary Table S6 EF). These pathways remained consistent when comparing Sepsis with both COVID1 and COVID2: cysteine and methionine metabolism; histidine metabolism; arginine and proline metabolism; the arginine biosynthesis pathway; aspartate, glutamate, and alanine metabolism; the phenylalanine, tyrosine, and tryptophan biosynthesis pathway; phenylalanine metabolism; and pyrimidine metabolism. 
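Pathway enrichment of this kind typically rests on an over-representation test; a minimal sketch using the hypergeometric distribution is shown below (a generic illustration, not a reproduction of the MetaboAnalyst 6.0 procedure; the pathway sizes in the example are hypothetical):

```python
# Generic over-representation test of the kind underlying pathway enrichment
# (illustration only -- not the exact MetaboAnalyst 6.0 procedure).
from scipy.stats import hypergeom

def enrichment_p(n_universe: int, n_pathway: int,
                 n_hits: int, n_hits_in_pathway: int) -> float:
    """P(X >= n_hits_in_pathway) when n_hits metabolites are drawn from a
    universe of n_universe, of which n_pathway belong to the pathway."""
    return hypergeom.sf(n_hits_in_pathway - 1, n_universe, n_pathway, n_hits)

# Hypothetical example: 83 measured metabolites, 10 mapping to one pathway,
# 36 differing between Sepsis and the COVID-19 groups, 7 of them in the pathway.
print(enrichment_p(83, 10, 36, 7))
```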
The graphical result is presented on a heatmap (Fig. ). There was a significant difference in age of the patients in the studied groups. To answer the question about the influence of age factor on the metabolites levels we conducted the correlation analysis (Supplementary Table S5). As all can see for most of the metabolites the correlation with age of the patients is closer to weak. The metabolites with most growing levels with age are cytosine, L-acetylcarnitine, DL-Dopa (ρ 0.45–0.48). The week decline of the level with age (ρ = -0.43) was shown for glutathione. t-SNE clustering of patients was performed based on the obtained metabolomic data (Supplementary Table S4). It revealed 3 clear clusters groups: Sepsis, Control and COVID-19 (Fig. ). Interestingly, the COVID and COVID groups formed approximately one cluster, practically not separated. The Control group was notably segregated, situated far away from the other clusters. Additionally, data from sepsis patients was also well separated from others, while being in proximity to a cluster of COVID-19 patients. We conducted the correlation analysis of to investigate the point of changing the relations of metabolites in groups and found that COVID, COVID and Sepsis groups differed significantly in functioning of metabolome which reflected in changing of correlation links between the groups (Fig. ). It was rather unexpectedly because the t-SNE clustering demonstrated that COVID and COVID groups belonged to one cluster. t-SNE is a dimensionality reduction technique used to visualize high-dimensional data in two- or three-dimensional space. In this study, t-SNE was applied to group patients based on their metabolomic profiles. The clustering revealed clear separation among the Sepsis, Control, and COVID-19 groups. However, the COVID and COVID groups formed a single cluster, indicating global similarity in their metabolomic profiles. Correlation analysis, on the other hand, examined the relationships between individual metabolites, showing how they covary within each group. This analysis revealed significant differences in the correlation patterns of metabolites between the COVID, COVID, and Sepsis groups, highlighting distinct metabolic interactions in each group. These findings were unexpected, given that t-SNE clustering placed the COVID and COVID groups within the same cluster. These results are not contradictory, as each method evaluates different aspects of the data. t-SNE clustering focuses on global similarities in metabolomic profiles, while correlation analysis highlights subtle differences in the functional organization and metabolic interactions. Thus, despite the overall similarity in COVID and COVID profiles as indicated by t-SNE, correlation analysis reveals that the metabolic relationships within these groups are likely influenced by comorbidities. We compared the metabolomic profiles of COVID and COVID patients (Supplementary Table S4, Fig. , ). 62 and 67 metabolites were changed compared to the Control in COVID and COVID groups (p < 0.05), respectively. A decline of 38 metabolites along with rise of 24 compounds was recorded in COVID serum; the most prominent changes are presented in Fig. AD. While for COVID group, 45 metabolites showed a decrease and 22 metabolites exhibited an increase compared to the Control group (Fig. BD ). Among 9 compounds mostly increased in the COVID group, 6 metabolites were also risen in the COVID group. 
We compared the metabolomic profiles of the COVID patients with and without comorbidities (Supplementary Table S4, Fig. , ). Relative to the Control, 62 and 67 metabolites were changed in the groups without and with comorbidities (p < 0.05), respectively. A decline in 38 metabolites along with a rise in 24 compounds was recorded in the serum of the COVID group without comorbidities; the most prominent changes are presented in Fig. AD. For the comorbid COVID group, 45 metabolites showed a decrease and 22 metabolites exhibited an increase compared to the Control group (Fig. BD). Among the 9 compounds most increased in the COVID group without comorbidities, 6 metabolites were also elevated in the comorbid group. These were dimethylglycine, L-acetylcarnitine, L-kynurenine, L-phenylalanine, L-cystathionine and adenosine monophosphate (Fig. AB). Among the 25 metabolites most decreased in the COVID group without comorbidities, the levels of 17 compounds also fell in the comorbid group. These were L-histidine, citrulline, ornithine, uridine, uric acid, L-arginine, asymmetric dimethylarginine, pantothenic acid, L-threonine, 4-hydroxyproline, choline, allantoin, inosine, glycine, L-leucine, acetylcholine chloride and L-isoleucine (Fig. AB).

Despite these common features of the metabolomic changes, there were 29 compounds whose levels differed significantly (p < 0.05) between the two COVID groups (Supplementary Table S4, Fig. A). 15 metabolites exhibited higher levels in the comorbid group than in the group without comorbidities; L-kynurenine, L-lactic acid, L-alanine, uric acid, uracil, carnosine, ornithine and norepinephrine were the most prominent. Conversely, 14 metabolites showed decreased levels in the comorbid group, with L-proline and serine being the most notable (Fig. A). Interestingly, some of these compounds changed their levels progressively from Control to the COVID group without comorbidities and then to the comorbid group; L-kynurenine and ornithine, for example, revealed such dynamics (Supplementary Table S4). To evaluate the potential metabolomic shifts resulting from the changes in measured compounds, an enriched pathway analysis was performed. We identified the top 8 common pathways from the KEGG database that were most significantly dysregulated in both COVID groups compared with the Control group (Fig. AB; Supplementary Table S6 AB). Four further pathways were uniquely disturbed in each group (Fig. C; Supplementary Table S5 D). The metabolism of cysteine and methionine showed the largest difference, being more disrupted in comorbid COVID patients than in those without comorbidities (Fig. A; Supplementary Table S6 AB). These data, consistent with the t-SNE clustering (Sect. 3.2), demonstrated that the difference between the two COVID groups was not substantial and was quantitative rather than qualitative in character.
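The pathway step described above can be sketched as a standard over-representation analysis. This is an illustration under the assumption of a hypergeometric test on KEGG pathway membership (the text does not state the exact enrichment statistic), and the pathway dictionary is a placeholder rather than the study's annotation.

```python
from scipy.stats import hypergeom

def pathway_enrichment(significant: set, measured: set, pathways: dict) -> dict:
    """Hypergeometric over-representation p-value for each pathway."""
    M = len(measured)                      # universe: all measured metabolites
    n = len(significant)                   # draws: significantly changed metabolites
    results = {}
    for name, members in pathways.items():
        in_path = members & measured       # pathway members that were measured
        hits = len(members & significant)  # significant hits in the pathway
        if in_path:
            # P(X >= hits) for X ~ Hypergeometric(M, |in_path|, n)
            results[name] = hypergeom.sf(hits - 1, M, len(in_path), n)
    return dict(sorted(results.items(), key=lambda kv: kv[1]))
```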
In comparing the Sepsis and Control groups, we found differences in the levels of 75 metabolites (p < 0.05) (Supplementary Table S4, Fig. CD). Among these, 35 metabolites exhibited elevated levels and 40 metabolites displayed decreased levels. Enriched pathway analysis identified the top 10 pathways in the KEGG database that were potentially disturbed in the Sepsis group (Fig. C; Supplementary Table S6 C). These pathways included arginine biosynthesis; cysteine and methionine metabolism; alanine, aspartate and glutamate metabolism; glycine, serine and threonine metabolism; the citrate (TCA) cycle; arginine and proline metabolism; pyrimidine metabolism; tyrosine metabolism; histidine metabolism; and glutathione metabolism.

Comparison of the Control group with the Sepsis group and the two COVID groups indicated significant differences in metabolite abundance (p ≤ 0.05). The greatest number of changes in metabolite levels (74) was observed in the Sepsis vs. Control comparison. In the comparison of the COVID group without comorbidities vs. Control, 60 metabolites showed differing levels, while 65 metabolites displayed differences in the comorbid COVID group vs. Control comparison (Supplementary Table S4, Fig. E). The levels of 53 metabolites changed significantly (p ≤ 0.05) in both the COVID group without comorbidities and the Sepsis group relative to the Control group; within these, 11 metabolites increased and 22 decreased in both groups. Similarly, the levels of 57 metabolites showed significant changes in both the comorbid COVID group and the Sepsis group relative to the Control group; the levels of 14 metabolites increased and 25 decreased in both groups (Supplementary Table S4). The levels of 47 metabolites changed in all three groups. All three groups exhibited increased levels of 11 metabolites relative to the Control, the most prominent being dimethylglycine, L-acetylcarnitine, L-cystathionine and adenosine monophosphate. Additionally, 19 metabolites displayed lower levels than the controls across all groups, the most prominent being L-histidine, citrulline, ornithine, uridine, pantothenic acid, L-threonine, choline, inosine, L-leucine and L-isoleucine (Fig. ABC).

Comparison of the metabolomic profiles of the Sepsis group and the COVID group without comorbidities showed that the levels of 60 metabolites were significantly different (p < 0.05) (Supplementary Table S4): 38 metabolites showed higher levels in the Sepsis group, and 22 displayed lower levels. The most significant changes are shown in Fig. B. The comparison between the Sepsis group and the comorbid COVID group showed that the levels of 56 metabolites were significantly different (p < 0.05) (Supplementary Table S4); of these, 36 were higher and 20 lower in the Sepsis group. The most significant changes are shown in Fig. C. The levels of 36 metabolites were significantly different (p ≤ 0.01) in both COVID-19 groups relative to the Sepsis group; of these, 28 were higher in the serum of septic patients (Supplementary Table S4, Fig. BC). The most prominent were acetylcholine chloride, L-histidine, uric acid, allantoin, 4-hydroxyproline, asymmetric dimethylarginine, creatine, creatinine, citric acid, methionine sulfoxide, guanosine, L-carnitine, L-cystathionine and carnosine. The levels of 12 compounds were lower in the serum of septic patients than in COVID-19 patients, the most prominent being L-proline, L-aspartic acid, L-tryptophan, niacinamide, L-tyrosine, L-phenylalanine and L-glutamic acid (Fig. BC). Some metabolite levels changed solely in the Sepsis group or solely in the COVID-19 groups. Thus, 4-hydroxyproline, isocitric and pyruvic acids, procollagen 5-hydroxylysine, creatine, creatinine, SAM, acetylcholine chloride, citric acid, serotonin and symmetric dimethylarginine significantly increased, while L-aspartic acid, L-glutamic acid, L-proline, L-lysine, GABA and niacinamide significantly decreased, exclusively in the Sepsis group. In contrast to the Sepsis group, both groups of patients with COVID-19 were characterized by a significant increase in citicoline, 5-thymidylic acid, GABA, nicotinic acid, L-phenylalanine and histamine, and a decrease in allantoin, 4-hydroxyproline, isocitric and pyruvic acids, procollagen 5-hydroxylysine, acetylcholine chloride, symmetric and asymmetric dimethylarginine, L-methionine, uric acid and L-lactic acid (Supplementary Table S4). Enriched pathway analysis of the KEGG database identified eight metabolic pathways that differed between surgical septic and COVID-19 patients for both COVID groups (p ≤ 0.05) (Fig. BC; Supplementary Table S6 EF); these pathways remained consistent when comparing Sepsis with either COVID group. They were cysteine and methionine metabolism, histidine metabolism, arginine and proline metabolism, the arginine biosynthesis pathway, aspartate, glutamate and alanine metabolism, the phenylalanine, tyrosine and tryptophan biosynthesis pathway, phenylalanine metabolism, and pyrimidine metabolism.
We compared the serum metabolome of patients with COVID-19-related CS vs. surgical sepsis. Both syndromes stem from infections, with COVID-19 linked to a viral infection and surgical sepsis to a bacterial one. These conditions can lead to an increased inflammatory response that poses significant risks to patients. We carried out this study to contribute to the understanding of the pathophysiology underlying these hyperinflammatory processes. The presence of comorbidities can make the course of COVID-19 more severe , . To investigate the impact of concomitant illnesses on metabolomic alterations, two groups of CS patients were created: one without comorbidities and the other with comorbidities. The metabolomic profiles of these groups were also compared.

Our research revealed parallel shifts in the metabolome of both COVID groups. Levels of almost all amino acids, including proteogenic and glycogenic types, as well as key markers of energy metabolism such as pyruvate, lactate and TCA cycle acids, were significantly decreased. Other researchers have also reported decreased levels of these metabolites in patients with CS , . Furthermore, an association was found between the increase in plasma cytokines and the decrease in amino acid levels . In this study, more significant changes in certain energy-related metabolites were seen in the comorbid COVID group, where the IL6 level was elevated. Both COVID-19 groups were characterized by decreased carnitine along with increased acetylcarnitine levels, which might indicate an energy deficiency and impaired transport in the mitochondria. Similar changes have been reported by other authors – . A reduction in the levels of arginine, citrulline and ornithine also indicates a significant disturbance in nitrogen metabolism in COVID-19 patients, aligning with findings from other researchers , . The pathway of tryptophan degradation linked to inflammation modulation underwent a common alteration in both groups of COVID-19 patients, shifting towards kynurenine synthesis. These findings of increased kynurenine levels and decreased tryptophan and serotonin levels are consistent with existing data . In COVID-19 patients, elevated levels of some other metabolites linked to the inflammatory response were observed. These included the inflammatory mediator histamine and the neurotransmitter gamma-aminobutyric acid (GABA), which is known to have anti-inflammatory properties . We assumed that the increase in GABA levels was compensatory. The difference between the two COVID groups can be considered insignificant. The most significant variations in the dynamics of change were observed for L-proline and serine: their levels increased sharply in the COVID group without comorbidities and approached control levels in the comorbid group. It could be assumed that the changes in these amino acids in patients with comorbidities were related to the characteristics of the accompanying illnesses and their therapies. We can therefore propose that the main changes in the metabolomic profiles of both groups of COVID-19 patients were due to the virus infection, not to the comorbidities. This proposal is in line with the conclusion in .
Nevertheless, an interesting feature emerged from the correlation analysis, which showed that the relationships among the studied metabolites differed between the two COVID groups. We suppose this reflects the influence of comorbidity on the functioning of the organism: although the differences in metabolite levels were modest, there were significant discrepancies in the connections between metabolites in the two groups.

The data obtained on the metabolomics of surgical septic patients, presented in Supplementary Table S4 and Fig. C, were consistent with the data of other researchers. The change in the levels of proteogenic and glycogenic amino acids is considered one of the characteristic features of sepsis . The decrease in serum amino acid levels has been explained by their use as substrates for the TCA cycle and glycolysis to meet energy demand, which greatly increases during sepsis . The energy imbalance in the Sepsis group was confirmed in our study by increased levels of oxoglutarate and citrate, metabolites of the TCA cycle. These data are consistent in particular with the studies presented in , . Another feature of sepsis is an impairment of the beta-oxidation of fatty acids in mitochondria and an increase in acetylcarnitine levels , . In our study, L-acetylcarnitine was increased in surgical septic patients, as in . In addition, surgical septic patients had elevated kynurenine levels and decreased tryptophan and serotonin levels, suggesting a redirection of tryptophan degradation toward kynurenine synthesis. These results are consistent with the data presented in . The inflammatory marker procollagen 5-hydroxy-L-lysine was also significantly increased in the Sepsis group; similar findings were reported in . The increases in the levels of dopamine, as in , and of acetylcholine chloride that we found could be attributed to the compensatory anti-inflammatory effects of these metabolites , . In addition, we detected significant changes in the levels of compounds associated with the development of oxidative stress in surgical septic patients: decreased levels of glutathione and increased levels of its precursor gamma-glutamylcysteine, of ophthalmic acid and of symmetric dimethylarginine, as noted by other authors , , . We should note that there is not enough information in the literature on the levels of oxoglutaric acid and acetylcholine chloride in septic serum or plasma; our results for these metabolites are therefore among the first and need confirmation.

Overall, the metabolomic profile data of patients with COVID-19 and surgical sepsis were in agreement with findings from other researchers. We observed a similarity in the direction of level changes for various metabolites when comparing the metabolomic profiles of patients with COVID-19 and surgical sepsis across the series Control, COVID without comorbidities, comorbid COVID, Sepsis (Fig. ABCD). For example, the levels of L-alanine, ornithine and uric acid increased sequentially. This could be linked to a greater breakdown of proteins and nucleic acids caused by cell death and inflammation, due to a more severe course of the pathological process in the comorbid COVID group than in the group without comorbidities, and to more inflammation and oxidative stress in the Sepsis group than in COVID-19 patients , . The increase in uric acid levels was also a marker of the gradual deterioration of renal function along this series , which was confirmed by clinical analysis of the patients' creatinine levels (Table ). The inflammatory marker kynurenine showed elevated levels in all three groups.
Activation of the kynurenine pathway, with increased levels of kynurenine and other participants in this pathway, is associated with immunosuppression in response to inflammatory signaling . In the Sepsis group, the kynurenine level was lower than in the comorbid COVID group, where it was highest, presumably reflecting more severe inflammation; however, the difference was insignificant, making it difficult to draw conclusions on this point. It is known that the kynurenine pathway of tryptophan degradation is activated by high levels of proinflammatory cytokines, in particular IFNγ and IL6 . Based on the data in Table , we can assume that the higher level of IL6 in the comorbid COVID group indicates a greater degree of activation of the kynurenine pathway. Some comorbidities, such as the most common cardiovascular problems, are associated with inflammation and activation of the kynurenine pathway . Therefore, we can assume that the comorbidities in this group contributed to the elevated kynurenine levels. Another participant in the tryptophan degradation pathway, serotonin, showed the opposite alteration: its level was highest in the Sepsis group and lowest in the comorbid COVID group. This observation requires further investigation for a better understanding of the involvement of the kynurenine pathway in hyperinflammatory conditions of various origins.

The level of the antioxidant carnosine increased sequentially across the COVID-without-comorbidities, comorbid-COVID and Sepsis groups; moreover, its level was lower in the COVID group without comorbidities than in the Control group, but significantly higher in the comorbid COVID and Sepsis groups. In the same series there was a consistent increase in the levels of norepinephrine, a metabolite known for its reported anti-inflammatory properties . At the same time, the levels of the norepinephrine precursor in the synthesis pathway, L-tyrosine, changed in the opposite direction and decreased proportionally across the series. This suggests that there may be compensatory production of carnosine and norepinephrine as inflammation and oxidative stress escalate along the sequence from Control to COVID without comorbidities to comorbid COVID to Sepsis. The sequential increase in lactic acid along this series appears explicable: as lactic acid levels increased across the studied groups, it indicated a transition towards glycolysis, signaling a shift in energy metabolism. Notably, in all three groups the level of lactic acid remained lower than in the Control group, despite documented elevations in lactic acid levels observed in sepsis and severe cases of COVID-19 , . Significant changes in methionine metabolism were observed in all groups. In the progression from COVID without comorbidities to comorbid COVID to Sepsis, a notable change in the concentrations of two metabolites associated with the methionine cycle was detected: an increase in SAH levels and a decrease in serine levels. A decrease in serine levels could affect both methionine synthesis and the pathway for the production of the antioxidant glutathione, which showed similarly low levels in all three groups. In the same progression, methionine sulfoxide, another participant in the methionine cycle, showed a steady increase in levels; we assumed that this was necessary to maintain the level of methionine as a key metabolite in many processes. Levels of several metabolites fell to a similar extent in patients with COVID-19 and in the Sepsis group (Fig. ABCD).
First is the decrease in amino acid levels observed in both the Sepsis group and patients with COVID-19, which may indicate an increased demand for oxidative substrates for the energy cycles in all groups . Levels of glutathione and choline, known for their ability to reduce oxidative stress and inflammation, diminished equally in individuals with surgical sepsis and with COVID-19. So, while there were resemblances between the metabolomic profiles of Sepsis and COVID-19 patients, the distinctions were also significant. Figure E schematically shows the number of metabolites whose levels changed significantly in the three groups. The greatest number of such metabolites belonged to the Sepsis group (74); the two COVID groups differed only slightly in this respect (60 and 65 metabolites, respectively), but the difference between both COVID-19 groups and the Sepsis group was much greater. This could be explained by the different dynamics of the two pathological processes: as already mentioned, sepsis is characterized by a more explosive course than CS, which also leads to severe consequences, but more slowly .

In the intergroup comparison of Sepsis vs. COVID-19, 28 metabolites showed a significant increase (p ≤ 0.01) relative to both groups of COVID-19 patients (Sect. 3.4). These included metabolites related to mitochondrial function, markers of oxidative stress, collagen destruction, the methionine and transsulfuration cycles, and renal failure. Most striking were the differences in the levels of TCA cycle metabolites, which probably reflect the biological basis of the two syndromes. COVID-19 is characterized in most cases by lung injury at the beginning of the infection, with oxygen deficiency as a consequence; thus, suppression of the TCA cycle occurred. Surgical sepsis is not necessarily associated with severe lung injury , especially in the early stage, as in our patients. In the absence of hypoxia, the TCA cycle is upregulated to meet the increased demand for energy production to maintain hyperinflammation and immune function. Some metabolites exhibited a significant (p ≤ 0.01) downregulation in surgical septic patients compared to COVID-19 patients (Fig. BC). The lower levels of several amino acids in the Sepsis group could be associated with the greatest need for energy and the more intensive utilization of amino acids in energy cycles. However, the most significant changes were observed in the profiles of GABA and niacinamide. These metabolites were increased in both COVID groups, but their levels decreased by several orders of magnitude in the Sepsis group. GABA may have an anti-inflammatory function , and niacinamide is a precursor in the synthesis of NAD. Such alterations might indicate a significantly higher level of inflammation and greater energy problems in surgical septic patients than in patients with COVID-19. To evaluate the potential changes in metabolomic pathways, we performed an enrichment analysis against the KEGG databases. The analysis revealed that certain pathways common to the two COVID groups and the Sepsis group were significantly perturbed. These pathways included: glycine, serine and threonine metabolism; arginine metabolism; cysteine and methionine metabolism; arginine and proline metabolism; alanine, aspartate and glutamate metabolism; glutathione metabolism; tyrosine metabolism; and histidine metabolism (Fig. ). Among these, the cysteine and methionine pathway was one of the most altered in all groups.
By evaluating the degree of alteration in this pathway based on the number of metabolites analyzed, the p-value and the impact factor, a greater impairment could be seen in the comorbid COVID and Sepsis groups than in the COVID group without comorbidities. The cysteine and methionine pathway is one of the most important in the body: it is associated both with maintaining the level of methionine, which is a donor of methyl groups, and with the transformations of the sulfur-containing amino acids responsible for redox potential. In our study, the pathways of arginine biosynthesis and histidine metabolism were also similarly altered in all three groups. The TCA cycle was disturbed only in the comorbid COVID and Sepsis groups, and to a greater extent in the Sepsis group, as assessed by the number of metabolites analyzed, the p-value and the impact factor. This analysis confirmed that the significant metabolic changes in the serum of COVID-19 and surgical septic patients were primarily linked to amino acid metabolism and to alterations in redox potential and the energy cycle, with metabolic irregularities amplifying from COVID without comorbidities to comorbid COVID to Sepsis.

As we mentioned earlier, such important characteristics as the SOFA score differed between the COVID-19 and Sepsis groups. This difference was due to the specific way biological material was collected from COVID-19 patients, as we tried to collect it before the beginning of treatment. In this context the question arises: did this SOFA discrepancy influence the results? Several investigations by other scientists revealed that some metabolites connected with inflammation and oxygen deficiency increase in accordance with the severity of COVID-19 , , , . On the basis of this knowledge, we may cautiously suppose that, in the hypothetical case of a group of untreated patients with SOFA scores comparable to the Sepsis group, the results would be similar to those obtained in this work, or would differentiate the groups even further with respect to metabolites of oxygen energetics and inflammation, in particular through a further lowering of TCA metabolites and a rise in kynurenine pathway metabolites in COVID-19 patients.

This study had several limitations. First, we performed a targeted metabolomic study, so our results did not cover metabolomic changes as completely as a non-targeted metabolomic study could. Secondly, there was a substantial disparity in SOFA scores between the COVID-19 and Sepsis groups. This incongruity arose from our deliberate collection of blood samples from COVID-19 patients prior to treatment initiation, to mitigate the impact of antibiotics, glucocorticoids and similar factors; the observed SOFA discrepancy also reflects the slower progression of multiple organ failure in COVID-19 compared with surgical sepsis. Finally, our study reflected real-world conditions, resulting in differences in age between the compared groups. It is imperative to consider these limitations when evaluating the results of this research. This study used biobank samples from COVID-19 patients with cytokine storm and from surgical sepsis patients in St. Petersburg and the Leningrad region (Russian Federation) and represents one of the first comparative serum metabolomic studies in this geographical area. The serum of patients with COVID-19 and surgical sepsis showed significant changes in various metabolites linked to amino acid metabolism, nitrogen metabolism, inflammation, the folate and methionine cycles, and glycolysis. The differences between the COVID groups with and without comorbidities were not significant.
Changes in metabolite levels tended to increase consistently from the COVID group without comorbidities to the comorbid COVID group to the Sepsis group, with the most pronounced changes in the Sepsis group. The most significant differences between surgical septic and COVID-19 patients appeared in metabolites related to kynurenine synthesis, niacinamide, the TCA cycle and GABA. In all groups, there were significant alterations in the cysteine and methionine metabolic pathways. Thus, our study revealed common and distinct features of the metabolomic profiles of patients with surgical sepsis and with CS associated with COVID-19. Supplementary Information.
Lessons for the clinical nephrologist: acute kidney injury during therapy with apixaban
f7756b88-e049-4f43-9324-505b1b615999
11043096
Internal Medicine[mh]
An 81-year-old Caucasian male was hospitalized with respiratory distress due to pneumonia with pleural effusion. His medical history was remarkable for chronic kidney disease with a serum creatinine (sCreat) of 1.6 mg/dL, chronic obstructive pulmonary disease, and atrial fibrillation on therapy with apixaban. Laboratory tests showed worsening of renal function during the infection (serum creatinine up to 2.2 mg/dL). He was treated with antibiotic therapy and pleural drainage; during hospitalization, anticoagulant therapy was switched to enoxaparin. Given the improvement of the patient's general condition, apixaban was resumed at hospital discharge. Ten days later a chest computed tomography was performed as follow-up and documented new ground glass areas and pulmonary consolidations, therefore the patient was referred to the emergency department. Physical examination showed pitting edema and bilateral rhonchi. Blood analysis revealed severe renal dysfunction (sCreat, 10.78 mg/dL; serum urea, 264 mg/dL) and hematuria. The patient was oliguric (urine output of 300 mL/day). Apixaban therapy was suspended. Since vasculitis was suspected, steroid therapy was started (methylprednisolone 125 mg/day with subsequent tapering). Given the persistence of oliguria, hemodialysis was initiated. Antinuclear antibodies, antineutrophil cytoplasmic antibodies and anti-glomerular basement membrane antibodies tested negative. Further workup ruled out hepatitis B and C as well as tuberculosis infection. Urine sediment analysis documented dysmorphic red blood cell casts. He also presented a petechial eruption on his hands and lower limbs, therefore a skin biopsy was carried out and showed leukocytoclastic vasculitis. Fibrobronchoscopy with bronchoalveolar lavage was performed, revealing evidence of alveolar hemorrhage. In order to better define the etiology of the persistent kidney injury (as indicated in the literature in cases of unexplained worsening of renal function in a patient treated with anticoagulants), to exclude drug-induced vasculitis and to evaluate the need for further immunosuppressive therapy, the patient underwent ultrasound-guided kidney biopsy. Subsequently, on the same day, he presented hypotension (systolic blood pressure of 60 mmHg) and acute anemia (hemoglobin drop from 11.1 to 8.7 g/dL). An urgent computed tomography scan showed hemorrhage due to left renal artery damage, thus artery embolization was performed, with temporary stabilization of the patient's condition. However, in the following days he presented hemodynamic instability requiring inotropic therapy, with poor tolerance of hemodialysis. Multiple organ failure ensued and the patient died on the 25th day of hospitalization. Kidney biopsy showed acute tubular damage and glomerular hemorrhage with erythrocyte casts in Bowman's space and the renal tubules, compatible with the diagnosis of anticoagulant-related nephropathy. The glomeruli were unremarkable: no intra- or extracapillary proliferation was found, therefore vasculitis was excluded. Immunofluorescence was negative for immune complex deposits. Anticoagulant-related nephropathy is a newly recognized cause of acute kidney injury, and the mechanism of kidney damage is still not completely understood. It is characterized by glomerular hemorrhage and by the presence of erythrocyte casts in the renal tubules.
Glomeruli are normal or present only minor changes and anticoagulant-related nephropathy should be suspected in case of disproportion between a high number of red blood cell casts, signs of acute tubular necrosis and minimal or absent glomerular lesions . Initially, it was reported in patients receiving warfarin, but subsequently the association with direct oral anticoagulant therapy was also documented . The postulated mechanism is tubular obstruction by erythrocyte casts with release of free hemoglobin from red blood cells leading to increased oxidative stress . The available data suggest that injury is not due to coagulopathy alone but that other factors causing pre-existing glomerular damage must be present . The main risk factors are chronic kidney disease, age, diabetes mellitus, hypertension and heart failure . Data on direct oral anticoagulant use and its safety in chronic kidney disease are limited. A recent review of available studies which included patients with chronic kidney disease and atrial fibrillation who were on therapy with direct oral anticoagulants compared to warfarin revealed a trend towards less major bleeding with lower thromboembolic risk in the novel anticoagulant group . Furthermore, several trials (including post hoc analysis) demonstrated lower risk of acute kidney injury and slower chronic kidney disease progression with novel agents compared to warfarin . Nevertheless, cases of anticoagulant-related nephropathy in patients on therapy with direct oral anticoagulants were reported . This emphasizes the need for particular attention to, and monitoring of, patients with chronic kidney disease on anticoagulant treatment, both with older and novel agents. Furthermore, a recent analysis of anticoagulant-related nephropathy showed that its prevalence is also not uncommon in patients with previously reported normal kidney function (although warfarin is associated with higher risk than direct oral anticoagulant therapy) . Therefore, this recently recognized cause of acute kidney injury should be considered in cases of otherwise unexplainable worsening of renal function in patients on anticoagulant therapy. Renal biopsy is seldom performed because of the high risk of complications, but can provide valuable information. There are reports in the literature where anticoagulant-related nephropathy was the first clinical presentation of underlying glomerular disease (such as IgA nephropathy) which further supports the usefulness of histological examination in this context. Currently there are no guidelines addressing the management of anticoagulant-related nephropathy. General recommendations suggest optimization of anticoagulant therapy to a therapeutic range, moreover, on the basis of available data, switching to direct oral anticoagulants in patients on warfarin and reducing the dose in patients already receiving novel agents has been suggested . Temporary discontinuation of anticoagulant therapy, when possible, is also proposed. Some authors suggest a course of corticosteroid treatment, with careful evaluation of the risk of complications, in particular life-threatening infections . Prognosis is uncertain and the mortality rate is high; many patients do not recover kidney function . In the literature there are reports of anticoagulant-related nephropathy with leukocytoclastic vasculitis caused by warfarin and direct oral anticoagulant-related cutaneous vasculitis without kidney injury . 
Our patient presented biopsy-proven anticoagulant nephropathy, and the vasculitic cutaneous lesions were most likely induced by the same drug. This case reminds us that acute kidney injury with a clinical presentation suggestive of systemic vasculitis can be a manifestation of anticoagulant-related nephropathy (Fig. ). Learning points Anticoagulant-related nephropathy is a recently recognized and most likely underestimated cause of acute renal injury. It is associated with both older and novel anticoagulant agents, although available data demonstrate higher risk on therapy with warfarin than with direct oral anticoagulants. Preexisting chronic kidney disease is one of the main risk factors, therefore careful follow up is necessary in this group of patients on anticoagulant therapy. Kidney biopsy provides valuable information about the kidney injury but is not always performed due to the increased risk of complications. There are no guidelines regarding treatment of anticoagulant-related nephropathy but a switch to direct oral anticoagulants in patients on warfarin, dose reduction in patients already on therapy with novel agents, temporary discontinuation of anticoagulant therapy and a course of corticosteroid treatment may be considered. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 16 kb)
Food-borne disease risk: biosurveillance in water networks
50c97a73-3c83-4edb-92b8-212da09919d8
11395284
Microbiology[mh]
After Edward Haynes (molecular biologist at Fera Science Ltd (Fera) and PATH-SAFE Science Fellow) gave a short informative presentation covering the context of the workshop and studies being carried out across the wider PATH-SAFE programme, the following presentations provided overviews of different elements of the pilot studies being conducted under PATH-SAFE.

A multi-factor study of wastewater in the Taw & Torridge catchment
The presentation by David Walker (Environmental Microbiologist), Cefas, described one of the pilot studies, investigating prevalence and diversity of pathogens in weekly samples of raw influent and treated effluent at two sewage treatment works, over respective winter and summer sampling periods. The pathogens of interest (norovirus, Listeria , Salmonella and E. coli ) were assessed against contemporaneous samples of river water and shellfish in the study area, to examine any spatial or temporal variability in their presence or persistence. The benefits of using wastewater surveillance as a preventative and mitigative measure for FBD outbreaks were highlighted.

Assessment and prediction of the risks posed by hospital- and community-derived norovirus and AMR in coastal waters and shellfish
Kata Farkas (Environmental Virologist) and Reshma Silvester (Environmental Microbiologist), both Bangor University, presented a study examining the risk of viral release through effluent discharges from municipal and hospital wastes and, in particular, the potential for antibiotics in such discharges to contribute to the growing AMR problem. It was discussed how combining wastewater-based epidemiology with environmental surveillance can assist in identification of potential risks to human health from municipal and hospital discharges, helping to inform appropriate measures for minimising those risks.

Food-borne pathogens affecting the Ribble shellfisheries: a review of available data and evidence
Ellie Brown (Strategic Evidence and GIS Manager), Ribble Rivers Trust, gave an overview of the characteristics of the River Ribble catchment, its natural environment, and human activities that may impact on the prevalence of food-borne pathogens in its estuarine waters, where three classified shellfisheries are located. The talk examined, among other things, land use, recreation, sewer misconnections, septic tanks, agriculture, livestock and high concentrations of both wild and farmed birds, all of which may act as sources or vectors of pathogen transmission in the catchment. Current monitoring and data availability were discussed, limitations highlighted and recommendations made for improved targeted surveillance in the catchment, including in partnership with ongoing programmes and initiatives.

Understanding sources and pathways of estuarine pathogen loads: a tale of two models
Paulette Posen (Principal Investigator on PATH-SAFE Workstream 2a, presenting on behalf of Richard Heal (Spatial Environmental Scientist)) and David Haverson (Numerical Modeller), both Cefas, focussed on the terrestrial and aquatic aspects of modelling pathogen transport from river catchments to coastal waters, based on the Taw and Torridge pilot study carried out under PATH-SAFE. Firstly, the catchment model was considered, which included land use and wastewater discharge points as potential pathogen sources, using E. coli as the pathogen of interest to model likely transmission routes of faecal contamination from either humans or agricultural livestock. The model considered how pathogen loadings and transport could be influenced by rainfall (hence river flow rates) and other environmental characteristics. The way in which model outputs could be calibrated and validated using existing monitoring data, or data collected during pilot study sampling, was conveyed. The second element of this presentation focused on the use of a hydrodynamic model to take outputs of E. coli loadings from the river catchment model, to assess pathogen transport and decay in estuarine and coastal waters. It was highlighted how model accuracy is constrained by the availability of good quality data for model set-up and validation.
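The decay component of such models is commonly parameterised as a first-order die-off, often expressed through a T90 (the time for a 90% reduction in viable E. coli). The sketch below is a generic illustration of that idea rather than the pilot study's model, and the T90 value is a placeholder.

```python
def decayed_concentration(c0_cfu_per_100ml: float, hours: float,
                          t90_hours: float = 24.0) -> float:
    """First-order die-off: C(t) = C0 * 10**(-t / T90)."""
    return c0_cfu_per_100ml * 10 ** (-hours / t90_hours)

# e.g. a riverine input of 10,000 cfu/100 ml after 12 h of estuarine transport:
# decayed_concentration(10_000, 12) -> ~3,162 cfu/100 ml with T90 = 24 h
```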
Machine learning assisted identification of pathogens in the microbiome of water systems
In the final presentation, Jaime Martinez-Urtaza (Molecular Epidemiologist), Universitat Autonòma de Barcelona (UAB), showed how novel machine learning techniques could be employed to build a picture of spatial and temporal pathogen prevalence and diversity from routine or study-specific sampling and analysis. By using whole genome sequencing techniques to generate new data from samples, and incorporating reference genomes from existing databases, these tools use decision-tree and optimisation methods to train the model to predict the presence or absence of pathogens of interest. As well as supporting the assessment of times and locations of highest risk with respect to pathogen contamination, the approach can be used to identify contamination events, and to achieve source tracking of implicated microorganisms with very high resolution.
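As a rough sketch of that kind of decision-tree approach (presence/absence prediction from sample features), the following assumes a numeric feature matrix, for example derived from sequencing data, and binary pathogen labels; the model family, features and tuning are illustrative, as the UAB pipeline is not described in that detail here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def train_presence_model(features: np.ndarray, pathogen_present: np.ndarray):
    """Fit a tree-ensemble classifier and report cross-validated discrimination."""
    model = RandomForestClassifier(n_estimators=300, random_state=0)
    auc = cross_val_score(model, features, pathogen_present,
                          cv=5, scoring="roc_auc").mean()
    model.fit(features, pathogen_present)
    return model, auc
```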
Following the presentations, the workshop participants were split into breakout groups, ensuring a broad representation of occupations, disciplines and interests within every group. Every group discussed the answers to four questions related to the establishment of an effective national biosurveillance framework for pathogens implicated in FBD, as follows:
What do you perceive are the gaps and limitations to the assessment and management of microbiological risk?
What do you see as the biggest opportunities for underpinning a future sustainable biosurveillance network?
What are the benefits or barriers to a 'One Health' approach to biosurveillance?
What do you see as the main benefits (and barriers) to emerging technologies/practices in the field of biosurveillance?
The activities prompted discussion on the perceived benefits from, and/or barriers to, improving and developing these systems and processes, including through new technologies.
Data, collaboration and resources (both human and financial) were common themes throughout the discussions, with associated positive and negative aspects considered and identified across these themes. The aim was to draw out ideas for potential improvement and innovation, that could transform existing frameworks and translate into successful, cost-effective measures for a fully integrated national surveillance system for the prevention and mitigation of FBD outbreaks. The issue of central leadership, highlighted under the ‘One Health’ discussion, was a cross-cutting topic of conversation, aligned with the need for joined-up priorities mapped across government, as raised in the wider discussions on gaps, limitations and opportunities. Within this mapping of objectives, there was also a need for forward thinking to anticipate the management of new and emerging risks. It was agreed that surveillance activities should be ‘co-productional’ from the outset, addressing multiple needs, using a common language and having a common strategic direction that could be used as levers for stakeholders. It was generally agreed that there were both benefits and barriers to the mechanics and implementation of a holistic ‘co-productional’ approach to biosurveillance. For instance, implementation of parallel, multi-criteria assessments, via coordinated cross-organisational, multi-purpose monitoring, may lead to greater efficiencies or improved detection of combined pressures or previously unconsidered risks. However, correlating and interpreting the combined outputs across human, animal and environmental domains would be challenging, not least the digital processing power that would be required at the national level. Likewise, the holistic approach may be easier for some stakeholders to understand in the context of their real-world observations and experiences, but translation of findings from complex real-world interactions to an accessible language for a range of stakeholders can be difficult. The necessity to include uncertainty in analytical outputs was also stressed, even though many stakeholders still fail to take this into consideration. It was suggested that uncertainty could be described to stakeholders via a traffic light system, similar to that now used on some food products as an indicator of nutritional content (in which red, amber and green labels define whether a food has high, medium or low quantities, respectively, of fat, saturates, sugar and salt). It was maintained that there needed to be an effective way of evaluating generalised models (e.g. meteorological/hydrological/hydrodynamic models for river catchment, estuarine and coastal transport modelling; geostatistical models for quantifying microbial risk; scenario modelling to evaluate future environmental risk) for their suitability to guide actions at local scales. One option would be to have a set of ‘characteristic’ models at regional catchment levels (taking account of, for instance, different natural geographical and human activity factors) which could then be further refined for application at the local level, taking specific features and characteristics (e.g. conurbations, land use, etc) into consideration. 
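As a toy illustration of the traffic-light labelling of uncertainty suggested in the discussion above, a relative uncertainty measure (for example, the coefficient of variation of a model output) could be binned into three labels; the thresholds here are arbitrary placeholders for discussion, not values proposed at the workshop.

```python
def uncertainty_label(coefficient_of_variation: float) -> str:
    """Map a relative uncertainty onto a red/amber/green label."""
    if coefficient_of_variation < 0.15:
        return "green"   # low uncertainty
    if coefficient_of_variation < 0.40:
        return "amber"   # moderate uncertainty
    return "red"         # high uncertainty
```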
One discussion group felt that timeliness was an issue in relation to the assessment and management of microbiological risk, both in terms of the timeliness in development and implementation of tools in various stakeholder strategies, and in the frequent disconnect between observations, reporting to regulators and mitigative actions. Timeliness was also seen to be an important consideration and/or constraint in emerging technologies and practices, which require a period of testing and validation before full implementation. It was thought that the costs and benefits of emerging technologies in biosurveillance were yet to be explored. The workshop brought together opinions from different stakeholder perspectives on the gaps and limitations in current surveillance of pathogens in water networks, with a particular emphasis on FBD risk. The main messages from the workshop regarding the establishment of a successful biosurveillance framework were: Collaboration across a wide range of stakeholders is vital. Availability, sharing and integration of high-quality data are crucial for risk assessment. Funding and human resources are important limitations to successful implementation. Citizen science and automated data collection are among the biggest opportunities for achieving this goal.
The effect of educational program based on theory of planned behavior on promoting retinopathy preventive behaviors in patients with type 2 diabetes: RCT
6bf0d7e6-a2ab-4ef1-ac02-aa013f6cd7e7
7809809
Patient Education as Topic[mh]
Diabetes is a major public health problem affecting more than four hundred million people worldwide . According to a study conducted in 116 countries from 2010 to 2019, the prevalence of diabetes in adults aged 20 to 79 will increase from 6.9% in 2010 to 7.7% in 2030 . Diabetes has been associated with the development of various complications, including retinopathy . Studies have shown that people with diabetes are 25 times more likely to become blind than others . Optimal management of diabetic retinopathy should include annual screening, adequate control of associated risk factors and timely treatment . With the rising prevalence of diabetes in the world, the WHO has declared it a latent epidemic and emphasizes increasing patients' awareness about the complications of the disease . A significant element of optimal management, which is often undervalued, is the improvement of knowledge and education among patients with diabetes . Therefore, it is essential to have information about the beliefs and awareness of those at risk in order to develop preventive strategies . Previous studies have assessed knowledge, attitudes and practices regarding eye disease in patients with diabetes; for example, a study in Turkey showed that 31% of patients with diabetes had not received eye care training and did not know that the disease affects their eyesight . In Nepal, only 12% of patients with diabetes were aware of the ocular complications of diabetes . Other studies have emphasized the need to educate patients with diabetes to increase their awareness and performance in the prevention of retinopathy [ , , ]. On the other hand, given the important role of patients with diabetes in adopting health behaviors to prevent the complication of retinopathy, the importance of educational interventions based on appropriate behavioral theories is even greater. Therefore, in the present study, the Theory of Planned Behavior (TPB) was used. According to this theory, a patient's attitude is his or her favorable or unfavorable evaluation of performing a particular behavior, formed through mental perceptions or past experiences. Behavioral intention is the decision of an individual to adopt a behavior, and subjective norms are the effects of different people on the behavior of an individual. Perceived behavioral control refers to the patient's perception of his or her competence to successfully perform hygiene-related behaviors . Preventive care includes blood sugar control behaviors, regular visits to an ophthalmologist and timely eye examinations, adherence to a medication regimen, and adherence to a proper diet. To measure patients' behavior more accurately, fasting blood sugar (FBS) and quarterly HbA1c were used. Figure shows the Theory of Planned Behavior. According to the literature reviewed, no TPB-based intervention on the promotion of eye care behaviors in patients with diabetes was found. Therefore, in the present study, we taught eye care behaviors to patients with diabetes based on TPB constructs, and the effect of this training was assessed by measuring the patients' behavior and the blood sugar control indices FBS and HbA1c. This study is an educational randomized controlled trial (single blind) that was carried out on 94 patients with diabetes referred to the Diabetes Clinic in Arak. It was prospectively registered on 5 Apr 2019, https://fa.irct.ir/trial/38401 . This study adheres to CONSORT guidelines.
Based on a similar study , a sample size of 42 individuals per group was calculated; 10% was added to each group to allow for non-response to the questionnaires and loss of samples during follow-up, giving a final sample of 47 individuals per group. For sampling, a list of all patients was obtained from the Diabetes Clinic of Arak. Then, 94 patients meeting the inclusion criteria were selected by simple random sampling and were randomly divided into two groups, control and intervention.
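The arithmetic behind such figures is typically a two-sample-means power calculation plus an attrition allowance. The sketch below is illustrative: the study cites a similar trial for the figure of 42 rather than reporting its own effect size and standard deviation, so the parameters of n_per_group are placeholders; only the 10% inflation step is taken from the text.

```python
import math
from scipy.stats import norm

def n_per_group(sd: float, diff: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Two-sample means: n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / diff)^2 per group."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * sd / diff) ** 2)

# The 10% allowance for non-response/loss reported in the text:
# math.ceil(42 * 1.10) -> 47 participants per group
```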
The conceptual framework of this study followed the similar study by Malekmahmoodi et al. with respect to the primary and secondary outcomes and the structure of a TPB-based intervention program. Inclusion criteria were at least 1 year of diabetes history, no ocular complications, volunteering to participate in the study, age between 30 and 70 years, and literacy at least to the fifth grade. Exclusion criteria were development of ocular complications during the study, at the discretion of the ophthalmologist, requiring special treatment and education, and patients' unwillingness or refusal to participate in the study.

Data collection tool
The data collection instrument was a valid and reliable questionnaire, previously used in similar studies , consisting of the following sections: a Patient Demographic Information Questionnaire, including age, occupation, education, duration of the disease, and type of treatment; a Patient Awareness Questionnaire for Diabetes and Diabetic Complications, which included 10 four-choice items; and the Theory of Planned Behavior questionnaire, which included the following constructs: A) Patients' Attitudes toward Eye Care: 9 questions; B) Patients' Perceived Behavioral Control in Eye Care: 5 questions; C) Patients' Subjective Norms for Eye Care: 5 questions; D) Patients' Intention for Eye Care: 10 questions; and E) Patients' Performance in Eye Care: 6 questions measuring eye care behaviors. In this study, retinopathy prevention care included caring behaviors regarding blood sugar control, regular visits to an ophthalmologist and timely eye examinations, adherence to a medication regimen, adherence to a proper diet, and appropriate physical activity. The behaviors were measured by a standard questionnaire and by the indices FBS and HbA1c. HbA1c tests were conducted using a bio-system kit and a chromatography method; these are standard kits approved by Iran's Ministry of Health and Medical Education. FBS is the most common test used to diagnose diabetes. The test is done in the morning, before the person has eaten. The normal range of blood glucose is between 70 and 100 mg/dl; levels between 100 and 126 mg/dl are considered impaired fasting glucose or pre-diabetes, and diabetes is generally diagnosed when fasting blood glucose is 126 mg/dl or higher . In this study, the scoring, validity and reliability of the questionnaires followed the similar study .

Educational intervention
In this study, based on the initial needs assessment (pre-test), the educational materials were prepared and the educational sessions were conducted in the form of 4 sessions as follows: The first session focused on improving patients' awareness of diabetes, familiarity with the structure of the eye, and proper eye care. The second session focused on improving patients' attitudes and subjective norms, including increasing patients' appreciation of the importance and benefits of proper eye care and of the negative consequences of neglecting it. The third session focused on perceived behavioral control, familiarizing patients with the barriers to retinopathy prevention and improving patients' intentions to take proper care of their eyes. The fourth session focused on improving retinopathy preventive behaviors, including regular blood sugar measurement, adherence to a proper diet, seeing an ophthalmologist, and taking medications regularly. Finally, 3 months after the completion of the educational intervention, the data of both the intervention and control groups were collected again using the questionnaire and the FBS and HbA1c tests, and the two groups were compared.

Data analysis
Data analysis was performed using SPSS version 22; after the normality of the data distribution was confirmed with the Kolmogorov-Smirnov test, the data were analyzed using the Chi-square test, the paired t-test, and the independent t-test. The significance level of the tests was set at less than 0.05.

Ethical considerations
The study protocol was reviewed and approved by the research ethics committee of Arak University of Medical Sciences (Approval ID: IR.ARAKMU.REC.1397.169). This trial has been registered at the Iranian Registry of Clinical Trials, IRCT20180819040834N1. Written informed consent was obtained from all participants, and data are kept confidential and anonymous.
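A minimal sketch of the stated analysis plan (Kolmogorov-Smirnov normality check, paired t-test for within-group change, independent t-test for the between-group comparison at alpha = 0.05); the function and variable names are placeholders, not the study's SPSS workflow.

```python
import numpy as np
from scipy import stats

def analyze_construct(pre_int: np.ndarray, post_int: np.ndarray,
                      post_ctrl: np.ndarray, alpha: float = 0.05) -> dict:
    """Within- and between-group tests for one TPB construct score."""
    # Normality check against a fitted normal (the study used Kolmogorov-Smirnov)
    _, p_norm = stats.kstest(post_int, "norm",
                             args=(post_int.mean(), post_int.std()))
    _, p_within = stats.ttest_rel(pre_int, post_int)     # paired t-test
    _, p_between = stats.ttest_ind(post_int, post_ctrl)  # independent t-test
    return {"normal_at_alpha": p_norm > alpha,
            "within_group_p": p_within, "between_group_p": p_between}
```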
Tables and present descriptive statistics for the diabetes sample. The mean age of patients with diabetes was 57.6 ± 8 years in the intervention group and 59.1 ± 7.1 years in the control group, with no significant difference on the independent t-test ( p = 0.381). Other demographic characteristics of the patients studied are reported in Tables and . The results showed no significant difference between the intervention and control groups in terms of TPB constructs before the intervention. After the intervention, the independent t-test showed a significant difference between the intervention and control groups in terms of TPB constructs and retinopathy-preventive behaviors (Table ). Performance of the intervention group in retinopathy-preventive behaviors increased from 2.95 ± 1.42 to 4.48 ± 1.45 after the intervention ( p < 0.001). As shown in Table , the mean FBS and HbA1C of the patients in the intervention group decreased significantly 3 months after the educational intervention ( P < 0.05), while this decrease was not observed in the control group. This study found that training patients with diabetes based on the Theory of Planned Behavior promoted behaviors that prevent ocular complications and improved control of FBS and HbA1C in these patients. The pre-test results showed that the patients' knowledge of retinopathy was very weak, whereas after the intervention the majority of patients were aware of the positive influence of good glycemic control and of regular eye examinations by an ophthalmologist on the prevention of diabetic eye diseases.
In a study in Goa, India , only about one-third of patients (34%) were aware of the ocular complications of diabetes. Other similar studies, conducted in Nepal and Africa , reported that patients' awareness was very low and that patients needed training in this area. In a study by Bandurska et al. in Poland, the level of awareness of patients with diabetic retinopathy increased from 39 to 44% as a result of the training given to them , which is consistent with the results of the present study. By contrast, in the study by Dan et al., training with a 10-minute multimedia presentation produced only a very small increase in patient awareness . The failure of that program may be explained by one-way training and the short training time, as well as by the high cost of eye care. The increase in the attitude of patients with diabetes towards diabetes care observed in the present study has also been reported in similar studies in Iran and Ontario, Canada . However, there was no change in attitude in the study by Khalaf et al., owing to the short duration of the training: changing patients' attitudes, unlike their awareness, requires a longer intervention. In the study of Grimshaw et al. at the Ottawa Hospital Research Institute and the study of Zwarenstein et al. in London , training physicians by providing educational booklets, intended to increase their inclination to refer patients for ophthalmological examinations, was not very effective; the results of these studies are therefore inconsistent with the present study. Distributing educational booklets should not be used as the only way to teach and to change attitudes merely because of its low cost. In the present study, providing the educational content in sessions through role-playing, highlighting the role of people who influence the patient's behaviors, delivery of relevant educational material by an ophthalmologist, and the distribution of educational booklets led to an increase in subjective norms. In the Woolley study, in which individuals with type 2 diabetes mellitus were identified through diabetes eye clinics and general practices in the UK , physicians and nurses were identified as sources of information and factors influencing patients' subjective norms; in the Azami study it was only nurses, and in the Graham-Rowe study it was health care providers and physicians. This shows that training delivered through these people can be more effective. In the present study, presenting an educational program on factors facilitating the behaviors, providing incentives, reducing and eliminating perceived barriers, breaking the behaviors into small steps, practical education, and using the experiences of other patients with diabetes increased the patients' perceived behavioral control in the intervention group. Alwazae et al. reported high cost and lack of awareness among Saudi patients as barriers to the prevention of retinopathy; in that study, the mean age of the patients was 54 years and most of them (69%) were women, which is largely consistent with the demographic characteristics of the current study sample . However, in the study of Hardeman et al. in the UK, increased perceived behavioral control and attitudes did not lead to increased behavior in patients with diabetes; the researchers attribute this to environmental and non-behavioral factors and recommend further research in this area . The results of the present study showed that the mean scores of behavioral intention in the intervention group increased significantly after the educational intervention.
In the Lin study, TPB-based training during five educational sessions, along with 3 months of follow-up, had a positive effect on increasing patients' behavioral intention regarding diabetes care behaviors . Other studies have emphasized the role of behavioral intention in controlling blood sugar . In the present study, the mean performance score in the intervention group (4.48) was significantly higher than in the control group after the educational intervention. For the general public, some educational content can of course be delivered indirectly by means of educational booklets, pamphlets, or social media to reduce the number of training sessions. Raising awareness, as well as strengthening the other constructs of the Theory of Planned Behavior, including attitudes, behavioral intention, and perceived behavioral control, all led to increased skills and preventive behaviors among patients with diabetes. Liu and colleagues also suggest that eye care behaviors can prevent blindness by up to 90% . In the Mumba study, the proportion of patients undergoing ophthalmological examinations increased from 29 to 47% after training patients at a Tanzanian referral hospital , but Dan et al. did not observe an increase in patient performance after providing 10-minute educational multimedia . In the present study, in addition to educational videos, direct and face-to-face teaching methods were used; patient performance increased further, and a significant change in the mean FBS and HbA1C of the patients was observed after the educational intervention. Given that HbA1c reflects mean blood sugar fluctuations over the past 3 months, reducing it can greatly prevent the complications of diabetes, including retinopathy; this has been shown in patients with diabetes in Poland and Ontario, Canada , all of whom needed education. Face-to-face education reduced patients' HbA1C by 48% in the Hidvégi study in Budapest and by 1.5% in the Bandurska study in Poland . Limitations of this study include the small sample and the use of self-reported questionnaires, which may be prone to recall as well as desirability bias. Moreover, patients' attitudes and eye care performance were assessed over the past 3 months, whereas longer follow-up could provide more accurate results. It is therefore recommended that the educational program and the follow-up of patients be continued for a longer period and that the outcomes be evaluated at longer intervals after the intervention. Finally, it is suggested that future studies be conducted with a larger sample size. Teaching patients with diabetes based on the TPB can improve retinopathy-preventive behaviors and also improve the mean FBS and HbA1C in patients with diabetes. Finally, instead of traditional methods of educating patients with diabetes, it is recommended to use educational approaches in which patients participate actively. Using successful patients as educators, supporting patients to empower them in self-care activities, and using visual media in educational programs to make the training more effective are also recommended.
Comparative Proteomics of ccRCC Cell Lines to Identify Kidney Cancer Progression Factors
3a7799f5-06af-435a-b835-3950e14b11de
11534031
Biochemistry[mh]
Cell culture . Three human kidney-derived cell lines, HEK-293, 786-O, and Caki-1, were used; all were grown in high-glucose Dulbecco's modified Eagle medium (DMEM - high glucose) containing 10% fetal bovine serum (FBS, Hyclone, Marlborough, MA, USA) and 1% penicillin/streptomycin (Gibco, Waltham, MA, USA) at 37˚C with 5% CO2 in a humidified incubator. The cell culture medium was replaced every 2 days, and cells were harvested using 0.25% trypsin-EDTA. HEK-293 and Caki-1 cells were obtained from the Korea Cell Line Bank, and 786-O cells were purchased from ATCC (Manassas, VA, USA). SILAC labeling and proteomics sample preparation . For SILAC labeling, the HEK-293 and 786-O cell lines were grown in SILAC medium (Thermo Fisher Scientific, Rockford, IL, USA), and isotope-labeled amino acids (CIL, Andover, MA, USA) were added to each medium with 10% FBS and 1% P/S. The cells grown in "heavy" medium were labeled with K8R10 (Lys8, 13C6-15N2; Arg10, 13C6-15N4), while those grown in "medium" were labeled with K4R6 (Lys4, D4; Arg6, 13C6); normal DMEM medium (K0R0) was used for cells grown in "light". Each cell line was cultured at least five times in 150-mm cell culture dishes (Corning, NY, USA), and a SILAC labeling efficiency test was performed. After confirming that the labeling efficiency was over 98%, cells in the light, medium, and heavy media were grown in sufficient quantities for use in the next experiment. Each labeled cell line was prepared in biological triplicate. All cell lines were grown to approximately 85% confluence in 150-mm cell culture dishes and harvested for use in the experiment, and the cell pellets were stored at –80˚C until use. The frozen cell pellets were dissolved in radio-immunoprecipitation assay (RIPA) buffer (Thermo Fisher Scientific) including the Halt protease inhibitor cocktail (Thermo Fisher Scientific). Samples were sonicated five times each and centrifuged at 4˚C (12,000 × g for 10 min), and the supernatant was collected from the centrifuged samples and transferred to new tubes. Protein concentration was measured using the bicinchoninic acid colorimetric assay kit (Thermo Fisher Scientific). For reduction, 150 mM dithiothreitol was added to each sample and incubated at 56˚C for 30 min in a shaking incubator; for alkylation, 600 mM iodoacetamide was then added in the dark for 30 min at room temperature (RT). Next, 10% trichloroacetic acid was added to purify the lysed proteins at 4˚C for 4 h, and the protein pellets were washed twice with ice-cold acetone after the protein samples were centrifuged. Prior to adding trypsin, 50 mM ammonium bicarbonate was added, and the samples were repeatedly vortexed and sonicated to re-dissolve them. After the pH was checked, trypsin was added to the re-dissolved protein samples at a ratio of 1:50 (trypsin:protein), and digestion proceeded at 37˚C overnight. Lastly, 10% trifluoroacetic acid was added to quench the digestion, and centrifugation was performed to extract the supernatant. The peptide samples were dried in a speed-vac and kept at –80˚C until use. nanoHPLC and mass spectrometric analysis . Before analysis, peptides were desalted using C18 resin-packed Zip-Tips (Millipore, Burlington, MA, USA). The desalted peptides were separated using a NanoLC-Ultra HPLC (Eksigent Technology, Dublin, CA, USA) connected to a homemade column packed with Jupiter Proteo (Phenomenex, Torrance, CA, USA). The column was equipped with an NSI ion source (Thermo Fisher Scientific, Waltham, MA, USA).
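As an aside on the SILAC labels described above, the expected mass shifts of the three channels can be reconstructed from standard isotope mass differences. The Python sketch below is a back-of-the-envelope illustration of ours, not part of the study's pipeline; note that a fully tryptic peptide normally carries exactly one labeled K or R at its C-terminus.

    # Expected SILAC mass shifts per labeled residue (approximate, in Da).
    C13_C12 = 1.003355   # 13C minus 12C
    N15_N14 = 0.997035   # 15N minus 14N
    D_H     = 1.006277   # 2H (D) minus 1H

    LABEL_SHIFTS = {
        ("medium", "K"): 4 * D_H,                   # Lys4 (D4)    ~ +4.0251
        ("medium", "R"): 6 * C13_C12,               # Arg6 (13C6)  ~ +6.0201
        ("heavy", "K"): 6 * C13_C12 + 2 * N15_N14,  # Lys8         ~ +8.0142
        ("heavy", "R"): 6 * C13_C12 + 4 * N15_N14,  # Arg10        ~ +10.0083
    }

    def silac_shift(peptide: str, channel: str) -> float:
        """Total mass shift of a peptide in the 'medium' or 'heavy' channel."""
        return sum(LABEL_SHIFTS.get((channel, aa), 0.0) for aa in peptide)

    print(round(silac_shift("LVNELTEFAK", "heavy"), 4))  # one K -> 8.0142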
For the mobile phase, water and acetonitrile, each containing 0.1% formic acid, were used as solvents A and B, respectively. A linear gradient of 3%-23% solvent B over 50 min, at a flow rate of 300 nl/min, was used for every sample. The eluted peptides were analyzed using an LTQ Orbitrap Velos MS/MS instrument (Thermo Fisher Scientific, Waltham, MA, USA). The ionization source voltage was set to 1.8 kV, and the capillary temperature was 300˚C. In tandem MS scans, data-dependent analysis (DDA) scans were acquired, and the top 10 most abundant peptide precursor ions in the range of 300–2,000 m/z were fragmented via collision-induced dissociation (CID) at 28% normalized collision energy (NCE). The MS/MS proteomic data have been deposited to the ProteomeXchange Consortium via the PRIDE ( ) partner repository with the dataset identifier PXD045898. Peak alignment and bioinformatics . To match MS/MS spectra against the Uniprot human reference proteome database (Release 2021_02), the MaxQuant (version 1.6.10.43) search engine was used. The labels Arg6 and Lys4 for "medium" and Arg10 and Lys8 for "heavy" were set to quantify the proteins. Oxidation (M) and Acetyl (Protein N-term) were set as variable modifications, and Carbamidomethyl (C) as a fixed modification. Trypsin/P was set as the digestion enzyme in specific mode. The precursor peptide tolerance was set to 20 ppm, and the revert decoy mode was employed. For identification, the false discovery rate (FDR) was set to 0.01 and the Andromeda score threshold to 40. Three comparison groups were used for the comparative proteomics analysis; differentially expressed proteins (DEPs) were defined by fold change (FC), calculated as the ratio of intensities between labels within the same protein group. To measure the linear correlation between biological replicates, the Pearson correlation coefficient was used. Protein groups with an absolute log2-transformed FC ≥1 were selected and subjected to Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis with the DAVID functional annotation tool. DEPs were z-score normalized and then clustered into five types using a hierarchical clustering algorithm, and trends were visualized with a heatmap in the Perseus software platform (version 2.1.4.0). The volcano plot combined the log2-transformed FC with the two-sided Student's t-test p-value, with statistical significance set at p<0.05. Western blotting to validate quantitative proteome data . The pellets of the SILAC-labeled ccRCC cell lines were sonicated and dissolved in RIPA buffer containing the Halt protease inhibitor cocktail (Thermo Fisher Scientific, Rockford, IL, USA) to extract proteins. Next, 5 μg of protein was separated on a 10% sodium dodecyl-sulfate polyacrylamide gel for 90 min and transferred onto a polyvinylidene difluoride membrane for 110 min. After the transfer step, the membranes were blocked with 5% BSA in TBST for 2 h at RT and then incubated with the primary antibodies in 5% BSA at 4˚C overnight. The vimentin and β-actin primary antibodies were diluted at 1:2,000 and 1:1,000, respectively. The membranes were washed three times with TBST, 10 min each. The secondary antibody was diluted at 1:2,000 in 5% BSA and incubated at RT for 2 h.
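Returning to the bioinformatics described above, the DEP/sDEP definition (absolute log2-transformed FC ≥1, with a two-sided t-test p<0.05 across biological triplicates) amounts to a simple volcano-style filter. The Python sketch below illustrates that filter on toy intensities; the thresholds match the text, but the code and numbers are ours, not the authors'.

    # Illustrative volcano-style filter on toy SILAC intensities.
    import numpy as np
    from scipy import stats

    def is_sdep(tumor_intensities, control_intensities, fc_cut=1.0, alpha=0.05):
        """Flag a protein whose |log2 FC| >= fc_cut with two-sided p < alpha."""
        t = np.asarray(tumor_intensities, dtype=float)
        c = np.asarray(control_intensities, dtype=float)
        log2_fc = np.log2(t.mean() / c.mean())
        _, p = stats.ttest_ind(np.log2(t), np.log2(c))  # two-sided by default
        return (abs(log2_fc) >= fc_cut) and (p < alpha), log2_fc, p

    # e.g. a protein ~4x higher in the tumour channel across triplicates
    flag, fc, p = is_sdep([4.1e6, 3.8e6, 4.4e6], [1.0e6, 1.1e6, 0.9e6])
    print(flag, round(fc, 2), round(p, 4))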
After the secondary antibody incubation, the membrane was again washed with TBST three times, 10 min each, and the protein bands were detected using ECL™ prime western blotting detection reagents (Cytiva, Buckinghamshire, UK) to visualize the bands. Finally, we analyzed the signal using an iBright 1500 (Thermo, Waltham, MA, USA). Immunohistochemistry (IHC) . In this study, the biosamples used were provided by the Kyungpook National University Chilgok Hospital (patient consent was obtained prior to procurement of all tumor tissues, IRB KNUCH 2016-05-021-020) and were stored at –80˚C until required. For IHC, tissues were fixed in 4% formaldehyde and embedded in paraffin blocks. Sections were then cut at 4-μm thickness and attached to coated glass slides. After deparaffinization and hydration, sections underwent antigen retrieval in citrate antigen retrieval buffer and were blocked in 5% BSA. The primary antibody (Abcam) was diluted as per the manufacturer's recommendations and applied at 4˚C for 18 h. A secondary antibody (1:1,000) was applied for 1 h at RT. Sections were dehydrated and cleared using alcohol and xylene and then mounted with 4',6-diamidino-2-phenylindole (DAPI) in a mounting medium. Comparative proteomics analysis of ccRCC cell lines . Comparative proteomic analysis was performed to identify proteins involved in cancer progression in ccRCC cell lines ( A). In this study, three kidney cell lines, HEK-293, 786-O, and Caki-1, were selected. HEK-293, a cell line isolated from the kidneys of human embryos, was used as the normal group in this experiment, while 786-O and Caki-1 cells were used as model cells representing primary and metastatic ccRCC, respectively ( ). Samples were analyzed using HR MS, and peak alignments were performed in MaxQuant (v.1.6.10.43, https://www.maxquant.org/) against a Homo sapiens database (Uniprot, uploaded December 2018) with an FDR ≤1% and a score ≥40. In total, 1,448 proteins were identified, and 1,106 proteins were quantified. Among them, there were 347 DEPs in 786-O vs. HEK-293, 345 DEPs in Caki-1 vs. HEK-293, and 89 DEPs in Caki-1 vs. 786-O (Figure 1B). Proteins demonstrating a ratio ≥2.0 were considered up-regulated, while those with a ratio ≤0.5 were classified as down-regulated. Pearson correlation coefficient values were used to examine the reproducibility of the SILAC ratios across biological triplicates. Next, unsupervised hierarchical clustering of the quantitative protein data was performed to produce protein clusters with similar patterns among the RCC cell lines ( ). The quantitative ratios of the DEPs were re-normalized using the Z-score. Based on the clustering analysis conducted for each cell line, five groups (A-E) were generated, containing 71, 199, 104, 358, and 374 DEPs, respectively. Among the five clusters, we focused on Cluster B, which increased sequentially in 786-O and Caki-1 cells compared with HEK-293. Proteins in Cluster B therefore correlate with the progression from primary to metastatic ccRCC. In GO and KEGG enrichment analyses using DAVID, performed to understand the potential functional implications of the DEPs identified in Cluster B, we selected the top two terms for biological process (GOBP), cellular component (GOCC), molecular function (GOMF), and KEGG (Figure 2). The GOBP analysis showed that proteins in Cluster B were involved in the proteasome-mediated ubiquitin-dependent protein catabolic process and tumor necrosis factor-mediated signaling pathway categories.
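The z-scoring and five-way hierarchical clustering reported above were performed in Perseus; the same steps can be approximated with SciPy, as in this sketch on a synthetic ratio matrix (our illustration, not the authors' code).

    # Z-score per protein, then hierarchical clustering cut into five clusters.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.stats import zscore

    rng = np.random.default_rng(1)
    # rows = proteins, columns = cell lines (HEK-293, 786-O, Caki-1 analogues)
    ratios = rng.lognormal(mean=0.0, sigma=1.0, size=(100, 3))

    z = zscore(np.log2(ratios), axis=1)        # per-protein z-scores
    tree = linkage(z, method="average", metric="euclidean")
    clusters = fcluster(tree, t=5, criterion="maxclust")

    print(np.bincount(clusters)[1:])           # proteins per cluster (A-E analogue)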
In the GOMF analysis, manganese ion binding and metallopeptidase activity were enriched in Cluster B. In the KEGG pathway enrichment analysis, the proteins in Cluster B were mainly associated with proteasomes and phagosomes. Characterization of ccRCC progression markers . To investigate the proteins involved in progression to cancer and metastasis, we compared the RCC and mRCC cell lines with the control cell line. We identified significantly differentially expressed proteins (sDEPs) in the 786-O vs. HEK-293 and Caki-1 vs. HEK-293 comparisons and visualized these findings using volcano plots ( A). sDEPs were filtered using a criterion of p <0.05 and a change in expression of at least two-fold (log2-scale ratio ≤ -1 or ≥ 1). First, 160 sDEPs were identified between 786-O and HEK-293 cells, with 119 proteins up-regulated and 41 down-regulated. A total of 162 sDEPs were identified between Caki-1 and HEK-293 cells, with 105 up-regulated and 56 down-regulated. There were 75 commonly up-regulated and 24 commonly down-regulated sDEPs in 786-O and Caki-1 cells compared with HEK-293 cells. To explore ccRCC progression markers, we focused on sDEPs that were consistently increased in 786-O and Caki-1 cells compared with HEK-293, among which vimentin (VIM) was the most significantly increased in both cell lines. The top 25 proteins, selected based on the ratio between 786-O and HEK-293 cells, are shown in . After vimentin, annexin A2 (ANXA2) and annexin A4 (ANXA4) showed the next largest increases, followed by aldo-keto reductase family 1 member B1 (AKR1B1), which was also up-regulated in both cell lines. Verification of DEPs in ccRCC cell lines and RCC tissue . In both comparisons, vimentin was among the proteins with the largest FC. To determine its association with cancer progression, validation experiments were conducted in the ccRCC cell lines and in RCC tissue (Figure 3B and C). Immunoblotting analysis was performed to verify vimentin expression in each cell line. The results showed increased vimentin expression in 786-O compared with HEK-293, and an increase in Caki-1 as well. Moreover, immunoblotting results from three replicates confirmed the increased expression in 786-O and Caki-1 cells (Figure 3B). In RCC tissue, IHC was performed to correlate vimentin expression with disease stage in patient tissue samples from four groups: normal, T1G3, T3G3, and mRCC. Strong vimentin-positive signals increased with tumor grade, and the mRCC group showed the strongest positive reactivity and the highest number of positive cells. Vimentin is a major constituent of the intermediate filament family of proteins, and its overexpression in cancer is associated with increased tumor growth, invasion, and poor prognosis ( ). In prostate cancer, vimentin is overexpressed in the prostate cancer cell lines CL1 and PC-3M-1E8, where it is linked to heightened cell invasiveness ( , ). The correlation between high vimentin expression and increased cancer cell migration and invasion has been established in various cancer types ( ). Additionally, the effects of vimentin overexpression on cancer have been evaluated not only in cell lines but also in human patients, where it has been associated with reduced survival, particularly in cases with metastasis.
For example, in non-small-cell lung cancer (NSCLC), vimentin is known to regulate invasiveness ( ), and its overexpression was identified as an independent prognosticator of poor survival in patients with resected NSCLC ( ). Given the role of vimentin in other cancers, we hypothesized that vimentin overexpression in ccRCC may likewise be important in RCC progression. Although vimentin was the most consistently increased protein in the ccRCC cell lines identified by quantitative proteomic analysis, annexin A2 was the second most increased protein. Annexins are a family of calcium-regulated phospholipid-binding proteins widely expressed in various cell and tissue types ( ). A total of 12 annexin proteins exist in humans; annexin A2 is the most extensively studied and is involved in diverse cell functions and processes, such as exocytosis, endocytosis, migration, and proliferation. In previous studies, increased annexin A2 expression has been found in RCC, where it promotes RCC migration, invasion, and proliferation ( - ). Although annexin A2 was not directly evaluated by immunoblotting in this study, as vimentin was, it is well recognized as a factor involved in RCC progression. Given that ccRCC is the most common type of kidney cancer, accounting for approximately 75% of kidney cancers, various omics studies are being conducted to understand the underlying mechanisms of cancer progression, establish early diagnostic markers, and discover potential therapeutic targets ( ). In this study, we identified a total of 1,448 proteins and quantified 1,106 proteins in the SILAC-labeled kidney-derived cell lines HEK-293, 786-O, and Caki-1 by MS-based proteomic analysis. To explore progression factors in kidney cancer, we ultimately identified 99 DEPs that were significantly altered in both 786-O and Caki-1 cells. In conclusion, we propose these 99 DEPs, including vimentin, as potential modulators of ccRCC progression. However, in this study, only vimentin, the most significantly increased protein, was validated by immunoblotting. Further studies are needed to investigate the molecular regulatory mechanisms of vimentin in the progression of ccRCC. Data are available via ProteomeXchange with identifier PXD045898. The Authors declare no conflicts of interest. Conceptualization: Park, J., Lee, S. and Lee, J.N.; Methodology and analysis: Park, J., Sim, H., and Lee, E.H.; Resources: Lee, J.N.; Data curation: Park, J., Sim, H., and Lee, S.; Writing: Park, J., Lee, E.H., Lee, S. and Lee, J.N.; Funding acquisition: Kim, B.S., Kwon, T.G. and Lee, J.N. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2019R1A2C1004046) (2021R1G1A1092985) (2022R1I1A3069482) (2023R1A2C3003807), by the Korean Fund for Regenerative Medicine (KFRM) grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Health & Welfare) (23A0206L1), and by the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare (HR22C1832).
Managing Paediatric Growth Disorders: Integrating Technology Into a Personalised Approach
ecd0ab4e-1988-4b8f-a330-d848917ac923
7499133
Pediatrics[mh]
There have been few articles specifically linking the human component of growth management, i.e. specialist and nurse interaction with the patient, psychological support and training of healthcare professionals in motivational interviewing together with digital innovations such as electronic monitoring of growth hormone (GH) injections. Both the human and digital components are recognised to contribute to GH adherence, but it is the necessity of their partnership that we emphasize. What this study adds? A review on the holistic approach to personalised growth management by multi-disciplinary professionals, but stressing the key importance of the human and technical partnership. Contributions are also provided by a professional coach who is an expert in motivational interviewing and personnel from the UK patient support group, the Child Growth Foundation.
In addition, maintenance of a therapeutic regimen designed to bring long-term improvement, rather than short-term benefit, requires engagement and maturity. These aspects of short stature management will be discussed in this article. A further component of care, which has emerged in recent years, is the set of electronic tools that aid therapy and adherence. These tools will also be addressed, with emphasis on the importance of the human-eHealth partnership, which is necessary to make patient care optimally beneficial. We will discuss the challenges encountered by the patient and family through the experience of staff of the UK Child Growth Foundation (CGF), a patient support charity which advises families of patients with short stature. Current unmet medical needs of growth management will also be discussed, followed by a description of the psychological basis and management of poor adherence to GH treatment regimens . eHealth innovations will be covered, followed by the importance of HCP training in relation to the acquisition of motivational skills for improved recognition of, and intervention in, poor adherence situations. Finally, the emerging role of the paediatric endocrinology specialist nurse will be summarised, with conclusions highlighting the rationale for joint human-eHealth collaboration to achieve optimal personalised management of the short child. Early recognition of pathological short stature, as opposed to variants of normal height, remains a challenge, particularly in the UK, where routine height surveillance has been reduced to two measurements, at primary school and secondary school entry . The age of diagnosis of disorders of abnormal growth, such as coeliac disease and Turner syndrome, is significantly later than in other countries such as the Netherlands and Finland , where investment in primary care identification of growth disorders has resulted in earlier diagnoses . Historically, a high proportion of children treated with GH therapy for a variety of growth disorders have not demonstrated a satisfactory degree of catch-up growth during the first year of therapy . A number of reasons may underlie this, including incorrect diagnosis, incorrect dose of GH at initiation of therapy, and inadequate attention to factors predicting individual growth responses . The correct management of poor response to GH remains a priority in such patients . However, it is the presence of poor adherence to the GH treatment regimen which has emerged as a key factor, either alone or in combination with other elements that have an impact on growth response . This issue of non-adherence will be discussed in detail below. Digital health, defined as the use of information and communication technologies for health, is becoming a reality in clinical practice and medical education and has made a significant impact on the day-to-day management of diabetes mellitus in children . Its application to the treatment of growth disorders is more challenging because therapy is geared to long-term responses and benefits, rather than short-term metabolic control. However, one area where digital technology has been effective is the electronic monitoring of GH injections . The use of an electromechanical auto-injector, which records every injection that is given and communicates the data both to the patient and the HCP, is a major advance . It is known that self-reporting of adherence tends to be inaccurate, reporting artificially high values compared with digital recording of injections .
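Electronically recorded adherence of this kind is usually summarised as the percentage of prescribed daily doses actually logged by the device. The Python sketch below shows one plausible way to compute such a figure; the log format and names are invented for illustration and do not describe any particular injector's software.

    # Hypothetical adherence calculation from an electronic injector's log.
    from datetime import date, timedelta

    def adherence_pct(injection_dates, start: date, end: date) -> float:
        """Percent of prescribed daily doses recorded between start and end."""
        prescribed_days = (end - start).days + 1
        taken = {d for d in injection_dates if start <= d <= end}
        return 100.0 * len(taken) / prescribed_days

    # Toy log: every third day is missed during January
    log = [date(2024, 1, 1) + timedelta(days=i) for i in range(31) if i % 3 != 2]
    print(round(adherence_pct(log, date(2024, 1, 1), date(2024, 1, 31)), 1))  # 67.7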
The difference between reported and recorded adherence, using the electronic device, is significant. In a large international study of GH therapy using electronic recording, adherence was shown to be good during the first year of treatment, but it gradually decreased to approximately 60% after five years . These data give two key messages: first, that accurately measured adherence decreases over time, and secondly, that intervention by the HCP is indicated to prevent and correct this trend. The injection device can also demonstrate suboptimal adherence which may not be obvious from auxological measurements. Adherence, or compliance, can be defined as the extent to which the patient follows a prescribed therapeutic regimen, and in the case of GH, the extent to which daily GH treatment is taken. There are three phases in understanding the way adherence develops. First, there is the uptake stage, which describes the way in which the patient begins to accept the treatment and indeed whether they actually start to take it. It is known that 10% to 15% of patients never start taking the treatments they are prescribed . This is known as primary non-adherence. The second phase, which is really critical for long-term progress, is the way in which the patient, or the family, incorporates the treatment into the habitual pattern of daily life. The last phase describes how long the patient stays with the treatment. It is known that patients may give up after months or years of treatment, and there is evidence for a wide range of adherence to GH therapy . Overall, there are figures of up to 50%, 60%, or even 70% of patients not taking GH treatment in a regular and useful way, with a clear relationship demonstrated between non-adherence and not achieving linear growth targets . Given that GH therapy is evidence-based, the question is: why are patients not adherent? Older explanations were essentially based around the idea that people did not follow treatment because they did not understand or remember what they had to do . This was often taken to be a symptom of poor communication in healthcare, so interventions were designed to improve communication, patient understanding, and the ability to remember and plan treatment. This, unfortunately, is only a small part of the answer. It is now clear that there are different categories and certainly different causes of non-adherence. Two distinct types are recognised, known as intentional and unintentional non-adherence, which have very different drivers, or different origins. The reasons for the two categories can be summarised in terms of what is known as the COM-B model . In the COM-B acronym, C stands for capability, O for opportunity and M for motivation. In intentional non-adherence, many patients know what they have to do, ie it is not a question of misunderstanding or not remembering, but they are reluctant to adhere, because either the treatment does not make sense to them, or they have worries or concerns about it. In unintentional non-adherence, some of the older factors can be responsible, such as poor communication or a poor experience of the organisational challenges of doing something regularly on a daily basis. There may also be other barriers outside the individual, such as financial or practical constraints. If this is mapped onto the COM-B model, we can see that under Capability there is a range of factors, such as psychological difficulties; eg, people not remembering or not being able to plan.
There are also some physical capability issues, eg, not being able to administer the treatment in a way that is effective. Under Opportunity , there are physical factors such as getting access or having barriers to treatment, which lie outside the patient, together with psychological barriers, such as poor support and communication from people close to the patient. However, the really important factors for many patients, particularly related to intentional non-adherence, are the Motivational influences, such as negative or mistaken beliefs about their condition and their treatment. Accepting this variety of factors, it is not surprising there is a range of ways that we have of working with families and patients, to improve their adherence. These can involve both human and digital interventions. Two available strategies are equally important. It is fundamental to use the direct experience in the healthcare situation, ie the consultation, to understand the patient’s issues and perspectives and to anticipate factors around non-adherence which can be managed. Going beyond that, there is a range of digital and personalised interventions available; for example, an initial brief screening questionnaire to identify the particular problems each patient and family may be experiencing. Then, following that, interventions can be developed which are tailored to each patient. In terms of the consultation, a structure is recommended for each family to analyse their understanding of the primary short stature condition and the treatment regimen they are being asked to follow. It is important to make sure that they have a clear rationale for the need for treatment and for daily injections. A recent study in adults with GH deficiency showed that non-adherence was related to lack of understanding of the primary disorder, which can be improved through focused education . A practical plan needs to be agreed for how, where and when the GH injections are given to ensure that treatment is administered more regularly. More generally, factors which cause adherence problems for each individual need to be identified. At the beginning and during treatment, brief screening questionnaires can be used to identify relevant personal issues. Information from the screening questionnaires can be used to start a personalised conversation to understand what is going wrong. From there, basic behaviour change approaches, such as motivational interviewing by HCPs, can attempt to target individual factors. Beyond the consultation, many other digital approaches are available which patients and parents can access on a daily basis. These could be personalised web-based tools, mobile phone applications, daily text messaging or interactive programmes which address particular issues. The main role of a professional coach in the healthcare environment is to support HCPs in learning how to help patients to make healthy choices and decisions in their lives. This can be challenging because patients can struggle to make such choices, particularly when emotional barriers block the logical courses of action. A number of questions can be asked. How can HCPs really influence the behaviour of patients and families, particularly when they have decided they do not want to change? Why can some patients move forward when others are resistant to making progress? 
These questions and observations have led to the exploration of motivational interviewing practised by HCPs which can be applied in the clinical scenario of outpatient consultations to help patients with adherence to GH therapy. It is proposed that motivational interviewing skills can motivate patients and families to overcome the practical and emotional barriers related to therapy. Motivational Interviewing, which is based on the work of Miller and Rollnick , is a collaborative conversation style which aims to strengthen a person’s motivation and commitment to change. It is a structured, person-centred approach which helps patients and families to resource their own inner motivation to be translated into improving adherence to GH therapy. Motivational interviewing is a skill which needs to be taught and thus learnt by both medical and nursing HCPs. Examples of the benefits of motivational interviewing can be taken from experience in making healthy life choices, such as giving up smoking, reducing alcohol intake or eating in a healthier way. When considering these choices, reaction to the individual can be unhelpful, such as not listening or negatively encouraging regressive behaviour. By contrast, a helpful response to the same life choices would consist of positive reactions such as genuine empathetic listening and exploration of the individual’s feelings without judgement. This behaviour typifies the spirit of motivational interviewing. The principles of motivational interviewing are collaboration, acceptance and compassion. Collaboration is very important because partnership on an equal level with the patient is a key aim. Acceptance leads to better understanding of the decisions and choices that patients and families are making without judgement. These choices are accepted and the HCP responds with guidance. Compassion is a further component that is combined with evocation, which means drawing out a patient’s inner motivation and commitment, and building on this to effect change. Core skills in motivational interviewing can be discussed under the acronym OARS, which stands for Open questions, Affirmations, Reflective listening and Summarising. The conversation can be structured by following these headings. Open questions such as what, how and why will open conversations and evoke dialogue. Other examples would be ‘what are your hopes for your consultation today?’ and ‘I am curious to learn how you have been getting on with your injections?’ These questions can be prefaced by saying ‘help me understand …’ and the conversation can develop by inviting the patient or family to talk about what is on their mind, what are their needs and their priorities. Affirmations are about helping patients to recognise their own strengths and positive beliefs that are going to help them to adhere to GH therapy. Examples could be to say to a patient ‘I can see it took courage for you to try this out today’ or to a parent ‘your creative ideas around this are very helpful’. Reflective listening consists of not only listening and reflecting back what is said, it also helps in verbalising the thinking and feelings that lie underneath, showing a depth of empathy that leads to further conversations. 
The last skill here is summarising, which serves the useful purpose of wrapping up conversations and can be started by saying 'let me see if I have got this right, you are feeling this on one hand and perhaps feeling this on the other?' When patients and families are asked about the difficulties they face related to management of short stature, a wide range of opinions and comments are given. The UK CGF (https://www.childgrowthfoundation.org) is a non-profit patient support group, which was originally founded as a charity in 1977 (UK Registered Charity number 1172807). The CGF receives many requests for information and support and delivers management advice on a wide range of growth disorders. In relation to adherence to GH therapy, the CGF reports that, in the consultation setting, some HCPs do not have sufficient time or experience of GH treatment, which results in them giving conflicting advice to families. Insufficient knowledge of the primary growth disorder results in communication of inadequate or incorrect information. In particular, the patient may not realise how effective and worthwhile long-term therapy with GH can be. Insufficient education of the patient by the HCP can result in the family seeking alternative advice on the internet and thus receiving more confusing, incorrect and worrying messages. More accurate information needs to be available regarding the benefits of GH therapy, with advantages outside growth being emphasised, such as improved general health and self-esteem . Accurate information regarding GH injection devices needs to be given, with the choice of the most suitable injection device made by the family before the initiation of therapy. Size, comfort and storage requirements should also be considered, together with family dynamics and travel. The concept of patient choice is an organisational decision which is not universally adopted in the framework of growth consultations. Ideally, however, the patient and family should be offered the choice of GH brand and injection device, and this has been demonstrated to increase the likelihood of good adherence . In 2019 the CGF conducted an online survey amongst its members about initiation of GH therapy . One hundred and eleven responses were received, mostly from patients with GH deficiency, multiple pituitary hormone deficiencies, Silver Russell syndrome, small for gestational age and intrauterine growth retardation. The two most relevant questions were, 'Were you offered a choice of GH brand and device?' and 'How often does your child miss a GH dose?' Out of 111 responses, 31% of patients were not offered a choice of GH brand or injection device, demonstrating that within the UK patient choice remains very inconsistent and that the guidelines for England and Wales regarding GH treatment (https://www.nice.org.uk/guidance/ta188/chapter/1-Guidance) are not being followed. The survey indicated that 58% of patients never missed a GH dose, with 30% in non-GH-deficient cases compared with 78% in multiple pituitary hormone deficiency cases. From many years' experience of handling requests for information and from managing the CGF Facebook page, the CGF reports frequently recurring topics related to barriers to good GH adherence. The first of these is logistical barriers. A daily subcutaneous injection should become part of the family's routine, provided the routine is not disturbed.
However, when changes do occur, such as a play-date, a school trip, a sleep-over, a camping trip where refrigeration is necessary, or particularly when the child's care is shared between parents in different locations or with grandparents, the first casualty is the GH injection. As the effect of missing one or several GH injections is not immediately apparent, the long-term objective of regular therapy tends to be forgotten, leading to chronic poor adherence. Another practical aspect is the maintenance of regular GH supplies, which may not occur if a family waits until the last minute to order a new supply. Children receive GH treatment because they have a long-term health condition, but they may develop a needle phobia, with a fear of the pain of the injection sometimes combined with the noise of the injection device. A vicious cycle of events can develop and escalate in importance, predictably leading to missed injections. The anticipation of the injection and then its attempted administration can be very stressful. In the longer term, a child might start to feel different to their peers, especially around friends, few of whom will be having daily injections. Bullying and exclusion of the patient can occur. Peer pressure increases during adolescence, when additional stresses, such as exams, provide further opportunities to miss GH injections and for poor adherence to become habitual. The availability of communication with other patients having similar experiences can be very supportive and can significantly reduce stress and the sense of isolation. Peer support organisations such as the CGF can support and advise their own patients and the HCPs who are responsible for them. Many host social media groups, providing a 24/7 online community for chats, questions, discussions and mutual support. The CGF holds an annual convention, but with e-technology, geographical boundaries have diminished, and Facebook groups, educational websites, mobile phone applications and helplines can all contribute to enhanced patient and family support. The roles of the paediatric endocrinology specialist nurse have developed at different rates in different countries. In the USA, UK, Canada, Australia and Scandinavia this nursing speciality has grown, with funding now established for positions in most university paediatric endocrinology departments . In other countries paediatric endocrinology nursing is much less developed. We will discuss roles and responsibilities related to short stature management and specifically GH adherence. Paediatric endocrine specialist nurses are uniquely positioned to offer a highly valued support network to HCPs, patients and their families, by being the regular first point of contact at consultation visits. Relationships incorporating the whole family are established and built on trust, specialised knowledge and expertise, which is pivotal for families when starting GH therapy. Involvement in the initiation of GH treatment is key to establishing a fruitful relationship with the patient. 'Ideal' and 'worst-case' scenarios regarding initiation of GH therapy are shown in . If possible, meeting the family before the medical consultation can be very beneficial. Obtaining knowledge of the medical history, and of whether the family has studied the diagnosis on the internet, can also be very valuable.
Communication skills are important and, as discussed above, training in motivational interviewing can play an essential role in the specialist nurse becoming an effective member of the growth management team and contributing to optimal GH adherence. Organising the patient's choice of GH brand and injection device is a further responsibility and needs to be based on specialist knowledge of the different GH devices. Education in injection technique will logically lead to the establishment of a network of regular contacts and availability for the patient and family. Contact and support by phone and internet have become inherent in the nurse specialist's responsibilities. In terms of adherence, the use of electronic monitoring of injections, with feedback to the nurse and endocrinologist, allows adherence to be examined, so that a human-eHealth partnership develops to support the family. At consultation visits, it is logically the nurse specialist who can take the lead in non-judgemental interviewing to investigate actual or potential non-adherence. In the long term, the paediatric endocrinology specialist nurse maintains support and positive relationships with the family and the patient. Everyone needs to continue to work together, ensuring encouragement and a combined, committed goal of optimal response to GH therapy. Finally, by using a personalised approach, technology can be positively integrated into care to assist adherence and optimise outcomes. The successful management of paediatric growth disorders involving GH therapy can be judged by the achievement of catch-up growth, followed by growth within the normal centile lines, leading to an adult height within the genetic target of the family. Relatively few cases achieve this ideal triad, and a combination of personalised input by medical and nursing HCPs and the use of technological tools can improve the chances of success. Understanding the personal psychological barriers to good GH adherence in each patient can be combined with the use of an electronic GH injection recorder to monitor and communicate accurate adherence data. Motivational interviewing and a non-judgemental approach are also beneficial. This human-eHealth partnership gives synergistic advantages and improves the likelihood of a clinically beneficial long-term growth outcome.
Virtual slides in peer reviewed, open access medical publication
5a9d56f9-78f4-4dc1-8879-31a793bcea19
3275477
Pathology[mh]
Tissue-based diagnosis, or diagnostic surgical pathology, is about to change significantly in its clinical importance and technology . Autopsy and simple microscopic cellular morphology (H & E-stained diagnosis) remain the gold standard for therapeutic decision making, but they are nowadays, to a growing extent, complemented by live imaging such as computed tomography (CT) and nuclear resonance imaging (NRI), and nourished by molecular genetics and biology methods . The combined application of these methods permits so-called individualized diagnosis and treatment, especially for cancer patients . The surgical pathologist acts as the clinician's guide, and the whole approach is called predictive diagnosis . In addition, modern communication tools and modules have been implemented in medicine, too [ , - ]. These include fast communication lines (fiber optics), standardized communication tools such as Digital Imaging and Communications in Medicine (DICOM), picture archiving and communication systems (PACS), and the internet. Hospital information systems (HIS) and laboratory information systems (LIS) have been implemented in nearly all larger hospitals and pathology institutes [ - ]. Digitalization of complete microscopic glass slides is an additional tool that fits into the digitized environment of an institute of pathology. Consequently, approaches are ongoing to implement virtual microscopy (VM) in the pathologist's routine diagnostics [ , , - ]. Additional tools in this world of diagnosis include items that are mandatory for correct diagnostics, i.e., access to reference books, to experts who are specialized in certain diseases, or measurements of certain image features [ - ]. The performance of these aims is enhanced to a high degree by electronic communication. In this article we describe a specialized tool that corresponds to VM and has been implemented in a specialized manner of electronic scientific communication, the so-called open access journals. The logistics and necessary implementation tools are described, as well as the performance and acceptance of this innovative publication technology in the medical sciences. In addition, the promising perspectives are briefly outlined. Journal data Open access publication was developed after the successful implementation of scientific electronic publication. In pathology, the Electronic Journal of Pathology and Histology (Elec J Pathol Histol) was, to our knowledge, the first peer-reviewed scientific journal in pathology to be distributed solely electronically . It was launched in 1995 and was distributed worldwide on floppy discs. Internet access and other electronic distribution media were not available at that time. The main characteristics of the Elec J Pathol Histol are displayed in <Figure >. Each floppy disc was distributed with specific "reader" software, which also allowed the compilation and execution of programs. Tests of so-called interactive publication were also performed. These tests allowed readers to add their own data to an already existing article, and to publish the "extended" version, including the authors of the previous article after they had agreed. All images were displayed in the .pcx format. Although the peer-reviewed and published articles had been cited worldwide, the application for inclusion in Reuters' citation index failed several times because solely electronic distribution of scientific articles was not appreciated in the 1990s .
The Elec J Pathol Histol was redesigned in HTML format and distributed on CD in 2000. However, several advantages of the DOS environment had to be given up (executable programs, interactive publication), and production of the Elec J Pathol Histol was terminated in July 2000. The front page of the last edition is depicted in <Figure >. The Elec J Pathol Histol was brought back to life in 2006, once the internet had been established and acknowledged as a useful and flexible communication medium. The new journal was named Journal of Diagnostic Pathology (Diagn Pathol), < http://www.diagnosticpathology.org >. The potential benefits of internet embedding were analyzed, and the corresponding mandatory formal changes were undertaken. These included: Open access publication: the authors pay for the publication of their article; in compensation, they retain the publication rights to their article, and readers have free access to all published articles. Authors working in developing countries can apply for waivers to publish their scientific results for free or at a considerably reduced price. The principal formal organization remains related to the conventional style of publication; i.e., Diagn Pathol still possesses its own "front page" and is not just a domain embedded in another front page such as "medicine". Access to and reading of the published articles, as well as the reader's country, are documented and statistically analyzed. Reviewers are chosen from the editorial board and, in parallel, from authors who have published articles related to the content under review; these authors are identified from articles indexed in the National Institutes of Health (NIH) library (PubMed). The number of included color images, tables, drawings, etc. is not limited and incurs no additional charges. All published articles are provided with an ID number (DOI) and indexed for interactive search by readers. The introduction and establishment of Diagn Pathol took about six months. The total number of published articles rose from 46 in 2006 to 83 in 2010, and will probably exceed 120 in 2011. The rejection rate increased from 30% in 2006 to >50% in 2011. The journal had about 1,200 registered readers in 2011, and most articles are accessed by more than 500 readers within their first year of publication. Diagn Pathol was included in the (Reuters) Citation Index in 2010, with a calculated impact factor of 1.39 for the year 2010.

Including Virtual Slides in Open Access Scientific Publication
Virtual slides (VS) are the digital representation of completely digitized glass slides . VS were developed at the beginning of this century. Image acquisition machines (scanners) are commercially available from about 10 different companies (e.g., 3DHistech, Aperio, General Electric, Leica, Olympus, Philips, and others). They are now in daily use for interdisciplinary conferences, the teaching of medical students, and postgraduate education in most of the medical universities of Western Europe and the United States. Some of the larger institutes of pathology are equipped with scanners embedded in the daily routine diagnostic workflow, and several other institutes of pathology are investigating replacing the conventional workflow with virtual microscopy (VM) [ , , ]. VS are comparable to conventional glass slides in image quality, as has been demonstrated by several investigations [ , , ].
The specific features of VM are summarized in <Figure >. In addition to the features of conventional microscopy, they include simultaneous viewing of different regions of interest (ROI) or stains, interactive labeling, automated scoring, and other electronic assistance. Including VS in an open access, peer-reviewed scientific journal offers several advantages for authors, readers, and the publisher <Figure >: The authors can demonstrate the morphological findings of the whole microscopic image. The ROI can be analyzed independently of the authors' view. The readers are assured of the originality of the findings and can test their own strategy for ROI selection. The publisher increases the attractiveness of the included articles and opens the journal to additional perspectives, such as the implementation of a repository or a specific case collection. Constraints can be seen in the mandatory logistics: The authors have to submit the original glass slides to a selected institution that scans them, because an official VS standard does not exist at present. Most of the companies have developed their own specific viewers, and conversion of their proprietary image formats into a more general and open standard such as JPEG 2000 is difficult, if not impossible, without knowing the individual image structure [ - ]. VS are large images, usually 2–3 GB in size. They usually require a specific viewer and a related image database that can handle such images. Thus, VS have to be organized in an electronic system separate from that of the published articles. In addition, the publication of still images should not be affected by VS. After several trials, we decided to provide each article with dummy links that are kept empty if no VS are included in the article; these links are activated and connected to the corresponding images if VS are included.
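The dummy-link pattern just described can be illustrated with a minimal sketch; the class, field names, and URLs below are hypothetical and do not reflect the journal's actual implementation:

```python
# Minimal sketch of the dummy-link pattern: every article carries a
# placeholder VS link that can be activated later without touching the
# published article itself. All names and URLs here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Article:
    doi: str
    vs_url: Optional[str] = None        # dummy link: empty until VS exist

    def vs_link(self) -> str:
        # Rendered on the article page; inactive while no VS is attached.
        return self.vs_url or "about:blank"

    def attach_vs(self, url: str) -> None:
        # Activation is independent of the publication workflow, so VS
        # can be added at any time, even years after publication.
        self.vs_url = url

article = Article(doi="10.1186/example-0001")
print(article.vs_link())                # placeholder while empty
article.attach_vs("https://vs.example.org/slides/0001")
print(article.vs_link())                # now points to the virtual slide
```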
The chosen solution has logistic and content-related advantages: The publication procedure of an article is not affected or delayed by VS, as it is completely disconnected from VS production, and VS do not require a DOI (Digital Object Identifier, used for article identification). Furthermore, VS can be added to an article at any time, even years after its publication, by replacing the dummy link with an active one, and the reader has access to the VS separately from the included still images. The logistic frame is depicted in <Figure >. The reviewers are not informed about the potential inclusion of VS, in order to separate this issue completely from scientific considerations. Thus, VS do not delay or promote an article's publication.

Experiences and Results
The preparations for the described procedure took about 6 months. The authors of newly submitted articles were informed via email about the VS technology and asked to submit the corresponding glass slides for digitization. A summary of the authors' responses, the number of included VS, and the published articles is shown in <Figure >. About 50% of the requested authors submitted glass slides for VS publication, most of them working in Asia. There was no delay in the peer review and production process in comparison to articles that do not contain VS. VS seem to promote the interest of readers and the journal's reputation, as the number of submissions increased remarkably after the publication of VS. However, other factors might also play a significant role, such as the fast review process, the citation index, or the increased number of subscribers. The quality of all published VS was judged good or even very good. The access to and display of VS naturally depend on the reader's network connection and the servers involved. Navigation and VM control are fast and reliable, as only the currently viewed partitions (tiles) of a VS have to be downloaded.
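This tile-based viewing is what keeps navigation fast even for multi-gigabyte slides. The following minimal sketch illustrates the idea; the tile size and function names are assumptions for illustration, not a specific vendor's API:

```python
# Only the tiles intersecting the current viewport are fetched,
# never the whole multi-gigabyte virtual slide.
def visible_tiles(vx, vy, vw, vh, tile=256):
    """Return (col, row) indices of the tiles overlapping a viewport
    with top-left corner (vx, vy) and size (vw, vh), in pixels at the
    current zoom level."""
    c0, r0 = vx // tile, vy // tile
    c1, r1 = (vx + vw - 1) // tile, (vy + vh - 1) // tile
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

# A 1920x1080 viewport touches ~40 tiles, i.e. a few megabytes
# instead of the 2-3 GB of the full slide.
print(len(visible_tiles(50000, 30000, 1920, 1080)))
```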
This is, to our knowledge, the first report of successful VS publication in a peer-reviewed scientific journal. The acceptance of this new technology by authors and readers can be judged good to very good. The reviewers are not informed about potential VS publication in the corresponding article; thus, the reviewing process is completely separated from the newly introduced VS publication. This might be a matter of debate: reviewers are part of the scientific judgment of an article, and VS contribute to its scientific level too. Why, then, completely exclude reviewers from VS? There are practical and theoretical reasons. One aim of electronic publication is to maintain a short review/production time, which could be delayed by combining VS and review. In addition, the selection of published still images depends upon the author; any errors in selecting the correct ROI can be judged by the reader if VS are published too. From the theoretical point of view, the diagnostic judgment in tissue-based diagnosis can be divided into a) selection of the ROI and b) the diagnostic statement [ , - ]. In surgical pathology, both algorithms are usually performed in a single combined step. Young colleagues are trained to perform the final diagnosis by viewing pre-selected areas (ROI) displayed in textbooks or atlases . Publishing VS and still images independently of the review procedure separates these two steps, which is of additional educational significance. The main constraint of the described method remains the logistic "hardware" problems, especially the submission of the requested glass slides.
The mailing costs have to be covered by the authors. The glass slides have to be sent to an "image acquisition center," which controls the image quality and is responsible for accurately documenting and handling the VS. After several different trials to accurately link included still images with VS, we decided to publish still images and VS separately. The separation of the "conventional" publication procedure from VS inclusion was favored by the production team, as it maintains the production speed. VS can be included at any time after publication; no intervention is needed. The provided links can be transferred to different servers at any time without involving the publication procedure of the text, included images, and references. Thus, the journal Diagnostic Pathology has been "opened" to a new, innovative distribution of image content information that offers room for additional applications. Most published VS are derived from case reports, which usually describe rare diseases of educational significance. They are arranged in an appropriate database, which will be transformed into an adequate repository in the next step. Certainly, the authors have to agree before their images are included in such an electronic image retrieval system. Such a system will offer images of rare and educationally significant cases, combined with extensive clinical data such as the patient's history and the development of the disease. An additional potential application relates to quantitative, object-related measurements on VS . These can be performed on a broad variety of stains, including conventional and fluorescent immunohistochemistry, using open access systems such as EAMUS™ or equivalent systems . The reader can use these systems to select ROI from a VS and to perform content-related measurements. In summary, the publication of VS in an open access, peer-reviewed scientific journal was accepted by a broad community of pathologists and colleagues working in related medical and scientific fields shortly after its implementation. It significantly extends the reader's tools in ways that can only be provided in an electronic communication environment, such as individual selection of areas of interest, interactive measurements of specific objects, or image-content-based selection of case reports for tissue-based diagnosis and training.
A sensory memory to preserve visual representations across eye movements
bae84418-c8ee-44e7-92d3-14726887c46c
8575989
Physiology[mh]
About three times each second, saccadic eye movements (saccades) interrupt the flow of retinal information to higher visual areas – . To produce a stable sense of vision, our brain is believed to reconstruct at least some portion of the visual world during these gaps . Studying the nature and source of information used by the visual system to fill the perceptual gap during saccades has been a central focus of psychophysicists, physiologists, and cognitive neuroscientists for decades , – . The question holds critical significance, as it directly targets the constructive nature of visual perception: how a continuous perception of the visual scene emerges out of retinal input frequently disrupted by saccades. The prevailing idea is that prefrontal and parietal areas can provide neurons in visual areas with other sources of information beyond their receptive field (RF) to enable them to fill the gap during saccades , , . For example, it has been shown that parietal and prefrontal neurons preemptively process information from their future receptive field (FF) – , and prefrontal neurons across visual space develop a target-centered representation by responding presaccadically to remote stimuli presented around the saccade target (ST) , . Despite these theories, a direct assessment of the nature of the information filling the transsaccadic gap in visual areas—in other words, the neural basis of transsaccadic integration—is still missing. In order to understand the neural basis of transsaccadic integration, we recorded the activity of extrastriate neurons in area V4 and the middle temporal (MT) cortex of macaque monkeys, and developed a computational model to allow an instantaneous readout of the visuospatial representation from spiking responses on the timescale of a saccade. This decoding framework revealed that throughout saccades neural activity represented either the presaccadic or the postsaccadic visual scene, leaving no gap in the visual representation. More importantly, this approach allowed us to decompose the spatiotemporal sensitivity of individual neurons to trace the components required for this transsaccadic integration. This feature enabled us to identify a neural phenomenon as a key player for transsaccadic integration: extrastriate neurons exhibit a late enhancement of responses to stimuli appearing in the original RF around saccade onset, which preserves the history of the visual scene until the new retinal information arrives. This phenomenon, which was verified in both V4 and MT, reveals how, by actively maintaining the presaccadic representation, extrastriate neurons can contribute to a stable, uninterrupted perception of the visual scene during saccades.

Tracing changes in the visual sensitivity of extrastriate neurons across saccades
We recorded the spiking activity of 291 V4 and 332 MT neurons while monkeys performed a visually guided saccade task (Fig. , left; also see "Methods", Supplementary Information section SOM1, and Supplementary Fig. ). The animals maintained their gaze on a central fixation point (FP1) for 700–1100 ms and, upon the FP1 offset, shifted their gaze to a peripheral target (FP2) and fixated there for another 560–750 ms. Prior to, during, and after saccades, small visual stimuli of 7 ms duration (probes) were presented pseudorandomly within a matrix of 9 × 9 locations covering the FP1, FP2, and the estimated receptive fields of the neurons before and after the saccade (RF1 and RF2).
In most of the sessions ( n = 85), the FP2 location was fixed across all trials. In some of the sessions ( n = 23), the FP2 was randomly placed either on the right or left side of the FP1 (at the same radius; see details in Supplementary Information section SOM1-2). In order to assess how extrastriate neurons represent the visual world, we needed to trace the dynamics of their sensitivity as it changes during saccades. The neuron's sensitivity $g$ at a certain time $t$ relative to the saccade is defined as the efficacy of a stimulus at a certain location ($x$, $y$), presented at a specific delay $\tau$ before that time, to evoke a response in that neuron (Fig. , right). In order to assess this sensitivity map, we employed a computational approach. First, we decomposed time and location into discrete bins of ~3–6 degrees of visual angle (dva) and 7 ms time bins (the resolution of the probes). For the duration and precision of our experimental paradigm, a full description of a neuron's sensitivity required evaluation of $\sim 10^{7}$ of these spatiotemporal units (STUs). We used a dimensionality reduction algorithm to select only those STUs that contribute to the stimulus-response correspondence (see Methods; Supplementary Fig. ). This unbiased approach excluded ~99.9% of STUs, making it feasible to evaluate the contribution of the remaining $\sim 10^{4}$ STUs to the response of the neuron (8899.17 ± 113.97 STUs per neuron). We developed a computational model to predict the neuron's response based on an estimated sensitivity map. Using a gradient descent algorithm, we asked the model to determine the contribution (weight) of each STU of the sensitivity map, with the goal of maximizing the similarity between the model's predicted response and the actual neuronal response (see "Methods" and Supplementary Information section SOM2). Figure shows examples of weighted STUs for a model of an example MT neuron at a location inside its RF1 (top) and RF2 (bottom) for various times relative to the saccade. Note that the combination of STU weights across delays for a certain time and location is a representation of the neuron's sensitivity, classically known as its "kernel" (middle panel). Overall, the model performed well in capturing the dynamics of neuronal responses, as well as in providing high-temporal-resolution sensitivity maps of neurons (see "Methods" and Supplementary Information section SOM2; Supplementary Figs. , ).
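To make the STU formulation concrete, the following minimal sketch fits such a model to toy data, assuming for simplicity a kernel that does not vary with time relative to saccade onset (the actual model fits separate weights across perisaccadic times); all sizes, rates, and names here are illustrative:

```python
# Toy STU model: the predicted rate at time t is a weighted sum of the
# recent stimulus history over (delay, location) bins, fit by gradient
# descent on squared error.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_loc, n_delay = 2000, 81, 20      # 7-ms bins, 9 x 9 probe grid
S = (rng.random((n_time, n_loc)) < 0.01).astype(float)  # probe sequence

# Design matrix: one column per (delay, location) STU.
X = np.zeros((n_time, n_delay * n_loc))
for tau in range(n_delay):
    X[tau:, tau * n_loc:(tau + 1) * n_loc] = S[:n_time - tau]

w_true = rng.normal(0.0, 1.0, X.shape[1])  # ground-truth kernel (toy)
r = X @ w_true + rng.normal(0.0, 0.5, n_time)  # simulated response

w = np.zeros(X.shape[1])                   # STU weights to be learned
lr = 0.5
for _ in range(1000):                      # gradient descent on MSE
    w -= lr * (X.T @ (X @ w - r) / n_time)

kernel = w.reshape(n_delay, n_loc)         # sensitivity vs. delay, location
print("kernel recovery r =", np.corrcoef(w, w_true)[0, 1])
```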
A model-based readout of transsaccadic integration
The goal of our combined electrophysiological and computational approach is to identify the neural components underlying transsaccadic integration. This requires translating the sensitivity map into a readout of the visual scene (employing the decoding aspect of the model) and then using this readout to assess transsaccadic integration around the time of saccades. By capturing the essential computations of the neuron, the model can be used to generate predictions for any unseen sequence of visual stimuli. An example of how this can be used to generate a readout of the visual scene is shown in Fig. . The model is used to predict responses to 9 probes around RF1 at various times relative to saccade onset. For each pair of probes, spatial discrimination was then measured as the area under the curve (AUC) of the receiver operating characteristic, based on the model-predicted responses (see "Methods"; Supplementary Fig. ). Location discriminability is assessed as the average AUC across all pairs of probes, and is plotted for a single neuron at various times of its response ( x-axis) for probes presented at different times relative to saccade onset ( y-axis). Figure shows the same location discriminability map for the population of 623 modeled neurons; the blue contour indicates the times at which the response can differentiate between probe locations above a certain threshold (AUC > 0.55). The same contour is shown in Fig. , top, along with the contour assessed with a similar method based on the location discriminability around RF2. Consistent with the subjective perception of a continuous visual scene, there is no time at which spatial sensitivity is lost during saccades, as indicated by the contact between the red and blue regions and by the overlap of their projections along the response-time dimension (Fig. , top, overlap = 23.23 ± 4.92 ms; see "Methods" and Supplementary Information section SOM3). The same phenomenon was also observed when assessing the detection performance of neurons (Fig. , bottom; see "Methods"). Therefore, tracing the capacity of neurons to decode location information, the model predicts no gap between the encoding of information from the presaccadic scene and from the postsaccadic one. The approach revealed an important insight about what exactly happens to the visual scene representation around the time of a saccade. As shown in Fig. , responses up to 50 ms after saccade onset show that neurons consistently kept their spatial sensitivity to stimuli presented as early as 50 ms before that response time (deviation from the line of unity, which is also a reflection of neuronal response latency). Interestingly, the blue curve is slightly farther from the line of unity around the time of the saccade (~74 ms deviation for response times of 50–90 ms), implying that the neuron loses its sensitivity to more recent stimuli and instead remains sensitive to stimuli presented earlier in time, a phenomenon which could contribute to filling the perceptual gap during saccades (see "Methods" and Supplementary Information section SOM4 for verification at the neuronal level; Supplementary Fig. ).
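The pairwise-AUC readout just described can be sketched as follows; the toy responses, names, and threshold are illustrative assumptions, not the authors' code:

```python
# Rank-based AUC (equivalent to the Mann-Whitney U statistic) between
# predicted response distributions for two probe locations, averaged
# over all location pairs to give a discriminability score.
import numpy as np
from itertools import combinations

def auc(a, b):
    """P(random draw from a > random draw from b), ties counted half."""
    a, b = np.asarray(a), np.asarray(b)
    gt = (a[:, None] > b[None, :]).mean()
    eq = (a[:, None] == b[None, :]).mean()
    return gt + 0.5 * eq

def location_discriminability(responses):
    """responses: dict of probe location -> predicted responses across
    trials. Returns the mean direction-agnostic AUC over all location
    pairs; 0.5 means the locations are indistinguishable."""
    scores = []
    for p, q in combinations(responses, 2):
        a = auc(responses[p], responses[q])
        scores.append(max(a, 1 - a))       # direction-agnostic
    return float(np.mean(scores))

rng = np.random.default_rng(1)
resp = {loc: rng.normal(0.3 * loc, 1.0, size=200) for loc in range(9)}
print(location_discriminability(resp))     # >0.55 counts as discriminable here
```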
Identifying the perisaccadic modulations required for transsaccadic integration
Having confirmed that the readout of the visual scene is indeed integrated across saccades, and with evidence that this integration is accompanied by a change in the temporal dynamics of the neuronal response, we began our search to identify the exact extrastriate mechanism underlying this phenomenon. Importantly, the model provides the ability to independently manipulate individual components of neuronal sensitivity and assess their impact both on individual neuronal responses and on the visuospatial representation. This ability proved to be a very powerful tool in identifying the basis of the continuous visual representation across saccades. First, we identified the times at which saccades alter extrastriate neurons' sensitivity by identifying the STUs whose contribution changes during saccades compared to fixation (see Methods). The temporal distribution of saccade-modulated STUs is shown in Fig. for RF1, RF2, and all locations, across all modeled neurons. On average, ~26% of STUs were modulated during saccades (2342.89 ± 45.36). Nulling these modulated STUs in the model, i.e., replacing their weights with the fixation weights, resulted in a clear gap in neurons' sensitivity to visual information. Unlike the intact model in Fig. , the model lacking the perisaccadic modulations (Fig. ) not only showed no overlap between RF1 and RF2 sensitivity, it even showed a gap in the visuospatial representation—a temporal window during which extrastriate neurons are not sensitive to stimuli at either location (overlap = −36.88 ± 6.18 ms; see "Methods" and Supplementary Information section SOM3, and Supplementary Fig. for V4 and MT neurons separately). Supplementary Fig. shows the method for quantifying overlap and the effect of eliminating modulated STUs on overlap time for the population of individual-neuron models. These results demonstrate the necessity of perisaccadic extrastriate changes for maintaining an integrated representation of visual space across saccades. Numerous psychophysical phenomena happen during saccades: targets of eye movements are processed better, sensitivity to detect changes and displacements of other objects is reduced, and the perception of time and space is altered . Thus, maintaining an integrated representation of space is only one of multiple perisaccadic perceptual phenomena, and may depend on only a subset of the perisaccadic changes in sensitivity. Therefore, while Fig. verified the necessity of perisaccadic sensitivity changes for this integration, we still needed to determine exactly which changes are specifically related to transsaccadic integration. We defined integration based on a model readout and then induced assumption-free alterations in the model to determine which of the modulated STUs (Fig. ) are essential for an integrated representation of space, i.e., altering the model readout from Fig. to (see "Methods"; Supplementary Fig. ). Nulling this integration-relevant subset of modulated STUs also results in a gap in the detectability and discriminability maps (Supplementary Fig. ), confirming that the search algorithm for extracting integration-relevant STUs from the saccade-modulated STUs successfully identifies the modulations required for transsaccadic integration. The unbiased search within the space of STUs revealed the times, delays, and locations of "integration-relevant STUs" (Fig. ) (17.04 ± 0.23% of the modulated STUs were integration relevant).
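The nulling manipulation itself is simple; the following sketch shows the idea, with the modulation criterion and all numbers as illustrative assumptions rather than the parameters used in the study:

```python
# "Nulling" saccade-modulated STUs: their perisaccadic weights are
# replaced with the fixation-condition weights, and the readout is
# recomputed to test whether the modulation is necessary.
import numpy as np

rng = np.random.default_rng(2)
n_stu = 1000
w_fix = rng.normal(0.0, 1.0, n_stu)          # weights fit on fixation data
w_sac = w_fix + rng.normal(0.0, 0.5, n_stu)  # weights fit around saccades

modulated = np.abs(w_sac - w_fix) > 0.5      # illustrative criterion
print(f"{modulated.mean():.0%} of STUs modulated")

w_nulled = w_sac.copy()
w_nulled[modulated] = w_fix[modulated]       # remove the perisaccadic change
# Re-running the discriminability readout (previous sketch) with w_nulled
# in place of w_sac tests whether those STUs are required for the
# gap-free transsaccadic representation.
```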
Transsaccadic integration depends on the late enhancement of neural responses
Importantly, the model can then be used to link the integration-relevant STUs to specific components of the neural response. For example, the regions inside the black contours in Fig. are the integration-relevant STUs for RF1 and RF2 locations for all modeled neurons, and the black contours in Fig. highlight the stimulus-aligned response component generated by those specific integration-relevant STUs at RF1 and RF2. This approach isolated an alteration in the dynamics of responses to RF1 and RF2 stimuli presented within 10 ms of saccades as the neural substrate for an integrated representation of visual space. As indicated in the right panels, around the time of the saccade, the early part of the response to RF1 probes gradually disappears and a late response component emerges instead (which disappears after the eye has landed on the second fixation point). For RF2 probes, a late component emerges first, and gradually earlier components add to it to form the stimulus-aligned response of the neuron during the second fixation. The same phenomena observed in the model were also seen in the responses of the population of neurons (Fig. ). The phenomenon occurring at RF2 is reminiscent of the previously reported FF remapping (see Supplementary Information section SOM5; Supplementary Fig. ) , . The elongation of the RF1 response (which we call 'late response enhancement'), however, is an unanticipated finding and provides reassurance that our unbiased search is casting a wide net to identify the neural basis for an integrated representation of visual space. The phenomenon of late response enhancement was observed in populations of both V4 and MT neurons. Figure shows the rastergram and the average response of sample V4 (left) and MT (right) neurons. The average response of these neurons during 75–105 ms after probe onset in V4 and 75–145 ms after probe onset in MT increased by factors of 2.31 and 2.80, respectively, for stimuli appearing around the saccade compared to fixation (V4 fixation = 25.09 ± 1.04 sp/s, V4 saccade = 58.06 ± 8.13 sp/s, p < 0.001; MT fixation = 19.11 ± 0.62 sp/s, MT saccade = 53.61 ± 5.40 sp/s, p < 0.001, Wilcoxon rank-sum test). As shown in Fig. , both V4 and MT populations exhibited an enhanced late response to their RF stimulus around the saccade compared to fixation (V4 fixation = 36.15 ± 1.61 sp/s, V4 saccade = 45.66 ± 1.89 sp/s, p < 0.001; MT fixation = 40.97 ± 1.47 sp/s, MT saccade = 45.39 ± 1.71 sp/s, p < 0.001) (see Supplementary Fig. for sample neurons). This enhanced late response was accompanied by a suppression of early responses in both V4 and MT; we were particularly struck by the similarities between V4 and MT, with respect to both the prevalence and the timecourse of the late response enhancement phenomenon (see Supplementary Information section SOM6; Supplementary Fig. ). In order to examine the spatial selectivity of the observed phenomenon, for each probe we measured the perisaccadic modulation index (PMI) as the difference between the saccade and fixation responses (75–105 ms after the stimulus) divided by their sum. PMI for RF probes was significantly greater than PMI for control probes outside the RF ($\Delta \mathrm{PMI}_{\mathrm{V4}} = 0.04 \pm 0.01$, p = 0.005; $\Delta \mathrm{PMI}_{\mathrm{MT}} = 0.04 \pm 0.01$, p < 0.001; Fig. ). We also confirmed that the late response enhancement phenomenon in MT is independent of whether the saccade direction is congruent or incongruent with the preferred motion direction of the neuron, ruling out saccade-induced retinal motion as the source of this phenomenon (PMI congruent = 0.01 ± 0.01, p = 0.025; PMI incongruent = 0.03 ± 0.02, p = 0.031; p congruent vs. incongruent = 0.64, Wilcoxon rank-sum test; Fig. ; see "Methods" and Supplementary Information section SOM7; Supplementary Fig. ). Thus, V4 and MT neurons display a delayed response to RF stimuli around the time of saccades, which is the feature of perisaccadic neural response modulation that the model identified as essential for integrating the visual representation across saccades.
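In equation form, the perisaccadic modulation index defined above is, for mean responses $R_{\mathrm{sac}}$ and $R_{\mathrm{fix}}$ measured 75–105 ms after the stimulus in the saccade and fixation conditions:

$$\mathrm{PMI} = \frac{R_{\mathrm{sac}} - R_{\mathrm{fix}}}{R_{\mathrm{sac}} + R_{\mathrm{fix}}}$$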
Many studies have previously investigated changes in visual sensitivity around the time of saccades , , , , – , – , and comprehensive reviews of those findings exist elsewhere , . To link these perisaccadic neurophysiological changes to perception, most studies have focused either on mechanistic models – , which try to reproduce the observed changes in perisaccadic neural responses, or on computational models , , , which try to provide a theoretical interpretation of the changes in a neuron's perisaccadic spatiotemporal dynamics that could account for perisaccadic perceptual stability. However, many of these studies focus on motor or attentional areas lacking strong visual selectivity, making it unclear how the reported perisaccadic changes in these areas can be translated into representational integration across saccades. Here, we developed a data-driven statistical framework, integrated with electrophysiological experiments, that enabled a quantitative description of the stimulus-response relationship on the fast timescale of a saccade. This quantitative description in turn allowed us to perform an unbiased search for the sensory signals in extrastriate areas that contribute to generating an integrated representation of visual space across a saccade. Indeed, assessing the sensitivity of visual neurons with high temporal precision and translating those sensitivity dynamics into their perceptual consequences were the two keys to identifying a neural correlate required for transsaccadic integration. Using a modeling approach to link changes in spatiotemporal sensitivity to visual perception, we found that extrastriate neurons are capable of 'stitching' their presaccadic representation to the postsaccadic one by maintaining a memory of the scene. Prior to a saccade, the response of extrastriate neurons at a certain time represents visuospatial events occurring ~50 ms earlier. When the eye moves and the flow of retinal information is disrupted, instead of representing the visual events of 50 ms ago, extrastriate neurons maintain a representation of events further back in time (~75 ms). This delayed response to the presaccadic scene, a brief 'sensory memory', prevents any period in which the extrastriate representation carries no visual information. Thus, the computational approach not only allowed us to assess the visual representation with high temporal precision, it also enabled us to identify the exact neuronal response changes essential for creating an uninterrupted visual representation, revealing the phenomenon of late response enhancement, a sensory memory mechanism that can preserve information across saccades. This discovery of late response enhancement dovetails with previous psychophysical studies suggesting the necessity of such a memory mechanism to preserve vision throughout the brief periods in which the visual signal is lost during saccades or blinks , . Importantly, psychophysics experiments have implied that the preservation of vision across saccades might rely on mid- to high-level visual areas rather than on the earlier parts of the visual hierarchy , . Moreover, observing the late response enhancement phenomenon in both V4 and MT implies that this sensory memory is a characteristic of the visual system independent of whether the signal originates from chromatic/achromatic or motion-sensitive pathways earlier in the visual stream . However, what mechanism drives this late enhancement in the perisaccadic responses of extrastriate neurons remains unknown.
The intrinsic signal within these areas (e.g., due to the abrupt change in the flow of visual information) might be enough to trigger an enhanced late response, but these areas also receive a copy of the motor command (e.g., via the tectopulvinar pathway to MT) as well as motor preparatory and attentional signals (via direct projections from the Frontal Eye Field). Considering that V4 is thought to receive the motor command through MT, but the dynamics of perisaccadic response changes in MT did not lead those in V4 (Supplementary Information section SOM6; Supplementary Fig. ), it seems the motor command is unlikely to be the source of modulation in V4, and the roles of top-down and intrinsic signals, and their interactions, are more promising candidates for future studies. This paper sits at the intersection of two rich lines of research: statistical modeling of neural encoding and saccadic modulation of neurons' visual responses. In the decades since the perisaccadic remapping of visual responses was first reported in LIP, perisaccadic response mapping has been increasing in spatial coverage and temporal precision to provide a more complete picture of changes in spatial sensitivity and their dynamics. A model-based approach for the mapping of RFs at the level of single trials and with high spatial and temporal precision around the time of saccades represents the next step in this progression. Our computational methods extend the classical GLM models, widely used for modeling neural responses, including for mapping classical, time-independent receptive fields in various brain areas, to a time-dependent RF estimation on the millisecond timescale. Previously reported changes in extrastriate visual responses around the time of saccades include FF remapping in both V4 and MT. We see evidence of similar FF remapping in our own data (SOM5; although our probes are briefer, 7 ms compared to 25 ms for Neupane et al., or on screen for >600 ms prior to the go cue for Yao et al.). The FF remapping phenomenon they report is similar to our late response enhancement in that probes are presented prior to the eye movement and responses occur after (i.e., 'memory' rather than 'predictive' remapping); however, our finding is specifically for the presaccadic RF location, reflecting the memory of the presaccadic scene. These previous studies used a sparser sampling over either the space or time dimensions, and either did not probe the RF location or excluded probes near the time of the saccade; hence, they did not observe the late response enhancement we see here. Thus, a more precise and complete method for assessing a neuron's spatiotemporal sensitivity revealed a previously unreported phenomenon. As shown (Fig. and Supplementary Fig. ), the late response enhancement follows suppression of earlier responses. Multiple psychophysical phenomena are observed around the time of saccades, including spatial compression, temporal compression, and saccadic omission. Although creating a readout model of each of these phenomena is beyond the scope of this paper, it is nevertheless tempting to speculate that the observed early response suppression could contribute to saccadic omission. However, the time window of the observed neural suppression appears narrower than that of perceptual saccadic omission, suggesting that other previously reported phenomena, including saccadic suppression and backward masking, likely also contribute to saccadic omission.
It is important to emphasize that the phenomenon of perceptual stability, the subjective experience of a stable world during saccades, might require more than an integrated sensory representation. Perceptual stability has been shown to rely on working memory mechanisms, and information outside a retinotopic framework might also be involved (see Supplementary Information section SOM8; Supplementary Fig. ). The current results, however, show clearly that for a short period of time, retinotopic visual areas are capable of maintaining a brief sensory memory while the input is disrupted, a resource that could be employed by other areas and frames of reference to generate a stable, uninterrupted sense of vision. Experimental paradigm and electrophysiological data recording All experimental procedures complied with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and the Society for Neuroscience Guidelines and Policies. The protocols for all experimental, surgical, and behavioral procedures were approved by Institutional Animal Care and Use Committees of the University of Utah. Animals were pair-housed when possible and had daily access to enrichment activities. During the recording days, they had controlled access to fluids, but food was available ad libitum. Four male rhesus monkeys (monkeys B, P, E, and O; Macaca mulatta) were used in this study. Monkeys performed a visually guided saccade task during which task-irrelevant square stimuli flashed on the screen in pseudorandom order (Fig. ). The monkeys were trained to fixate on a fixation point (FP1; a central red dot) located in the center of the screen. After they fixated, a second target (FP2; a peripheral red dot) appeared 10–15 degrees away. Then, after a randomized time interval between 700 and 1100 ms (drawn from a uniform distribution), the fixation point disappeared, cuing the monkeys to make a saccade to FP2. After remaining fixated on FP2 for 560–750 ms, the monkeys received a reward. During this procedure, a series of pseudorandomly located probe stimuli was presented on the screen in a 9 by 9 grid of possible locations. Each stimulus was a white square (full contrast), 0.5 by 0.5 degrees of visual angle (dva), against a black background. Each stimulus lasted for 7 ms and stimuli were presented consecutively without any overlap, such that at each time point there was only one stimulus on the screen. The locations of consecutive probe stimuli followed a pseudorandom order, called a condition. In each condition, a complete sequence of 81 probe stimuli was presented throughout the length of a trial. Conditions were designed to ensure that each probe location occurred at each time in the sequence with equal frequency across trials (one way to construct such balanced sequences is sketched below). For each recording session, the grid of the possible locations of the probes was positioned such that it covered the estimated pre- and postsaccadic receptive fields (RFs) of the neurons under study, as well as FP1 and FP2. The spatial extent of the probe grids varied from 24 to 48.79 (mean ± SD = 40.63 ± 5.93) dva horizontally, and from 16 to 48.79 (mean ± SD = 39.78 ± 7.81) dva vertically (Supplementary Fig. ). The (center-to-center) distance between two adjacent probe locations varied from 3 to 6.1 (mean ± SD = 5.07 ± 0.74) dva horizontally, and from 2 to 6.1 (mean ± SD = 4.97 ± 0.97) dva vertically. For the MT neurons, the motion direction preference was assessed using a full field Gabor paradigm before the saccade task.
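As referenced above, the paper does not specify how the balanced condition sequences were generated; a simple cyclic (Latin-square) construction, sketched here in Python with hypothetical code, satisfies the stated property exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n_probes = 81  # 9 x 9 grid, probe locations indexed 0..80

# One base pseudorandom ordering of all probe locations.
base = rng.permutation(n_probes)

# Cyclic shifts of the base order: across the 81 resulting "conditions",
# every probe location occupies every sequence position exactly once.
conditions = np.stack([np.roll(base, shift) for shift in range(n_probes)])

# Verify the balance property: each column contains all 81 locations.
assert all(len(set(conditions[:, pos])) == n_probes for pos in range(n_probes))
```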
The monkey maintained fixation while a full field Gabor stimulus, moving in one of 8 directions, was displayed for 800 ms. In 23 out of 108 sessions, the saccade target (ST) was randomly located either on the right or left side of the fixation point (at the same radius; see more details in Supplementary Information sections SOM1–2); in the rest of the sessions, the ST remained at the same location within a session. Throughout the entire course of the experiment, the spiking activity of the neurons in areas V4 and MT was recorded using a 16-channel linear array electrode (V-Probe, Plexon Inc., Dallas, TX; Central software v7.0.6 for the Blackrock acquisition system and Cheetah v5.7.4 for the Neuralynx acquisition system) at a sampling rate of 32 kHz, and sorted offline using the Plexon offline spike sorter and the Blackrock Offline Spike Sorter (BOSS) software. The eye position of the monkeys was monitored with an infrared optical eye-tracking system (EyeLink 1000 Plus Eye Tracker, SR Research Ltd., Ottawa, Canada) with a resolution of <0.01 dva (based on the manufacturer's technical specifications) and a sampling frequency of 2 kHz. Stimulus presentation in the experiment was controlled using the MonkeyLogic toolbox. In total, data were recorded from 332 MT and 291 V4 neurons during 108 recording sessions. See Supplementary Information section SOM1 for further details. RF estimation The RF1 and RF2 locations used to calculate detectability and sensitivity in Fig. refer to the probe locations that generated the maximum firing rate during the fixation period before and after the saccade, respectively. For each probe location, the probe-aligned responses are calculated by averaging the spike trains over repetitions of the probe before or after the saccade (greater than 100 ms before or after saccade onset), from 0 to 200 ms following probe presentation, across all trials. The response is then smoothed using a Gaussian window of 5 ms full width at half maximum (FWHM). Dimensionality reduction for computing the neuron's time-varying sensitivity map The fast, complex dynamics of changes in the neurons' spatial sensitivity across a saccadic eye movement demand a high-dimensional representation of the neurons' spatiotemporal kernels in order to capture those perisaccadic dynamics. For any time relative to saccade onset, the set of stimuli driving the response can be described in terms of their location ($X$ and $Y$) and the delay between the stimulus presentation and the response time ($\tau$). The goal is to determine this sensitivity map and trace its changes across time (Fig. ). In our experiment, for a 200 ms delay kernel across 1000 ms of response time, this space could be decomposed into ~10^7 spatiotemporal units (STUs; Fig. ). Since the stimulus presentation resolution is 7 ms, we represent the variation of sensitivity across the time dimensions using a set of temporal basis functions, $B_{i,j}(t,\tau)$, whose centers are separated by 7 ms across the $\tau$ and $t$ dimensions (Eq. 1). This way, we down-sample time into a sequence of binned STUs whose values can change every 7 ms. (1) $B_{i,j}(t,\tau) = U_i(\tau)\,V_j(t)$ where $U_i(\tau)$ and $V_j(t)$ are chosen to be B-spline functions of order two.
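A minimal Python sketch of this tensor-product basis follows (assuming "order two" means quadratic, i.e., degree-2 B-splines, which is consistent with the stated counts: 33 knots with degree 2 yield 30 basis functions):

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(knots, degree=2):
    """Return the individual B-spline basis functions defined on the given knots.
    With 33 knots and degree 2 this yields 30 functions, matching Eq. (1)."""
    n = len(knots) - degree - 1
    return [BSpline(knots, np.eye(n)[i], degree, extrapolate=False) for i in range(n)]

tau_knots = np.arange(-13.0, 212.0, 7.0)  # 33 knots over the 200 ms delay axis
t_knots = np.arange(-554.0, 553.0, 7.0)   # 159 knots over time relative to saccade

U = bspline_basis(tau_knots)  # 30 delay basis functions U_i(tau)
V = bspline_basis(t_knots)    # 156 time basis functions V_j(t)

def B(i, j, t, tau):
    """Tensor-product basis B_{i,j}(t, tau) = U_i(tau) * V_j(t), Eq. (1)."""
    return np.nan_to_num(U[i](tau)) * np.nan_to_num(V[j](t))
```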
$\{U_i(\tau)\}$ span the delay variable $\tau$, representing a 200 ms-long kernel using a set of 33 knots uniformly spaced at $\{-13, -6, \ldots, 204, 211\}$ ms (30 basis functions in total), and $\{V_j(t)\}$ span the time variable $t$, representing a 1081 ms-long kernel centered at saccade onset using a set of 159 knots uniformly spaced at $\{-554, -547, \ldots, 545, 552\}$ ms (156 basis functions in total). This representation reduces the dimensionality of the spatiotemporal sensitivity map by about two orders of magnitude; however, it is still far beyond the practical dimensionality for a computationally robust estimation of the sensitivity values using an experimentally tractable amount of data. The short duration of saccade execution makes it infeasible to acquire a large number of data points from all spatial locations and times relative to saccade onset. To address this, we use a statistical approach to identify the STUs whose presence significantly contributes to the neuron's response generation at a given time. Supplementary Fig. shows this pruning procedure. For each STU, we compare the distribution of its weights, estimated by fitting a generalized linear model (GLM) on 100 subsets of randomly chosen spike trains (35% of total trials), versus the control distribution obtained using 100 subsets of shuffled trials in which the stimulus-response relationship was distorted. The conditional intensity function (CIF) of this GLM is defined as (2) $\lambda_t = f\!\left(\sum_{\tau=1}^{T} s_{i,j}(t-\tau)\cdot\kappa\cdot B_{i,j}(t,\tau)\right)$ where $\lambda_t$ is the instantaneous firing rate of the neuron, $s_{i,j}$ is the stimulus history of length $T$ at location $(i,j)$, and $\kappa$ is the weight of a single STU, represented by basis function $B_{i,j}(t,\tau)$, whose contribution significance is evaluated. An STU is discarded if the means of these two weight distributions fail to satisfy the following condition: (3) $|\mu - \tilde{\mu}| \geq 1.5\,\tilde{\sigma}$ where $\tilde{\mu}$ and $\tilde{\sigma}$ are, respectively, the mean and standard deviation of the control weight distribution, and $\mu$ is the mean of the original weight distribution.
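In code, the Eq. (3) test reduces to a one-line criterion; the following Python sketch (with hypothetical weight samples standing in for the 100 resampled and 100 shuffled fits) illustrates it:

```python
import numpy as np

def keep_stu(w_orig, w_ctrl, n_sigma=1.5):
    """Significance test of Eq. (3): retain an STU only if the mean of its fitted
    weights deviates from the shuffled-control mean by at least 1.5 control SDs.
    w_orig, w_ctrl: arrays of 100 weight estimates from resampled / shuffled fits."""
    return abs(w_orig.mean() - w_ctrl.mean()) >= n_sigma * w_ctrl.std()

# hypothetical weight distributions for one STU
rng = np.random.default_rng(1)
w_orig = rng.normal(0.8, 0.3, 100)   # fits on random 35% subsets of real trials
w_ctrl = rng.normal(0.0, 0.3, 100)   # fits on trial-shuffled control data
print(keep_stu(w_orig, w_ctrl))      # True -> this STU is kept
```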
This pruning process reduces the dimensionality of the STU space to ~10^4, which makes the fitting of our encoding model to the sparse perisaccadic spiking data feasible and helps prevent overfitting. We then use only this subset of STUs to parameterize the linear filtering stage of an encoding model in a much lower-dimensional space, in order to determine the weights with which these STUs are combined to generate the neuron's spatiotemporal sensitivity (Fig. ); at each time point relative to saccade onset, the weighted combination of these STUs over probe locations and delay times describes the neuron's sensitivity kernels (Fig. , middle panel), defined as (4) $k_{x,y}(t,\tau) = \sum_{i,j} \kappa_{x,y,i,j}\, B_{i,j}(t,\tau)$ where $\{\kappa_{x,y,i,j}\}$ are the weights of the STUs obtained by estimating the encoding model (defined in Eq. 5 below). Note that the summation is over the subset of $B_{i,j}(t,\tau)$ whose corresponding STU was evaluated as significant according to Eq. 3, while the weights for the remaining STUs were set to zero. This low-dimensional set of selected STUs enabled us to fit our encoding model to the sparse perisaccadic data and characterize the encoding principles of each neuron at each time relative to the saccade. Encoding model framework and estimation Models based on the GLM framework have been widely used to describe neural response dynamics in various brain areas, including the response dynamics induced by a saccade. By regressing the neural response on the stimulus variables, GLM-based models have also been used for mapping neurons' RFs, including perisaccadic RFs in sensory or prefrontal areas. Our lab has recently developed a variant of the GLM framework, termed the sparse-variable GLM (S-model; Supplementary Fig. ), applicable to sparse spiking data, which tracks the fast and high-dimensional dynamics of information encoding with high temporal precision and accuracy. The S-model enables us to represent the high-dimensional and time-dependent spatiotemporal sensitivity of neurons using a sparse set of STUs selected through a dimensionality reduction process and to estimate their quantitative contribution to spike generation on a millisecond timescale across a saccade. Using this set of STUs, we parameterize the stimulus kernels $k_{x,y}(t,\tau)$ in the CIF of the S-model, defined as (5) $\lambda^{(l)}(t) = f\!\left(\sum_{x,y,\tau} k_{x,y}(t,\tau)\, s_{x,y}^{(l)}(t-\tau) + \sum_{\tau} h(\tau)\, r^{(l)}(t-\tau) + b(t) + b_0\right)$ where $\lambda^{(l)}(t)$ represents the instantaneous firing rate of the neuron at time $t$ in trial $l$; $s_{x,y}^{(l)}(t) \in \{0,1\}$ denotes the sequence of probe stimuli presented on the screen at probe location $(x,y)$ in trial $l$, with 0 and 1 representing an off and on probe condition, respectively; $r^{(l)}(t) \in \{0,1\}$ indicates the spiking response of the neuron for that trial and time; $k_{x,y}(t,\tau)$ represents the stimulus kernel at probe location $(x,y)$; $h(\tau)$ is the post-spike kernel applied to the spike history, which can capture response refractoriness; $b(t)$ is the offset kernel, which represents the saccade-induced changes in baseline activity; $b_0 = f^{-1}(r_0)$, with $r_0$ defined as the measured mean firing rate (spikes per second) across all trials in the experimental session; and, finally, (6) $f(u) = \frac{r_{\max}}{1+e^{-u}}$ is a static sigmoidal function representing the response nonlinearity, where $r_{\max}$ indicates the maximum firing rate of the neuron obtained empirically from the experimental data.
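To illustrate Eqs. (5)-(6), here is a schematic forward evaluation of the S-model CIF at a single time point, written in Python with toy dimensions and hypothetical values (it is a sketch of the model's structure, not the authors' fitting code):

```python
import numpy as np

def s_model_rate(k, s, h, r_hist, b, b0, r_max):
    """One evaluation of the S-model CIF, Eqs. (5)-(6), at a single time t.
    k: (n_probes, T) kernel slice k_{x,y}(t, tau) at this t
    s: (n_probes, T) probe history s_{x,y}(t - tau)
    h: (T_h,) post-spike kernel; r_hist: (T_h,) recent spike history
    b: offset kernel value b(t); b0: baseline f^{-1}(r0)."""
    drive = np.sum(k * s) + np.dot(h, r_hist) + b + b0
    return r_max / (1.0 + np.exp(-drive))  # static sigmoid nonlinearity f(u)

# toy example: 81 probe locations, 200 ms stimulus kernel, 50 ms spike history
rng = np.random.default_rng(2)
rate = s_model_rate(k=rng.normal(0, 0.05, (81, 200)),
                    s=(rng.random((81, 200)) < 0.005).astype(float),
                    h=-np.exp(-np.arange(50) / 10.0),  # refractory-like history kernel
                    r_hist=np.zeros(50), b=0.0, b0=-1.0, r_max=80.0)
print(f"lambda(t) = {rate:.2f} sp/s")
```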
The fitted models were successful in describing the dynamics of the recorded neural data (Supplementary Fig. ). This choice of the neuron's nonlinearity is consistent with an empirical nonlinearity estimated nonparametrically from the data. All trials are saccade-aligned, i.e., $t = 0$ refers to the time of saccade onset. Then, using an optimization procedure in the point-process maximum likelihood estimation framework, we fit the model to sparse spiking data at the level of single trials. The resulting encoding framework enables us to decipher the nature of saccade-induced modulatory computations in a precise and computationally tractable manner, using the time-varying kernels representing the neuron's dynamic sensitivity across different delays and locations for any specific time relative to the saccade. Details of discriminability and detectability analysis The decoding aspect of the model enables us to develop a readout of the visual scene using the model-predicted responses. The model readout provides a detailed description of the neural decoding capability across a saccadic eye movement, which can be used to trace a specific perceptual phenomenon (in our case, visuospatial integration across saccades) and test the specific components of the neural response that the phenomenon relies on. By capturing the essential computations of the neuron, the model can be used to generate predictions about arbitrary sequences of visual stimuli not present in the experimental data. We have used this aspect of the model to predict how the decoding capability of the neural response changes across a saccade, in terms of its ability to detect the presence of a particular probe. The detectability of an arbitrary probe is measured by evaluating the ability to detect the presence of that particular probe from the model-predicted response, i.e., when that probe is presented (ON) versus when it is not (OFF). The detectability of probes can differ based on the time between the probe presentation and the time of the response that the decoding is based on (referred to as the delay). At any time in the neural response (denoted as $t$ in Supplementary Fig. ), the probe is only detectable if it is presented within a certain delay range ($\tau$; we evaluated delay values from 0 to 200 ms). During the fixation period, long before saccade onset, the RF1 probe's detectability is maximal around the latency of the neuron. However, during the perisaccadic period, the neuron becomes sensitive to different probes and with different latencies. Supplementary Fig. shows how the detectability of the RF1 probe of a sample neuron (RF1 probe: $s^*$) is computed at an arbitrary time ($t^*$) where the probe is presented at $t^* - \tau^*$. To measure the detectability of $s^*$ at $(t^*, \tau^*)$, the AUC measure is used to evaluate the difference between the distribution of responses evoked at $t^*$ ($\lambda_{t^*}$) by the presence of $s^*$ at $t^* - \tau^*$ versus in the absence of $s^*$, each embedded within a 200 ms random sequence of other probes.
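The ON-versus-OFF AUC itself is straightforward to compute; a minimal Python sketch using the rank-based (Mann-Whitney) formulation, with simulated rates standing in for the actual model output:

```python
import numpy as np

def auc(on, off):
    """Area under the ROC curve separating ON from OFF response distributions,
    via the Mann-Whitney U identity (fraction of ON > OFF pairs, ties as 0.5)."""
    on, off = np.asarray(on), np.asarray(off)
    wins = (on[:, None] > off[None, :]).sum() + 0.5 * (on[:, None] == off[None, :]).sum()
    return wins / (on.size * off.size)

# hypothetical model-predicted rates at t* for 100 ON and 100 OFF probe sequences
rng = np.random.default_rng(3)
lam_on = rng.gamma(6.0, 5.0, 100)   # probe s* present tau* earlier
lam_off = rng.gamma(4.0, 5.0, 100)  # probe s* absent
print(f"detectability AUC = {auc(lam_on, lam_off):.2f}")  # > 0.61 -> detectable
```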
The model's predicted response at time $t^*$ (denoted as $\lambda_{t^*}$) is generated for 100 random sequences of probes in which the specific probe $s^*$ is ON and 100 random sequences in which that probe is OFF at time $t^* - \tau^*$ (i.e., $\tau^*$ before $t^*$). The detectability is then measured as the AUC of the evoked response ($\lambda_{t^*}$) for the ON versus OFF trials (histograms shown in Supplementary Fig. ). To calculate the average detectability, the mean AUC was computed across 20 repetitions for each time and delay combination, each repetition over a randomly selected 80% of ON and OFF trials. Supplementary Fig. shows the detectability at a sample time $t^* = +10$ ms relative to the saccade where the RF1 probe is presented 140 to 20 ms before saccade onset ($-150 < \tau < -30$ ms); at this time, the RF1 probe is detectable only when it was presented around 55 ms before the response (the normal latency of the neuron). To track detectability across times and delays, the detectability of probes is measured at different values of $t$ relative to the saccade and $\tau$ relative to each response time ($t$: 50 ms before to 300 ms after the saccade in 10 ms steps, and $\tau$: 190 to 30 ms before the response time in 10 ms steps; Supplementary Fig. ). The detectability map of the neuron for each probe location provides a quantification of the decoding capacity of the neuron across the eye movement (Supplementary Fig. shows the map for the RF1 probe of a sample neuron); the shift in detectability from RF1 to RF2 is shown in Supplementary Fig. . Over time, the probe location with the highest detectability shifts from RF1 to RF2, as shown for the population of neurons in Supplementary Fig. ; the contours show the times and delays at which one can detect the presence of the stimulus based on the response of the neuron, i.e., at which the AUC values are above a threshold of 0.61. In a similar way, we used the decoding approach to measure the location discriminability of the neural response, in terms of the ability to discriminate a probe from the immediately surrounding probes using the model-predicted response. To measure the location discriminability at each probe location, 100 random sequences were presented to the model with the center probe presented at a specific delay, and the AUC was measured versus 100 trials in which one of the adjacent probes was presented at the same delay. The mean sensitivity for each of the surrounding probes was then calculated across 20 AUC measurements, each using 80% of trials. The location discriminability values reported in Fig. are then the average of the discriminability over the 8 probes surrounding the RF1 or RF2 probe. The thresholds used in Fig. are ROC > 0.57 for discriminability and ROC > 0.61 for detectability. Identifying modulated and integration-relevant STUs As discussed previously, only the STUs at specific times and delays contribute to the neuron's response generation (green STU in Supplementary Fig. ). When the spatial and temporal sensitivity of a neuron changes during the perisaccadic period, the distribution of STUs (across times and delays) is altered.
We defined modulated STUs as those for which the prevalence of STUs in a 3×3 window around that STU's time and delay is significantly different for stimuli presented perisaccadically versus during fixation. Each STU is considered modulated if the prevalence of STUs fulfills the following condition: (7) $|p(\tau_n, t_m) - p_1(\tau_n)| \cdot |p(\tau_n, t_m) - p_2(\tau_n)| > h$ where $p(\tau_n, t_m)$ is the prevalence of STUs in a 3×3 window around the $n$th bin of delay and the $m$th bin of time ($1 < n < 30$, $1 < m < 156$); $p_1(\tau_n)$ is the prevalence of STUs across the fixation period before the saccade, calculated over time bins 1 to 60 (spanning 540 to 120 ms before saccade onset) at the $n$th bin of delay; $p_2(\tau_n)$ is the prevalence of STUs in the fixation period after the saccade, calculated over time bins 120 to 156 (spanning 280 to 540 ms after saccade onset) at the $n$th bin of delay; and $h$ is a significance threshold between 0 and 1. The threshold is set to $h = 0.7$ for illustration purposes in Fig. . For the analysis, the threshold value was set to $h = 0.3$ in order to include all perisaccadic STUs that might play a role in transsaccadic integration. As shown in Fig. , the modulated subset of STUs (Fig. ), representing the STUs that are significantly different between the fixation and perisaccadic periods, plays a major role in maintaining visuospatial integrity across the saccade. As shown in Fig. , replacing the weights of the modulated STUs in the model with their fixation values results in a gap in the readout of the neural responses, that is, an interruption of the detectability and discriminability at RF1 or RF2 in the perisaccadic period. In the next step, a subset of modulated STUs is identified as contributing to this visuospatial integration, i.e., to the continuity of the transitioning sensitivity from RF1 to RF2 across the saccade (termed 'integration-relevant STUs'). The contribution of each modulated STU to transsaccadic integrity is quantified by evaluating its role in maintaining the sensitivity of the neuron to either the RF1 or RF2 location across a saccadic eye movement: each modulated STU is removed one at a time, and we test whether the neuron's sensitivity decreases. The stimulus kernels of the fitted models ($K_{x,y}(t,\tau)$, Supplementary Fig. ) reflect changes in the neurons' spatiotemporal sensitivity at each probe location $(x,y)$ and delay ($\tau$) across different times relative to the saccade ($t$).
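A compact Python sketch of the Eq. (7) criterion follows (the exact bin indexing is our assumption from the text, and the input STU map is hypothetical):

```python
import numpy as np

def modulated_mask(stu_present, h=0.3):
    """Flag modulated STUs per Eq. (7). stu_present: boolean map of significant
    STUs, shape (30 delay bins x 156 time bins). Prevalence p is the 3x3 mean."""
    d, t = stu_present.shape
    padded = np.pad(stu_present.astype(float), 1)
    # local prevalence p(tau_n, t_m): mean over the 3x3 neighborhood
    p = sum(padded[i:i + d, j:j + t] for i in range(3) for j in range(3)) / 9.0
    p1 = p[:, 0:60].mean(axis=1, keepdims=True)     # fixation before the saccade
    p2 = p[:, 119:156].mean(axis=1, keepdims=True)  # fixation after the saccade
    return np.abs(p - p1) * np.abs(p - p2) > h

rng = np.random.default_rng(4)
mask = modulated_mask(rng.random((30, 156)) < 0.1)
print(mask.sum(), "modulated STU bins")
```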
The average spatial sensitivity of the neuron to the stimulus in RF1, $h_1(t)$, is quantified as: (8) $h_1(t) = \frac{\sum_{(x,y)\in RF_1} \sum_{\tau=1}^{T} |K_{x,y}(t,\tau)|}{9\,T}$ where $1 < \tau < T$ is the delay parameter in the kernels ($T = 200$ ms, the length of the stimulus kernels), indexing the history of the stimulus from time $t$, and $(x,y)$ ranges over the nine probe locations around the center of RF1 (Supplementary Fig. ). The spatial sensitivity index of RF1, $h_1(t)$, representing the average sensitivity in terms of the average absolute kernel values, drops after a saccade, while the spatial sensitivity index $h_2(t)$ for RF2 increases. A shared sensitivity index ($\delta$) across RF1 and RF2 is then defined as the minimum sensitivity to either location, summed across time relative to the saccade (gray area in Supplementary Fig. ): $\delta = \sum_{t=-500}^{+500} \min(h_1(t), h_2(t))$. Each modulated STU is considered integration-relevant if nulling its weight results in a decrease in the shared sensitivity of the neuron to RF1 or RF2.
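Put together, the integration-relevance test amounts to comparing $\delta$ before and after nulling a single STU's contribution. A small Python sketch with toy kernels (the array shapes and probe indices are hypothetical):

```python
import numpy as np

def shared_sensitivity(K, rf1_idx, rf2_idx):
    """delta = sum_t min(h1(t), h2(t)), with h as in Eq. (8): the mean absolute
    kernel value over an RF's 9 probes and all delays, per time bin.
    K: (n_probes, n_times, T) array of stimulus kernels K_{x,y}(t, tau)."""
    h = lambda idx: np.abs(K[idx]).mean(axis=(0, 2))  # average over probes, delays
    return np.minimum(h(rf1_idx), h(rf2_idx)).sum()

rng = np.random.default_rng(5)
K = rng.normal(0, 0.02, (81, 101, 200))        # toy kernels, 101 time bins
rf1, rf2 = np.arange(9), np.arange(9, 18)      # hypothetical 3x3 probe sets
delta_full = shared_sensitivity(K, rf1, rf2)
K_null = K.copy()
K_null[4, 50:60, :] = 0.0                      # null one modulated STU's span in RF1
drop = delta_full - shared_sensitivity(K_null, rf1, rf2)
print(drop)  # a positive drop marks the nulled STU as integration-relevant
```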
Reporting summary Further information on research design is available in the Reporting Summary linked to this article.
Adverse Childhood Experiences and Trauma-Informed Care: An Online Module for Pediatricians
036c40e1-2afb-49c0-8716-fe83898864e8
6952282
Pediatrics[mh]
By the end of this activity, learners will be able to:
1. Explain the science behind the effects of adverse childhood experiences (ACEs) and toxic stress on the health and development of children and the well-being of their families.
2. List the ways that ACEs and childhood toxic stress impact the health and development of children.
3. Give examples of how a pediatrician could initiate a conversation about previous trauma with a parent or patient.
4. Recognize the interactions and behaviors of patients and families who have been affected by toxic stress.
5. Describe approaches to helping parents and children who have been affected by toxic stress.
The epidemic of adverse childhood experiences (ACEs) is a public health crisis causing short- and long-term negative health outcomes in children, families, and communities. ACEs are stressors that can impact child development and health outcomes in adulthood. Stress can induce both psychological and physiologic responses within the body. Physiologic responses to stress include activation of the hypothalamic-pituitary-adrenocortical axis and the sympathetic-adrenomedullary system, which results in an increase in stress hormones. Chronic stress exposure and release of stress hormones can lead to deterioration in the body. Exposure to stress is unavoidable, but the impact on the body and mind changes depending on the type of stress and the subsequent response. There are three different types of stress responses that the body can experience: positive, tolerable, and toxic. A positive stress response occurs when the body returns to baseline levels of stress hormones relatively quickly and the experience is considered mild to moderate. Examples of positive stress range from giving an oral presentation at school to a minor car accident; these experiences can help a person learn and grow. A tolerable stress response can occur when events happen unexpectedly, such as the loss of a family member, but there are supportive individuals or communities who help reduce the stress response. Toxic stress is the term used for experiences or stressors that ultimately have lasting negative effects on cognitive, social, emotional, or neurobiological development. Examples of biological changes from toxic stress include changes in neuronal development, with overactivation of the fight-or-flight stress response and underdevelopment of other areas of the brain involved in executive function. ACEs can lead to toxic stress in the right environment and are prevalent in the United States. A survey of 4,023 youth, 12-17 years of age, found that 17.4% experienced physical abuse, 8.1% experienced sexual abuse, and 39.4% witnessed interpersonal violence. The original study on the prevalence of ACEs illustrated that more than half of the 9,508 adults who responded reported at least one ACE and that one-fourth reported two or more ACEs. The study demonstrated that those who experienced four or more ACEs, compared to those with none, had a four- to 12-fold increased risk for the following: alcoholism, drug abuse, depression, and suicide attempt. In examining the relationship between ACEs and chronic adulthood diseases, a graded dose-response relationship was found between the number of ACEs and the presence of adult diseases, including ischemic heart disease, cancer, chronic lung disease, skeletal fractures, and liver disease. Specifically, the more ACEs experienced during childhood, the more likely one would be to develop a chronic health condition in adulthood.
Patients with a history of ACEs often have developed certain behavioral coping mechanisms to help them manage thoughts and feelings that arise from their past adversity. , Some of these harmful adaptive coping mechanisms, such as substance use or overeating, can make it difficult for a provider to facilitate behavioral change if the trauma is not addressed. Moreover, these behaviors are not specific to a trauma history and can be subtle or mimic the symptoms of other conditions. It is vital that health care providers understand the prevalence and impact of ACEs on health and development to provide optimal care to patients and families. In fact, individuals with trauma may be more likely to turn to their health care provider due to the established and trusting relationship. Considering that patients and families may not always connect their trauma to certain health behaviors or be forthcoming about past adversity, it is important for providers to inquire about trauma to make appropriate diagnoses and recommendations for treatment. Pediatricians can play a vital role in a coordinated effort to promote healthy childhood development on both an individual and community level. The pediatrician can work with community organizations to minimize and mitigate stressors that are identified during clinical care. In addition to medical services, pediatricians can assist with developing evidence-based programs for children and families and collaborate with political leaders to impact policy changes. Providing care for patients and families while being mindful of potential past adversities and how these adversities can affect their decisions today is the art of practicing trauma-informed care (TIC). The Substance Abuse and Mental Health Services Administration discusses a TIC approach that entails realizing the impact of trauma, recognizing the symptoms, and responding to the patient by integrating knowledge about trauma into practices while not retraumatizing patients. This approach has the potential to strengthen the patient-provider relationship and, in turn, positively impact physical and mental health outcomes for patients. Although the significance of ACEs and TIC for health and child development is well studied, a structured curriculum on these topics is not available to pediatric medical residents within their training program. Many residents intuitively recognize and understand the significance of ACEs, but they often struggle with how to incorporate this knowledge into practice. There is uncertainty about how to counsel and care for patients who have previously experienced adversity. Residents rely on supervising physicians to teach them all aspects of medicine, including patient-family interactions. The type and amount of education that addresses ACEs and TIC are variable and often minimal within residency curricula. The American Board of Pediatrics delineates expectations for pediatric residents to meet certain milestones during training. ACE and TIC education is integral to achieving competency in multiple areas, most notably in professionalism and interpersonal and communication skills. 
A few examples include interpersonal and communication skills competency numbers 1 and 2: “Communicate effectively with patients, families, and the public, as appropriate, across a broad range of socioeconomic and cultural backgrounds” and “Demonstrate the insight and understanding into emotion and human response to emotion that allow one to appropriately develop and manage human interactions.” We sought to standardize education on these important topics, as the evidence is relatively new and evolving. It has become increasingly difficult to get residents and educators together for lectures or small-group discussions due to scheduling conflicts and clinical demands. For these reasons, an online module was chosen to disseminate this information to make it readily accessible and sustainable, without requiring the presence of specific educators. A review of the trauma literature revealed few training modules on trauma for medical professionals. A search in MedEdPORTAL using the term adverse childhood experiences yielded several results. One publication was a curriculum for medical students on the topic of ACEs, TIC, and resiliency. The results showed that the module, along with a facilitated case discussion in small groups, was an effective way to teach students about ACEs. The authors found that one area identified by the students for future improvement was more information on how to incorporate ACEs into clinical care. This is an important aspect that our module was designed to address. Another module, also for medical students and faculty, focused on teaching physical examination skills in a trauma-informed approach. The results revealed that the module led to an increase in knowledge of trauma-informed language and physical examination maneuvers. The results of these publications support the use of modules as an effective way to teach concepts related to ACEs and TIC. Ours is the first module on ACEs and TIC that is designed for pediatric residents, peer-reviewed, and available online. Our online module can help achieve the following goals: (1) to define ACEs and describe their influences on pediatric patients and their families and (2) to provide health care providers with useful clinical tools to recognize patients at risk for the negative impacts of ACEs and approach their care in a way that is mindful of their unique history and needs. The project team included a small group of pediatric residents and several child abuse and neglect (CAN) physicians. The team designed a project with the following aims as related to ACEs, TIC, toxic stress, and resiliency: (1) assess pediatric residents’ perceived importance of these topics, (2) increase knowledge of these topics, (3) change residents’ behaviors/practices related to these topics, and (4) influence the residency program's overall approach to these topics. In addition, the team established the primary goals for the development of the educational tool: It should be effective, accurate, succinct, and easily accessible to learners and should provide practical information for current and future practice. The team also planned to have the learning tool incorporated into the residency curriculum, requiring that it be sustainable over time without the presence of specific instructors and independent of scheduling conflicts or time constraints. With these considerations, as well as the needs of the learners, in mind, the module was designed to be self-guided, easily accessible, and available online. 
Two pediatric residents and CAN physicians attended an ACE master training session, designed to train the trainer regarding ACEs, TIC, and resiliency. The training session was an interactive 2-day lecture series that used a licensed PowerPoint presentation created by pioneer ACE researchers Dr. Robert Anda and Laura Porter. It provided background knowledge, emphasized the importance, and encouraged further dissemination of the aforementioned topics. The two pediatric residents utilized the knowledge gained at the training session and completed a literature review on these topics to develop an outline for an online, educational, and interactive module ( ). The residents created the online module and subsequently had key stakeholders edit and revise the module for accuracy. The stakeholders were identified as people with unique perspectives to provide feedback on the module and included pediatric residents, CAN physicians, adult education specialists, and an experienced ACE master trainer. To maximize interactive learning activities in the module, Captivate software (Adobe, San Jose, California) was used for primary module creation, with a similar PowerPoint version also created to allow distribution of the module to those who did not have access to Captivate. The PowerPoint version is included as . The Captivate version was disseminated to residents through an online education website, Desire to Learn (D2L), which was supported in the residency program. Once the module was made and available online, no further setup or teaching time was required from faculty. The module link was provided to residents at the beginning of their 4-week required rotation in child advocacy and protection (CAP) during the second year of pediatric residency. The module took residents an average of 25 minutes to complete. Residents could complete the module online through the D2L website during any free time. The module was not mandatory, but protected time was arranged and computer space provided during the rotation to facilitate higher rates of module completion. The module was designed to be completed individually and could potentially be followed up with small-group discussion of the topics. During the CAP rotation, residents worked closely with CAN faculty and the topics of ACEs and TIC were frequently discussed during clinical care. In other settings, the module could be completed prior to small-group discussions or as an adjunct to large-group lectures. If the module is used as a lecture guide or part of a small-group discussion, we estimate that it would take at least an hour to go through the material. To gather baseline data, the project team developed an online, computer-based survey ( ) to assess knowledge and comfort in discussing ACEs, TIC, toxic stress, and resiliency, as well as confidence in and frequency of incorporating the information into clinical practice. This survey utilized an online computer-based program (SurveyMonkey), which allowed for easy completion with internet access. Survey questions were developed so that subsequent postmodule survey data could be directly compared with baseline data, measuring the reaction to the module and the effects on learning and behavior in accordance with the Kirkpatrick model. The survey examined the general reaction to the module by inquiring if learners thought the module was effective and how likely they would be to recommend the module to others. 
To assess the learning and knowledge gained from the module, the participants rated their knowledge on the topics (ACEs, resiliency, TIC, toxic stress) on a 5-point Likert scale (1 = low , 3 = neutral , 5 = high ). To assess behavior changes, the surveys asked for a rating of frequency of discussion of these topics during a regular outpatient clinic visit on a 5-point Likert scale (1 = no visits , 5 = all visits ). To improve completion rates, the survey was linked to the online module, and opening the survey was required prior to beginning the module. The premodule survey results are described as median values ( n = 29). The residents received a postmodule survey 1-3 months following their CAP rotation ( ). This time frame was chosen to balance the response rate and determine if any behavior changes were established and sustained. This survey assessed the efficacy of the module in improving knowledge of, confidence in, and frequency of discussion of the topics. To protect anonymity and allow for matched results, the pre- and postmodule surveys contained unique identifiers. The survey results were analyzed using SPSS software (IBM, Armonk, New York). The Likert-scale answers were tested for normality using the kurtosis method and found to be not normally distributed. To compare the pre- and postmodule survey median scores of participants who completed both surveys ( n = 11), researchers used the nonparametric exact Wilcoxon signed-rank test. The postmodule surveys were sent to residents who completed the module through July 2018. The project, including the surveys, was approved by the Medical College of Wisconsin Institutional Review Board (IRB) on February 15, 2016. In accordance with IRB approval, survey and module completion were voluntary. The number and timing of email reminders for survey completion were also regulated. The survey and module were distributed to a total of 91 pediatric residents over the course of 19 months in 2016-2017. In the first month, the survey and module were sent to all third-year pediatric and medicine-pediatric residents, as these residents had already completed their required 4-week CAP rotation. In the subsequent months, second-year pediatric and medicine-pediatric residents were provided with the survey and module link during the orientation for their required CAP rotation. We were not able to assess how many of these residents completed the module, although we expect that it was viewed at a higher rate than our survey response rate based on verbal feedback from many residents who viewed the module but declined to answer the baseline survey. There were 29 residents who responded to the baseline survey, for a response rate of 32%. The postmodule survey sent out 1-3 months after the CAP rotation was completed by 11 residents, for a response rate of 12%. presents a breakdown of resident participants. Since the study period, the module has remained a part of the CAP rotation; thus, each resident graduating from the program is exposed to the module. In the surveys, questions were asked about confidence, importance, and frequency of discussion of certain topics. It should be noted that TIC is not something discussed but rather practiced. This should be clarified on future surveys. The results described in this section reflect the original wording in the survey. No personal information was obtained from survey respondents. One question asked about the type of continuity clinic in which the resident participated. 
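For readers who want to reproduce this style of analysis, the following minimal sketch runs the paired comparison described above in Python with SciPy rather than SPSS; the Likert ratings shown are hypothetical placeholders standing in for the 11 matched resident responses, not the study's actual data.

from scipy.stats import wilcoxon

# Hypothetical paired Likert ratings (1 = low, 5 = high) for 11 residents
# who completed both surveys; these are illustrative values only.
pre = [2, 3, 2, 2, 3, 2, 1, 2, 3, 2, 2]   # premodule confidence ratings
post = [4, 4, 3, 4, 4, 4, 3, 4, 5, 4, 4]  # postmodule ratings, same residents

# Nonparametric Wilcoxon signed-rank test for paired ordinal data that is
# not normally distributed; SciPy selects an exact or approximate p-value
# automatically depending on sample size and ties.
stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4f}")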
The majority of respondents participated in a suburban private practice with more than 50% Medicaid patients ( n = 15, 52%), followed by a federally qualified health center ( n = 9, 31%) and a suburban private practice with less than 50% Medicaid patients ( n = 2, 7%). Three people did not answer the question. This question was included to evaluate if there were differences in how residents responded to or incorporated these topics into practice based on where they spent the majority of their clinic time. With the low response rate in the survey, no significant conclusions could be drawn from this information. In the baseline survey, residents felt it was important to discuss ACEs (median = 5 [very important]), TIC (median = 4), toxic stress (median = 5), and resiliency (median = 5) with patients and families. Although residents felt it was important to address ACEs, many residents did not feel confident in discussing ACEs, TIC, or resiliency with patients and families (median = 2). Results from matched pre- and postmodule surveys demonstrated an increased confidence in knowledge of ACEs (from 3 to 4, p < .05), TIC (from 2 to 4, p < .05), toxic stress (from 2 to 4, p < .05), and resiliency (from 3 to 4, p < .05). Confidence in discussing all topics (ACEs, TIC, toxic stress, and resiliency) increased significantly from a median of 2 premodule to a median of 4 postmodule ( p < .05). provides a graphic representation of pre- and postmodule survey results. The importance of discussing ACEs, TIC, toxic stress, and resiliency was rated as important or very important on both the pre- and postmodule surveys. In addition to increasing knowledge and confidence in discussing the topics with families, the module was assessed for its ability to impact behavior change. The residents self-reported increased frequency of discussion of all topics in the postmodule survey. The percentage of residents who reported discussing these topics at some or most visits with families increased following module completion. Discussion of ACEs increased from 28% to 42%, TIC discussion increased from 13% to 42%, toxic stress discussion increased from 27% to 42%, and resiliency discussion increased from 25% to 50% ( p < .01 for all matched pairs). presents a graphical comparison. As a balancing measure, residents were asked how long a typical clinic visit took (in minutes). The responses in the pre- and postmodule surveys were the same, which helps support the idea that incorporating these topics into clinic visits does not significantly lengthen the encounter. All residents who responded reported the module to be an effective means of teaching these concepts and would recommend the module to others (only four answered this question; all others left the answer blank). Comments on the postmodule survey included the following: “The module has made me more aware of how ACEs impact my patients,” “I do consider a patient's background more before clinical encounters and at least feel a little more empathetic,” and “The training made me more aware of these issues and how they impact patients and their families.” The literature demonstrates the importance of understanding ACEs and their relationship to the development of chronic health conditions. However, there is not a standardized curriculum to educate pediatric residents on ACEs and TIC. Some medical schools are in the early stages of incorporating this education into training, but most residents have little to no exposure to these topics prior to residency. 
This gap in training and knowledge led to the creation of a module focusing on ACEs and TIC education for pediatric residents. The baseline survey results emphasized that pediatric residents found it important to address ACEs, TIC, toxic stress, and resiliency when interacting with families, yet they did not feel confident in doing so. After being introduced to these topics and provided with tools on how to address them with families, pediatric residents significantly increased confidence in their knowledge and ability to discuss or practice the topics. Perhaps most importantly, residents demonstrated behavioral changes as they reported more frequently addressing ACEs, TIC, toxic stress, and resiliency within a clinical encounter. The module functions to ensure that core elements are conveyed to all learners. It has the potential to be utilized in a variety of settings, including individual use, as preparation for a lecture or small-group discussion, as the core of a lecture or presentation to a group, or as a template for the development of a curriculum on these topics. In an effort to keep the module succinct and focused on ACEs and TIC, some related important topics were not covered in detail. Confidentiality and documentation of sensitive materials in the medical record were not stressed in the module, as they were addressed at other times during our pediatric residency training. Similarly, although mandatory reporting principles were included in the module, specific guidelines were not included, as our residents received separate lectures and training on them and more detail was not within the scope of this module. For other institutions, these may be important topics to consider covering more thoroughly with ACE and TIC teaching. The research team noted several limitations in the development and implementation of this module and survey design. The survey asked residents about the frequency of discussion of TIC; however, in actuality, TIC is something that is practiced, and trauma experiences or symptoms are discussed. The survey wording should be clarified if it is to be used again. Residents are interested in learning about these topics, although barriers to gaining this knowledge include limited time during medical training for additional topics and the difficulty in completing an optional module during their free time. To address these barriers, the project team embedded the module into an already established rotation, and the rotation provided protected time to complete the module. To allow for accessibility, the Captivate module distribution was done through D2L, an online interface with which residents were already familiar. Captivate requires a specific website platform for viewing and purchased software for editing; for these reasons, it is not included in this publication. For those who may not have D2L or a similar website with Captivate capabilities, the PowerPoint presentation format is a more widely accepted mode of accessing the module content. A disadvantage of utilizing the PowerPoint version is that the interactive and aesthetic features are not as robust; however, the content of the module remains consistent in both software platforms. In future use, the PowerPoint version may be enhanced through combination with a small-group activity or a voice-over facilitator guide to highlight certain points. Another limitation identified was completion of the surveys. 
To allow for a greater sample size, premodule survey completion was facilitated by requiring opening of the survey to access the module. Postmodule survey completion rate was low, which made gathering enough paired samples to produce statistically significant results more difficult. The limited postmodule survey response rate is likely because that survey was not mandatory to complete. The initial results with 11 paired samples show the module's success, although a higher response rate would allow its impact to be demonstrated more fully. Going forward, small-group in-person discussions either before or after the module may increase module participation and likely increase survey response rates. Additional opportunities for the growth of this content exist in adjusting the mode of delivery to learners, whether it be through module participation as a group activity or modification for use as a lecture. The survey responses demonstrate a need for pediatric residents to receive TIC and ACE education. To further address ACEs and TIC, the Medical College of Wisconsin Department of Pediatrics has formed partnerships to begin a system-wide approach to TIC. This may be an opportunity for the module to be adapted for more widespread distribution. Due to the lack of curricula and modules available for residency training on ACEs, TIC, toxic stress, and resiliency, we foresee medical schools, residency programs, and hospital systems looking for ways to provide awareness and education on these important topics. This module can be utilized to start this important conversation, as it is designed not only to provide education but also to empower residents to feel more comfortable having these discussions in everyday practice. The module can easily be adapted to other training programs, including medical or other professional health care schools, to increase and augment knowledge, awareness, and confidence in these topics. With further education on ACEs and TIC, we can begin to disrupt the cycle of trauma and its effect on chronic health conditions.
A. ACEs PowerPoint.pptx
B. ACEs Premodule Survey.docx
C. ACEs Postmodule Survey.docx
All appendices are peer reviewed as integral parts of the Original Publication.
An osteobiography of a celebrity chimpanzee reflects the changing roles of modern zoos
e1ffd635-58b9-4efc-9578-78f98b5c4716
11904210
Musculoskeletal System[mh]
Modern zoos are centres of education, research, conservation and entertainment – , yet the relative importance of these tenets, which were established in the 1920s , has shifted considerably through the late 20th and 21st centuries. Keeping wild animals in captivity was widespread in Europe from the sixteenth century as a symbol of wealth and prestige, as exotic animals became available with the expansion of global trade routes and colonisation . Entertainment and scientific curiosity were key draws of early zoos, which transitioned from private menageries into public spaces from the early nineteenth century. Zoos were historically centres for the exploitation of nature, but only became focussed on conservation and education (that extended to wider audiences beyond basic taxonomic classification) from the 1960s . These transitions of priorities are evident through the lives of great apes in captivity, whose close affinity with humans has often led to their own celebrity status: Lady Jane (Jenny) the orangutan ( Pongo pygmaeus ), who was taken from the wild in Borneo and brought to London Zoo, was met by Queen Victoria in 1842 who stated: “The Orang-Outang is too wonderful preparing and drinking his tea , doing everything by word of command. He is frightful and painfully and disagreeably human.” . Before her premature death at about five years old, Jenny was visited by Charles Darwin, whose observational notes on Jenny’s behaviour and emotions formed part of his arguments that the difference between humans and animals was one of degree and not of kind . In 1986 Jambo, a silverback western lowland gorilla ( Gorilla gorilla gorilla ) at Jersey Zoo, gained celebrity status by trying to comfort a five-year-old boy who fell into the gorilla enclosure and lost consciousness . This story was echoed in 1996 when Binti Jua, a female western lowland gorilla at Brookfield Zoo, became famous after cradling an unconscious child who fell into her enclosure . These examples are contrasted with the shooting of another gorilla, Harambe, at Cincinnati Zoo in 2016, which sparked international outrage and who was personified online through social media, calling into question the role of zoos in modern society . Owing to the longevity of great apes, individuals have been subject to vast changes in the human-animal relationship within zoos over half-century timescales. Osteobiography is a methodology widely used to understand humans from the past – and is infrequently applied to animals – . Here we create a comprehensive osteobiography to describe the life of Choppers, a celebrity western chimpanzee ( Pan troglodytes verus ). In doing so, we understand the changing mission of modern zoos from the perspective of one of their residents, who lived through vast changes in the priorities of zoos, in public perceptions, and in standards of animal welfare. Born in c.1970, Choppers was famous for her role as Ada Lott in the PG Tips television advertisements in the United Kingdom during the 1970s and was euthanased on health and welfare grounds at Twycross Zoo in England in 2016.
An osteobiography of Choppers the chimpanzee
Osteometric, geochemical and pathological analyses were carried out on Choppers’ skeleton, which is in the collections of National Museums Scotland in Edinburgh (register no. NMS.Z.2018.129.1), in order to gain insights into her life from infancy to old age. 
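The geochemical analyses below report results in standard δ (delta) notation. As a reminder (this is the conventional definition used across stable isotope studies, not a formula specific to this paper), a sample's isotope ratio R (e.g., 13C/12C or 15N/14N) is expressed relative to an international reference standard, in parts per thousand (‰):

$$\delta = \left(\frac{R_{\text{sample}}}{R_{\text{standard}}} - 1\right) \times 1000\ \text{‰}$$

Higher δ13C or δ15N values therefore indicate relative enrichment in the heavier isotope, which is how the dietary and geographical shifts described below are read from Choppers’ tissues.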
These analyses supplement archival information, available due to both her celebrity status and because she lived in a modern zoological collection, where information on husbandry is routinely recorded on the international Species 360: ZIMS (Zoological Information Management System) . Choppers was a western chimpanzee who was born in the wild in Sierra Leone between 1969 and 1970 . She was taken from the wild by poachers when she was about six weeks old, and it is suggested that she was shot in her right arm during capture and injured in her knee . This is evident from a remodelled malunion fracture to the proximal shaft of both the right radius and ulna, with both shafts deformed, and shortened by ~ 14% in comparison with the left side (Fig. ). Additionally, her right femur is ~ 4% shorter than her left one. Her mother would likely have been shot both for bushmeat and to enable Choppers’ removal and sale as a pet, thus her probable shooting injuries were incidental. It is also likely that many, if not all, of her social group were killed in her capture (e.g., see ). The trauma inflicted on Choppers during her capture affected her physically for the rest of her life, as in addition to the debilitating shortening of her forearm, her right elbow and left knee joints were subject to considerable arthropathy in comparison with her other long bones (Fig. ), and this caused her pain and difficulty in movement during her later years . These injuries would have affected her quadrupedal gait, and are likely the cause of her asymmetric pelvis and misalignment in some of the vertebral zygapophyses (thoracics 11, 12, lumbar 1). Choppers was purportedly rescued from the poachers by an aid worker, Diane Locke, who raised her like a human infant in Sierra Leone , , . She was likely consuming powdered milk before human weaning age (18–24 months) and a mixed terrestrial diet, including local pap or fufu, from ~ 4 months old .
Movement from Sierra Leone to Twycross Zoo
Choppers remained in Sierra Leone until she was three to four years old, at which point she was sent to the newly opened Twycross Zoo, in the United Kingdom, under the care of Molly Badham and Nathalie Evans, arriving on 26th June 1973 , . This move is evident from trace element and isotopic analyses of Choppers’ tooth enamel, which indicate a distinct geographical (δ18O drinking water) and dietary (δ13C diet) shift between the ages of three and four (Fig. ). δ13C diet values indicate that Choppers regularly consumed fruit from an early age, which is typical of the wild diets of infant chimpanzees, and of captive chimpanzees in the 1970s , . The reduction in δ13C diet values from Sierra Leone to Twycross Zoo likely reflects a decrease of C4 plants (e.g., cassava, yams) in her diet . The trace elements Ba, Sr, Zn and Fe corroborate this period of fluctuation in Choppers’ diet and location. High Sr and Ba can indicate a highly vegetarian (or low-meat) diet – , but the high Sr and Ba levels correlate with an increase in Fe and Zn, which can indicate a high-protein or meat diet , . Therefore, it is likely that in addition to dietary changes during this time, soil chemistry from different locations during Choppers’ movements has impacted trace element signatures. 
), an interruption in the development of enamel in teeth as a result of physiological, nutritional and/or psychological stress during development , which likely relates to these years of dietary turbulence , although we note that this is also common in wild chimpanzees , . Whilst Twycross was a leading centre for captive primate care in the UK, Choppers’ acquisition was underpinned by elements of exploitation. Molly Badham, co-founder of Twycross, states about chimpanzees in the 1970s: “If we were to continue with our tea parties and any other public appearances , we knew that we would have to buy some new young chimps to take over from our old-stagers. They should preferably be under the age of twelve months if we were to be sure they would adapt to living away from the family group. For once chimps have become accustomed to living in groups of their own kind and have learned to depend on each other , they never truly learn to depend on humans in the same way and will never trust them.” . Twycross Zoo and the Brooke Bond Tea Company used chimpanzees for supermarket promotion and television advertisements for the PG Tips tea brand from 1963 , with the chimps acting as humans, drinking tea, and dubbed over with human voices. Choppers played Ada Lott, the grandmother character (despite her young age of between four and seven years old) in the later TV adverts during the 1970s. These adverts helped PG Tips become the market leader of tea in Britain for 35 years . Her performance career was short, occurring before the onset of puberty, and Choppers probably retired at around the age of six or seven years old . In part this is due to behavioural change as adult chimpanzees become less predictable, but also as a result of human perceptions of the cuteness of adult chimpanzees compared to infants: “ Once they grow big and develop huge arms and chests and weigh up to 100lb they no longer look acceptable dressed up. ” . In the late 1970s Choppers transitioned from a relatively active life with high levels of direct interaction with humans, to a sedentary life with two companion chimpanzees, Noddy and Brooke, who were also retired from the entertainment industry . This was deemed a necessity due to the lack of prior interaction and habituation with non-performing chimpanzees, but would have inevitably been a less stimulating and physically smaller environment than that which could be provided with integration into a larger chimpanzee social group . Choppers’ potential daily movement for much of her life would have been greatly reduced compared to that of large social groups in much larger enclosures, and when compared to the 2–4 km daily travel distances of wild chimpanzees . Choppers’ diet during her performing years, and the years immediately following was instrumental in her growth, development, and health throughout life, as female chimpanzees reach adult size in captivity around the age of 11 years old . The performance diets of chimpanzees at Twycross mimicked those of humans, following on from a longstanding trope in the mid-twentieth century of chimpanzees participating in tea parties, eating cake, drinking ‘tea’, and ‘apeing’ human behaviours and society , , . This fascination with primate anthropomorphism in Northwest Europe is documented from at least the 18th century when apes were first imported from Africa and Asia . 
The chimpanzees drank fruit juice or milk rather than tea during tea parties and advertisements , and it is likely that Choppers had a predominantly fruit-based diet as indicated by δ13C diet values (Fig. ) and feeding practices of chimpanzees at the time , . Throughout Choppers’ life, values of δ13C diet, δ15N diet, and δ34S are consistent with a mixed terrestrial plant and animal diet, and so Choppers would have been receiving diverse supplemental foods beyond fruit during this time, including high sugar treats, but likely also protein sources such as eggs . Based on her mean femoral measurements, Choppers was smaller than average for both captive and wild female chimpanzees, yet her weight was much higher than that of wild chimpanzees, and typical of that of captive female chimpanzees (Fig. ). This is likely a result of a positive energy balance due to greater calorific intake and lower physical activity than those of wild chimpanzees , which may have been exacerbated by her injuries and the subsequent development of arthroses that reduced her mobility. Captivity is often associated with obesity , , but it is noted in her veterinary record (19/04/2012), when Choppers was 43 years old, that she had always been a lean chimpanzee, and we note that her maximum weight likely reflected a conscious effort to increase her body condition following illness (Fig. ). Whilst Choppers’ diet in the 1970s/1980s must have provided the calorific content and macronutrients to obtain her large adult size, her dental pathologies are indicative of mechanical deficiencies in her diet. Her canine asymmetry in her maxilla and mandible, and dental malocclusion are likely to have developed during adolescence. Mechanically softer diets have been found to cause malocclusion in humans , non-human primates , and other vertebrates . Soft foods are prevalent in modern human diets and have been a staple of zoological feeding programmes across many taxa through the latter half of the 20th century , . Whilst wild chimpanzees consume significant quantities of fruit, they must dehusk and process hard outer materials, and must spend more time chewing and consuming tougher fruit that is of a lower calorific content than cultivated fruits for human consumption, thereby increasing the peak and cumulative mechanical stresses in the skull and mandible . The elongation of Choppers’ maxilla is characteristic of captive chimpanzees (Fig. ). Alongside her dental pathologies and associated anterior bone proliferation, her long rostrum is likely due in part to the mechanical influence of a soft diet during the development of a weaker musculo-skeletal system , .
Late life
Choppers’ preserved remains provide us with information from her early life, through her tooth enamel, pathologies, and skeletal morphology, which formed during development. They also provide us with information towards the end of her life due to the turnover of bone and keratinous tissues, which hold isotopic signatures of diet and health, and from the development of pathologies associated with old age. A limitation of an osteobiography of a long-lived animal is that there are decades of Choppers’ adult life which cannot be accounted for until a few years before her death (Fig. ). Choppers was re-housed with another chimpanzee, Bobby, on 8th March 1982, and together they had one daughter, Holly, born on 27th December 1982 (when Choppers was 13). 
Replacing Bobby, Louis (another performing chimpanzee who had played Mr Shifter in the PG Tips adverts , ) had been rehoused with Choppers and Holly in 1989. Holly was later re-housed and lived at Twycross Zoo until her death on 9th November 2023 . The turnover rate of femoral bone in chimpanzees is ~ 10 years – , and consequently we can infer changes to Choppers’ diet through her bone collagen in the last 5–10 years of her life. Therefore, we pick up Choppers’ adult story through her tissues in 2006 (~ 10 years before death), when Choppers was 38 and cohabiting with Louis. This period is complemented by well-documented health and veterinary records from 2010 and dietary information since 2011 (see Twycross Zoo Chimpanzee Diet Sheet 2011 within the Supplementary Information ). Choppers’ diet in her later years consisted of commercial primate pellets, browse and vegetables, limited quantities of fruit and yoghurt, and the occasional egg. This contrasts with the diet during her early years, which is presumed to have been high in fruit and sugar. The δ13C diet values from Choppers’ bone samples (signifying the last 10 years of her life) are stable, and higher than those of her early life in Sierra Leone and Twycross Zoo (Fig. ). This reflects a prescriptive modern diet plan, and an increase in C4 plant intake from maize protein in primate pellets, and from flaked maize and popcorn scatter feeds. Average life expectancy of wild chimpanzees ranges from the late 30s to the 40s, with maximum ages of some individuals surpassing 50 years . Chimpanzees exhibit an increased susceptibility to infections and bone-related pathologies from their 20s and experience reproductive and cognitive decline after the age of 30 – . Thus, Choppers lived ~ 17 years of her life as an elderly chimpanzee (post 30 years old) before her death at 47. She had degenerative skeletal pathologies associated with old age, but these may have been exacerbated by her traumas during infancy and subsequent mobility impairment, as well as her environment (e.g. harder substrates found in captivity ) and diet. Owing to thorough documentation of Choppers’ health and veterinary record since 2010, we are able to compare known and suspected health issues in life with resulting skeletal pathologies in death. Ten of her molars and premolars (81% of teeth analysed) exhibited enamel wear with resulting dentin exposure of up to 1/3 of the total occlusal surface, with two teeth (12.5%) exhibiting dentin exposure over 1/3 (see Table S3 of the Supplementary Information ). This level of wear is typical of wild chimpanzees . She had chronic alveolar osteodystrophy, chronic low-grade periodontal disease, and severe bone proliferation around her mandible and maxilla ( Fagan and Woody , pers. comms. ). This is consistent with persistent gum infections in the winter and spring of 2012 (four years before death). Choppers’ abnormal cranial shape has resulted partly from an unknown developmental trauma (canine asymmetry) and the extensive proliferation of alveolar bone. She was diagnosed through a biopsy as having peripheral odontogenic fibroma in July 2012 , but this is not reflected in her skull morphology. Choppers had bilateral mandibular tori growth (Fig. ). Whilst this likely has a genetic component, it may be a consequence of increased mechanical stress as a result of temporomandibular dysfunction, or due to parafunctional activity such as bruxism due to psychological stress – . 
Mandibular tori may also form due to dietary deficiencies, or an excess of calcium supplementation , both of which are unlikely given Choppers’ varied diet plan in late life (see Twycross Zoo Chimpanzee Diet Sheet 2011 within the Supplementary Information ). Choppers’ long-term companion, Louis, died in 2013 and so at the age of 45 years old, Choppers had to integrate with a new group of chimpanzees. It is likely that Choppers’ hand rearing and later life spent paired with a single chimpanzee may have significantly compromised her ability to socialise with other chimpanzees . Captive older female chimpanzees exhibit more submissive behaviours than younger chimpanzees , and high-value social relationships between chimpanzees are harder to establish later in life . Choppers’ introduction to Benji, Rosie and Tuli in July 2013 led to minor fighting and superficial injuries, with some bullying occurring from Tuli for several months . Choppers’ skeleton shows multiple indications of diffuse idiopathic skeletal hyperostosis (DISH), and extensive arthritic degeneration in most joints of the long bones (Fig. ). DISH is typically manifested as smooth ‘candle-wax’ osteophyte formation surrounding the vertebral bodies due to ossification of the anterior ligament, and in Choppers the three caudal-most vertebrae and the sacrum are fused together. The cause of DISH is unknown and is usually asymptomatic, but it has been associated with rich diets and excess body fat in humans, as well as being associated with coronary heart disease, diabetes mellitus, and inflammatory bowel disease . Whether these factors are causal or correlated, and whether DISH causes pain, is unclear. However, DISH, alongside extensive arthroses within the lumbar region would have limited Choppers’ mobility. Visual inspection of her movement indicated arthritis in Choppers in 2013 (only 2.5 years from her death) . In her later life it was apparent that arthritis (diagnosed in 2015, but likely to have manifested much earlier) was significantly affecting her movement, affecting her hind legs (right knee injured during her capture) on occasion (22/01/2016), and often affecting her injured right arm, leading to significant muscle wastage on this limb . 
Using hair and serially sectioned nail samples, we gained insight into the final four months of Choppers’ life (Fig. ). Choppers exhibited marked weight loss of ~ 25% during her last year of life as her health deteriorated . Such physiological stress can be associated with an increase in δ15N in body tissues due to tissue catabolism , but Choppers’ tissues markedly decreased in δ15N in her final year. We speculate that this change is due to Choppers consuming less high-protein food (nuts, yoghurt, eggs, etc.) during her final months, even if these were available in her diet. Choppers, the last surviving chimpanzee from the PG Tips tea commercials, was euthanased on 20th April 2016 following observed jaundice, a persistent cough, and lethargy, and in light of severe weight loss and behavioural change . A post-mortem report indicated that Choppers suffered from chronic hepatitis and cardiomyopathy. She also had yersiniosis ( Yersinia enterocolitica ), which was the first known case in a captive chimpanzee .
Osteobiography as a tool has been applied to human lives in the past, most often of people beyond contemporary human memory, where biographical archival material is limited – . 
There is an increasing application of the tool to understand the lives of modern animals, whose individual histories are poorly understood owing to their lack of interaction with humans during life , . Choppers lived during a period of recent history within a well-documented zoo, and her celebrity status means that rich archival information about her life has been readily available; yet, as a chimpanzee, first-hand accounts of her life are not possible. Animal voices and experiences are obscured by a lack of human understanding and by human representation of what we think the animal experience was , and whilst here we describe Choppers from a human point of view, we provide an analytical perspective in death, creating a richer context for first-hand human accounts of her life: We know that Choppers was taken from the wild, but here we can visualise the sharp dietary and geographical changes that ensued directly through her body - effects which she carried through life. We have visualised and scored the extent of bone eburnation and osteoarthritis in Choppers’ right elbow and knee, which adds weight and empathy to accounts of her injuries during capture and her mobility difficulties in later life. We have categorised the extent of DISH on her spine, which lends additional evidence to her reduced mobility - something that would be difficult to appreciate through observation during her life alone.

Choppers highlights the efficacy of osteobiography in understanding the formative years of an individual (encapsulated through developmental plasticity, injury, and the formation of tooth enamel), and the years before death (through age-related skeletal pathologies and the chemistry of tissues which turn over throughout life). By analysing different tissues, such as tooth enamel from teeth that erupt at different times, bone, hair, and nail, we can obtain snapshots of her diet and physiology at different stages of her life, from infancy to very old age. However, the long period that Choppers lived through, from the 1980s until the 2010s, is largely undocumented and without trace in her physical remains.

Choppers’ cranial and postcranial morphology, which differs from that of her wild conspecifics, creates a story not just of herself, but of captive chimpanzees of the late 20th and 21st centuries, who have experienced similar shifts in environment, husbandry, and human-animal relationships. Her story is representative of the PG Tips chimpanzees, but also of great apes in captivity globally that have experienced shifting conditions and attitudes over decadal timescales. By the 1970s Twycross Zoo was (as it is today) a leading authority on primate care and breeding , , , yet attitudes towards wild animals in captivity and the role of zoos have changed considerably from Choppers’ birth in 1969/1970 to her death in 2016. The origins of animal welfare and modern zoological research in Britain are found in the 18th century , , but it is apparent through her traumatic capture from the wild and her use in television that Choppers represents a period, stretching from the beginnings of European menageries and modern zoos, in which animals were routinely extracted from the wild and entertainment was central to human relationships with wild animals. However, Choppers lived through widespread advances in zoological research, welfare, and conservation - all core tenets of modern zoos.
The dietary change between her performing years in the 1970s and her life in the 2010s exemplifies this change in knowledge and husbandry (e.g. through a reduction in cultivated fruit, which is higher in simple sugars than wild fruits, and by providing a diet that better replicates the nutrient profile of wild diets , ). Whilst nature conservation in Britain can be traced to the 17th century , it rose to prominence in its modern form through the latter half of the 20th century , and zoos were instrumental in developing Taxon Advisory Groups from the 1980s, and later Species Survival Plans (USA)/European Endangered Species Programmes (Europe) and international/regional studbooks for better conservation management . The introduction of the Convention on International Trade in Endangered Species (CITES) during the mid-1970s made the removal of animals from the wild more difficult and less common, thereby further promoting breeding programmes within captivity.

Choppers’ extraction from the wild in 1969/1970 resulted in lifelong physical injuries and likely the death of multiple wild chimpanzees. She was rescued by Dianne Locke and later by Twycross Zoo on justifiable welfare grounds, but the rescue involved further exploitation of Choppers in television adverts in the 1970s - something that would be unacceptable today. Whilst direct human interaction and performing may have been enriching for the young chimpanzees involved, they were temporary in nature, and the use of chimpanzees in commercials may have actively undermined conservation goals by distorting public perceptions of wild animals . Indeed, the withdrawal of the high levels of stimulation experienced during these early performing years would likely have been highly traumatic, with effects felt over several decades. Despite this, through changes to zoo practices over the last 40 years, which have resulted in a shift in zoos’ core priorities, Choppers died as an ambassador for her species in captivity, and not as an ageing entertainer ‘apeing’ human behaviour.

DISH, dental pathologies, and extensive arthroses (which are widespread beyond Choppers’ injured limbs) are all likely to be related, at least in part, to her old age, and so Choppers’ later life raises new questions regarding the management of ageing zoo animals, a population made more prevalent by husbandry and veterinary advances . Whilst Choppers’ story tells us about changing zoo practices through time in Britain, there is considerable global variation in zoo scrutiny, management and welfare today. Despite regional and global accreditation of zoos and improved regulation, the illegal trafficking of chimpanzees and other primates into private collections and disreputable zoos continues . Choppers’ story, as told by her remains and archival records, is testament to the many thousands of chimpanzees that were forcibly extracted from the wild - for zoos, circuses, laboratories and private collections - and similar stories will continue to be revealed as modern chimpanzee populations are exploited today and in the future. Choppers was not an unusual chimpanzee, but her story is an individual one, which resonates with human attitudes towards wildlife, zoos, entertainment, welfare and quality of life.

Choppers’ skeleton, nails, and hair were prepared at National Museums Scotland, where she is registered as part of the research collections (register no. NMS.Z.2018.129.1).

Pathological analysis

High-resolution photographs were taken of Choppers’ dentition and jaws to allow detailed assessment of oral pathologies.
These photographs were reviewed by veterinary dentists Dr David Fagan (The Colyer Institute) and Dr Allison Woody (San Diego Zoo). The percentage of dentin exposure on Choppers’ molars and premolars was calculated from photographs using ImageJ software. Estimates of total occlusal surface and total exposed dentin were made where postmortem tooth damage had occurred in small discrete locations, e.g. at the edge of the occlusal surface where enamel had been removed for isotopic analysis.

Choppers’ skeleton was examined for skeletal pathologies in the following broad categories: traumas, including healed fractures; osteoarthroses, mostly of the long bones, where osteophytes and eburnation are apparent; and spondyloarthroses, including the presence of osteophytes on the vertebrae and Diffuse Idiopathic Skeletal Hyperostosis (DISH). It may be difficult to be certain whether arthroses have developed from non-inflammatory or inflammatory causes, so for the purposes of this analysis no further distinction was attempted. In addition to a description of the pathologies, the degree of development of osteophytes on different parts of Choppers’ skeleton was recorded following . Scores range from 0 (no osteophytes) to 5 (fusion of joints by osteophytes). Arthroses, spondyloarthroses and DISH were scored on left and right sides separately for all long bones, vertebrae and the sacrum/pelvis.

We obtained 3D scans of the skulls and mandibles of 37 adult female chimpanzees (20 captive, 17 wild). Western chimpanzee specimens ( P. t. verus ) were supplemented with Nigeria-Cameroon ( P. t. ellioti ), Central ( P. t. troglodytes ), hybrid, and chimpanzees of unknown subspecies to create a larger sample size. Scans were obtained using an EinScan H structured light surface scanner (accuracy: ±0.05 mm) and through MorphoSource ( www.morphosource.org ).
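The next paragraph summarises how these scans were analysed. As a rough sketch of that landmark-based workflow, here is a minimal R example using the geomorph package named in the methods; the file and object names are hypothetical, and the actual landmarking protocol is given in the Supplementary Information.

```r
library(geomorph)

# Hypothetical input: a p x 3 x n array of 3D landmark coordinates
# (p landmarks per specimen, n = 37 skulls), exported from 3D Slicer
# and assembled beforehand.
coords <- readRDS("skull_landmarks.rds")

# Generalised Procrustes analysis: removes differences in position,
# scale, and orientation, leaving shape (plus centroid size).
gpa <- gpagen(coords, print.progress = FALSE)

# Principal component analysis of the aligned shape coordinates.
pca <- gm.prcomp(gpa$coords)
summary(pca)

# Ordination of specimens in shape space (PC1 vs PC2 by default);
# points could then be coded by captive/wild status or age class.
plot(pca)
```

Superimposing the landmarks before the PCA is what makes the principal components interpretable as shape variation alone, rather than differences in specimen position, size or orientation.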
We used 3D geometric morphometrics to characterise the size and shape of both the skull and mandible (see Supplementary Information for specimen list and landmarking protocol). Landmarks were placed using 3D Slicer and imported into the R environment for analysis using the SlicerMorph package . Procrustes superimposition and principal component analysis were performed in the geomorph package in R. The results of this analysis (Fig. ) are provided within the Supplementary Information in relation to specimen age class (prime adult: 13–30 years old; old adult: 30+ years old).

The greatest lengths of the left and right femur and humerus were taken from Choppers and 51 captive adult chimpanzees using digital dial calipers (accuracy: ±0.1 mm). Published average wild female western chimpanzee femur length was used for comparison. Body weights for Choppers, and maximum body weights for 20 captive adult female chimpanzees, were obtained from Species360: ZIMS (Zoological Information Management System) and compared with published average body weights from wild chimpanzee populations .

By combining stable isotope and trace element analyses across different tissue types and structures, we reconstructed Choppers’ diet over time (Table ). Tissues that grow incrementally, such as teeth, nails and hair, are ideal for studying diet at various stages in life as they record the stable isotope values at the time of tissue formation . Tooth enamel does not remodel once formed, and therefore carbon and oxygen incorporated into the enamel hydroxyapatite structure are retained throughout life, serving as a record of diet during enamel mineralisation . The enamel of different teeth forms at different times during development (Table ), giving an almost annual insight into the first seven years of Choppers’ life. Bone remodels constantly, so that stable isotope analysis of bone reveals the average diet over varying periods of time – . The histological development of bone is similar between humans and chimpanzees . For example, femoral bone reflects an individual’s diet over approximately the last 10 years of life , , whereas ribs have faster turnover rates and represent diet from within a period of five to 10 years prior to death , , . Sections of hair and nail are representative of diet in the weeks and months prior to death. Assuming a human nail growth rate of 3 mm per month , Choppers’ 11-mm-long nail is representative of the food she ate in the last four months of her life. Primate and human hair grow at comparable rates , and strands of Choppers’ hair provide data on the last two weeks of her life. The methodological procedures for isotopic and trace element analyses are described in detail within the Supplementary Information .
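As a back-of-the-envelope check on the tissue time windows described above, here is a minimal R sketch using the growth-rate figures cited in the text; the application of human rates to a chimpanzee is the assumption stated above.

```r
# Nail: assuming a human growth rate of ~3 mm per month, an 11-mm
# nail records diet over roughly the preceding four months.
nail_growth_mm_per_month <- 3
nail_length_mm <- 11
nail_window_months <- nail_length_mm / nail_growth_mm_per_month
nail_window_months  # ~3.7 months

# Other tissues, using the turnover windows cited above (years
# before death over which each tissue averages diet).
tissue_windows_years <- c(femur = 10, rib_low = 5, rib_high = 10)
tissue_windows_years
```

Hair, growing at comparable rates in primates and humans, covers only the final couple of weeks, which is why hair and nail together bracket Choppers’ last months while bone averages her final decade.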
Reading between the lines: exploring the unwritten rules of letters of recommendation in the Canadian resident selection process
Resident selection is a high-stakes process that has received considerable attention in recent years, including discussion of the fairness and effectiveness of the methods used to make decisions. , Letters of recommendation (LORs) are commonly used in the resident selection process, but they have come under increasing criticism for the variability in how they are written and assessed - , the presence of gender and other biases , , and their inability to discriminate well between applicants. , Some scholars have also noted that reading LORs is akin to “reading between the lines” or deciphering a code. , , Additionally, faculty members are rarely taught how to write LORs, nor do they receive feedback to help improve the effectiveness of their LORs. , , Such findings suggest that a key factor limiting the effectiveness of LORs is the set of unwritten rules and hidden practices surrounding how they are written and read.

Despite the foregoing challenges, several studies have highlighted the value of LORs in the resident selection process, particularly for their ability to provide insights into applicants’ strengths and weaknesses , and their potential to predict future performance in a training program. , Accordingly, efforts have been made to reduce LORs’ variability by implementing standardized LORs - and by creating faculty development tools focused on writing LORs. , Researchers in medical education and other fields have also made efforts to better understand the language and content of LORs and the practices surrounding their assessment. , , - These studies have identified the substantial impact of unwritten rules on LORs, such as nuances in language , , and features beyond the letters themselves, such as their writers , , or how their content interacts with other application materials. , While other scholars have identified some of these unwritten rules as a subset of larger research findings, we have not found any studies that focus specifically on exploring the unwritten rules for writing and assessing LORs, despite the impact of these rules on the effectiveness of LORs. Our study sought to gain insight into these unwritten rules by exploring the practices surrounding LORs, including how they are written and interpreted. By exploring these unwritten rules more fully, we begin to make tacit knowledge and practices more visible and facilitate fuller critiques of LORs. These insights can contribute to improving the effectiveness and fairness of LORs in the resident selection process.

Theoretical framework

To explore the unwritten rules of LORs, we drew on theories of genre - and Aristotle’s rhetorical appeals in persuasive communication. Genre refers to patterns in language that shape and are shaped by regularities in social practices. According to Swales, , genres are the mechanism by which communities achieve their collective goals through language. Genres are often guided by unwritten rules about what can and cannot be said, and the social practices surrounding genres can sometimes be hidden or only accessible to specific audiences, such as genres that are used in confidential, high-stakes decisions and processes like LORs for resident selection. , Genre theory has been used in previous medical education studies to demonstrate how unwritten rules and inaccessible practices can present challenges for new or peripheral members of a community seeking to learn and effectively engage in these genres.
Given the connection between language and practices, research that draws on genre theory focuses on analysing language and practices within their specific contexts of use. Therefore, our research focuses on the unwritten rules and language practices of LORs in one Canadian academic medical community.

To add clarity to emerging patterns in the data, we supplemented genre theory with Aristotle’s rhetorical triangle, which depicts three rhetorical appeals in persuasive communication: ethos, logos, and pathos. Ethos refers to appeals to the credibility and trustworthiness of the communicator, in the case of our study, the LOR writer. Logos refers to appeals to logic and reason, in the case of our study, how writers establish a clear and logical argument or case for an applicant. Pathos refers to appeals to the audience’s emotions, or the ability of the communicator to relate to the needs and values of the audience who, in the case of our study, are the faculty who evaluate LORs as part of a selection committee. These appeals typically refer to the communicator’s persuasive strategies, but the uptake or recognition of such strategies is a key element of effective communication. , As such, for our study, we focused on the strategies that writers use to appeal to ethos, logos, and pathos and the strategies that readers identify or respond to as ethos, logos, and pathos.
We took a rhetorical perspective on our analysis of LORs; that is, our study design and analysis focused on how goals are achieved through language. We conducted a qualitative study that included gathering information about the social practices surrounding LORs and the rhetorical choices that faculty members make when writing or reading LORs. We received approval from the University of Manitoba Health Research Ethics Board to conduct this study (File # HS23568 - H2020:016).

Setting and participants

The setting for our study is a regional academic medicine community in Manitoba, where the resident selection process is coordinated nationally and facilitated by a third-party organization through which programs can request that applicants provide LORs. A disproportionate amount of research on LORs focuses on surgical specialties; therefore, to provide additional perspectives, we chose to focus on two different specialties, Internal Medicine and Psychiatry, which are two of the largest non-surgical specialty programs in Manitoba. These two programs both require applicants to submit three LORs with their application. We sent a recruitment e-mail to all faculty members in the departments of Internal Medicine (IM) and Psychiatry (P) at the University of Manitoba. Participants were given the choice to participate as a LOR writer or as a LOR reader. In total, 18 faculty members consented to participate. The breakdown of department, interview type, and level of experience is outlined in . In addition, during the interviews, 11 participants provided perspectives on their experiences with both reading and writing. The interviewer did not explicitly query these participants on both activities, but when these perspectives were offered, they were included in the analysis for the appropriate activity.

Data collection

We conducted semi-structured, discourse-based interviews between June and December 2020. Discourse-based interviews involve guiding and observing participants as they engage with a text to elicit information about their tacit rhetorical and genre knowledge. For interviews focused on writing LORs, participants provided a de-identified sample of a LOR they had previously written for resident selection. For interviews focused on reading LORs, CR and NP fabricated a sample LOR, which CR provided to participants (see ). These sample LORs guided discussion about writers’ choices and readers’ attention with regard to content, language, and formatting. In addition, CR asked all participants about the practices surrounding LORs (see ). Interviews were 25-50 minutes in length and were audio-recorded and transcribed verbatim.

Data analysis

We conducted iterative data analysis facilitated by NVivo 12 software. First, CR conducted descriptive coding of the IM interviews to develop a preliminary coding framework, guided by key concepts from genre analysis, which BC then applied to a sample of four interviews. The two coders discussed any similarities and differences in their coding to refine the coding framework. CR then applied the coding framework to all the interviews, revising it as new information arose. To gain clarity on salient patterns we identified in the data, we categorized descriptive codes using Aristotle’s concepts of ethos, logos, and pathos.
The four authors discussed analyses at key intervals to check shared understanding of the data and to provide perspectives from our diverse experience. Following analysis, we invited participants to provide feedback on a summary of findings.

Reflexivity

Our diverse expertise and experiences affected how we collected, analyzed, and interpreted our data. CR and BC are PhD-trained researchers who brought experience conducting qualitative research in medical education to data collection and analysis. Additionally, the theoretical and methodological approach to the study was informed by CR’s expertise in applied linguistics and discourse analysis and BC’s expertise in anthropology. NP and WF are educational leaders and clinicians who provided perspectives from their experience as LOR writers, resident selection committee members, program directors, and educational scholars in their respective disciplines of Internal Medicine and Psychiatry.
Through analysis of the interviews, we identified several ways in which participants drew on tacit knowledge and unwritten rules to engage in the social practices surrounding LORs and to construct or respond to the ethos, logos, and pathos in LORs. These patterns were consistent across the two disciplines, and so we have not distinguished between them in the descriptions of our results.

Social practices surrounding LORs

Participants described how the practices of writing and reading LORs are largely guided by unwritten rules and individual practices. For example, some participants described writing LORs as “one of the tasks that you often have to perform that no one trains you for” (IM3). While some limited explicit guidance exists, writers often rely on experience from sitting on a selection committee or advice from colleagues, and often develop their own templates over time. Similarly, readers described a variety of practices across programs for assessing LORs but indicated that they tend to provide gestalt assessments of LORs rather than use formal rubrics; for example, “We think about is it an average letter or a strong letter...but we don't necessarily provide specific criteria of what a strong letter would be” (IM10).

Additionally, participants highlighted how LORs are one tool that is considered in relation to the whole application and selection process. For example, some participants described how LORs help to verify the information that is provided in other application documents; that is, an applicant’s “story should meld” (IM12). Other participants described how LORs provide information that does not exist in the other application documents. This includes information that “can't always be captured within performance scores” (IM2) or “experiential feedback” that “supplements all the other information that's part of the application package” (P2). One of the most common purposes for the LOR, however, was as a “screening tool” (P2) for identifying potential concerns with applicants by “[looking] for the rare person that you probably don't want in your program” (IM6). However, participants also described how writers generally only write LORs for applicants whom they support and have had sufficient time observing and, in general, decline to write LORs otherwise.

Rhetorical appeals in LORs

Ethos
Interviews revealed that the credibility and trustworthiness of the writer play an important role in writers’ choices and in how readers interpret and assess LORs. Participants described some of the strategies for determining credibility and trustworthiness that are explicitly outlined in LOR guidelines, such as indicating the length and nature of an interaction with an applicant to demonstrate that a writer has sufficient knowledge of the applicant. However, writers and readers appear to depend largely on tacit strategies for appealing to ethos.

One unwritten rule, according to participants, is that a LOR’s credibility and trustworthiness is often dependent on the writer’s experience, reputation, and specialty. Interview participants who focused on writing LORs and are relatively new or unknown members of the academic medicine community noted that it is important to demonstrate their ability to make a reliable evaluation of an applicant, such as by providing information about their experience with supervising students because, “it tells them I’ve seen a lot of medical students in the past. If I’ve seen a lot of medical students, maybe then I’ll have a pretty good idea of where they’re at” (IM6). Similarly, when reading LORs, participants explained that they first identify who the writer is, whether they know the person, and where the person is located. Readers explained that they tend to trust LORs more if they know the writer or the writer is from the same discipline: “For better or for worse, it means more to me that it’s coming from... somebody within our specialty. If I know, and maybe think highly of the referee, that helps” (IM1). Even when readers do not know the writer, they can be influenced by patterns they see across one writer’s LORs: “When you read enough letters, you start to see some patterns and you realize that [this person] always writes that in all of his letters, and so I can’t believe what [he] writes anymore” (IM1). Some readers also noted that the writer’s cultural or institutional context influences their trust in the letter, for example, when “all of the letters from this center or from this department are so, so strong that it's hard to know if that’s just how they write letters... versus if it is indeed a class of very exceptional applicants” (IM10).

A writer’s credibility is also established through the quality of the LOR. Quality was discussed in terms of spelling, grammar, structure, and formatting, but also in terms of originality and authenticity. One participant explained: “When I read a really badly written letter of reference, sometimes I focus more on how badly written it is than its content. There are certainly times when you read a letter and you go, ‘This person didn’t care at all. They couldn’t care less to sell the attributes of the person they’re writing a reference for,’ and it’s hard to value that kind of a reference letter.” (IM3) However, the quality of the LOR appears to be an especially important consideration for writers who perceive themselves as “not somebody who’s got a nationally known name out there” (IM4) and so, as this participant described, “I need to make sure that my letters sound good and grammatically correct…If they don’t know me and I write a sloppy letter, I don’t think it’s going to carry much meaning to them” (IM4).
Another participant explained that some physicians’ reputations may take precedence over the quality of their LORs, for example: “There are physicians who write letters for people here and even though it’s a pretty generic letter you’d still consider it a nod from that person. However, if that person’s applying elsewhere, you may not know that this attending probably thought strongly just because their letters may not [be] overly descriptive or flowy. That can put people at a disadvantage going elsewhere.” (IM10)

Logos

Writers and readers described different perspectives on the goals of a clear and logical argument in LORs. Writers tend to focus on highlighting how exceptional an applicant is and are hesitant to include comments about applicants’ weaknesses, “not only because people don’t like writing bad things, but also...if you have something bad to say, you’re opening yourself up to problems without any protection” (P5). If writers do include information about an applicant’s weaknesses, they “try to put a constructive slant on it” (IM8). Conversely, readers tend to focus on identifying red flags or other potential concerns in LORs and described how the value of a LOR depends largely on its content. Specifically, they explained that while most LORs are generic and of little value, their value increases if they either contain red flags or successfully demonstrate how an applicant is exceptional. For example, if the LOR contained “negative information” then “it would be given a lot of weight, but depending upon how much is provided, it may be of some value or medium value or a lot of value” (IM5).

However, explicit red flags are very rare, and so readers often look for implicit red flags like lukewarm letters or an applicant’s choice of referees. Readers described how most LORs are positive and so they begin to read between the lines; as one participant explained, “What is important is what’s not said. Because people do not like to write derogatory things on letters” (IM8). Instead, identifying red flags was described as a process of looking for what is not said or for subtle language cues that imply concerns, or “a little nudge and a wink that maybe things aren’t exactly the way I’m describing” (IM3). One participant gave the example of small phrases such as “with enough time he has matured” (IM2). Subtleties can include the tone of the letter. Tone can mean overly laudatory letters in which “everything is just so outstanding, so outstanding, so outstanding. It begs the question, well then why don't you take him?” (IM2). Tone can also refer to a lack of accolades, because readers are “not ever going to get a terrible letter. So, lukewarm ones are really very bad” (P5). Choice of referees can also be considered a red flag, for example, if for “all three reference letters, people have known them for less than a week” (IM6).

Despite these differing focuses on strengths or weaknesses, both writers and readers described how a strong case for an applicant is supported with evidence, such as concrete examples, and through language choices, such as the use of strong supporting adverbs and adjectives like “highly” or “excellent.” However, they also described the tacit ways in which these strategies are employed to make a case for, or distinguish the quality of, an applicant.
For example, although participants indicated that strong adjectives and adverbs were important to include in LORs, these words tend to be more meaningful when they are omitted; as one participant explained, “even though it’s almost expected…to not be there would be an unexpected, adverse mark against the individual” (IM5). In addition, while participants explained that comparative and sometimes percentile rankings of applicants can be helpful for distinguishing the quality of an applicant, these types of statements can also be used strategically. One participant explained, for example, that “if someone isn’t in the top 20%, that’s still pretty good, but...it’s almost a code for being average” (IM10).

Pathos

Writers primarily appeal to the needs and values of readers by highlighting the skills and characteristics that are important to a particular specialty and by trying to write a LOR that “helps [the applicant] stand out from the crowd” (IM4). While readers explained that they respond favourably to these strategies, they are primarily interested in “what it was like to work with [the applicant] as a colleague” (P1). Readers acknowledged that applicants tend to be intelligent and accomplished and so “at our end, we're also looking at how personality structure is going to fit in with our section and our personality structures” (IM2). Some participants also noted that it is important to know how applicants function as learners because “a learner who doesn't pay attention to feedback or who is defensive about feedback is going to have a harder time improving their skills and is just not as nice to work with” (P1).
The findings suggest that the unwritten rules of academic medicine communities shape the visible and invisible rhetorical choices of LOR writers and readers. We found that writers’ appeals to, and readers’ uptake of, ethos, logos, and pathos in LORs relied on textual strategies and textual silences. Textual silences refer to meaningful omissions of relevant information, including the content and language that is expected to appear in LORs. Writers use textual strategies and textual silence to appeal to ethos, logos, and pathos, and readers also look for and respond to textual strategies and textual silences, but not always as intended by the writers. Additionally, information outside of a single LOR often persuaded readers of a LOR’s credibility more than the LOR itself.

Our findings are similar to research outside the field of medical education that has demonstrated the powerful influence of tacit strategies on how LORs are written and interpreted. For example, Albakry demonstrated how typically positive information can seem negative in the context of LORs that so often describe applicants in exceptional terms, and how the absence of some words is more meaningful than their presence. Additionally, Vidali showed how seemingly innocuous information in a LOR can add to a composite applicant portrayal with unintended consequences when the LOR is read alongside the whole application package. While some studies on LORs for resident selection have identified tacit strategies like considering the writer when assessing LORs, , , we have identified additional unwritten rules that can create challenges for faculty members who are new, peripheral, or outside of a given context.
Specifically, novice writers, or referees outside of or on the periphery of an academic medicine community, may unknowingly include or omit information that compromises their credibility and unintentionally raises concerns about an applicant. Similarly, without clear rubrics, some faculty members may interpret textual silences as meaningful when they are not, and may rely on information outside of LORs, such as their knowledge of the writer, to determine a LOR’s credibility.

Our study adds to the growing literature on LORs in multiple ways. While some scholars have examined the language of LORs and identified biases, , , , our findings demonstrate the value of considering the silences alongside the language to enable more extensive critiques and potential improvements to the practices surrounding LORs. By making unwritten rules more visible, academic medicine communities can bring greater transparency to the use of LORs: for example, by developing faculty development initiatives that support new faculty members in writing LORs, including the implications of what is and is not said in a LOR, and by developing standardized, transparent processes for how LORs are assessed. Our findings may also lend additional support for the use of standardized LORs - or incentive to explore alternative methods to be used in the resident selection process. More broadly, our study demonstrates how genre analysis can provide valuable insights into how context gives rise to patterns in both visible and invisible rhetorical strategies. Such insights can be used not only to refine resident selection processes in Canada and elsewhere, but also for other educational activities that are facilitated by specific genres.

There are several limitations to our study. First, we recruited participants from only two departments at a single institution. Future work could compare our findings to a broader range of disciplines, institutions, and regions to gain further insight into the unwritten rules of different contexts, including visible and invisible rhetorical choices in LORs. Also, interviews were guided by a single LOR sample that may have directed the interviewer’s and participants’ attention towards certain LOR features while omitting others.

Despite LORs being a key component of many resident selection processes, the unwritten rules of academic communities can impede a nationally facilitated resident selection process by creating challenges for newcomers and peripheral members of a community, or for those outside of that community. Our findings highlight how critiques of LORs, and any potential improvements, should consider writers’ and readers’ visible and invisible rhetorical strategies and the role of LORs as a tool that interacts with other parts of the application and selection process.
Medicine for global health: can “simple interventions” improve the worldwide burden of disease?
“Global health” refers to issues related to population health that, due to the common multi-national nature of the issue or its scientific implications, have relevance beyond a single country’s borders. Given the broad determinants of human health, “global health” topics span a vast spectrum of disciplines compared with traditional Western medicine. By its very nature, global health must grapple with problems that transcend care provision at the individual patient level. Global health includes ill-defined intersections among human and veterinary medicine (the basic and the clinical), sociology, anthropology, bioethics, environmental health, health services research, economics and the political sciences . Even this list is not exhaustive. Global health is difficult to define - and yet we know it when we see it. Or so our editorial team thinks. We hope you agree.

Much that is published in the global health literature addresses the challenges of public health improvements and healthcare delivery in resource-limited settings. Of course, resources for optimizing health and providing healthcare are virtually always limited in some fashion. But the term “resource-limited setting” is usually used to refer to situations in which the lack of resources is so extreme that the effect of the resource limitation itself becomes one of the central challenges to be assessed. Like the term “global health”, the term “resource-limited setting” is also amorphous. It can describe a community with food insecurity so severe as to result in increased child mortality, or it can be a label used to bemoan the lack of neuroimaging capacity to study cerebral malaria in endemic regions. Dissecting the layers of deprivation, and deprivation’s impact on human health in resource-limited settings, is one of the many challenges of providing healthcare services and conducting research in such circumstances.

In recent years, interest in global health from the general public as well as the academic community has skyrocketed. In the first decade of the 2000s, the number of US medical schools offering international electives more than doubled . Many domestic philanthropic organizations from wealthy nations have expanded into low-income, tropical settings . International aid for development assistance was $5.2 billion in 1990, had increased to $21.8 billion by 2007 , and even in the face of a global recession, this investment continues to rise. What has stimulated this growing interest? The HIV/AIDS pandemic and the subsequent moral imperative to facilitate treatment access beyond wealthy nations undoubtedly drew attention to other healthcare issues that transcend national borders. Philanthropists like Bill and Melinda Gates certainly deserve credit for directing their substantial wealth and personal energies towards highlighting opportunities for improving global health. But perhaps our interest in people and problems beyond our own immediate communities has been most affected by the interconnectedness that characterizes this electronic age.

Simple interventions may be one of the themes of the article collection Medicine for Global Health , published in BMC Medicine. We learn from a systematic review that hypothermia preventive measures could decrease neonatal mortality rates, since neonatal hypothermia can contribute indirectly to this mortality . In another research article, Srinivasan et al. report a randomized controlled trial of zinc supplements in children with severe pneumonia.
They found an overall case fatality rate of 4% in the zinc-treated children versus 11.9% in those receiving placebo, with the protective effect being greatest for children with HIV. And Hall et al. detail the importance of including nutritional support as a key component of any intervention, particularly community-based mass treatment programs, aimed at the neglected tropical diseases (NTDs). The sad reality is that what many clinicians most need to help their patients is the capacity to write a prescription for adequate nutrition. Despite the substantial investments made in the provision of improved access to drugs for conditions including HIV/AIDS, malaria, and NTDs, many of our patients are trying to take these medications on a chronically empty stomach. Maybe “hunger” should be added to the list of NTDs. Tools developed by World Health Organization (WHO)-convened expert groups, which are aimed at optimizing healthcare services, have been highlighted in Katchanov’s article on epilepsy care guidelines for low- and middle-income countries as well as the guide for analysts by Jit et al. on economic evaluations of the human papillomavirus (HPV) vaccine. HPV cost analyses are the focus of Hutubessy et al. and Quentin et al., contributions that both detail economic analyses of the costs of delivery and scale-up of the vaccine to girls in Tanzania, where cervical cancer is the number one cancer-related cause of death among women. Taking advantage of planning tools developed by the WHO, Hutubessy et al. found that the principal marginal costs are those associated with social mobilization. Accessing girls who are beyond the typical age for scheduled vaccinations for a three-dose vaccine requires some investment, but school-based scale-up could cost as little as ~$26/girl including the cost of the vaccine and the salaries of existing staff. Studies like these suggest there are feasible, cost-effective options for decreasing cancer-related mortality in countries where cervical cancer is a major killer. Finally, Maitland et al. offer a closer look at data from the Fluid Expansion as Supportive Therapy (FEAST) trial. The original FEAST study surprised most of us with its finding that fluid boluses are associated with excess mortality at 48 hours in African children with sepsis. To shed some light on the pathophysiologic mechanisms underlying this excess mortality, Maitland et al. looked at all-cause mortality by clinical presentation, hemodynamic changes in the first hour and terminal clinical events. The authors hypothesized that excess mortality would be mediated by fluid overload. FEAST continues to surprise us. The excess mortality appears to be related to refractory shock, not fluid overload, suggesting that an ischemia-reperfusion injury following resuscitation may be the problem. Elucidating the mechanism of death in this population of African children has implications for pediatric emergency care measures in all settings, which is further highlighted in a commentary by Simon Finfer and John Myburgh. We seem to have a lot to learn, even about “simple” interventions. The author declares that she has no competing interests. GB is a neurologist affiliated to Michigan State University and based for half the year in southern Africa. She has a special interest in epilepsy, and especially its management in resource-limited settings. She is also one of the guest editors of our article collection “Medicine for Global Health”.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1741-7015/11/72/prepub
Effects of systemic ozone administration on the fresh extraction sockets healing: a histomorphometric and immunohistochemical study in rats
8b72528f-9dff-4bc9-b91a-1e049115cc54
11093522
Anatomy[mh]
Ozone is a natural compound present in the stratosphere that plays an important role in retaining ultraviolet energy emitted by the sun, thus contributing not only to the control of the thermal conditions of the stratosphere but also to the protection of human life. From its discovery in the mid-1840s by the German chemist Christian Friedrich Schönbein, and with the advent of the first ozone generator, developed by Werner von Siemens in 1857, new horizons have opened up for its use in both medicine and dentistry. Medical ozone is a gaseous mixture composed of 95 to 99.95% oxygen and 0.05 to 5% pure ozone, which can be found in gas or liquid (water or oil) forms and can be applied topically, infiltratively, or systemically. Ozone presents greater solubility in water and is 1.6 times denser than pure oxygen, has a half-life of 40 min at 20°C, and degrades quickly into pure oxygen. Numerous studies have documented its effects, highlighting its bioenergetic, analgesic, and anti-hypoxic properties and its lethal effects on bacteria, protozoa, fungi, and viruses, as well as its roles in the oxygenation of tumors and in the therapy of HIV-AIDS and COVID-19. A previous study demonstrated the ability of ozone to modulate the cellular antioxidant system and the inflammatory system, in addition to regulating oxygen metabolism in red blood cells, making a higher rate of oxygen available to tissues. The reported benefits of O3 in dentistry and medicine are commonly attributed to its antimicrobial, disinfectant, and healing properties. The inclusion of ozone as a treatment focus in our study stems from previous research showcasing its positive impacts in the medical sciences. In dentistry, ozone therapy is widely employed in endodontics, orthodontics, periodontics (specifically as an adjuvant for treating gingivitis and periodontitis), in maxillofacial surgery for addressing mucosal lesions, treating oral lichen planus, managing osteonecrosis of the jaw, and maintaining bone mass, and in conservative dentistry (for remineralization, depigmentation, and desensitization of teeth). Additionally, ozone is utilized in the prevention and treatment of dental caries, in the disinfection of dentin tubules, and in implant dentistry. These studies have consistently demonstrated favorable outcomes with the use of ozone therapy. Furthermore, recent clinical studies have explored the efficacy of ozone in treating periodontitis, revealing encouraging results. Its effectiveness in reducing microbial burden and enhancing immune system capabilities, associated with minor side effects, makes ozone therapy a viable option for application in clinical studies. There are several advantages to using ozone therapy in dentistry. The most frequently reported beneficial effects are its antimicrobial efficacy against many pathogenic microorganisms, its effectiveness in modulating the immune system, reducing inflammation and preventing hypoxia, its biosynthetic effect, and its support of tissue regeneration. Moreover, most studies reported no adverse side effects across the different routes of administration and dosages of ozone used in dental applications. The possibility of delivering ozone therapy as a hydrogel also appears to be beneficial for the treatment of periodontitis and other conditions, and this can be performed at the dental office or at home. On the other hand, its use presents limitations. Potential side effects include coughing, nausea, vomiting, headaches, inflammation of the nasal passages, and respiratory tract irritation.
The most commonly observed side effects include excessive tearing, irritation of the upper respiratory tract, rhinitis, coughing, headaches, occasional nausea, and vomiting. However, exposure to ozone at a concentration of 0.05 parts per million (ppm) for 8 hours has been shown to induce no adverse effects, and the highest level of ozone encountered during dental procedures is 0.01 ppm. Thus, ozone at 0.05 ppm does not appear to cause side effects in clinical use. Despite these advantages and the widespread application of ozone in dentistry, the role of ozone in the healing of fresh extraction sockets remains to be clarified. Therefore, this study aimed to evaluate the effects of ozone therapy on bone repair of post-extraction dental sockets in clinically healthy rats. The study hypothesis was that ozone therapy might benefit bone repair in a dose-dependent manner.

Animals

This study included 72 male Wistar rats (Rattus norvegicus albinus) aged 3 months, with a mean body weight of 250–300 g. The animals were housed in propylene cages (four animals per cage), with controlled temperature (21°C) and humidity (65–70%), and a 12-hour light-dark cycle. Animals consumed standard rat chow (Labina/Purina, Ribeirão Preto, Brazil) and received water ad libitum. The Research Ethics Committee for the use of animals approved this protocol (Proc. #123/2021), and the experimental study design followed all the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines.

Sample size calculation

GPower software was used to calculate the sample size. Considering a 0.05 alpha (type I error) and 0.80 power (i.e., a 0.20 beta, or type II error) with a medium effect size (ES = 0.25), the required sample size across the study groups was calculated as a total of 64 animals. Bearing in mind possible complications and sample losses, a 15% margin was added, resulting in a total of 72 animals.

Tooth extraction

All animals were weighed and subjected to general anesthesia via intramuscular injection of a combination of ketamine hydrochloride (80 mg/kg, Francotar, Virbac, SP, Brazil) and xylazine hydrochloride (10 mg/kg, Coopazine, Coopers Brasil Ltda, Cotia, SP, Brazil). After antisepsis of the surgical area, the flap was raised, the tooth was carefully dislocated, and the upper right central incisor of each animal was extracted, as previously described. The soft tissue was then sutured with polyglactin 910 thread (4-0, Johnson & Johnson, São José dos Campos, SP, Brazil). After suturing, all animals received a single intramuscular dose of 0.2 ml of antibiotic (Pentabiotic C Veterinário Reforçado, Wyeth S.A., Indústrias Farmacêuticas, São Bernardo do Campo, SP, Brazil) and 5 mg/kg of analgesic (Tramadol, Janssen-Cilag Farmacêutica Ltda, São Paulo, SP, Brazil).

Study design

Rats were randomly distributed into four groups (n = 18) using a table generated by the website Randomization.com (http://www.randomization.com), as follows: C (Control) – animals did not receive any treatment; OZ0.3 – animals received an intraperitoneal injection of ozone at a dose of 0.3 mg/kg with a concentration of 15 μg/ml of O3; OZ0.7 – animals received an intraperitoneal injection of ozone at a dose of 0.7 mg/kg with a concentration of 35 μg/ml of O3; and OZ1.0 – animals received an intraperitoneal injection of ozone at a dose of 1.0 mg/kg with a concentration of 50 μg/ml of O3. The Philozon Medplus V ozone generator (Philozon, Balneário Camboriú, SC, Brazil) was used, which automatically regulates oxygen flow and allows concentrations to be adjusted from 5 to 60 μg/ml. Intraperitoneal injections were performed by a single, experienced examiner. Ozone doses were selected following the study by Erdemci et al. (2014).
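For readers who wish to cross-check such an a-priori calculation, the computation performed in GPower can be approximated with the open-source statsmodels package. The sketch below is illustrative only: the exact design options the authors entered (number of groups and effect-size convention) are not stated in the text, so the output will not necessarily reproduce their total of 64 animals.

# Minimal sketch of an a-priori one-way ANOVA sample-size calculation,
# analogous to the GPower computation described above. Assumptions (not
# stated by the authors): the effect size 0.25 is Cohen's f, and the
# omnibus test compares the four treatment groups.
import math
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
n_total = analysis.solve_power(
    effect_size=0.25,  # medium effect (Cohen's f)
    alpha=0.05,        # type I error
    power=0.80,        # 1 - beta (type II error)
    k_groups=4,        # C, OZ0.3, OZ0.7, OZ1.0
)
print(f"total animals required: {n_total:.0f}")

# A 15% attrition margin is then added, as in the study:
print(f"with 15% margin: {math.ceil(n_total * 1.15)}")

Under these particular assumptions the required total is larger than the 64 reported, which suggests the authors' GPower design (for example, the number of cells in a group-by-time layout, or the effect-size convention) differed from the simple four-group setup sketched here.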
Histological processing and histopathological analysis

In total, six animals from each group were euthanized via anesthetic overdose (Tiopental, 150 mg/kg, Cristália, Itapira, São Paulo, Brazil) after seven, 14, and 21 postoperative days. The jaws containing the tooth extraction site were carefully dissected and kept in 4% formaldehyde in 0.1 M phosphate buffer (pH 7.4) for 48 hours. After fixation, the samples were demineralized in 10% ethylenediaminetetraacetic acid (EDTA; Sigma-Aldrich) in PBS for 60 days. Once demineralization was completed, the samples were dehydrated in ethanol, cleared in xylene, and impregnated and embedded in paraffin, as described elsewhere. Microtome cuts were made along a plane longitudinal to the dental socket, and 5 µm serial sections involving the central portion of the tooth extraction site were collected on silanized glass slides.

Histomorphometric analysis

Microscopic, histometric, and immunohistochemical analyses were performed by a single calibrated examiner who was blinded to the experimental groups (EE). The analyses were conducted using a light microscope (Axio Scope, Carl Zeiss Microscopy) with an attached digital camera (AxioCam MRc5, Carl Zeiss Microscopy) connected to a microcomputer. Photomicrographs of the histological slides were captured using the ZEN 2 software (Blue edition; version 6.1.7601; Carl Zeiss Microscopy). The region of interest (ROI) for the analysis consisted of a rectangle measuring 1280 μm × 960 μm in the central region of the middle third of the dental socket. For histometric analysis of the percentage of bone tissue (PBT), photomicrographs of the histological slides were analyzed using ImageJ software (version 1.51i; National Institutes of Health) with the polygon selection tool, which demarcated and measured the area occupied by bone tissue. The percentage of the ROI occupied by bone tissue was then calculated.

Immunohistochemical analysis

For immunohistochemical analyses, antigen retrieval was performed by immersing the histological slides in 10 mM citrate buffer, pH 6.0 (Spring Bioscience), in a pressurized chamber (Decloaking Chamber, Biocare Medical) at 95°C. The histological slides were washed in 0.1 M PBS, pH 7.4. The slides were then immersed in a solution of 3% hydrogen peroxide in PBS for 1 hour and in a solution of 4% skimmed milk powder in PBS for 1 hour to block endogenous peroxidase and biotin, respectively. Blocking of non-specific sites was carried out in 1.5% bovine serum albumin in PBS plus 0.05% Triton X-100 (Sigma-Aldrich) for 12 hours, as described elsewhere. The slides were incubated for 24 hours with one of the following primary antibodies: anti-OCN (Abcam Laboratories) or anti-TRAP (Santa Cruz Laboratories). The sections were then incubated with biotinylated secondary antibody (Vector Laboratories) for 2 hours and subsequently treated with streptavidin conjugated to horseradish peroxidase (Vector Laboratories) for 2 hours.
3,3’-Diaminobenzidine (Vector Laboratories) was used as a chromogen. The specimens were counterstained with Harris hematoxylin, then dehydrated in ethanol, cleared in xylene, and covered with mounting medium and glass coverslips. As a negative control, specimens were subjected to the same procedures described above, only omitting the primary antibody, as described elsewhere. For osteocalcin (OCN), a semi-quantitative analysis was performed using the following scoring criteria. Score 0: null immunostaining pattern (total absence of immunoreactive (IR) cells and absence of labeling in the extracellular matrix (ECM)); Score 1: low immunostaining pattern (≅1/4 of IR cells and weak staining in the ECM); Score 2: moderate immunostaining pattern (≅1/2 of IR cells and moderate staining in the ECM); Score 3: high immunostaining pattern (≅3/4 of IR cells and moderate staining in the ECM). For TRAP analysis, the number of IR cells was distributed into the following scores: Score 0: null immunostaining pattern (total absence of IR cells); Score 1: low immunostaining pattern (up to 5 IR cells per field); Score 2: moderate immunostaining pattern (6 to 12 IR cells per field); Score 3: high immunostaining pattern (more than 12 IR cells per field).

Statistical analysis

Biostat 5.0 (IDSM-Amazonas/Brazil) was used for statistical analysis. For the PBT, one-way analysis of variance (ANOVA) and Tukey’s post-test were used. For the OCN and TRAP immunostaining scores, the Kruskal-Wallis test and the Student-Newman-Keuls post-test were used. Differences were considered significant at p<0.05.
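As a concrete illustration of the histometric step described above, the PBT is simply the fraction of the ROI covered by the traced bone area. The following is a minimal sketch assuming the traced region has been exported as a binary mask; the array shape and the synthetic example are ours, and the authors obtained the equivalent measurement with ImageJ's polygon selection tool.

# Minimal sketch of the percentage-of-bone-tissue (PBT) computation.
# Assumes a binary (0/1) mask of the traced bone area within the ROI.
import numpy as np

def percent_bone_tissue(bone_mask: np.ndarray) -> float:
    """Return the percentage of ROI pixels marked as bone."""
    if bone_mask.size == 0:
        raise ValueError("empty ROI mask")
    return 100.0 * float(bone_mask.sum()) / bone_mask.size

# Example: a hypothetical 960 x 1280 ROI in which ~40% of pixels are bone.
rng = np.random.default_rng(0)
roi = (rng.random((960, 1280)) < 0.40).astype(np.uint8)
print(f"PBT = {percent_bone_tissue(roi):.1f}% of the ROI")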
Histological analysis

At seven days post-extraction, the dental alveoli were filled with remnants of the blood clot, with highly vascularized and cellularized connective tissue. In the vicinity of the alveolar socket, deposition of bone matrix and formation of fine bone trabeculae were observed, which were more evident in the OZ1.0 group. At 14 days, a fine network of bone trabeculae, composed of immature bone tissue, occupied a large part of the dental alveoli. These trabeculae were full of osteoblasts with morphological characteristics of intense activity, especially in groups OZ0.7 and OZ1.0, in which the bone trabeculae were evidently denser. Scattered among the bone trabeculae, the connective tissue was highly vascularized and highly cellularized, especially in the OZ0.7 and OZ1.0 groups. At 21 days, a large part of the alveolar socket was occupied by bone trabeculae, whose thickness and level of maturation were greater in the OZ0.7 group, and especially in the OZ1.0 group. The connective tissue located between the trabeculae was highly vascularized and slightly less cellularized than in previous periods. The histological characteristics presented by the OZ1.0 group are consistent with a more accelerated alveolar repair process (Figure 4d).
Histomorphometric analysis

In the intragroup analysis, PBT was higher at 14 days than at 7 days in all experimental groups. At 21 days, PBT was increased compared with 7 and 14 days in all experimental groups. In the intergroup analysis, the OZ0.7 group demonstrated increased PBT compared with the C group at 14 and 21 days. In the OZ1.0 group, PBT was higher than in group C in all experimental periods. At 14 and 21 days, PBT in the OZ1.0 group was higher than in the OZ0.3 group, and at 21 days PBT in the OZ1.0 group was higher than in the OZ0.7 group.

Immunohistochemical analysis

In the intragroup analysis, OCN immunostaining at 21 days was higher than at 7 days in all experimental groups. In the C and OZ0.3 groups, immunostaining for OCN was higher at 21 days than at 14 days. In the intergroup analysis, OCN immunostaining in the OZ1.0 group was greater than in the C and OZ0.3 groups at 7 and 14 days. Neither the intragroup nor the intergroup analysis found a statistically significant difference in immunostaining for TRAP.
The results of this study demonstrate that systemic treatment with O3 positively influenced socket bone repair in rats, characterized by a higher PBT compared with the control group. Additionally, the data showed that this treatment increased OCN immunostaining, possibly contributing to early bone maturation. However, our findings did not indicate any impact of O3 on the reduction of osteoclasts, as evidenced by TRAP staining. Therefore, our data suggest that the beneficial effect of O3 relies on its regenerative properties rather than on an antiosteoclastogenic action. Our results thus indicate that ozone therapy may accelerate the bone repair process in fresh extraction sockets and could be an interesting potential adjunct following dental extractions. The different dosages of O3 used in this study, i.e., 0.3 mg/kg (15 μg/ml of O3), 0.7 mg/kg (35 μg/ml of O3) and 1.0 mg/kg (50 μg/ml of O3), promoted an increase in PBT, suggesting a positive impact of O3 on bone formation. These favorable results were dose-dependent, since the greatest increase in PBT was noted in the animals treated with 1.0 mg/kg (50 μg/ml of O3) compared with the other doses. These results corroborate previous studies, which demonstrated that low doses of O3 (10–80 μg/ml) can promote wound healing and regulate immunity. Ozone therapy is a biological treatment method with a wide range of applications in both medicine and dentistry. The cascade of compounds derived from O3 can act on different targets in the body and under different pathological conditions. Hypotheses regarding its mechanism of action are presented in the literature but remain inconclusive and thus should be interpreted cautiously. Studies have shown that O3 therapy induces moderate oxidative stress, increases the production of endogenous antioxidants, improves local perfusion and oxygen delivery, and acts on the immune response. Other studies have reported that O3 increases the transport of oxygen through the blood, resulting in activation of aerobic cellular metabolism (Krebs cycle, glycolysis, β-oxidation of fatty acids) and of the use of energy resources. It is also known that the metabolism of inflamed tissues is increased with ozone therapy due to increased oxygenation and a reduction of the overall inflammatory process, increasing their capacity for tissue regeneration. Although our animal model and analysis do not allow inference on the exact mechanism of action underlying the benefits of treatment with O3 in the repair of dental extraction wounds, these results may be attributed to some potential factors, such as the ability of O3 to promote greater blood supply in the injured tissue, increasing its oxygenation. Studies have shown that O3 can improve oxygen metabolism, stimulating important enzymes that participate in its metabolism and increasing oxygen saturation in circulating blood, with a consequent increase in oxygen supply to the body’s cells. Moreover, the bactericidal action of ozone is based on its robust oxidation properties, with the concomitant formation of free radicals, which favor the eradication of almost all microorganisms. The literature presents no consensus regarding the most effective dose of O3 for improving the repair of bone wounds and tissue regeneration.
Studies have shown that a dose of 0.7 mg/kg administered intraperitoneally benefited long-term repair of tooth extraction wounds, and that systemic O3 application could accelerate alveolar bone healing after tooth extraction. In the treatment of induced periodontal disease, ozone therapy was suggested to reduce osteoclastic activity and alveolar bone loss. The current findings did not show any effect of ozone therapy on the number of TRAP-positive osteoclasts, which differs from the study of Saglam et al. (2020). The differences in osteoclastic activity between these studies might be accounted for by the different routes of ozone administration (topical vs. systemic), the animal models employed (tooth extraction vs. periodontitis), the different doses, and the number of injections performed (single vs. multiple). We highlight some limitations of this study. Firstly, we did not evaluate the toxicity of systemic administration of ozone in our animals; further studies should include hematotoxicity testing to provide insights regarding its side effects. Secondly, systemic administration of ozone does not guarantee the same concentration at the oral tissue, which was also not investigated in our study. Thus, further studies using local treatment should be conducted to achieve more reliable local effects. Finally, the animal model employed could be optimized by extracting molar teeth instead of the central incisor. There is a lack of studies on the effects of local ozone therapy on the repair process; we therefore suggest further studies to explore the biological mechanisms involved, including possible influences on other adjuvant therapeutic modalities relevant to the repair process. Studies involving different dosages, therapeutic protocols, and routes of administration that enhance its biological effects and increase the bioavailability of ozone in affected tissues are necessary so that clinical trials can be considered. Within the limitations of this in vivo study, our data suggest that ozone therapy can benefit bone repair and increase newly formed bone in the alveolar socket following tooth extraction in rats.
Treatment of small intracranial aneurysms using the SMALLSS scoring system: a novel system for decision making
ec80f022-02f3-469f-9bbc-d2b15e252ad1
11825591
Microsurgery[mh]
The prevalence of intracranial aneurysms in the general population is reported to be around 3%. A majority of unruptured intracranial aneurysms (UIAs) are small aneurysms less than 7 mm. The management of these small UIAs has been a controversial topic in neurosurgery, and there is at present no clear consensus or guideline on when to treat and when to observe them. Thus far, the decision to treat has been determined primarily by individual neurosurgeons or neuro-interventionalists, balancing the natural history of the lesion (its potential for rupture) against the risks of treatment. Despite the existing controversy, the average size of unruptured aneurysms being treated has decreased over the past 30 years with advances in both endovascular techniques and microsurgery. The decision on management must weigh aneurysm-specific factors and patient demographic factors, both of which influence the natural history of aneurysm rupture and treatment outcomes. A number of retrospective as well as prospective studies have reported potential predictors of small aneurysm rupture, such as aneurysm size, morphology, location, multiplicity, patient age, smoking status and family history of aneurysms [ , , ]. However, no study has combined all these risk factors to formulate a decision-making guideline and a tool to assess subsequent treatment outcomes. Therefore, we aimed to create a scoring system (SMALLSS) based on a comprehensive literature review to guide decisions on management options and to validate the scoring system utilizing data on small aneurysms treated in our center with endovascular or microsurgical procedures. Institutional review board (IRB) approval was obtained prior to the initiation of the study. Individual patient consent was waived given the retrospective review of de-identified data. Patients who underwent surgical or endovascular treatment for unruptured aneurysms between January 2014 and December 2021 were identified from our institutional prospectively maintained registry. During the same interval, patients who were evaluated for small aneurysms (<7 mm) but not offered surgery were also identified. Data on patient demographics, aneurysm characteristics, procedures, complications, imaging as well as clinical follow-up were retrospectively retrieved from the database.

Decision making and management details

Between January 2014 and December 2021, nearly 1900 patients with small UIAs were evaluated at our institution. The decision to treat was individualized for each patient by incorporating characteristics of the aneurysm (i.e., size, location, morphology, number) and patient demographic data (i.e., age, smoking status, and family history). The patient-specific demographic data and radiographic films were reviewed by our multidisciplinary cerebrovascular team, and the decision regarding management options was made by consensus. The results of the consensus were discussed with each patient in detail, and the consensus opinion for treatment versus observation, or for treatment with microsurgical or endovascular techniques, was presented with its risks and benefits. For patients who were selected for conservative management, regular follow-ups with digital subtraction angiography (DSA), magnetic resonance angiography (MRA) or computed tomography angiography (CTA) were scheduled at certain time intervals depending on the exact size of the aneurysms and the presence of aneurysm rupture risk factors.
For open surgical management, elective microsurgical clipping was the treatment of choice. After surgery, patients were discharged in 2–3 days if no complications occurred. For endovascular treatment, the approaches included coil embolization with or without stent or balloon assistance and flow diversion with the PED (pipeline embolization device), FRED (Flow-Redirection Intraluminal Device) or WEB (woven endovascular bridge). All patients were prescribed 14 days of dual antiplatelet therapy prior to the procedure. After the procedure, patients stayed for 1–2 days in the hospital if no complications were observed. Dual antiplatelet therapy was continued for 3–6 months (depending on the exact device used and the location of the lesion) and was followed by aspirin (325 mg daily) in most cases. After treatment, patients were followed up at regularly scheduled clinical visits, where they were assessed for aneurysm occlusion status and treatment-related complications. Per the institutional protocol, DSA, MRA or CTA was performed at around 6–12 months post-procedure to evaluate aneurysmal occlusion status for patients treated with endovascular techniques. To revisit previous decision making, data that had been collected prospectively were reviewed retrospectively.
Patient demographic variables collected included age, gender, race, smoking status, history of autosomal dominant polycystic kidney disease (ADPKD), family history of intracranial aneurysm, history of subarachnoid hemorrhage (SAH), history of hypertension, and other comorbidities, as well as pretreatment modified Rankin Scale (mRS). Data on aneurysm characteristics included size of the aneurysm measured by diameter, anatomic location (anterior versus posterior), shape (smooth versus irregular), presence of daughter sacs or thrombosis, and multiplicity of aneurysms. Treatment-related complications were categorized as ischemic, intracranial hemorrhagic, extracranial hemorrhagic and other. Other follow-up outcome variables assessed included clinical status (as measured by follow-up mRS), aneurysmal occlusion status on the last available DSA, MRA or CTA (as classified by the Raymond Roy scale), permanent morbidity, and retreatment rate. Previous decisions on treatment versus conservative management, as well as treatment outcomes, were re-evaluated and stratified through the SMALLSS score. The system of evaluation with SMALLSS included Size (4–7 mm: 1 point; <3.9 mm: 0 points), Multiple aneurysms (yes: 1 point; no: 0 points), Anatomic location (posterior: 1 point; anterior: 0 points), Lineage, i.e., family history of aneurysm (yes: 1 point; no: 0 points), Lifetime risk (age <65: 1 point; age ≥65: 0 points), Smoking history (yes: 1 point; no: 0 points), and Shape (irregular: 1 point; smooth: 0 points).

Statistical analysis

Statistical analysis was performed using STATA version 17.0 (StataCorp, College Station, Texas, USA). The statistical significance threshold was set at a p-value of <0.05. Descriptive statistics were used to summarize patient demographics, aneurysm characteristics and the utilization of different treatment modalities. Categorical variables are reported as proportions, and comparisons were made using Fisher’s exact test or Pearson’s chi-square test based on the data distribution. Continuous variables are presented as means and standard deviations; an unpaired two-sample t-test or Wilcoxon rank-sum test was used for comparison of continuous variables.

After analyzing our own data and performing statistical analysis of our cohort of treated aneurysms <7 mm, we then externally validated the SMALLSS scoring system by having a high-volume cerebrovascular center retrospectively review 200 aneurysms <7 mm that were treated with either open microneurosurgery or endovascular treatment. SMALLSS scores were then calculated for these treated aneurysms.
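To make the scoring rule concrete, the following minimal sketch implements the seven SMALLSS criteria as described above. The field names are illustrative rather than taken from the study's data dictionary, and the handling of the boundary at age 65 follows the cohort definition used elsewhere in the paper (age ≥65 scores no point).

# Minimal sketch of a SMALLSS calculator for aneurysms < 7 mm.
from dataclasses import dataclass

@dataclass
class Aneurysm:
    size_mm: float           # maximal diameter; SMALLSS applies below 7 mm
    multiple: bool           # multiple aneurysms present
    posterior: bool          # posterior-circulation location
    family_history: bool     # "Lineage": family history of aneurysm
    age_years: int           # "Lifetime risk": age < 65 scores a point
    smoker: bool             # smoking history
    irregular_shape: bool    # irregular shape

def smallss_score(a: Aneurysm) -> int:
    """Return the SMALLSS score (0-7) for a small unruptured aneurysm."""
    if a.size_mm >= 7:
        raise ValueError("SMALLSS is defined for aneurysms under 7 mm")
    return (
        int(a.size_mm >= 4)       # Size: 4-7 mm = 1; < 3.9 mm = 0
        + int(a.multiple)         # Multiple aneurysms
        + int(a.posterior)        # Anatomic location
        + int(a.family_history)   # Lineage
        + int(a.age_years < 65)   # Lifetime risk
        + int(a.smoker)           # Smoking history
        + int(a.irregular_shape)  # Shape
    )

# Example: a 4.5 mm irregular posterior-circulation aneurysm in a
# 58-year-old smoker, no multiplicity or family history -> score 5.
print(smallss_score(Aneurysm(4.5, False, True, False, 58, True, True)))

In the authors' cohort, cases scoring 3 or above were typically the ones offered treatment, so a helper of this kind could be used to flag cases for multidisciplinary review.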
Patient demographics and aneurysm characteristics

A total of 1152 cases with unruptured intracranial aneurysms were treated over the study interval, of which 771 aneurysms (66.9%) were under 7 mm. A summary of patient demographics and aneurysm characteristics is presented in Table . Patients aged 65 or above comprised 45.8% of the patients. Female patients accounted for 79.6% of cases. In terms of racial background, 69.8% were white, 7.4% black, 7.5% Latino/Hispanic, 3.1% Asian, and the rest (12.2%) other/unspecified. Four hundred and forty-five (57.7%) reported a smoking history, among which 187 (24.2%) were active smokers and 258 (33.5%) were former smokers. A history of hypertension was recorded for 362 (66.9%) patients. Other comorbidities were present in 518 (67.2%) cases, and a history of ADPKD and a family history of aneurysm in 10 (1.3%) and 158 (20.5%) cases, respectively. In 92 (11.9%) cases, patients reported a previous history of aneurysmal subarachnoid hemorrhage. The average diameter of the small aneurysms was 4.4 ± 1.3 mm; 668 (86.6%) were located in the anterior circulation, while 103 (13.4%) were located in the posterior circulation; 70 (9.2%) had a daughter sac; 364 (47.2%) had multiple aneurysms and 9 (1.2%) had a partially thrombosed aneurysm. Microsurgical clipping was used in 253 (32.8%) of the 771 small aneurysms, while 518 (67.2%) had endovascular treatment. In comparison, 17.1% of large aneurysms (>7 mm) had microsurgical clipping. Among the small aneurysms, 220 (28.5%) were <3.9 mm and 551 (71.5%) were 4–7 mm. In comparison between these two groups, there was no significant difference in age, gender, racial background, smoking status or past medical history, as shown in Table . However, significantly more cases in the <3.9 mm group reported a family history of aneurysm ( p = 0.01) and were found to have multiple aneurysms, and more cases in this group tended to have a history of aneurysmal subarachnoid hemorrhage ( p = 0.09) and an index aneurysm located in the posterior circulation ( p = 0.12). In comparison of the treatment modalities between the two groups, small aneurysms <3.9 mm tended to be treated with microsurgical clipping more often than aneurysms of 4–7 mm ( p = 0.19) (Table ).
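The two-group comparisons reported above can be reproduced with standard contingency-table tests. The sketch below uses SciPy in place of Stata 17 and illustrative placeholder counts, since the underlying contingency table is not given in the text.

# Minimal sketch of the categorical comparisons described above.
# The counts are illustrative placeholders, not the study's actual table.
from scipy.stats import fisher_exact, chi2_contingency

# Rows: aneurysm size group (<3.9 mm, 4-7 mm)
# Columns: family history of aneurysm (yes, no)
table = [[60, 160],   # hypothetical counts for <3.9 mm (n = 220)
         [98, 453]]   # hypothetical counts for 4-7 mm (n = 551)

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, _ = chi2_contingency(table)

print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")
print(f"Chi-square:   chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi2:.3f}")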
As described in the methods section, the system of evaluation with SMALLSS included 7 well-established predictive factors for small aneurysm rupture, with the highest possible score being 7 and the lowest being 0 (Table ). As shown in Table ; Fig. , among the 771 small aneurysms, 5 (0.65%) had a SMALLSS score of 6, 59 (7.65%) a score of 5, 155 (21.53%) a score of 4, 266 (34.50%) a score of 3, 208 (26.98%) a score of 2, and 63 (8.17%) a score of 1. Only 4 (0.52%) had a score of zero (Table ). During this same interval, 1126 patients with aneurysms <7 mm were evaluated and not offered treatment, with the majority having SMALLSS scores of 2 and under (841, 74.7%). No rupture has been observed in this untreated patient cohort (Fig. ). Among the treated small aneurysms, serious neurologic complications occurred in 18 of 771 aneurysms (2.33%), of which 4 were hemorrhagic and 14 were ischemic. These complications resulted in mRS outcomes of 0–2 in 15 patients and mRS 3–5 in 3 patients, with 1 death related to remote hemorrhage after flow diversion. The number of complications in each SMALLSS group is outlined in Table . Transient complications such as hematoma, TIA (transient ischemic attack) without radiographic correlates, or infection were seen in 49 (6.4%) cases. Among the 253 cases treated with microsurgical clipping, 98% had complete or near-complete occlusion, while only 5 had incomplete occlusion. In the endovascular group, 33 (6.4%) patients were lost to follow-up; among the patients with complete follow-up, 89.7% had complete or near-complete occlusion. The overall obliteration rate for small aneurysms was 92.5%. To test the validity of the proposed SMALLSS scoring system, we applied it to data from another institution. A consecutive series of 200 patients with unruptured small aneurysms who were offered treatment showed a similar distribution of SMALLSS scores, with over 75% of patients having a SMALLSS score of 2 or above, as shown in Table . The SMALLSS scoring breakdown is included in supplementary Table . The complication rate after treatment was very low and did not vary with the SMALLSS score, consistent with what was found in our own data (Table ).

Natural history of small aneurysms less than 7 mm

The natural history and rupture risk of small aneurysms have been of great interest for decades. Numerous studies have reported the rupture rate of small aneurysms and the patient- as well as aneurysm-specific risk factors for it. Two of the larger studies with natural history data were the International Study of Unruptured Intracranial Aneurysms (ISUIA) trial and the Japanese unruptured cerebral aneurysms study (UCAS).
The ISUIA study reported that the 5-year rupture rate for aneurysms smaller than 7 mm is 0% in the anterior circulation and 2.5% in the posterior circulation. However, the study failed to resolve the discrepancy between its reported extremely low rupture rate for aneurysms <7 mm (0.7% per year) and the large proportion of ruptured aneurysms in this same category. Some argue that the actual rupture risk of small aneurysms might be higher than reported. In the UCAS, the annual rupture rate of small (3–4 mm) cerebral aneurysms was 0.36% per year, while the proportion of ruptured aneurysms <5 mm was 35% in their cohort. The similar discrepancies in these two studies can likely be explained by interval treatment of aneurysms that were deemed to be growing or at risk of rupture during follow-up. Although not directly comparable, data from a systematic review of 13 retrospective studies of unruptured intracranial aneurysms in Japan found a much higher overall rupture rate than that reported in the ISUIA study. Unlike the ISUIA and UCAS, the meta-analysis by Wermer et al. showed that the UIA rupture rate was approximately 1% for smaller UIAs measuring <5 mm in diameter and 1% for UIAs <7 mm in diameter, with follow-up of 3.7 and 7.7 years, respectively. The rupture rate was 0.24% per year in those <5 mm and 0.13% per year in those <7 mm in diameter. In addition, several studies have reported that the majority of SAHs result from aneurysms <10 mm in size and that a significant proportion of patients present with ruptured small aneurysms <5 mm [ , , ]. Both the ISUIA and the UCAS found that the risk of small aneurysm rupture varied according to size and location. The ISUIA, like many other studies, reported that aneurysms in the posterior circulation rupture more frequently [ , , , ]. However, in this study the posterior communicating artery aneurysms were included among the posterior circulation lesions, which has been a point of criticism of the study. In the UCAS, aneurysms of the anterior and posterior communicating arteries were more likely to rupture than those of the middle cerebral artery. Aneurysms that have irregular shapes or irregular necks are considered to be at a higher risk of rupture. In the UCAS, the presence of a daughter sac (an irregular protrusion of the aneurysm wall) was associated with a statistically increased risk of rupture, while the presence of thrombus or calcification did not appear to be related to aneurysm rupture. It has been shown that multiple aneurysms are more likely to grow and rupture than single lesions. Familial aneurysms tend to rupture at a smaller size and younger age than sporadic aneurysms [ , , ]. In one comparative study, the observed annual rupture rate of 1.2% was almost 17 times higher than the rupture rate of aneurysms matched for size and location in the ISUIA. Several prospective and retrospective studies comparing patients with ruptured and unruptured cerebral aneurysms found that smoking and age appeared to increase the risk of rupture [ , , ]. All these variables have become well established and are felt to be important factors that influence day-to-day clinical practice guiding the management of incidental aneurysms.

Treatment of small unruptured aneurysms

Consideration of treatment of small unruptured aneurysms has been changing as treatment techniques have evolved in both endovascular and microsurgical therapy.
Two or three decades ago, the treatment of small UIAs was deemed to carry high morbidity and mortality. A study based on the National Inpatient Sample between 2001 and 2008 showed that the morbidity and mortality for coiling were 4.8% and 0.6%, respectively, and those for clipping were 16.2% and 1.2%. Both surgical and endovascular repair of incidental small aneurysms (<7 mm) in the anterior circulation resulted in a net loss of quality-adjusted life-years at all ages over 20 years. In addition, endovascular treatment of very small intracranial aneurysms had a lower chance of protection from further bleeding. In the ISAT (International Subarachnoid Aneurysm Trial) and the CARAT (Cerebral Aneurysm Rupture After Treatment) studies, the rates of incomplete aneurysm occlusion with coiling were substantial, and the risks of rebleeding after endovascular therapy were significantly higher compared with the surgically treated groups. The meta-analysis by Brinjikji et al. in 2010 showed that the risk of treating very small (3 mm or smaller) unruptured intracranial aneurysms is not negligible, as the risk of periprocedural rupture was higher than that reported for larger aneurysms and the combined rate of periprocedural mortality and morbidity was 7.3%. Therefore, microsurgical clipping for small UIAs had been considered first-line treatment because of its durability, with lower recurrence and retreatment rates compared with endovascular coiling. However, the balance between the risks of treatment and the risk of rupture of small UIAs has shifted as treatment has evolved. With the advancement of coiling techniques and the introduction of flow diversion, more aneurysms are being treated with endovascular procedures. In their National Inpatient Sample data analysis, Salem et al. showed that in 2004 the utilization of microsurgery was equivalent to that of endovascular procedures for UIAs, yet by 2014 twice as many unruptured aneurysms were being treated by endovascular procedures. However, that study did not stratify treatment techniques by aneurysm size. In a recent meta-analysis, Khorasanizadeh et al. showed a significant decrease in the size of treated UIAs over time, with a 0.71-mm decrease in the average size of treated UIAs every 5 years since 1987 and the annual mean dropping below 7 mm in 2012. This indirectly reflects the improvement in safety and efficacy of both endovascular treatment and microsurgery. A few studies argue that treatment of UIAs should be recommended only in high-volume centers that can demonstrate a low morbidity rate. It has been shown that the morbidity and mortality rates of both endovascular and surgical treatment of unruptured aneurysms are significantly lower in patients treated in high-volume centers. However, these studies did not differentiate between the sizes of the treated unruptured aneurysms [ , , ]. For there to be a benefit from treatment of small aneurysms, the complication risk of these procedures must be lower than the rupture risk of the untreated aneurysm, which is low in small unruptured aneurysms. To validate this hypothesis with practical data, we retrospectively analyzed our data through the SMALLSS score to reassess our decision-making process regarding treatment, as well as to evaluate subsequent treatment outcomes and the efficacy of aneurysm obliteration.
We have shown that treatment-related risks are not increased alongside increased natural history risks and that high obliteration rates are achieved in small aneurysms. SMALLSS score for decision making The SMALLSS is a novel scoring system for decision making. It is based on predictors that are established risk factors for small aneurysm rupture. Our data showed that among cases with small lesions, those with a SMALLSS score of 3 or above were often offered treatment, while those with a score of 2 or below were conservatively managed. The treated cases had 0% mortality and 2.3% morbidity, which is slightly lower than what has been reported in the literature. The overall obliteration rate for the treated small aneurysms was 92.5%. No aneurysm rupture was observed among the conservatively managed small aneurysms. These findings indicate that the SMALLSS score can be a reliable system for treatment decision making while balancing the natural history of small UIAs. This was then externally validated at a high-volume cerebrovascular center without any significant deviation in the SMALLSS scores of treated aneurysms, as the majority of aneurysms treated at the outside institution had a SMALLSS score of 2 or greater (75%). The PHASES score, which incorporates both patient- and aneurysm-specific factors, is widely used as a predictive tool for assessing the risk of intracranial aneurysm rupture. Among the six prospective studies included in the PHASES analysis, ISUIA and UCAS had 32% and 48% of aneurysms treated during follow-up, which can easily introduce selection bias [ , , ]. This was indirectly reflected in the results of their regression analysis, as well-established risk factors such as smoking status, aneurysm multiplicity, and history of SAH did not stand out as predictors. Unlike the PHASES scoring system, our SMALLSS score is based on predictors of small aneurysm rupture that are widely reported by both retrospective and prospective studies. In addition, PHASES assigned zero points to lesions smaller than 7 mm. (Supplementary Table ) Our SMALLSS system does not include other risk factors such as aneurysm growth, history of SAH, and hypertension, which does not mean these specific predictors of aneurysm rupture can be neglected. Aneurysm growth is not a characteristic available at baseline; rather, it can be measured during follow-up, where it affects aneurysm size and causes a subsequent change in the SMALLSS score. In our data, patients with a history of SAH often had multiple aneurysms, and we therefore included one of the co-existing predictors in the SMALLSS score. Hypertension, which is one of the easily modifiable risk factors, is not included either. Our patient population had fairly aggressive management of blood pressure, and it was not possible to estimate the influence of high blood pressure during follow-up on the risk of rupture of small unruptured aneurysms. Limitations Our study has some limitations. First and foremost, although the SMALLSS score is based on well-established and extensively reported risk factors, the system was evaluated through a retrospective review of prospectively maintained single-center data, which cannot avoid certain inherent biases; it therefore warrants validation with larger multi-center data or prospectively designed studies. Second, the criteria for open surgery versus endovascular treatment for small UIAs were not discussed in our study.
It is worth emphasizing that the decision on treatment modality focuses more on aneurysm size, patient age, and the projected risks of treatment than on predictive factors for rupture. Third, our study was designed as a comprehensive literature review to build the SMALLSS scoring system and to validate the system using single-center data; it was therefore impossible to explore the natural history of small aneurysms, given the retrospective review of the data. Fourth, the neurovascular team that evaluated and treated the patients comprises primarily three treating physicians who are trained in, and regularly treat aneurysms with, both microsurgical and endovascular techniques; thus, centers that focus heavily on open surgery or endovascular procedures may need to reference our outcomes with caution.
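To make the decision rule described above concrete, the following is a minimal Python sketch of a points-based triage of the kind SMALLSS implements. Only the threshold is taken from this study (a score of 3 or above favouring treatment, 2 or below favouring conservative management); the individual one-point factors listed here are hypothetical stand-ins drawn from the rupture-risk literature discussed earlier, not the published SMALLSS components.

# Minimal sketch of a points-based treatment rule for small UIAs.
# The factor list is hypothetical (drawn from risk factors discussed in the
# text); only the >=3 treat / <=2 observe threshold comes from this study.
HYPOTHETICAL_FACTORS = [
    "posterior_circulation",  # location associated with higher rupture risk
    "irregular_shape",        # daughter sac or irregular neck
    "multiple_aneurysms",     # multiplicity predicts growth and rupture
    "familial_history",       # familial aneurysms rupture smaller and younger
    "current_smoker",         # smoking increases rupture risk
]

def smallss_like_score(patient: dict) -> int:
    # One point per hypothetical risk factor present.
    return sum(1 for factor in HYPOTHETICAL_FACTORS if patient.get(factor, False))

def recommend(patient: dict) -> str:
    return "offer treatment" if smallss_like_score(patient) >= 3 else "conservative management"

# Example: two factors present -> conservative management
print(recommend({"irregular_shape": True, "current_smoker": True}))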
The SMALLSS scoring system can be used to help guide treatment decision making with regard to aneurysm- and patient-specific factors while balancing the natural history of small intracranial aneurysms. The rate of treatment complications for small UIAs, when treatment is guided by the SMALLSS score, is low, and the efficacy of treatment is high. Below is the link to the electronic supplementary material. Supplementary Material 1 (DOCX 17.2 KB)
Whole-Genome Analysis of G2P[4] Rotavirus Strains in China in 2022 and Comparison of Their Antigenic Epitopes with Vaccine Strains
84b788a5-7b65-436a-8b64-a4bbf0579907
11945518
Biochemistry[mh]
Group A rotavirus (RVA) is the leading cause of acute gastroenteritis in infants and young children worldwide. In 2016, it was estimated that 128,500 children under the age of five died from rotavirus gastroenteritis (RVGE) globally. Earlier estimates put the annual toll at approximately 453,000 deaths, with the vast majority occurring in developing countries in Asia and Sub-Saharan Africa . RV is classified into nine species (A, B, C, D, F, G, H, I, and J) in the genus Rotavirus, family Sedoreoviridae . RVA is a non-enveloped, double-stranded RNA virus. The virion is a three-layered particle consisting of a core, inner capsid, and outer capsid, containing the viral genome composed of 11 double-stranded RNA segments. These segments encode six structural proteins and six non-structural proteins . A binary classification scheme has traditionally been used to classify RVA into G and P types based on the properties of the outer capsid proteins, VP7 and VP4 . The full genome, consisting of 11 segments (VP7-VP4-VP6-VP1-VP2-VP3-NSP1-NSP2-NSP3-NSP4-NSP5/6), can be classified into genotypes as Gx-P[x]-Ix-Rx-Cx-Mx-Ax-Nx-Tx-Ex-Hx. Based on these genotypes, strains can be classified into a Wa-like genome (G1/3/4/9/12-P[8]-I1-R1-C1-M1-A1-N1-T1-E1-H1), a DS-1-like genome (G2-P[4]-I2-R2-C2-M2-A2-N2-T2-E2-H2), and an AU-1-like genome (G3-P[9]-I3-R3-C3-M3-A3-N3-T3-E3-H3) . In the Western Pacific Region, G9P[8] represented 40% of all genotypes, followed by G1P[8] (24%) and G2P[4] (12%) . The proportion of the G2P[4] type is relatively low. However, there was an unusual increase in diarrhea caused by rotavirus type G2P[4] in Gansu Province, China, in 2022. Therefore, a genome-wide analysis of rotavirus type G2P[4] strains across the country is necessary. Results from the Global Rotavirus Surveillance Network (GRSN) indicate that the incidence of RVGE is approximately 38% among children with acute gastroenteritis in countries without a national RVA immunization program. By contrast, in countries where the RVA vaccine has been introduced, the incidence of RVGE is about 23% . Two live attenuated RVA vaccines have been approved for use in 126 countries worldwide: RotaTeq (RV5; Merck, Whitehouse Station, NJ, USA), a pentavalent (G1, G2, G3, G4, P[8]) human–bovine reassortant RVA vaccine ; and Rotarix (RV1; GlaxoSmithKline Biologicals, Rixensart, Belgium), a monovalent (G1P[8]) RVA vaccine derived from an attenuated human strain . The Lanzhou Lamb RVA vaccine (LLR; Lanzhou Institute of Biological Products, Lanzhou, China) is a monovalent G10P[15] live attenuated vaccine used only in China . The trivalent (G2, G3, G4) oral human–lamb reassortant RVA live vaccine (LLR3; Lanzhou Institute of Biological Products) was approved for use in China in April 2023 . LLR3 uses the LLR vaccine strain as the parent strain, with reassortment of the VP7 gene from human RVA epidemic strains. Currently, RVA vaccines are not included in the National Immunisation Program in China, and the vaccination rate and public awareness of RVA vaccines must be improved. The aims of the present study were to conduct a full-genotype characterization of thirteen G2P[4] RVA strains in China in 2022 and to perform phylogenetic analysis to understand their genetic diversity and evolution. Comparison of the VP7 and VP4 proteins of the RVA strains with those of the vaccine strains identified potentially important antigenic differences that could facilitate the development and introduction of an RVA vaccine in China. 2.1.
Sample Source A total of 13 G2P[4] RVA strains were collected by the Chinese National Viral Diarrhoea Surveillance Network in 2022 ( ). The Chinese Centre for Disease Control and Prevention (China CDC) established national surveillance sites in 31 provinces across the country, with 1~3 sentinel hospitals selected in each province to collect clinical information and stool specimens from all hospitalized diarrheal children under 5 years of age (≤59 months of age) in that hospital. All 13 samples were obtained from children under 5 years old who were hospitalized due to acute gastroenteritis, with informed consent provided by their guardians. The included patients were all ≤25 months of age, with an average admission body temperature of 37.18 °C. On average, the patients experienced five episodes of diarrhea and two episodes of vomiting per day. In 2022, samples were collected from six provinces in China, transported via the cold chain to the National Institute for Viral Disease Control and Prevention at the China CDC, and stored at –80 °C. 2.2. Virus Nucleic Acid Extraction, RT-PCR and Nucleotide Sequencing A 10% fecal suspension was prepared from each sample in phosphate-buffered saline (PBS, 0.01 mol/L, pH 7.2–7.4). After centrifugation at 8000× g for 5 min, 100 μL of the supernatant of the stool suspension was used to identify RVA-positive samples by ELISA (Thermo Scientific™ ProSpecT™ Rotavirus Microplate Assay; Thermo Fisher Scientific, Waltham, MA, USA) in accordance with the operation manual. RNA was extracted from the samples using the Tianlong automated nucleic acid extraction system (GeneRotex 96; Xi’an Tianlong Science and Technology Co., Ltd., Xi’an, China) and stored at −80 °C until use. RNA was reverse transcribed into cDNA using a SuperScript™ III Reverse Transcriptase Kit (18080093; Invitrogen, Carlsbad, CA, USA), with conserved primers targeting both ends of the RVA genome. RT-PCRs for the 11 gene segments were performed using primers described by Varghese et al. (VP1, VP2, and VP3), Wang et al. (NSP1, NSP2, NSP4, NSP5/6, and VP6), Magagula et al. (NSP2, NSP3, NSP4, NSP5/6, and VP6), Gómara et al. (VP7), and Simmonds et al. (VP4). The 11 genomic segments were amplified separately using I-5™ 2x T8 High-Fidelity Master Mix (TSE111; Beijing Tsingke Biotech Co., Ltd., Beijing, China). Taq polymerase was activated for 3 min at 94 °C, followed by 35 cycles of amplification (30 s at 94 °C, 30 s at 55 °C, and 60 s at 72 °C), with a final extension for 10 min at 72 °C. The PCR products were sequenced using the Sanger method at Beijing Tsingke Biotech Co., Ltd. (Beijing, China). Nucleotide sequences were determined using an ABI PRISM 3730 automated DNA sequencer (Thermo Fisher Scientific, Waltham, MA, USA). Sequencing used the same primers as the PCR reactions. 2.3. Sequence Analysis Whole-genome sequences (excluding primer sequences) were assembled using SeqMan software (DNAStar 5.1). After sequencing and assembly, nearly full-length sequences (except for the 5′ and 3′ terminal sequences) were obtained. The sequencing results were genotyped using the online Basic Local Alignment Search Tool (BLAST). The sequences of 13 G2P[4] RVA strains were aligned using Clustal W. Phylogenetic trees were created in MEGA v11.0 software using the maximum likelihood method, with the best-fit evolutionary model selected based on the corrected Bayesian information criterion value .
The models used in this study were T92 + G + I for VP2, VP3, VP6, and VP7; T92 + G for VP4, NSP2, and NSP5/6; T92 + I for NSP1, NSP3, and NSP4; and GTR + G + I for VP1. Branch support was estimated with 1000 bootstrap replicates, with values >70% considered significant. The lineage classification system referred to Agbemabiese et al. , Doan et al. , and Medeiros et al. . Analysis of sequence identity was performed using MegAlign software (DNAstar 5.1). 2.4. Protein Model Construction and Analysis The VP7 (Protein Data Bank [PDB]: 3FMG) and VP4 (PDB: 1KOR) structures of the G2P[4] strain were constructed using SWISS-MODEL ( https://swissmodel.expasy.org/ , accessed on 4 May 2024). Structural analysis was performed using PyMOL ( http://www.pymol.org/pymol , accessed on 28 May 2024).
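Before turning to the results, the Python sketch below illustrates how an 11-segment genotype constellation of the kind reported in Section 3.1 can be checked against a reference backbone to flag reassorted segments. The segment order and reference genotypes follow the notation given in the Introduction; the data layout and function name are illustrative and are not part of the study's actual pipeline.

# Sketch: flag reassortant segments by comparing a strain's 11-segment
# genotype constellation against a reference backbone (illustrative only).
SEGMENTS = ["VP7", "VP4", "VP6", "VP1", "VP2", "VP3",
            "NSP1", "NSP2", "NSP3", "NSP4", "NSP5/6"]

DS1_LIKE = ["G2", "P[4]", "I2", "R2", "C2", "M2", "A2", "N2", "T2", "E2", "H2"]

# GS2265 as reported in Section 3.1: a DS-1-like backbone carrying an E1 NSP4.
GS2265 = ["G2", "P[4]", "I2", "R2", "C2", "M2", "A2", "N2", "T2", "E1", "H2"]

def deviations(strain, reference):
    # Return (segment, strain_genotype) pairs that differ from the reference.
    return [(seg, s) for seg, s, r in zip(SEGMENTS, strain, reference) if s != r]

print(deviations(GS2265, DS1_LIKE))  # [('NSP4', 'E1')] -> Wa-like reassortment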
3.1. Analysis of Whole-Genome Constellation of G2P[4] RVA Strains Comparison of the full genomic sequences of the 11 gene segments from 13 G2P[4] RVA strains collected in China in 2022 revealed that the genotype of the GS2265 strain was G2P[4]-I2-R2-C2-M2-A2-N2-T2-E1-H2. The remaining strains exhibited the typical DS-1-like genetic backbone, with the genotype G2P[4]-I2-R2-C2-M2-A2-N2-T2-E2-H2. When compared with the other reference strains deposited in the GenBank sequence databases, the complete genotype constellation of these strains, except GS2265, was identical across all genome segments to RVA/Human-wt/JPN/Tokyo18-41/2018/G2P[4], isolated in Tokyo, Japan , and RVA/Human-wt/CHN/E6896/2021/G2P[4], detected in Wuhan, China (GenBank: OP850417). However, in the NSP4 segment, GS2265 is consistent with the typical Wa-like strain RVA/Human-wt/CHN/20200077/2020/G3P[8], isolated in Ningxia, China (GenBank: MN106174). Comparison of the complete genome constellations of the 13 G2P[4] strains with those of the other reference strains is shown in . 3.2. Phylogenetic Analysis of VP7/VP4 Genes Among the 13 G2P[4] RVA strains analyzed in this study, the VP7 gene demonstrated a nucleotide sequence identity > 94.8% and an amino acid sequence identity > 94.7%. Phylogenetic analysis of the VP7 gene revealed that the G2 strains formed four lineages (I, II, IVa, and IVnon-a) ( A). All 13 strains in this study belonged to different sublineages within the same lineage. QD2210 and SX2205 were located in sublineage IVa-1 and showed high similarity (99.9%). The remaining 11 strains were grouped into sublineage IVa-3, with nucleotide identities of 99.6–100%. The RotaTeq G2 strain was situated in lineage II. QD2210 and SX2205 were closely related to G2 strains detected in Fuzhou (Fuzhou23–33, Fuzhou21–51) and Wuhan (E6896), China, between 2021 and 2023, showing an identity of 99.7–100%. The other 11 strains were grouped into sublineage IVa-3, together with strains from mainland China, India, Russia, Benin, and Bangladesh. They were closely related to strains from Fujian, China (Fuzhou21–79, 21–7, 21–45, 21–62) and Japan (Tokyo18–41), with a nucleotide identity of 99.6–100%.
These 11 strains were more distantly related to the earlier Chinese G2P[4] strain, TB-Chen, with an identity of 96.6–96.7% ( A). In this study, the 13 G2P[4] RVA strains showed a nucleotide sequence identity of 96.1–99.9% and an amino acid sequence identity of 95.3–99.9% in the VP4 segment. The clustering pattern of the VP4 phylogenetic tree was similar to that of VP7. The P[4] strains in the VP4 segment formed five lineages (I–IVa, IVnon-a). QD2210 and SX2205 were located in sublineage IVnon-a, showing a nucleotide sequence identity of 99.9%. The remaining 11 strains clustered within sublineage IVa, with a nucleotide sequence identity of 98.9–99.9%. QD2210 and SX2205 were closely related to P[4] strains detected in Fuzhou (Fuzhou23–33), Wuhan (E6896), Jilin (JL19–1276) and Hebei (HEB16–1258), China, between 2018 and 2023, with a nucleotide sequence identity of 99.7–99.9%. The other 11 strains clustered with RVA strains detected between 2018 and 2021 in Fujian (Fuzhou21–45, Pingtan21–4), Beijing (2020BJ), Yunnan (2020023), and Japan (Tokyo18–41), with a nucleotide sequence identity > 98.9%. The sequences from this study showed greater genetic distance from early DS-1-like G2P[4] RVA strains from the USA, with an identity of 93.1–94% ( B). 3.3. Phylogenetic Analysis of VP1-VP3, VP6, and NSP1-NSP5 Genes In this study, the VP1, VP2, and VP6 genes of the 13 G2P[4] RVA strains showed a nucleotide sequence identity of 96.1–100%, while VP3 showed nucleotide sequence identities ranging from 87% to 100%. Phylogenetic analysis revealed that QD2210 and SX2205 were closely related and located on a different branch from the remaining 11 RVA strains. The strain E6896, detected in Wuhan in 2021, demonstrated close genetic relationships with strains QD2210 and SX2205, with a nucleotide sequence identity of 99.6–100% across four genome segments. The other 11 strains clustered closely with strains from mainland China and Japan, such as Fuzhou21–77 and Tokyo18–41, and showed a 99.3–100% nucleotide sequence identity. For the VP3 gene, QD2210 and SX2205 were located in lineage V, while the remaining strains belonged to lineage VI. Furthermore, the RotaTeq vaccine strain was located in the M1 lineage along with Rotarix. In contrast, for the VP1, VP2, and VP6 genes, all 13 G2P[4] RVA strains belonged to the same lineage. The RotaTeq vaccine strain belongs to the DS-1-like genogroup but falls into different lineages from the strains in this study ( ). The NSP1–NSP3 and NSP5 genes exhibited nucleotide sequence identities ranging from 94.7% to 100%, while the NSP4 segment showed an identity of 82.1–100%. The phylogenetic trees of NSP1–NSP5 showed clustering patterns similar to those of VP1–VP3 and VP6. Notably, GS2265 was uniquely positioned in the E1 lineage for the NSP4 gene. This strain clustered with Wa-like strains from multiple regions in China, including Beijing, Fujian, and Sichuan, as well as strains from neighboring countries (e.g., Japan and India) during the period 2013–2022. GS2265 showed nucleotide identities of 98.8–99.9% with Chinese Wa-like strains and 98.5–99.6% with Wa-like strains from neighboring countries. In the NSP4 gene, the nucleotide identity between GS2265 and the other strains in this study ranged from 82.3% to 99.8%. The Wuhan E6896 strain shared 99.4–100% nucleotide identities across five genome segments with QD2210 and SX2205. Most of the other RVA strains were closely related to G2P[4] strains from mainland China and Japan.
Additionally, in the NSP4 and NSP5 genes, these strains clustered with the contemporary Russian strains 557 and NN2924–21, with identities of 99.5–99.8% and 99.9–100%, respectively. The RotaTeq vaccine strains were positioned in lineages A3, T6, and H3 for the NSP1, NSP3, and NSP5 segments, respectively. In contrast, all segments of the Rotarix strain were located in lineage I ( ). 3.4. Comparison of VP7 and VP4 Neutralizing Epitopes with Vaccine Strains The critical epitopes of the VP7 protein are 7–1a, 7–1b and 7–2. The G2 lineage II strains had 92.6–92.8% identity with RotaTeq G2. Using the AA alignments for the VP7 proteins, we identified differences in these antigenic epitopes between the RVA strains and the cognate genes of the RotaTeq strains. The G2 strains showed four amino acid differences compared to RotaTeq G2. Among them, A87T, D96N (7–1a region), S213D, and S242N (7–1b region) may alter vaccine immunogenicity ( A and ). The VP4 spike protein undergoes proteolytic cleavage by trypsin-like proteases present in the gastrointestinal tract of the host into VP8* and VP5* subunits, which are the targets of neutralizing monoclonal antibodies . The VP8* region contains four antigenic epitopes (8-1 to 8-4) composed of 25 AAs. The P[4] lineage V strains showed 86.2–87.1% identity with RotaTeq P[8]. All G2P[4] RVA strains showed differences at eight glycan-binding sites of the VP8* subunit (E150D, N192D, D195N, V115T, D116N, R131E, D133S, and N89D) ( ). All strains except GS2260 showed an amino acid difference, N113S, in the 8–3 region. A neutralizing antigenic site mutation at residue 114 was found only in QD2210 and SX2205. The VP5* region has five epitopes (5-1 to 5-5) with 12 AAs. The sequences in the present study were highly conserved in the VP5* subunit compared to RotaTeq, with only GS2265 having an L388F amino acid difference in the 5-1 antigenic region ( B and ).
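To illustrate how such substitutions are tabulated, the Python sketch below compares epitope residues between a circulating strain and a vaccine strain and reports the differences in the conventional A87T notation (vaccine residue, position, strain residue). The positions and residues reproduce the VP7 7-1a/7-1b differences reported in Section 3.4; in practice, these residue tables would be derived from amino acid alignments rather than typed in by hand.

# Sketch: report amino acid differences at antigenic epitope positions in
# "A87T" notation; residues below reproduce the VP7 differences in Section 3.4.
VP7_EPITOPE_POSITIONS = [87, 96, 213, 242]           # illustrative subset

rotateq_g2 = {87: "A", 96: "D", 213: "S", 242: "S"}  # vaccine residues
china_g2   = {87: "T", 96: "N", 213: "D", 242: "N"}  # circulating-strain residues

def epitope_substitutions(vaccine, strain, positions):
    # List substitutions where the circulating strain differs from the vaccine.
    return [f"{vaccine[p]}{p}{strain[p]}" for p in positions if vaccine[p] != strain[p]]

print(epitope_substitutions(rotateq_g2, china_g2, VP7_EPITOPE_POSITIONS))
# -> ['A87T', 'D96N', 'S213D', 'S242N']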
We conducted a genome-wide analysis of 13 G2P[4] genotype RVA strains from China in 2022 to identify epitope variations relative to vaccine antigens. Phylogenetic analysis revealed that the examined strains demonstrated the characteristic features of typical G2P[4] DS-1-like strains in their genetic clustering, with the exception of strain GS2265, which formed an independent cluster (E1) within the NSP4 segment. From January to December 2022, 5360 samples were genotyped. G9P[8] was the most predominant, followed by G8P[8], G2P[4], G1P[8], and G3P[8] (unpublished data from the National Viral Diarrhea Surveillance Network of China). The G2P[4] genotype is relatively rare in China. Phylogenetic comparison revealed that the study strains shared close genetic relationships with previously reported G2P[4] strains from mainland China , as well as from Japan , Russia , and Thailand . Therefore, the G2P[4] strains detected in this study belong to globally prevalent lineages. Previously, evidence was reported of interspecies transmission of the G2P[4] strain with a Chinese ruminant RVA strain in Vietnam . However, the Chinese G2P[4] strains in this study were not found to be associated with RVA in animals.
Phylogenetic analyses of 10 segments (excluding NSP4) revealed that QD2210 and SX2205 diverged from the other circulating RVA strains, clustering into two distinct sublineages. The structural (VP1-VP4, VP6, and VP7) and nonstructural (NSP1–NSP5) genes of QD2210 and SX2205 are most closely related at the nucleotide level to the Chinese Wuhan RVA/Human-wt/CHN/E6896/2021/G2P[4] strain. Epidemiological investigations revealed that cases GS2265 and GS2286 were geographically clustered within the same township. Notably, GS2265 developed RVA infection 4 months after the initial diagnosis of GS2286, accompanied by a genotype shift in NSP4 from E2 to E1. Phylogenetically, GS2265 showed close genetic relatedness to Wa-like strains circulating in China and neighboring countries. In addition, GS2265 represents a reassortant strain generated through reassortment of the NSP4 gene segment with a Wa-like strain, while retaining the DS-1-like genetic backbone. There is currently no specific treatment for RVGE, and vaccination is the most effective measure to prevent it . The proteins encoded by the VP7 and VP4 genes contain a variety of antigenic epitopes that are key targets of the host immune response. Phylogenetic analysis of the VP7 gene showed that the RotaTeq G2 strains were in lineage III and the Chinese G2 strains were all in lineage II, and they were distantly related. Compared to RotaTeq, the Chinese G2 strains differed at four amino acid residues in the 7–1a and 7–1b antigenic epitopes, consistent with G2 strains detected in China (2016–2019) , Belgium , and Australia . Alterations in A87T, D96N, and S213D, which can lead to altered vaccine immunogenicity , have been observed in most of the G2P[4] strains circulating worldwide, and changes in this combination are a stable feature of modern G2 strains. Homotypic RVA vaccines provide more effective protection than heterotypic ones. Genetic variation in the epitope region of the VP7 antigen may affect the effectiveness of the RotaTeq vaccine against the G2P[4] strain . Rotarix has 18 amino acid residues in the VP7 epitopes that differ significantly from those of the G2 strains in this study. An analysis of phase II and III trial data for the monovalent rotavirus vaccine indicated that Rotarix is significantly less effective against fully heterotypic genotypes, specifically G2P[4], compared to its efficacy against G1P[8] (homotypic) and partially heterotypic genotypes . Consequently, the monovalent vaccine was less effective in protecting against the G2 strains. The strains in this study have a total of 10 amino acid residue mutations in the VP8* subunit compared to RotaTeq. Generally, VP4 is less susceptible to genetic mutation than VP7; antibodies to VP8* are directly related to cellular receptor binding and can neutralize viral infection by inhibiting attachment, while antibodies to VP5* prevent membrane penetration . However, the current study revealed L388F in region 5-1 in strain GS2265, which has not been reported previously in P[4] genotypes. The LLR vaccine strain produced in China is G10P[15] , showing significant amino acid differences from the G2P[4] strain. LLR3 was launched in China in 2023, and its sequence is not yet publicly available; therefore, further amino acid-level comparison between the Chinese G2P[4] strains and LLR3 was not performed. Nonetheless, LLR3 contains strains of G2, G3, and G4 , and the inclusion of the G2 genotype is hypothesized to enhance protection against the G2P[4] genotype.
In conclusion, the 13 RVA strains examined in this study belong to the G2P[4] genotype, which is prevalent worldwide. G2P[4] genotype RVA in the Chinese population has recently undergone reassortment of certain gene segments with Wa-like strains, forming new reassortant strains. The RotaTeq RV vaccine may have a superior preventive effect to Rotarix and LLR. LLR3 provides homotypic protection against G2 and may therefore more effectively protect against the G2P[4] genotype.
The role of generative artificial intelligence in psychiatric education– a scoping review
df4ef699-9bd6-4022-8ca2-015a559a4f81
11938615
Psychiatry[mh]
Generative artificial intelligence (GenAI) emulates human creativity and intelligence in the form of texts, images, videos, codes, and other modalities. According to Samala et al. (2024), it offers advantages such as cost-effectiveness, multilingual support, and efficiency . It is important that educators learn to use GenAI to improve education by streamlining the generation of educational resources and by creating creative lesson plans, case-based scenarios, and assessments that deepen learners’ cognitive processes. The need for improved psychiatric education has become increasingly evident as mental health issues continue to rise globally. This rise is attributed to various factors, the most significant being the COVID-19 pandemic, which triggered a 25% increase in the prevalence of anxiety and depression . In response, some countries, such as Singapore, have extended mental health education to primary care physicians, underscoring the need to place greater emphasis on psychiatric education . However, current psychiatric education faces several challenges, including inadequate exposure to diverse patient experiences and limited resources for comprehensive training . The introduction of GenAI may bridge these gaps and better prepare medical students, primary care physicians, and practitioners from other disciplines who are eager to pursue formal psychiatric education for future encounters with patients experiencing mental health-related issues. GenAI applications in medicine can be categorised into two groups: clinical use and educational use. The clinical application of GenAI has been integrated into disease detection, diagnosis, and screening across various fields, such as radiology, cardiology, and gastrointestinal medicine . GenAI has shown promising results in medical education in several areas, including self-directed learning and simulation . In psychiatry, studies on the utility of GenAI primarily focus on clinical applications rather than educational purposes, such as its potential to provide diagnostic assistance, treatment considerations, and enhanced access to mental health support . However, the question of whether GenAI can effectively support psychiatric education, given the unique nature of the field, has not been thoroughly addressed. The skills required of a psychiatrist place a greater emphasis on soft interpersonal skills than on procedural skills, marking a significant difference from other specialities such as surgery, radiology, and endocrinology. Psychiatrists must not only be familiar with the diagnostic criteria and able to prescribe appropriate medications, but also master interviewing techniques and psychotherapy comprehensively while grasping phenomenology and patients’ subjective experiences to formulate effective treatment plans . Many elements of psychiatric practice rely on soft skills, including conducting a Mental State Examination, suicide risk assessment, motivational interviewing, and Cognitive Behavioural Therapy. Soft skills are often more challenging to teach and evaluate than technical skills, underscoring the distinctive nature of psychiatric education . This indicates that the application of GenAI in psychiatric education may differ significantly from its use in other specialities; prior studies on GenAI in medical education broadly may not be directly applicable to psychiatry.
Moreover, there is a lack of standardised guidelines regarding the use of GenAI in psychiatric education and the management of sensitive patient information and data privacy. Furthermore, GenAI may find it challenging to replicate the nuanced clinical judgement inherent in psychiatry, which heightens concerns about its accuracy. Evidence regarding the effectiveness of GenAI in enhancing psychiatric education is also limited. By conducting a scoping review, we aim to explore our research topic by identifying GenAI’s educational aspects, the benefits and risks associated with its use in psychiatric education, and the need for future research in specific areas. We conducted a review, limited to English publications from four databases, to identify GenAI’s role in psychiatric education according to the educational framework proposed by the World Psychiatric Association-Asian Journal of Psychiatry Commission . The scoping review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines . Our findings are presented in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) checklist . A literature search in the PubMed, PsycINFO, and Embase databases was performed on 12 September 2024, followed by a search in Web of Science on 16 February 2025. A fourth database was added due to the limited number of eligible papers available for review from the first three databases. We employed the following search strategy: (“Artificial intelligence” OR “Computer reasoning” OR “Machine intelligence” OR “Machine learning” OR “Deep learning” OR “Foundation model” OR “ChatGPT” OR “Generative AI”) AND (“Mental health” OR “Psych*” OR “Psychiatric”) AND (“Education” OR “Educational” OR “Training” OR “Learning” OR “Teaching”). We limited our search to English publications containing the keywords from the search strategy. The publication years for the identified papers range from 1933 to 2024. Inclusion criteria Original research discussing the use of GenAI or ChatGPT in medical education was selected for review. Only original articles in English were included. Exclusion criteria We excluded papers discussing the clinical use of GenAI, public/patient mental health education, technology in general, virtual reality, and augmented reality, as well as papers addressing a specific field of medical education that is not related to psychiatry (e.g., oncology, surgery), nursing, psychology, and the perception of GenAI. We also excluded conference papers, preprints, editorials, and other non-original research. Study selection process Search results from the databases were uploaded to EndNote. Duplicates were removed, followed by title and abstract screening using the inclusion and exclusion criteria. LQY and MC conducted the initial screening independently. Studies deemed eligible were downloaded, and full-text screening was carried out by LQY and MC. Any disagreements were resolved by consulting a third senior reviewer (CWO and CSH). Data extraction and analysis Details of the reviewed papers, such as authors, year of publication, type of GenAI, methodology, outcome measure, and key findings, were charted in a table by LQY and MC (refer to Table ). Through thematic analysis, the role of GenAI in psychiatric education was grouped into four themes, and evidence synthesis was performed to achieve the aim of this study. The senior authors checked the tabulation of data, themes, and syntheses.
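For reproducibility, a boolean strategy of this kind can also be run programmatically. The Python sketch below submits the stated query to PubMed via the NCBI E-utilities esearch endpoint; it is a minimal illustration covering only one of the four databases, with a simple English-language filter, and is not the retrieval workflow the authors describe.

# Sketch: run the review's boolean search against PubMed via NCBI E-utilities.
# Illustrative only; the review searched four databases, not just PubMed.
import requests

QUERY = (
    '("Artificial intelligence" OR "Computer reasoning" OR "Machine intelligence" '
    'OR "Machine learning" OR "Deep learning" OR "Foundation model" OR "ChatGPT" '
    'OR "Generative AI") AND ("Mental health" OR "Psych*" OR "Psychiatric") '
    'AND ("Education" OR "Educational" OR "Training" OR "Learning" OR "Teaching") '
    'AND English[Language]'
)

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": QUERY, "retmode": "json", "retmax": 100},
    timeout=30,
)
result = resp.json()["esearchresult"]
print(result["count"], "records; first PMIDs:", result["idlist"][:5])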
We identified 12,594 papers, of which 118 were duplicates. After title and abstract screening, 12,439 papers were excluded. Thirty-seven papers were reviewed in full text, and 32 were excluded because they did not meet the inclusion criteria (refer to Fig. ). The remaining five papers, which discussed the use of GenAI in medical education, were selected for review. The types of GenAI used include ChatGPT (3.5 and 4), Claude 3, and Llama 3. All papers addressed the use of ChatGPT. Most examined the differences between content generated by GenAI and that produced by traditional handwritten or expert-written sources. The five papers explored four roles that GenAI can fulfil in medical education: case-based learning, simulation, content synthesis, and assessments [ , – ]. Case-based learning Two papers discussed leveraging GenAI to create case vignettes for case-based learning . In a study by Coşkun et al. (2024), a randomised controlled trial was conducted to compare the quality of ChatGPT-synthesised vignettes with those written by humans. There was no significant difference in quality between the two types of vignettes. The scores suggested that vignettes generated by ChatGPT may promote higher utilisation of clinical reasoning skills among students compared to those created by humans. Furthermore, the study by Smith et al. (2023) highlighted the efficiency and variety of ChatGPT-generated case vignettes. Educators can modify these vignettes to teach various learning outcomes, such as the diagnostic process, treatment, determining whether a psychopharmacological therapy is necessary, or the ethics surrounding the case. For instance, adjusting the prompt to exclude suggestions for treatment plans allows students to discuss and describe the factors to consider when prescribing a treatment . The parameters of the case can also be varied, including its difficulty and complexity, offering flexibility in the creation of course materials. Psychiatric disorders often present a complicated array of symptoms, and a diverse range of case vignettes could better prepare medical students to diagnose and treat patients.
Other advantages include addressing ethical concerns associated with utilising real case vignettes and the capacity to produce case vignettes in various languages . Real case vignettes must undergo rigorous scrutiny and documentation to guarantee informed consent and maintain patient confidentiality. This is particularly challenging in psychiatry, as patients may lack the mental capacity to consent, and ensuring anonymity can be problematic . GenAI-generated vignettes do not face the same issues. Simulation ChatGPT is recognised for its ability to conduct simulations by adopting character roles and providing real-time responses based on input . One study by Smith et al. (2023) briefly noted that ChatGPT could simulate a patient, facilitating interactions with students to practise their clinical skills or their ability to identify risk factors . A previous review indicated that simulation in psychiatry effectively enhances students’ competencies in performing psychiatric risk assessments on patients . However, few studies address the methods and effectiveness of GenAI in patient simulation within psychiatric education, which would facilitate its implementation. Content synthesis and summary ChatGPT can streamline the content synthesis process, enhancing efficiency while upholding academic standards . It has been shown to provide accurate medical information and simplified summaries of complex research . Specifically, one paper discusses using GenAI to create illness scripts for educational purposes . An illness script is a specific format representing patient-oriented clinical knowledge containing valuable information. It is generally dynamic, depending on the physician’s requirements, but it can also be standardised for medical education. Illness scripts can teach medical students clinical reasoning skills, thereby improving diagnostic accuracy. In the study by Yanagita et al. (2024), 84% of the 184 illness scripts demonstrated relatively high accuracy. Assessment Three papers explored the application of GenAI to develop assessment tools for medical students [ – ]. Coşkun et al. (2024) discussed the quality of ChatGPT-generated multiple-choice questions (MCQs). Of the 15 questions generated, six met the criteria and were deemed effective. The quality of MCQs can be further refined by using more complex prompts, including factors such as learner type, competency level, and difficulty level . In addition to generating MCQs, two papers discussed the creation of the Script Concordance Test (SCT) using Large Language Models (LLMs) . The SCT is designed to refine clinical reasoning and decision-making in uncertain clinical situations. Developing an SCT is both challenging and complex; therefore, employing GenAI tools such as LLMs can help expedite the development process . Hudon et al. (2024) examined the application of ChatGPT-generated SCTs in psychiatry for undergraduate medical education and demonstrated no significant difference between ChatGPT-generated SCTs and those created by experts in terms of the scenario, clinical questions, and expert opinions. With the appropriate target group, a relevant focus of the question, the clinical problem, and guidelines to follow, an SCT for psychiatric education can be easily developed. This can be conveniently achieved using the “Script Concordance Test Generator,” a custom GPT designed for SCT generation .
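Before turning to limitations, the simulated-patient use noted under Simulation above can also be made concrete. No reviewed study specifies an implementation, so the following is a minimal sketch of one plausible multi-turn role-play loop, again assuming the OpenAI Python client; the persona, risk factors, and model name are invented for illustration.

```python
# Sketch: a console role-play loop in which the model plays a psychiatric
# patient and the student interviews it. Persona details are hypothetical.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are role-playing a 24-year-old patient with two weeks of low mood, "
    "insomnia and passive suicidal ideation. Answer only as the patient, in the "
    "first person. Disclose risk factors (recent job loss, family history of "
    "depression) only when the student asks about them directly."
)

messages = [{"role": "system", "content": PERSONA}]
while True:
    question = input("Student: ")
    if question.strip().lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep the full history
    print(f"Patient: {answer}")
```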
Overall limitations of GenAI There are overarching concerns regarding inaccuracies, bias, and a lack of control over generated content . Moreover, using GenAI for simulation poses the risk of sharing sensitive or personal data, thereby raising security and privacy issues . ChatGPT may also display grammatical errors in certain languages, exhibit biases against minorities, experience hallucination effects, show a lack of replicability, possess limited awareness of recent events, and may eventually adopt a paywall, leading to inequality . Moreover, GenAI-generated illness scripts for psychiatric disorders received the highest number of “C” ratings, comprising 45.5% of psychiatric scripts . These scripts present generic information, such as “diagnosis based primarily on clinical interview and symptom criteria”, instead of outlining the specific steps involved . This issue may arise from the limited character count; with a greater character count, more details could be covered, which matters particularly given the wide variety of psychiatric symptoms. Another limitation is that the SCTs GenAI generated were too simple . Well-designed, more complex prompts can improve the quality of SCTs, and subject matter experts can make minor adjustments. Appropriate guidelines are still needed to leverage GenAI, as the content generated may not meet the standard required for educational use. For example, “none of the above” is discouraged by test development guidelines for MCQs, yet it was included as an option in one of the generated questions .
Despite the limited number of papers, GenAI has demonstrated its potential role in psychiatric education. While the role of GenAI is extensively discussed in other specialities and for clinical applications, there is minimal analysis regarding its use in psychiatric education. The intricate nature of psychiatry may be one factor contributing to this lack of exploration . In this section, we use an established psychiatric education framework to analyse the applicability of GenAI in psychiatric education. This review demonstrates favourable evidence that GenAI, such as ChatGPT, supports psychiatric education through case-based learning, simulation, content synthesis, and assessment. To explore the potential of GenAI in psychiatric education, we compare our findings with the new training framework established by the World Psychiatric Association-Asian Journal of Psychiatry Commission. This framework is based on the Canadian Medical Education Directives for Specialists (CanMEDS) developed by the Royal College of Physicians and Surgeons of Canada in response to the evolving landscape of psychiatry . CanMEDS is applicable to various disciplines, including psychiatric education . It comprises seven competencies: communicator, collaborator, leader, health advocate, scholar, professional, and medical expert (see Fig. ). The World Psychiatric Association-Asian Journal of Psychiatry Commission paper discussed specific requirements and recommendations necessary to achieve these seven competencies given psychiatry’s increasingly complex and uncertain nature (see Table ). By analysing how GenAI can contribute to this framework, we can therefore provide improved solutions for delivering quality psychiatric education that meets modern needs. In the following sections, we explore how the use of GenAI in case-based learning, simulation, content synthesis, and assessment contributes to shaping the seven competencies of CanMEDS. Case-based learning Generative AI opens up opportunities for creating various case vignettes. The effectiveness of case-based learning in psychiatric training has been highlighted in previous research . An efficient method for generating case vignettes can contribute to the development of roles such as medical expert, communicator, collaborator, leader, scholar, and professional.
Moreover, case vignettes can be adapted to teach the use of diagnostic tools and criteria while practising within legal frameworks and safety protocols. Most importantly, the ability to vary case vignettes can train medical students to handle situations involving uncertainty or dilemmas. This is crucial in today’s world, where mental health issues are complicated by contemporary factors such as social media influences and non-evidence-based self-diagnostic tools found online . In this context, GenAI can potentially support the role of a medical expert. Understanding how to respond to cases involving dilemmas also enhances the roles of communicator, leader, and professional. Furthermore, GenAI can be prompted to create cases that necessitate interdisciplinary collaboration (e.g., a combination of mental and physical illnesses, or a scenario where the involvement of a financial adviser or social worker is essential). This fosters medical students’ development of collaborative skills. During discussions of the cases, medical students can explore how to integrate evidence-based knowledge alongside patients’ values and preferences, a competency expected of a scholar. Simulation In addition to case-based learning, applying what students learn in practice is essential in psychiatry. Compared with other specialities, communication in psychiatry must be extremely precise and sensitive to patients. By having GenAI simulate patient dialogues, students can practise the communication frameworks they have learnt and learn to apply these techniques flexibly, such as motivational interviewing for addiction. This aligns with the roles of medical expert and communicator. Students can also learn to maintain professional boundaries by carefully selecting their words during conversations with the simulator, aiding their development into professionals. Through simulation, students can better convey information regarding treatments, apply diagnostic assessments, and gather comprehensive patient histories. However, Dave (2012) highlighted several concerning limitations associated with implementing simulated patients in psychiatric education. Given that mental illnesses are often difficult to understand, it is challenging to train simulated patients to accurately portray the complexities of psychiatric conditions . Additionally, actors may introduce and act upon their prejudices towards mental illness. Cost is also a significant concern. Interestingly, some studies discuss the use of GenAI-based 2D or 3D avatars to enhance patient encounters in other specialities . GenAI-based simulators could assist in overcoming these challenges, provided there are no inherent biases or paywalls. Further research into their application in psychiatric education is warranted. Content synthesis and summary GenAI can synthesise illness scripts that enable students to grasp essential information regarding various diseases, encouraging medical students to embody the roles of medical expert and scholar. However, given the complexities of psychiatric illnesses, further studies are necessary to enhance the quality and examine the effectiveness of GenAI-created illness scripts in psychiatric education. Furthermore, GenAI can also promote lifelong learning, provided graduated healthcare workers are granted free access to it, or no paywalls are implemented in the future. This allows them to obtain information as and when it becomes available.
However, as highlighted in the results section, GenAI has its inaccuracies, and users need to remain cautious when relying on it. Assessment In the results section, we discussed using GenAI to generate MCQs and SCTs. These can serve either as summative assessments or as self-quizzes for students preparing for an assessment. Most medical education exams, at least in Singapore, are conducted as MCQs. Thus, students can practise applying knowledge by generating and completing MCQs for self-preparation. However, this holds only if GenAI tools capable of generating quality MCQs remain free of paywalls; otherwise, inequality between income groups could widen. The use of SCTs in psychiatry has been studied, and the feasibility of evaluating clinical reasoning has been shown . SCTs can be adapted to assess whether students fulfil the roles of CanMEDS. They have the potential to assess psychiatry clinical competencies, such as understanding diagnostic frameworks and clinical assessment tools, dealing with uncertainty, and practising evidence-based medicine. These competencies are required of a medical expert, leader, and professional. In both MCQs and SCTs, GenAI can easily generate a diverse range of questions. These questions can incorporate the socio-economic or racial backgrounds of the patient, allowing for the assessment of the student’s objectivity and training them to remain non-judgemental. This could enhance the student’s role as a health advocate, helping to reduce stigma for patients, particularly those from minority groups. Incorporation of GenAI in psychiatric education The different applications may work and integrate together: for example, GenAI-created case vignettes can be used as a prompt to generate video simulations, and GenAI can assess students’ answers to GenAI-created questions. In addition to the four applications discussed in this study, other applications can be explored, such as using GenAI to translate content into various languages, removing language barriers, and promoting access to psychiatric education resources in more countries, contributing to global mental healthcare . However, implementing GenAI in psychiatric education may present some challenges. Educators might hesitate to adopt GenAI since the current approach remains traditional and predominantly face-to-face. They may worry about the potential loss of warmth, empathy, and personal interactions from using GenAI. Many educators and clinicians are not yet trained to use GenAI tools for psychiatric education. There may also be scepticism about whether GenAI can enhance traditional case-based discussions, psychotherapy training, or diagnostic reasoning exercises. Addressing the risks of GenAI GenAI can produce hallucinations: the generation of factually inaccurate information . AI hallucinations occur when AI creates seemingly realistic but entirely fabricated content that may be illogical or incorrect . Reasons for AI hallucinations include insufficient diversity in training data and biases rooted in certain background traits. GenAI-generated content, such as illness scripts, may lack accuracy . Students who study these illness scripts without expert revisions risk grasping incorrect medical concepts, which could lead to poor medical decisions in the future. Similarly, Coşkun et al.
(2024) highlighted that inaccurate information was identified in GenAI-generated clinical vignettes and MCQs, posing the risk of disseminating incorrect information to students . As GenAI relies heavily on training data to generate outputs, assessment questions and vignettes produced by ChatGPT may follow a predictable pattern. This might result in a limited variety of exam questions, failing to encapsulate the sophistication of psychiatric education . Privacy is another significant concern with GenAI. In this context, however, GenAI can mitigate patient-privacy issues by removing the need for real case vignettes in case-based learning and SCT generation. Nevertheless, there is a risk of question banks being leaked to medical students: the outputs (e.g., generated examination questions) of GenAI may be stored in the AI system, raising the possibility of the questions being leaked to students using the same system . While GenAI presents several risks, including ethical concerns and inaccuracies, these issues can be effectively managed through specific recommendations. Further research should focus on establishing clearer guidelines for GenAI usage in psychiatric education founded on ethical principles. Additionally, the potential for bias in GenAI could be alleviated by training it with more comprehensive datasets; the data utilised must adhere to data protection laws. Furthermore, experts should conduct a manual review to evaluate the accuracy and relevance of GenAI-generated content. Technologies such as federated learning and blockchain can be explored as potential solutions to the issue of question leaks in psychiatric education assessments. Limitations of study Here, we analyse the quality of the reviewed studies and identify the strengths and limitations of each: The study by Smith et al. (2023) examined the various applications of ChatGPT in depth; however, it lacked a definitive methodology for assessing its effectiveness in social psychiatry. The study by Coşkun et al. (2024) is a randomised controlled experiment that employs strong methodology and psychometric evidence to justify ChatGPT’s potential in generating clinical vignettes and MCQs for assessment; however, it does not directly address psychiatric education. Kıyak et al. (2024) examined various types of GenAI beyond ChatGPT and proposed specific prompts for generating SCT items, thereby justifying the potential to streamline the creation of complex educational materials; however, this study did not assess the effectiveness of GenAI-generated SCTs in improving psychiatric educational outcomes. Hudon et al. (2024) employed a methodology designed to avoid biased results, and a considerable number of clinician-educators and resident doctors evaluated the effectiveness of ChatGPT in psychiatric education; future studies may consider adopting their framework to assess the effectiveness of GenAI in psychiatric education in other ways. Yanagita et al. (2024) analysed a considerable number of ChatGPT-generated illness scripts; however, only three physicians reviewed their quality, amplifying the issue of subjectivity. Considering the limitations of existing studies, future research could employ more quantitative measures to assess GenAI’s effectiveness on student outcomes, explore different types of language models, and involve larger sample sizes.
The risk of publication bias in selecting articles was minimised during the screening process through independent assessments and third-party opinions. However, the literature search yielded only two studies that directly addressed psychiatric education, while the remaining three focused more generally on medical education, leading to generalisations from medical education to psychiatry. The small number of relevant studies restricts the generalisability of our findings and discussion. The reviewed papers did not permit quantitative analysis and were not comparable; the absence of a quantitative, comparative analysis for drawing conclusions is a limitation of our study. Nonetheless, this underscores the need for further research in this area. Comparison with prior work To our knowledge, no paper has discussed the use of GenAI in psychiatric education. Prior studies mainly focused on the use of GenAI in clinical psychiatry or medical education in general but did not discuss its suitability for psychiatric education. There are several reasons why psychiatric education has received less attention regarding the incorporation of GenAI. Firstly, the skills a psychiatrist must acquire are highly humanistic, emphasising the doctor-patient relationship . Employing GenAI, essentially a non-human entity, to teach psychiatry seems an unusual approach at first glance, making it a rarely discussed topic. In contrast, in other specialities such as radiology, GenAI can directly assist with technical skills important to the field, such as generating images of pathological findings (for example, x-ray imaging and skin lesions) as training materials . This is not applicable in psychiatry, as the diagnosis and management of mental health disorders can be subjective and cannot be easily determined by observing images. Secondly, since soft skills are the core competencies required of a psychiatrist, it is essential to evaluate students’ performance based on these skills. GenAI may not assess this accurately, as it fundamentally lacks a deep understanding of empathy and emotional states . In other fields, however, GenAI can be effectively utilised to assess performance and provide appropriate feedback; for instance, the OpenAI GPT-4 Turbo API could review revisions of radiology reports made by trainees and generate relevant educational feedback .
Our scoping review showed that Generative AI has potential in psychiatric education. GenAI can complement traditional pedagogies, gearing psychiatric education toward achieving the goals of CanMEDS. This suggests that GenAI can cater to the unique nature of psychiatry. Nevertheless, this area remains largely unexplored. Limitations such as content accuracy, privacy, and ethical concerns must be addressed, and further research and safeguards should be established before implementing GenAI. Future studies should address these limitations, propose mitigating strategies, and evaluate GenAI’s effectiveness on educational outcomes and how such outcomes contribute to students’ performance in clinical practice. There is a need to engage more researchers in studying the use of GenAI in psychiatric education, to discover methods that better suit the nature of psychiatry, and to encourage educators to be more receptive to participating in the research and implementation of GenAI in psychiatric education. Comprehensive studies on the cost-benefit of implementation should also be conducted, assessing benefits (e.g., student outcomes and educator efficiency) against costs (e.g., expenses of integrating GenAI and of addressing potential ethical concerns). With further studies, potential breakthroughs in psychiatric education may be realised.
Experiences with a national team-based learning program for advance care planning in pediatric palliative care
5a6c1a2a-f43a-4bba-8089-3b2e3e8754f5
11297680
Pediatrics[mh]
In the Netherlands, around 1100 children (0–20 years) die each year from an underlying disease or other cause . About 10,000 children with chronic conditions receive hospital and home care over a period of many years; of these, 5000–7000 children and their families are eligible for palliative care . Starting in 2012, pediatric palliative care teams (PPCTs) were developed in the seven university hospitals and in the specialized Center for Pediatric Oncology, providing integrated children’s palliative care regardless of where the child is staying . In order to offer family-centered care, these PPCTs have gradually given more attention to Advance Care Planning (ACP). ACP is a process that enables patients and relatives to identify and discuss values, goals and preferences for future medical treatment and care . To support children, their parents and healthcare professionals in ACP, the Implementing Pediatric Advance Care Planning Toolkit (IMPACT) was developed in the Netherlands in 2019 . For children with life-limiting or life-threatening diseases, there is often not one “best” approach in terms of care and treatment. Parents aim for integrated care that includes both control of the disease and symptom management, as well as quality of life for their seriously ill child and their family as a whole . Children and adolescents prefer to live their life as normally as possible. Therefore, especially in pediatric palliative care, exploring and aligning with child and family values, goals and preferences is essential. IMPACT offers a structured and concrete approach that encourages healthcare professionals to explore the perspectives of children with a life-limiting or life-threatening disease and their parents in the physical, psychological, social and spiritual domains, in the present and towards the future, and to formulate values, goals and preferences for future care and treatment. Studies show that the IMPACT approach contributes to patient-centered care and supports the process of shared decision-making . To facilitate implementation of IMPACT in pediatric palliative care, a two-day IMPACT training for healthcare professionals was developed. In this training, professionals learn how to conduct an actual ACP conversation based on IMPACT and practice communication skills in addition to using the online IMPACT materials . The training consists of lectures on the concept of ACP in pediatric palliative care and hands-on communication training through role plays guided by skilled trainers and actors . Most professionals in pediatric palliative care acknowledge the importance of ACP, but many barriers to conducting ACP conversations with parents and children still exist . The transfer of knowledge and skills from a training context to clinical practice is known to be challenging and depends on several factors, such as the level of learner motivation, engagement and prior expertise . Little is known about effective strategies for training communication skills . In preliminary evaluations, healthcare professionals who participated in the two-day IMPACT training indicated that they struggle to apply the learned ACP communication skills in their daily practice while preparing and conducting ACP conversations. They also experience difficulties transferring their acquired knowledge on ACP to their colleagues.
Therefore, in this implementation project, a team-based learning program, consisting of a train-the-trainer course and coaching-on-the-job sessions, was developed and evaluated for its potential contribution to a sustainable implementation and dissemination of IMPACT in pediatric palliative care. Kirkpatrick’s four-level model for the evaluation of training programs was used to evaluate our program . This model focuses on the evaluation of different levels of transfer of knowledge: level 1: Reaction; level 2: Learning; level 3: Behavior; and level 4: Results. The aim of this project was to explore the participating healthcare professionals’ experiences with this team-based learning program and to evaluate the achieved level of transfer of knowledge and practical use of IMPACT in ACP in pediatric palliative care after the introduction of this program. Study design and setting We conducted an implementation study using a mixed-methods design, including (open-ended) questionnaires and field notes, to evaluate how the team-based training program affected the participants’ experiences with ACP and what level of transfer of knowledge and practical use of IMPACT in pediatric palliative care the introduction of this program achieved. All eight Dutch pediatric palliative care teams (PPCTs), related to the seven university hospitals and the national Center for Pediatric Oncology, were invited to participate in this project. A PPCT is a multidisciplinary team consisting of medical, nursing, child life, psychosocial and spiritual specialists that supports children with life-limiting or life-threatening illnesses and their families . Study population Participants Healthcare professionals from each participating PPCT were selected for either the role of ‘facilitator’ or ‘learner’. Facilitators followed the newly developed one-day train-the-trainer course and were asked to transfer their course-acquired knowledge to their team members (learners) by organizing and conducting two coaching-on-the-job sessions. Facilitators were defined as: a) physicians, nurses or nurse practitioners working in a PPCT; b) who had completed the two-day IMPACT training; c) who conducted ACP conversations in the context of their work; and d) who were willing to participate in the one-day train-the-trainer IMPACT course to be able to lead local coaching-on-the-job sessions. Learners were defined as: a) healthcare professionals working in pediatric palliative care; b) involved in ACP conversations in the context of their work; and c) willing to be trained by a facilitator in ACP communication skills through participation in a coaching-on-the-job session. Recruitment Facilitators were invited by an open e-mail from the research team to all eight PPCTs, which also invited them to a kick-off meeting for this project. Learners were invited by facilitators of the eight participating PPCTs to participate in local coaching-on-the-job sessions between October and December 2022. The intervention and evaluation measures Implementing ACP in palliative care requires a behavior change among professionals . Several authors argue that studies on behavior change interventions in healthcare should focus on the use of diverse relevant theories to support complex real-life interventions in practice and their outcomes in healthcare . We developed our team-based learning program and the questionnaires prior to the official six-month study period. Both were based on insights from the IMPACT method and Kirkpatrick’s four-level model .
The team-based learning program consisted of two elements: 1) a one-day ‘train-the-trainer’ course for facilitators and 2) a coaching-on-the-job program, led by facilitators, for training on the use of IMPACT and reflection on actual ACP conversations in a team context in each PPCT. The existing IMPACT materials and training formed the backbone of the program . A detailed description of the intervention is presented in Supplemental file 1. In order to keep the basic process in the coaching-on-the-job sessions similar for all PPCTs, facilitators used a standard presentation format with information about IMPACT and ACP for their introduction, as well as other teaching materials provided to them by the IMPACT team. The level of transfer of training content to the participants’ own context was used to evaluate our program . Transfer refers to the targeted utilization of training-acquired knowledge and ACP communication skills by professionals in their clinical practice. We used Kirkpatrick’s four-level model of assessing training effectiveness , a widely used model, first described in 1959, for evaluating training programs. The model focuses on the evaluation of: 1. Reactions: measures how participants have reacted to training activities in the team-based learning program; 2. Learning: measures what participants have learned from the train-the-trainer course or coaching-on-the-job session; 3. Behavior: measures whether what was learned is being applied on the job, i.e., the transfer of knowledge and skills to the workplace; and 4. Results: measures the occurrence of targeted outcomes. In this study, level 4 refers to the actual number of organized coaching-on-the-job sessions in PPCTs and self-reported individual results regarding practicing with and reflecting on ACP conversations in a team context. Data collection Data were collected by questionnaires and field notes. Facilitators received in total a maximum of four questionnaires during the study period: one questionnaire following the train-the-trainer course, one after each coaching-on-the-job session and one at the end of the study period. Learners also received a maximum of four questionnaires: one at the start of the study period, one after each coaching-on-the-job session they participated in, and one at the end of the study period. Furthermore, during the study period, the researcher (ME) had close contact with the facilitators and repeatedly asked them about their intentions and the actions taken to transfer acquired knowledge and skills in ACP to colleagues. If scheduling allowed, ME attended a planned coaching-on-the-job session in the role of observer. Field notes were made on all communication (via mail, phone, in person) with facilitators or learners, including the six coaching-on-the-job sessions ME attended . A time schedule of enrolment, intervention and data collection is presented in Supplemental file 2, Table . Facilitators and learners who participated in the study were assigned a study number. Data were collected in a cloud-based clinical data management system (Castor Electronic Data Capture (EDC)) through invitation emails with a personal link, in order to link completed questionnaire(s) to the corresponding study number. For each uncompleted questionnaire, reminders were sent one and two weeks after the initial invitation. Questionnaires for facilitators Questionnaires were developed based on existing literature and expert validation .
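The reminder schedule described above (reminders one and two weeks after an unanswered invitation) is simple enough to express in a few lines. The following is a minimal generic sketch of that logic; it does not use Castor EDC's actual tooling, and the record structure is a hypothetical placeholder.

```python
# Sketch: the questionnaire reminder logic described above (reminders one and
# two weeks after the initial invitation, only while the questionnaire is
# uncompleted). Generic illustration only; not Castor EDC's API.
from datetime import date, timedelta

REMINDER_OFFSETS = (timedelta(weeks=1), timedelta(weeks=2))

def reminders_due(invited_on: date, completed: bool, today: date) -> list[date]:
    """Return the reminder dates that have come due for an uncompleted questionnaire."""
    if completed:
        return []
    return [invited_on + off for off in REMINDER_OFFSETS if invited_on + off <= today]

# Hypothetical example: invited 1 Oct 2022, still uncompleted on 20 Oct 2022
print(reminders_due(date(2022, 10, 1), completed=False, today=date(2022, 10, 20)))
# -> [datetime.date(2022, 10, 8), datetime.date(2022, 10, 15)]
```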
The first part of the first questionnaire covered background characteristics. The questionnaire further focused on (i) the professional’s evaluation of the train-the-trainer course, (ii) acquired knowledge and skills in ACP and in team-based learning, and (iii) behavior in the context of conducting ACP conversations and intentions to get started with the coaching-on-the-job activities. A second and a third questionnaire were sent to facilitators who had organized and conducted a first or second coaching-on-the-job session, respectively. These questionnaires focused on (i) a brief reflection on the train-the-trainer course, (ii) the extent to which the facilitator had taken on the facilitator role, and (iii) further support needed to transfer acquired knowledge and ACP skills to colleagues. The fourth questionnaire was sent to all 18 facilitators and focused on (i) experiences with the full trajectory of the train-the-trainer course from September 2022 till January 2023, (ii) the extent to which the facilitators had acted according to the plan of action prepared at the train-the-trainer course, and (iii) the extent to which the facilitator planned to continue practicing ACP conversations in teams after the study period. An English translation of the questionnaires for facilitators is presented in Supplemental file 3. Questionnaires for learners The first questionnaire was sent to colleague healthcare professionals suggested by the facilitators. The first part of this questionnaire included questions on the learners’ background characteristics. The questionnaire further focused on (i) attitudes and beliefs towards ACP and (ii) behavior in the context of conducting ACP conversations, including the proportion of the families under the PPCT’s care with whom the learner raises the possibility of an ACP conversation. A second and a third questionnaire were sent to learners after they had participated in a first or second coaching-on-the-job session in their team. These questionnaires focused on (i) an evaluation of the attended coaching-on-the-job session, (ii) acquired knowledge and skills for ACP conversations, and (iii) the significance of practicing ACP for their self-efficacy regarding ACP conversations. Learners who had not previously completed a questionnaire were first asked questions about their background characteristics and experience with ACP conversations. The fourth questionnaire was sent to all learners who had participated in at least one coaching-on-the-job session and had completed at least one previous questionnaire. This last questionnaire focused on (i) acquired knowledge and skills for ACP conversations, (ii) changed behavior when conducting ACP conversations in daily practice as a result of practicing, and (iii) the extent to which the learner wanted to continue practicing ACP conversations in a team setting after the study period. An English translation of the questionnaires for learners is presented in Supplemental file 4. Data analysis Data were analyzed using IBM SPSS Statistics (version 26). The results are mainly presented as descriptive statistics. Where relevant, answers to open questions in the questionnaires were exported from SPSS to Word and ordered in tables. Subsequently, following qualitative data analysis methods, answers were coded and thematically categorized, and for each open question a summary of the answers was written . These codes and summaries were checked and validated by the research team. A similar analysis was performed on the field notes.
Findings from the field notes were used to deepen the questionnaire results by adding specific information to identified themes where relevant, e.g., information on facilitators of and barriers to organizing the coaching-on-the-job sessions, as expressed by facilitators outside the questionnaires in their contacts with the researcher .
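As an illustration of the tallying step in the qualitative analysis described above, the following is a minimal sketch in pandas. The authors worked in SPSS and Word tables, so this is only an analogous illustration; the file name, column names, and codes are hypothetical placeholders.

```python
# Sketch: counting how many participants contributed to each thematic code,
# mirroring the coding-and-categorizing step described above. Illustrative
# stand-in for the SPSS/Word workflow; all names are hypothetical.
import pandas as pd

# Expected columns: study_id, question, answer, code (assigned during coding)
answers = pd.read_csv("questionnaire_open_answers.csv")

theme_counts = (
    answers.groupby("code")["study_id"]
    .nunique()                      # participants per thematic category
    .sort_values(ascending=False)
    .rename("n_participants")
)
print(theme_counts)
```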
Participants and training
Eighteen facilitators participated in the study. All attended the one-day 'train-the-trainer' course (see Supplemental file 1). Facilitators recruited learners, of whom 29 participated in a first and 17 in a second local coaching-on-the-job session; nine learners participated in both sessions. An overview of the response rates is presented in Table .

Facilitator characteristics
Of all 18 facilitators, eight were (specialized) pediatricians, nine were (specialized) pediatric nurses or nurse practitioners, and one was a physician assistant. Nearly all cared for ten or more children with a life-limiting illness per year. Sixteen facilitators (88.9%) had previously completed the two-day training course on IMPACT core communication skills. Two of them had completed a similar course on communication skills for ACP conversations (Table ).

Learner characteristics
Most learners who participated in at least one coaching-on-the-job session were 40 years or older, female, had more than 10 years of working experience, and cared for 10 or more children with a life-limiting illness per year (Table ). Most had completed the two-day IMPACT training prior to participating in the coaching-on-the-job session(s).

Transfer of training content
For each of the four levels of Kirkpatrick's evaluation model, an overview of the most relevant answers to the closed and open questions in the questionnaires is presented for facilitators and learners, respectively.

Level 1: Assessment of training activities: participants evaluated the train-the-trainer course and coaching-on-the-job sessions positively
Both facilitators and learners evaluated the training activities in the team-based learning program (very) positively (see Supplemental file 5, Table S1). Some points for improvement were mentioned: some facilitators would have preferred a more precise indication of what was expected of them during the training program, as well as a longer study period, and some learners would have preferred more information in advance about the aim and content of the coaching-on-the-job session (Table ).

Level 2: Learning: participants learned to use the ACP communication skills and to reflect methodically on ACP conversations through the train-the-trainer course or coaching-on-the-job sessions
All facilitators (100%) shortly after the train-the-trainer course, and almost all facilitators (13 out of 14, 92.9%) at the end of the study, indicated that the information provided on ACP and ACP communication skills, as well as the method for methodical reflection on conducting an ACP conversation in a team context, was clear.
At the end of the study, most facilitators who filled in the last questionnaire also indicated that, in daily practice, they were sufficiently able to transfer the ACP communication skills to colleagues in their PPCT (9 out of 14, 64.3%) and that they could facilitate methodical reflection on conducting ACP conversations in a team context (8 out of 14, 57.1%) (see Supplemental file 5, Table S2). With regard to learners, at the end of the study 20 out of 21 (95.2%) (totally) agreed that the ACP communication skills were clear to them. Notably, the proportion of learners who reported feeling comfortable preparing parents for ACP decreased slightly, from 74.2% at the start of the study to 66.7% at the end, as did the proportion feeling comfortable conducting an ACP conversation with parents (see Supplemental file 5, Table S2).

Level 3: Behavior outcomes: participants applied the learned knowledge and ACP communication skills in their clinical setting
At the team level, one or two coaching-on-the-job sessions were organized in 7 PPCTs. The proportion of facilitators who indicated that, on their own initiative, they regularly reflected with one or more colleagues on preparing for or conducting an ACP conversation increased over time, from 8 out of 18 (44.4%) shortly after the train-the-trainer course to 10 out of 14 (71.4%) at the end of the study. Furthermore, the proportion of facilitators who indicated that they raised the possibility of an ACP conversation with half or more of the families to whom their PPCT provides care increased from 11 out of 18 (61.1%) shortly after the train-the-trainer course to 10 out of 14 (71.4%) at the end of the study (see Supplemental file 5, Table S3). Factors that helped in organizing a coaching-on-the-job session were: having two or three facilitators in a PPCT, consulting together and dividing tasks, and using existing organizational structures such as a preplanned multidisciplinary session. Barriers were lack of time due to team workload, the absence of a professional actor, doubts about one's own skills, knowledge and especially acting abilities, and reluctance among colleagues towards role-plays. After having participated in one or two coaching-on-the-job sessions, the proportion of learners who indicated that, on their own initiative, they regularly reflected with colleagues to prepare for an ACP conversation increased from 48.4% at the start of the study to 61.9% at the end. At the end of the study, for each ACP communication skill, 7 to 12 out of 21 learners (33.3% to 57.1%) indicated that they felt more confident in that skill. However, fewer learners, 5 to 7 per skill (23.8% to 33.3%), indicated that they had actually started to use the skill more in ACP conversations (see Supplemental file 5, Table S3).

Level 4: Results in PPCTs: facilitators transferred training content during coaching-on-the-job sessions, with half of the participating learners reporting (some) positive change in their attitude and self-confidence towards ACP conversations
At this level, the targeted outcomes of the team-based learning program were measured. At the team level, the facilitators of 7 out of 8 PPCTs organized a first coaching-on-the-job session in their team, attended by a total of 29 learners (range 1 to 7).
In 4 PPCTs a second coaching-on-the-job session was organized, attended by 17 learners (range 2 to 6). Overall, the coaching-on-the-job sessions had a mean of 4.2 learners per session and lasted an average of 85 min (range 30 to 120 min). In one PPCT no coaching-on-the-job session was organized. Both at the start and at the end of the study, facilitators and learners were unable to estimate the number of families involved in an ACP conversation by the PPCT during the past six months. At the individual level, half of the facilitators indicated at the end of the study that they had met their preset goal, i.e., they had organized and conducted two coaching-on-the-job sessions in their PPCT. In addition, at the end of the study, 11 out of 14 facilitators (78.6%) expected to continue to apply the skills learned for methodical reflection in their PPCT beyond the end of the research period, and 11 out of 14 (78.6%) had already scheduled another coaching-on-the-job session in their team or intended to do so (see Supplemental file 5, Table S4). Four facilitators mentioned as reasons for not having met their goals: 'I need more practicing', 'the planning of a session is difficult', 'in my PPCT, due to a high workload, there is little motivation for sessions' and 'I don't know'. Almost half of the learners indicated that the coaching-on-the-job session(s) they had attended changed something in their attitude and self-confidence towards ACP conversations conducted by themselves or by colleagues (see Supplemental file 5, Table S4). The positive change mentioned most frequently in answer to an open question was: 'I feel more confident in conducting an ACP conversation'. At the end of the study, 18 out of 21 learners (85.7%) indicated that they expected to continue to apply the ACP communication skills after the end of the research period, and 15 out of 21 (71.4%) strongly intended to participate in a subsequent session for practicing ACP conversations. The best valued elements of the whole team-based learning program and the elements for improvement mentioned by facilitators and learners are presented in Table .

Field notes show enthusiasm for the program and facilitators' need for (more) guidance
Field notes show the overall enthusiasm of most participants in the program; however, most facilitators needed one or more emails and telephone calls from the research team to encourage them to organize their first coaching-on-the-job session. Field notes reveal that facilitators sometimes struggled to identify colleagues eligible for a coaching-on-the-job session, because it was not very clear to them who was involved in ACP. Furthermore, for some PPCTs it worked well to use (part of) a regular team meeting for the coaching-on-the-job session. For other PPCTs this meant that, if the regular patient briefing was compromised, facilitators found it difficult to find another moment for the coaching-on-the-job session due to team workload and differing schedules. Apart from organizational issues, some facilitators felt very unsure about their role as facilitator and assumed great resistance to role-playing in their PPCT. With regard to the coaching-on-the-job sessions attended by the researcher (ME), field notes show that most participants actually appreciated the role-plays and indicated that in the role-play they acted as they would normally do in real conversations with parents and/or their child.
Some of them also indicated that they learned a lot from observing the way colleagues conducted ACP conversations, including their use of specific sentences, words or silences. Participants also appreciated playing the role of a parent. This increased their empathy for parents and taught them a lot about clinician-parent communication, for example about how it may come across to a parent when a professional gives a lot of information at once.
Summary of findings
This study explored the experiences of healthcare professionals in pediatric palliative care with a new team-based learning program on ACP. In addition, it evaluated to what degree this program facilitated the transfer of knowledge and ACP communication skills for conducting ACP conversations from a train-the-trainer course to the participants' real work context. Most participants rated the learning program very positively, although embedding it in daily practice appeared to be challenging. 'Facilitators' of seven out of eight PPCTs organized and guided one or two coaching-on-the-job sessions in their team, thereby meeting our preset goal of transferring course-acquired knowledge and skills in ACP communication to their PPCT. Of the 'learners' who participated in these coaching-on-the-job sessions, almost all respondents expected to continue applying the learned ACP communication skills in their ACP conversations with parents and/or children beyond the end of the study period.

Continuous practicing of ACP communication skills and methodical reflection on ACP conversations
Even when healthcare professionals are familiar with ACP, starting and conducting an ACP conversation still seems difficult, often resulting in ACP being introduced in a very late phase of the illness trajectory. Most participants in our study appreciated the (re-)introduction of theory on ACP and IMPACT and the repeated practicing of skills in short role-plays in the newly developed training program.
Our results thereby support those of other studies that argue for continuous learning and evaluation of processes in healthcare, to continuously improve care processes in general and ACP in pediatric palliative care in particular.

Transfer of knowledge on ACP and ACP communication skills
Knowledge transfer is known to be a dynamic process that unfolds over time, resulting from the interaction between persons, situations and criteria. The literature shows that three major factors affect the extent of knowledge transfer to the job: 1. trainee characteristics; 2. characteristics of the training activities; and 3. work environmental factors.

Trainee characteristics
Both facilitators and learners were in general very motivated to participate in learning activities aimed at optimizing ACP in pediatric palliative care, as is also known from the literature. Our results show that, although the number of participants was relatively small, most of them experienced positive changes in attitude and skills and (strongly) intended to continue practicing ACP communication skills in combination with methodical reflection on ACP conversations. However, knowledge transfer that results in professionals applying learned knowledge, skills and attitudes over time is known to be difficult and needs ongoing attention. The fact that only one third of learners had started to use (some) ACP communication skills more by the end of the study may have several explanations. Apart from the possibility that the coaching-on-the-job session did not fit their professional or personal development needs, a trivial reason may be that some learners did not conduct an ACP conversation during the rather short period of at most three months between the coaching-on-the-job sessions and the final questionnaire. As is also known from other studies on ACP, healthcare professionals may find it difficult to label conversations with parents and/or children as ACP conversations. Another reason for not changing their behavior may be that participants, in their own opinion, already conduct ACP conversations following a more or less well-defined method and therefore feel no need to change. A striking finding is the slight decrease in the number of learners who, at the end of the study, felt comfortable conducting an ACP conversation with parents. This may be explained by participation in a coaching-on-the-job session leading to a better understanding of the method and ACP communication skills required for ACP conversations, or to increased (self-)awareness over time, and thereby to greater uncertainty about whether one is doing ACP as intended. This is also known as the Dunning-Kruger effect: the tendency of people with low ability to apply skills in a specific area to give overly positive assessments of this ability, as may have been the case for learners before their participation in the coaching-on-the-job sessions.

Characteristics of the training activities
Our main influence in the present study was on the second factor, the development of the training activities. Besides training knowledge and ACP communication skills at the individual level, our team-based learning program focused explicitly on team-level factors known to promote interdisciplinary collaboration in palliative care, such as discussing and reflecting on ACP conversations in a team context. Facilitators and learners gave overall very positive feedback on the team-level aspects of the training activities.
One way to improve the quality of this team-based learning program on ACP communication skills could be to also train facilitators explicitly in the role of champions or frontrunners, who may play an important role in promoting ACP in their PPCT and beyond. Taking a leadership role in their team can be a great challenge for healthcare professionals. Another issue for improvement is that the target group for the training program should be better defined. Although interprofessional training in pediatric palliative care has been shown to be highly effective, ACP may be too specific a medical/nursing intervention to train disciplines with a key role and disciplines with a derived role in ACP together. An important next step could be to assess, after for example one year, which team members actually conduct ACP conversations before and after implementing this program, and then to evaluate the experiences of parents and children with these conversations. The findings of such follow-up studies can then be integrated into the team-based learning program.

Work environmental factors
Most of the barriers mentioned concern work environmental factors, such as difficulty planning coaching-on-the-job sessions due to a high team workload and differing schedules. Other studies show that frontrunners or champions in non-specialized palliative care also find it difficult to disseminate knowledge to colleagues, or may fail to organize meetings due to, for example, a high workload or a lack of dedicated time. Therefore, more attention should be paid in future to guiding facilitators in organizing coaching-on-the-job sessions in ways appropriate to them and their PPCT and, if needed, to adapting local coaching-on-the-job activities to specific needs and characteristics, such as the prior ACP training of professionals working in a given institute. In addition, at the organizational and management level, more importance should be given to the ongoing training of healthcare professionals in communication skills, similar to training in medical and nursing technical skills.

Strengths and limitations of the study
A strength of the study was the triangulation of data: the researcher (ME) attended some sessions, which yielded extra information in addition to the questionnaire results. Furthermore, the regular contact between the research team and the facilitators, and the observations during some sessions, were helpful in getting an overall picture of the process going on in the PPCTs. However, this level of intervening in the natural process could also be considered a limitation. Other limitations include the under-representation of male professionals and of professionals under the age of 40, although this reflects actual pediatric care practice. The rather short study period of six months may also have put considerable time pressure on facilitators to organize two coaching-on-the-job sessions within two or three months. In this study, proactive planning of activities guided by the research team proved helpful. Another limitation is that in some PPCTs the first questionnaire was distributed broadly to many types of professionals, while in other PPCTs it was distributed more narrowly to nurses and physicians.
The same was true for the coaching-on-the-job sessions: in some PPCTs, professionals other than the original target group of nurses and physicians also participated. This sometimes led to differing needs regarding the ACP theory provided by facilitators and the practicing of ACP during the session, and to some professionals feeling that certain questions in the questionnaires did not apply to them. Finally, the small number of learners means that conclusions must be drawn with caution.

Conclusion
The newly developed team-based learning program to facilitate continuous training and reflection on the use of IMPACT seems a promising intervention for the transfer of knowledge on ACP, ACP communication skills and reflection on ACP conversations in a team context. The team-based learning program may contribute to a sustainable implementation and dissemination of IMPACT in pediatric palliative care. However, for many healthcare professionals in PPCTs who regularly conduct ACP conversations, practicing ACP communication skills and reflecting on ACP does not come naturally. For methodical practicing of and reflection on ACP in a team context, PPCTs need more dedicated time for ACP-related coaching-on-the-job activities, and facilitators need more guidance during these sessions so that they know how to deal with individual variation between team members in conducting ACP.
Conclusion
The newly developed team-based learning program to facilitate continuous training and reflection on the use of IMPACT seems a promising intervention for the 'transfer of knowledge' on ACP, ACP communication skills and reflection on ACP conversations in a team context. The team-based learning program may contribute to a sustainable implementation and dissemination of IMPACT in pediatric palliative care. However, for many healthcare professionals in PPCTs who regularly conduct ACP conversations, practicing ACP communication skills and reflecting on ACP does not come naturally. For methodical practicing of and reflection on ACP in a team context, PPCTs need more dedicated time for coaching-on-the-job activities related to ACP, and facilitators need more guidance during these sessions so that they know how to deal with individual variation between their team members in conducting ACP.
How does digital literacy affect the health status of senior citizens? Micro-level evidence from the CFPS data
Health status serves as a crucial indicator for measuring the well-being of senior citizens. As the World Health Organization (WHO) emphasizes, improving the health status of the elderly not only enhances their quality of life, but also helps achieve better overall societal benefits. Researchers have so far identified many factors influencing the health status of the elderly, including micro-level factors such as habits, religious beliefs, and socio-economic status, as well as macro-level factors including environmental pollution, sociocultural aspects, income disparities, and the level of openness to external influences. With the blossoming of digital technologies such as 5G and smart cities, digitization has increasingly penetrated socio-economic life, and the concept of "digital literacy" has emerged. In earlier studies, digital literacy referred to the ability to read and understand multimedia content. Eshet and Amichai expanded the definition of digital literacy, defining it as the essential life skills required for individuals to live, learn, and work with digital technologies in the current digital environment. The United Nations Educational, Scientific and Cultural Organization (UNESCO) proposes that digital literacy should include device operation, information processing, communication and writing, content creation, security protection, problem-solving, and specific occupation-related fields. Given the accelerating digitalization process and the worldwide aging trend, it is of great research significance to explore whether the enhancement of digital literacy among the elderly has a significant impact on their health status. While aging and health is a global issue, developing countries face greater challenges in maintaining and improving the health status of the elderly because their medical technology is less advanced and their healthcare systems less mature than those of developed countries. According to Frank et al., a country is considered to have entered an aging society when the proportion of its population aged 60 and above exceeds 10%. As a member of the developing world, China has witnessed a rapid acceleration of aging over the past decade. Based on data released by the National Bureau of Statistics, by 2023 (Fig. ), citizens aged 60 or above accounted for 21.1% of the total population, an increase of nearly 8 percentage points compared to 2010. At the same time, digitalization is developing rapidly. According to the Statistical Report on the Development of the Internet in China, from 2010 to 2023, China's Internet penetration rate increased by about 33%, and the proportion of Internet users aged over 60 rose from 1.9% to 15.6%. As China's Internet penetration rate continues to increase, the size of the elderly Internet user group is also expanding. Therefore, exploring the impact of digital literacy on the health status of the elderly based on Chinese samples and identifying the underlying mechanisms will not only contribute to improving the health status of the elderly in China, but also provide valuable insights for other developing countries in addressing the health issues of their aging populations. Given this, our study explores the impact of digital literacy on the health of the elderly and identifies the underlying mechanism.
This will not only facilitate healthy aging, but also provide statistical references on how "Digital China" drives the "active aging of the population". According to the existing research, discussions regarding the impact of digitization on the health of senior citizens have primarily focused on the effects of digital technology development and Internet usage. Many studies have demonstrated that the utilization of digital technology can enhance the accessibility and convenience of high-quality healthcare services, thereby contributing to the maintenance of physical health among the elderly. In addition, the application of digital technology in the financial sector can improve household medical utilization through means such as mobile payments and credit access. Moreover, it gives older people better access to convenient financial services and products, thereby alleviating financing constraints and strengthening the economic foundation for improving the health status of the elderly. In terms of the influencing mechanism, some studies argue that income level serves as a critical factor. The positive correlation between health and income has been supported by extensive research. At the same time, digital literacy can affect individual income at both the macro- and micro-levels. At the macro-level, digital literacy enables people to grasp new business opportunities, which can lead to economic development and, consequently, to income improvement. At the micro-level, people's access to information has been broadened, so they have benefited from the development of digital technology. For example, the vigorous promotion of online education has greatly improved the labor skills and efficiency of workers and contributed to the increase in their income levels. Some scholars have also identified Internet usage as one of the mechanisms by which digital literacy affects health status. A study by the Research Group of the Chinese Academy of Social Sciences found that the Internet has increased the frequency of older people's communication with the outside world; it enhances their social links and shortens social distance. The emergence of new media on the Internet has enriched the leisure life of the elderly and facilitated the provision of emotional value. Therefore, the use of the Internet by the elderly for social and recreational activities has a positive impact on their physical and mental health. However, older people's commercial use of the Internet may have a negative impact on their physical and mental health, although this impact does not pass the statistical significance test. The existing literature provides a rich discussion of the relationship between digital literacy and older people's health status. However, constrained by data limitations, the existing studies mainly focus on the macro-level, or simply employ Internet usage as the variable to measure digital literacy. Moreover, some of the research did not address the endogeneity problems caused by omitted variables and reverse causation. With regard to this, this study examines the impact of digital literacy on the health status of senior citizens at the micro-level by utilizing multi-wave China Family Panel Studies (CFPS) data from 2016 to 2020. The entropy method is used to measure the level of digital literacy of the elderly.
Then, a two-way fixed-effects model is constructed to analyze the impact, and the instrumental variable method is adopted to alleviate the problems of reverse causation and omitted variables. Next, we introduce social support as the mediating variable to identify the mechanism by which digital literacy affects the health status of senior citizens. Our results can provide useful insights for countries undergoing digitalization and population aging. Compared to the existing literature, the marginal contributions of this study lie in the following aspects. First, unlike previous studies that use a single indicator to characterize the level of digital literacy, this study employs multiple indicators combined with the entropy method to measure the digital literacy level of senior citizens, thereby reflecting individuals' digital literacy more comprehensively. Second, our study identifies the mediating mechanism of social support, broadening the research perspective on how digital literacy affects elderly health. Third, by conducting heterogeneity analysis across age groups, household registration, and education levels, our study offers valuable insights for the government in formulating more targeted active aging strategies. The remainder of the paper is organized as follows: the next section discusses the theoretical analysis and research hypotheses; the following section provides the research design; the empirical results and discussion are then presented; and the final section summarizes the conclusions and policy implications.

The impact of digital literacy on the health status of senior citizens
Drawing upon previous studies, it is clear that most research supports a positive relationship between digital literacy and the health status of the elderly. The first obvious explanation is that increased digital literacy facilitates the popularization of health knowledge, increases the importance that older people attach to their health, helps establish a more scientific outlook on health, and makes them more willing to invest in their own health, thus helping to maintain their health status. Digitalization provides rich online medical resources, including online consultation and cyber doctors. Higher digital literacy enables older people to access online public health services, which contributes to the timely provision of medical guidance to older persons. In addition, in the digital society, the leisure and entertainment functions provided by the Internet, such as online social communication and online games, can reduce the psychological loneliness of the elderly and improve their sense of pleasure and attention, thus relieving psychological depression and preventing cognitive decline. Based on the above, we propose Hypothesis 1:

H1: Digital literacy levels significantly improve the health status of senior citizens.

The mediating mechanism of digital literacy on the health status of senior citizens
The Social Support Theory highlights the view that individuals derive both material and emotional support primarily from the social relationships they establish with others. This support enables them to cope with stress, challenges, and difficulties, thereby improving their health status. Effective social support can assist the elderly in overcoming digital barriers and better embracing the digital society. Seniors who receive more social support tend to have less stress and better health outcomes.
The digital society, characterized by the prevalence of the Internet, breaks the constraints of time and space in communication. Individuals equipped with a certain level of digital literacy are capable of communicating with others using online messages and video calls, which extends the scope and frequency of social interactions and enhances social support. The social networks of the elderly population will no longer be confined to traditional circles of acquaintances, since digital platforms bring together individuals with similar interests and hobbies. Additionally, when the elderly gain assistance and support from children and grandchildren, emotional bonds are strengthened and psychological compensation is provided. Moreover, online communication enhances social support and reduces the risk of depression. It is also worth noting that effective social support provided by family members plays a positive role in bridging the digital divide. Furthermore, intergenerational interaction between parents and children can increase the initiative of seniors in acquiring digital knowledge and can enhance their ability and proficiency in using digital tools. Based on the above, we propose Hypothesis 2:

H2: Social support plays a mediating role in the relationship between digital literacy and the health status of senior citizens.

The influencing channel is provided in Fig. .
Data and variables

Data
The data used in this study are collected from the China Family Panel Studies (CFPS). This survey started in 2010 and covers 25 provinces, municipalities, and autonomous regions in China; it has collected abundant data on income, consumption, health behaviors, education, and other aspects at the individual, family, and community levels. The dataset is of high quality and good representativeness, with an annual tracking rate exceeding 80%. Although the CFPS has collected data across five waves spanning 2010 to 2020, the surveys conducted in 2010, 2012, and 2014 did not properly address issues related to digital literacy. The questionnaires from 2016, 2018, and 2020 were refined by incorporating questions concerning senior citizens' use of mobile devices for Internet access, as well as their engagement in learning, entertainment, and transactions through digital devices, which corresponds with our research objectives. More importantly, there were no changes in the phrasing of these questions, ensuring the continuity and stability of the key variables required for this study. Furthermore, since panel data, compared to cross-sectional data, provide more data points and control for time-invariant individual differences, thereby enabling more accurate research conclusions, this study utilizes data from the three CFPS waves of 2016, 2018, and 2020. Given the large volume of micro-level data, we utilized Stata 15.0 to filter the data that meet the research requirements, matching them annually based on sample IDs to construct panel data. We then employed descriptive statistical analysis and panel regression models to conduct the empirical analysis.

Variables
Dependent variable: the health status of senior citizens (health). Health status is an ordinal variable based on respondents' assessment of their own health condition. The variable is assigned a value of 1 when the answer is "unhealthy", 2 when the answer is "average", 3 when the answer is "relatively healthy", 4 when the answer is "very healthy", and 5 when the answer is "extremely healthy". In accordance with the research objectives of this paper and the characteristics of the CFPS dataset, self-rated health is employed as a proxy for the health status of elderly individuals. The main reasons are as follows.
Firstly, compared to macro-level measurements of overall health such as mortality rates and life expectancy, self-rated health offers a more precise measurement of individual health conditions in that it can reflect individual differences. Therefore, this indicator has been adopted by many existing studies. Secondly, previous studies have confirmed the effectiveness of self-rated health in predicting mortality and anticipating functional impairments resulting from certain diseases. Thirdly, since conducting physical examinations on each sample to obtain accurate health data is impractical, self-rated health is the most feasible and practical proxy for the health status of elderly individuals among the indicators available in the CFPS database. Considering that self-rated health may introduce bias into the research results, other health indicators from the questionnaire are incorporated during the robustness checks.

Independent variable: digital literacy (digital). Digital transformation takes place not only at the national socio-economic and enterprise levels but also at the individual level. Unlike the digital economy at the macro-level or the digital transformation of enterprises at the micro-level, an individual's digital literacy is a combination of both macro-supply and micro-demand. It is mainly represented by the individual's application of digital technologies, such as online payments, e-learning, and online shopping. Following the definition of digital literacy in the existing literature, this paper proposes that, in the Chinese context, digital literacy refers to the digital application abilities of individuals to correctly and reasonably use digital tools and devices, gather digital resources to acquire new information and learn new knowledge, engage in social communication with others, and conduct business activities on digital platforms. When it comes to measuring the digital literacy level, many studies simply use a single-indicator method that considers only "whether the Internet is used"; this seems one-sided. To comprehensively measure the digital literacy level of senior citizens and avoid the one-sidedness of a single indicator, this study constructs an index system of senior citizens' digital literacy. Specifically, this paper categorizes digital literacy into four dimensions: digital tool usage literacy, digital learning literacy, digital entertainment literacy, and digital commerce literacy, as illustrated in Table . The digital literacy index for the elderly is calculated using the entropy weight method; the specific calculation process is given later.

Mediating variable: social support (issupport). In the CFPS questionnaire, social support is primarily represented by questions such as "How is your relationship with your children?", "How frequently do you contact your children?", "How often do you meet with your children?", "Do you think most people are helpful or selfish?", "How much do you trust your neighbors?", and "Do you have someone to take care of you when you are sick?". The response options range from 0 to 10, with higher values indicating stronger social support. The intensity of social support is calculated by summing the scores of these responses.
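To make the variable construction concrete, the following is a minimal Python/pandas sketch of how the self-rated health score and the social support index described above could be assembled; the column names (srh and the ss_* items) are hypothetical placeholders rather than actual CFPS field names, and the original processing was carried out in Stata 15.0.

```python
import pandas as pd

# Hypothetical survey extract; actual CFPS field names differ.
df = pd.DataFrame({
    "srh": ["unhealthy", "average", "relatively healthy", "very healthy"],
    "ss_child_relation":  [7, 5, 9, 8],  # relationship with children (0-10)
    "ss_contact_freq":    [6, 4, 8, 9],  # frequency of contact with children (0-10)
    "ss_trust_neighbors": [5, 6, 7, 8],  # trust in neighbors (0-10)
})

# Ordinal coding of self-rated health: 1 = "unhealthy" ... 5 = "extremely healthy".
health_map = {
    "unhealthy": 1, "average": 2, "relatively healthy": 3,
    "very healthy": 4, "extremely healthy": 5,
}
df["health"] = df["srh"].map(health_map)

# Social support intensity: sum of the individual 0-10 item scores.
ss_items = [c for c in df.columns if c.startswith("ss_")]
df["issupport"] = df[ss_items].sum(axis=1)
print(df[["health", "issupport"]])
```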
Instrumental variables: In order to address the endogeneity issue, this study introduces two instrumental variables. The first is the product of the digital financial inclusion index (dfiindex) and the community average of digital literacy (mean_di), while the second is the product of the total postal and telecommunications business volume (tbtbusiness) and the community average of digital literacy (mean_di). The digital financial inclusion index (dfiindex) is published by the Institute of Digital Finance at Peking University; the total postal and telecommunications business volume by province (tbtbusiness) is collected from the National Bureau of Statistics; and the community average of digital literacy (mean_di) is the average digital literacy level of all elderly individuals within the community, excluding the individual concerned.

Control variables: The health status of senior citizens is influenced by many factors. Variables representing individual characteristics, including gender (gender), age (age), education level (edu), Hukou status (hk), marital status (marry), pension insurance status (pension), and income status (income), are introduced to control for individual differences. Since lifestyle can also affect the health status of the elderly, this study further includes smoking habits (smoke), napping (noonbreak), and exercise (exercise) to control for personal lifestyle factors. Table displays the definitions of the variables.

Calculation of digital literacy levels
Three methods are employed to measure the digital literacy index. In the baseline results, the entropy method is applied to weight each indicator objectively; the factor analysis method and the scoring method are applied in the robustness checks.

(1) Entropy method
It is assumed that there are $m$ samples to be evaluated and $n$ evaluation indicators, which form the original indicator data matrix $X = (x_{ij})_{m \times n}$, where $x_{ij} \ge 0$, $1 \le i \le m$, $1 \le j \le n$. For a given indicator $x_j$, the larger the differences in the values $x_{ij}$ across samples, the greater the role this indicator plays in the comprehensive evaluation; if the values of an indicator are similar across samples, the indicator does not play a critical role. In information theory, the information entropy of indicator $j$ is $e_j = -k \cdot \sum_{i=1}^{m} p_{ij} \ln p_{ij}$, where $p_{ij} = x_{ij} / \sum_{i=1}^{m} x_{ij}$ and $k > 0$. The greater the variation in an indicator's values, the smaller its information entropy and the greater the amount of information it provides, which implies a higher weight. The information entropy is therefore used to calculate the weight of each indicator according to its degree of variation.

Step 1: Measure the proportion of the $i$-th sample under the $j$-th indicator:

(1) $p_{ij} = x_{ij} \big/ \sum_{i=1}^{m} x_{ij}$

Step 2: Calculate the entropy value of the $j$-th indicator:

(2) $e_{j} = -k \sum_{i=1}^{m} p_{ij} \ln p_{ij}$

where $k = 1/\ln m > 0$ and $0 \le e_j \le 1$.

Step 3: Calculate the utility value $d_j$ of each indicator. The smaller the variation of an indicator across samples, the larger its entropy value $e_j$; when there is significant variation in an indicator's values, $e_j$ becomes small, indicating that the indicator is more valuable in comparing digital literacy and thus receives a greater weight:

(3) $d_{j} = 1 - e_{j}$

Step 4: Calculate the weight $w_j$ of indicator $x_j$:

(4) $w_{j} = d_{j} \Big/ \sum_{j=1}^{n} d_{j} = \dfrac{1 - e_{j}}{\sum_{j=1}^{n} (1 - e_{j})}$

Step 5: Calculate the digital literacy index:

(5) $\mathrm{digital}_{i} = \sum_{j=1}^{n} w_{j} \, p_{ij}$
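As a concreteness check, the following short Python sketch implements the five entropy-weighting steps above on a toy binary indicator matrix; it is an illustration of the stated formulas, not the authors' code.

```python
import numpy as np

# Toy indicator matrix: m samples (rows) x n digital literacy indicators (columns).
X = np.array([
    [1, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 0],
], dtype=float)
m, n = X.shape
eps = 1e-12  # guards against log(0) when a proportion is zero

# Step 1: proportion of each sample under each indicator, p_ij.
P = X / X.sum(axis=0)

# Step 2: entropy of each indicator with k = 1/ln(m), so 0 <= e_j <= 1.
k = 1.0 / np.log(m)
e = -k * np.sum(P * np.log(P + eps), axis=0)

# Steps 3-4: utility values d_j = 1 - e_j and normalized weights w_j.
d = 1.0 - e
w = d / d.sum()

# Step 5: digital literacy index for each sample, digital_i = sum_j w_j * p_ij.
digital = P @ w
print("weights:", np.round(w, 3))
print("digital literacy index:", np.round(digital, 3))
```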
(2) Factor analysis
Factor analysis calculates comprehensive evaluation scores by reducing a number of related variables to a few uncorrelated common factors. By reducing dimensionality, it simplifies the analysis while retaining the information of the original variables to the greatest extent possible. The steps to calculate the comprehensive digital literacy score using factor analysis are as follows.

Step 1: Estimate the factor loading matrix from the original variable matrix. This study uses the principal component method. Assuming the original indicator data matrix is $X = (x_{ij})_{m \times n}$, denote the covariance matrix of $X$ by $\Sigma$, with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n > 0$, where $\lambda_i$ is the variance of principal component $i$ and the total variance satisfies $\sum_{i=1}^{n} \sigma_{ii} = \sum_{i=1}^{n} \lambda_i$. Let $e_1, e_2, \ldots, e_n$ be the corresponding normalized orthogonal eigenvectors. Then $\Sigma$ can be decomposed as

(6) $\Sigma = \lambda_1 e_1 e_1' + \lambda_2 e_2 e_2' + \cdots + \lambda_n e_n e_n' = \left(\sqrt{\lambda_1}\,e_1, \sqrt{\lambda_2}\,e_2, \ldots, \sqrt{\lambda_n}\,e_n\right)\left(\sqrt{\lambda_1}\,e_1, \sqrt{\lambda_2}\,e_2, \ldots, \sqrt{\lambda_n}\,e_n\right)'$

This decomposition corresponds to a factor model in which the number of common factors equals the number of variables. In practice it is preferable to reduce the number of common factors to $k < n$; when the last $n-k$ eigenvalues are relatively small, their contribution to $\Sigma$ is omitted, leading to

(7) $\Sigma \approx \left(\sqrt{\lambda_1}\,e_1, \ldots, \sqrt{\lambda_k}\,e_k\right)\left(\sqrt{\lambda_1}\,e_1, \ldots, \sqrt{\lambda_k}\,e_k\right)'$

where $\sqrt{\lambda_j}\,e_j$ denotes the factor loading of common factor $j$.

Step 2: Express the common factors as linear combinations of the variables to obtain the score of each common factor. Since the number of equations $k$ in the factor score function is less than the number of variables $n$, the factor scores cannot be computed exactly; they are estimated by least squares or maximum likelihood:

(8) $\hat{F}_{ij} = \beta_{i0} + \beta_{i1} x_1 + \beta_{i2} x_2 + \cdots + \beta_{in} x_n$

Step 3: Build a comprehensive factor score function by weighted summation, using each common factor's share of the total variance contribution as its weight:

(9) $Y_{j} = \gamma_1 \hat{F}_{1j} + \gamma_2 \hat{F}_{2j} + \cdots + \gamma_k \hat{F}_{kj}, \quad j = 1, 2, \ldots, m$

where $Y_j$ denotes the comprehensive factor score of sample $j$, $\hat{F}_{ij}$ represents the score of sample $j$ on common factor $i$, and $\gamma_i = \lambda_i / \sum_{i=1}^{n} \lambda_i$ is the proportion of common factor $i$'s variance contribution to the total.
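The principal-component version of these steps can likewise be sketched in a few lines of Python; the factor scores below are approximated by projecting the centered data onto the leading eigenvectors (one common score estimator), so this is an illustrative reading of equations (6)-(9) rather than a reproduction of the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 4))              # toy data: 100 samples, 4 indicators

Xc = X - X.mean(axis=0)               # center the indicators
Sigma = np.cov(Xc, rowvar=False)      # covariance matrix of X

# Eigendecomposition: eigenvalues are the principal component variances.
lam, E = np.linalg.eigh(Sigma)
order = np.argsort(lam)[::-1]         # sort in descending order
lam, E = lam[order], E[:, order]

k = 2                                   # retain k < n common factors
loadings = E[:, :k] * np.sqrt(lam[:k])  # sqrt(lambda_j) * e_j, eqs. (6)-(7)

# Approximate factor scores: projection of centered data, scaled to unit variance.
F = Xc @ E[:, :k] / np.sqrt(lam[:k])

# Comprehensive score: each factor weighted by its share of total variance, eq. (9).
gamma = lam[:k] / lam.sum()
Y = F @ gamma
print("first five comprehensive scores:", np.round(Y[:5], 3))
```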
(3) Scoring method
The scoring method is based on the respondents' answers to five questions: whether they access the Internet on a computer, whether they access the Internet on a mobile device, whether they make online purchases, whether they engage in online learning, and whether they participate in online entertainment. A respondent receives 1 point for each "yes" answer, and the five scores are summed to obtain the senior citizen's digital literacy level.
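The additive score reduces to a row sum over five indicator columns; a pandas version might look as follows, with hypothetical yes/no column names standing in for the questionnaire items.

```python
import pandas as pd

# Hypothetical yes/no responses (1 = yes, 0 = no) to the five questions.
df = pd.DataFrame({
    "net_pc":       [1, 0, 1],  # accesses the Internet on a computer
    "net_mobile":   [1, 1, 0],  # accesses the Internet on a mobile device
    "online_buy":   [0, 1, 0],  # makes online purchases
    "online_learn": [0, 0, 0],  # engages in online learning
    "online_fun":   [1, 1, 0],  # participates in online entertainment
})

# Digital literacy score: one point per "yes", summed across the five items (0-5).
df["digital_score"] = df.sum(axis=1)
print(df)
```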
Based on the CFPS data from 2016, 2018, and 2020, this study constructs a balanced panel and employs a two-way fixed-effects model for estimation. Based on the WHO definition, the sample comprises citizens aged 60 and above. After data cleaning and the removal of samples with missing values, the balanced panel comprises 7836 observations, with 2612 per year. The descriptive statistics of the data are presented in Table .

Model
First, to examine the impact of digital literacy on the health status of senior citizens, a balanced panel model is established:

(10) $\mathrm{health}_{it} = \beta_0 + \beta_1 \mathrm{digital}_{it} + \beta_3 X_{it} + c_i + \theta_t + \varepsilon_{it}$

where $\mathrm{health}_{it}$ denotes the health status of senior citizen $i$ in year $t$, $\mathrm{digital}_{it}$ represents the digital literacy level of senior citizen $i$ in year $t$, $X_{it}$ is a set of control variables, $c_i$ and $\theta_t$ are individual and time fixed effects, and $\varepsilon_{it}$ stands for the random error term.

Next, to identify the mediating role of social support, the following models are constructed:

(11) $\mathrm{issupport}_{it} = \alpha_0 + \alpha_1 \mathrm{digital}_{it} + \alpha_3 X_{it} + c_i + \theta_t + \mu_{it}$

(12) $\mathrm{health}_{it} = \lambda_0 + \lambda_1 \mathrm{issupport}_{it} + \lambda_2 \mathrm{digital}_{it} + \lambda_3 X_{it} + c_i + \theta_t + \eta_{it}$

In models (11) and (12), $\mathrm{issupport}_{it}$ stands for the intensity of social support. Coefficient $\alpha_1$ captures the effect of digital literacy on social support, while $\lambda_1$ and $\lambda_2$ capture, respectively, the effect of the mediator and the direct impact of digital literacy on the health status of senior citizens.
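For orientation, here is a compact Python sketch of how the two-way fixed-effects specifications (10)-(12) could be estimated with the linearmodels package; the authors used Stata 15.0, the input file name is hypothetical, and only time-varying controls are listed because time-invariant characteristics (e.g., gender, Hukou status) are absorbed by the individual fixed effects.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Long-format panel: one row per individual-year, indexed by (pid, year).
df = pd.read_csv("cfps_panel.csv").set_index(["pid", "year"])  # hypothetical file

controls = "income + marry + pension + smoke + noonbreak + exercise"

# Model (10): effect of digital literacy on health with individual and time effects.
m10 = PanelOLS.from_formula(
    f"health ~ 1 + digital + {controls} + EntityEffects + TimeEffects", data=df
).fit(cov_type="clustered", cluster_entity=True)

# Model (11): digital literacy -> social support (the mediator equation).
m11 = PanelOLS.from_formula(
    f"issupport ~ 1 + digital + {controls} + EntityEffects + TimeEffects", data=df
).fit(cov_type="clustered", cluster_entity=True)

# Model (12): health regressed on both the mediator and digital literacy.
m12 = PanelOLS.from_formula(
    f"health ~ 1 + issupport + digital + {controls} + EntityEffects + TimeEffects",
    data=df,
).fit(cov_type="clustered", cluster_entity=True)

print(m10.params["digital"], m11.params["digital"], m12.params["issupport"])
```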
Secondly, previous studies have confirmed the effectiveness of self-rated health in predicting mortality and anticipating functional impairments resulting from certain diseases . Thirdly, since conducting physical examinations on each sample to obtain accurate health data is impractical, self-rated health is the most feasible and practical proxy for the health status of elderly individuals among the indicators available in the CFPS database. Considering that self-rated health may introduce bias into the research results, other health indicators from the questionnaire are incorporated during the robustness checks. Independent variable: digital literacy (digital). Digital transformation not only takes place at the national socio-economic and enterprise levels but also at the individual level. Unlike the digital economy at the macro-level or the digital transformation of enterprises at the micro-level, the individual's digital literacy is a combination of both macro-supply and micro-demand. It is mainly represented by the individual’s application of digital technologies, such as online payments, e-learning, and online shopping. Following the definition of digital literacy in the existing literature , this paper proposes that in the Chinese context, digital literacy refers to the digital application abilities of individuals to correctly and reasonably use digital tools and devices, gather digital resources to acquire new information and learn new knowledge, engage in social communication with others, and conduct business activities on digital platforms. When it comes to measuring the digital literacy level, many studies simply use the single indicator method, which considers "whether the Internet is used" ; this seems to be one-sided. To comprehensively measure the digital literacy level of senior citizens and avoid the one-sidedness of information reflected by a single indicator, this study constructs an index system of senior citizens’ digital literacy. Specifically, this paper categorizes digital literacy into four dimensions, digital tool usage literacy, digital learning literacy, digital entertainment literacy, and digital commerce literacy, as illustrated in Table . The digital literacy index for the elderly is calculated using the entropy weight method. The specific calculation process is given later. Mediating variable: social support (issupport). In the CFPS questionnaire, social support is primarily represented by questions such as "How is your relationship with your children?", "How frequently do you contact your children?", "How often do you meet with your children?", "Do you think most people are helpful or selfish?", "How much do you trust your neighbors?", and "Do you have someone to take care of you when you are sick?". The response options range from 0 to 10, with higher values indicating stronger social support. The intensity of social support is calculated by summing up the scores of each response. Instrumental variable: In order to address the endogeneity issue, this study introduces two instrumental variables. The first one is the product of the digital financial inclusion index (dfiindex) and the community average of digital literacy (mean_di), while the second one is the product of total postal and telecommunications business volume (tbtbusiness) and the community average of digital literacy (mean_di). 
The digital financial inclusion index (dfiindex) is published by the Institute of Digital Finance at Peking University; the total postal and telecommunications business volume by province (tbtbusiness) is collected from the National Bureau of Statistics; and the community average of digital literacy (mean_di) is the average digital literacy level of all elderly individuals within the community, excluding the individuals themselves. Control variables: The health status of senior citizens is influenced by many factors. Variables representing individual characteristics, including gender (gender), age (age), education level (edu), Hukou status (hk), marital status (marry), pension insurance status (pension), and income status (income), are introduced to control for individual differences. Since lifestyle can also affect the health status of the elderly, this study also includes smoking habits (smoke), napping (noonbreak), and exercises (exercise) to control for personal lifestyle factors. Table displays the definitions of the variables. Calculation of digitalization levels When measuring the digital literacy index, three methods are employed. In the baseline results, the entropy method is applied to objectively weight each indicator. The factor analysis method and scoring method are applied in the robustness checks. Entropy method It is assumed that there are m samples to be evaluated and n evaluation indicators, which form the original indicator data matrix [12pt]{minimal} $$X = (x_{ij} )m n$$ X = ( x ij ) m × n , where [12pt]{minimal} $$x_{ij} 0$$ x ij ≥ 0 , [12pt]{minimal} $$0 i m$$ 0 ≤ i ≤ m , [12pt]{minimal} $$0 j n$$ 0 ≤ j ≤ n . For a certain indicator [12pt]{minimal} $$x_{j}$$ x j , the larger the difference in the indicator [12pt]{minimal} $$x_{ij}$$ x ij value, the greater role this indicator plays in the comprehensive evaluation. If the indicator values of a certain indicator are similar, then this indicator does not play a critical role in the comprehensive evaluation. In information theory, information entropy is [12pt]{minimal} $$e_{j} = - k _{i = 1}^{m} {p_{ij} p_{ij} }$$ e j = - k · ∑ i = 1 m p ij · ln p ij , where [12pt]{minimal} $$p_{ij} = x_{ij} /_{i = 1}^{m} {x_{ij} }$$ p ij = x ij / ∑ i = 1 m x ij , [12pt]{minimal} $$k> 0$$ k > 0 . The greater the variation in the value of a certain indicator, the smaller the information entropy, and the greater the amount of information provided by the indicator, which suggests a higher weight. Therefore, according to the degree of variation of each indicator, the information entropy is applied to calculate the weight of each indicator. Step 1: Measuring the proportion of the i-th sample under the j-th indicator [12pt]{minimal} $$p_{ij}$$ p ij . 1 [12pt]{minimal} $$p_{ij} = x_{ij} /_{i = 1}^{m} {x_{ij} }$$ p ij = x ij / ∑ i = 1 m x ij Step 2: Calculating the entropy value of the j-th indicator. 2 [12pt]{minimal} $$e_{j} = - k _{i = 1}^{m} {p_{ij} p_{ij} }$$ e j = - k · ∑ i = 1 m p ij ln p ij where [12pt]{minimal} $$k> 0$$ k > 0 , [12pt]{minimal} $$k = 1/ m$$ k = 1 / ln m , and [12pt]{minimal} $$0 e 1$$ 0 ≤ e ≤ 1 . Step 3: Calculating the utility value of each indicator [12pt]{minimal} $$d_{j}$$ d j . For a given j, the smaller the variation in [12pt]{minimal} $$d_{j}$$ d j , the larger the entropy value [12pt]{minimal} $$e_{j}$$ e j . 
When there is a significant variation in the values of a certain indicator among the samples, [12pt]{minimal} $$e_{j}$$ e j becomes small, indicating that this indicator is more valuable in comparing digital literacy; thus, its weight is greater. The formula for calculating the utility value is: 3 [12pt]{minimal} $$d_{j} = 1 - e_{j}$$ d j = 1 - e j Step 4: Calculating the weight [12pt]{minimal} $$w_{j}$$ w j of indicator [12pt]{minimal} $$x_{j}$$ x j : 4 [12pt]{minimal} $$w_{j} = d_{j} /_{j = 1}^{n} {d_{j} } = }}{{_{j = 1}^{n} {d_{j} } }} = }}{{_{j = 1}^{n} {(1 - e_{j} )} }}$$ w j = d j / ∑ j = 1 n d j = d j ∑ j = 1 n d j = 1 - e j ∑ j = 1 n ( 1 - e j ) Step 5: Calculating the index for digital literacy: 5 [12pt]{minimal} $$digital_{i} = _{j = 1}^{n} {w_{j} p_{ij} }$$ d i g i t a l i = ∑ j = 1 n w j p ij (2) Factor analysis Factor analysis is a method for calculating comprehensive evaluation scores by reducing a number of related variables into a few uncorrelated new common factors. By reducing dimensionality, it simplifies the complexity of problem analysis while retaining the information of the original variables to the greatest extent. The steps to calculate the comprehensive score of digital using factor analysis are as follows: Step 1: Estimate the factor loading matrix based on the original variable matrix. This study chooses the principal component method to estimate the factor loading matrix. Assuming that the original indicator data matrix is [12pt]{minimal} $$X = (x_{ij} )m n$$ X = ( x ij ) m × n , the covariance matrix of X is denoted as [12pt]{minimal} $$ {}$$ ∑ . [12pt]{minimal} $$_{1} _{2} _{n}> 0$$ λ 1 ≥ λ 2 ≥ ⋯ ≥ λ n > 0 represents the eigenvalue of [12pt]{minimal} $$ {}$$ ∑ , and [12pt]{minimal} $$_{i}$$ λ i represents the variance of principal component i. The total variance is specified as [12pt]{minimal} $$_{i = 1}^{n} {_{ii} } = _{i = 1}^{n} {_{i} }$$ ∑ i = 1 n σ ii = ∑ i = 1 n λ i . [12pt]{minimal} $$e_{1} ,e_{2} , ,e_{n}$$ e 1 , e 2 , ⋯ , e n is the corresponding normalized orthogonal eigenvector. Therefore, [12pt]{minimal} $$ {}$$ ∑ can be decomposed as: 6 [12pt]{minimal} $$ { = _{1} } e_{1} e_{1}^{ } + _{2} e_{2} e_{2}^{ } + + _{n} e_{n} e_{n}^{ } \\ c} {} & = \\ ( {_{1} } e_{1} , {_{2} } e_{2} , {_{n} } e_{n} )[ {c} { {_{1} } e_{1}^{ } } \\ { {_{2} } e_{2}^{ } } \\ \\ { {_{n} } e_{n}^{ } } \\ } ] \\ $$ ∑ = λ 1 e 1 e 1 ′ + λ 2 e 2 e 2 ′ + ⋯ + λ n e n e n ′ = ( λ 1 e 1 , λ 2 e 2 , ⋯ λ n e n ) λ 1 e 1 ′ λ 2 e 2 ′ ⋮ λ n e n ′ The decomposition in the above formula represents the covariance matrix structure of a factor model where the number of common factors is the same as the number of variables. When using the factor analysis methods, it is more preferable to reduce the number of common factors k until it is less than the number of variables, i.e., k < n. When the last n-k eigenvalues are relatively small, the contribution of the last n-k terms to [12pt]{minimal} $$ {}$$ ∑ is usually omitted, leading to: 7 [12pt]{minimal} $$ ( {_{1} } e_{1} , {_{2} } e_{2} , {_{k} } e_{k} )[ {c} { {_{1} } e_{1}^{ } } \\ { {_{2} } e_{2}^{ } } \\ \\ { {_{k} } e_{k}^{ } } \\ } ]$$ ∑ ≈ ( λ 1 e 1 , λ 2 e 2 , ⋯ λ k e k ) λ 1 e 1 ′ λ 2 e 2 ′ ⋮ λ k e k ′ where [12pt]{minimal} $$ {_{j} } e_{j}$$ λ j e j denotes the factor loading of common factor j. Step 2: Display the common factors as linear combinations of the variables to obtain the scores of each common factor. 
(2) Factor analysis

Factor analysis is a method for calculating comprehensive evaluation scores by reducing a number of related variables to a few uncorrelated new common factors. By reducing dimensionality, it simplifies the analysis while retaining as much of the information in the original variables as possible. The steps to calculate the comprehensive digital literacy score using factor analysis are as follows:

Step 1: Estimate the factor loading matrix from the original variable matrix. This study uses the principal component method. Assuming the original indicator data matrix is $X = (x_{ij})_{m \times n}$, denote the covariance matrix of $X$ by $\Sigma$. Let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n > 0$ be the eigenvalues of $\Sigma$, where $\lambda_i$ is the variance of principal component $i$; the total variance satisfies $\sum_{i=1}^{n} \sigma_{ii} = \sum_{i=1}^{n} \lambda_i$. Let $e_1, e_2, \ldots, e_n$ be the corresponding orthonormal eigenvectors. Then $\Sigma$ can be decomposed as

$$\Sigma = \lambda_1 e_1 e_1' + \lambda_2 e_2 e_2' + \cdots + \lambda_n e_n e_n' = \left( \sqrt{\lambda_1}\,e_1, \sqrt{\lambda_2}\,e_2, \ldots, \sqrt{\lambda_n}\,e_n \right) \begin{pmatrix} \sqrt{\lambda_1}\,e_1' \\ \sqrt{\lambda_2}\,e_2' \\ \vdots \\ \sqrt{\lambda_n}\,e_n' \end{pmatrix} \qquad (6)$$

This decomposition corresponds to a factor model in which the number of common factors equals the number of variables. In practice, it is preferable to retain $k < n$ common factors; when the last $n-k$ eigenvalues are relatively small, their contribution to $\Sigma$ can be omitted, leading to

$$\Sigma \approx \left( \sqrt{\lambda_1}\,e_1, \ldots, \sqrt{\lambda_k}\,e_k \right) \begin{pmatrix} \sqrt{\lambda_1}\,e_1' \\ \vdots \\ \sqrt{\lambda_k}\,e_k' \end{pmatrix} \qquad (7)$$

where $\sqrt{\lambda_j}\,e_j$ denotes the factor loading of common factor $j$.

Step 2: Express the common factors as linear combinations of the variables to obtain the score of each common factor. Since the number of equations $k$ in the factor score function is smaller than the number of variables $n$, the factor scores cannot be computed exactly; they are estimated by least squares or maximum likelihood:

$$\hat{F}_{ij} = \beta_{i0} + \beta_{i1} x_1 + \beta_{i2} x_2 + \cdots + \beta_{in} x_n \qquad (8)$$

Step 3: Establish a comprehensive factor score function by weighted summation, using each common factor's share of the total variance contribution as its weight:

$$Y_j = \gamma_1 \hat{F}_{1j} + \gamma_2 \hat{F}_{2j} + \cdots + \gamma_k \hat{F}_{kj}, \quad j = 1, 2, \ldots, m \qquad (9)$$

where $Y_j$ denotes the comprehensive factor score of sample $j$, $\hat{F}_{ij}$ represents the score of sample $j$ on common factor $i$, and $\gamma_i = \lambda_i / \sum_{i=1}^{n} \lambda_i$ is the proportion of common factor $i$'s variance contribution to the total.
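For comparison with the entropy sketch, the factor-analysis composite in Eqs. (6)-(9) can be sketched with a plain eigen-decomposition. This is a simplified principal-component version under stated assumptions (standardized, non-constant indicators; k retained factors), not the authors' exact estimation routine.

```python
import numpy as np

def factor_composite(X, k=2):
    """Composite score via principal-component factoring, Eqs. (6)-(9).
    X: (m, n) indicator matrix with non-constant columns; k < n factors."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize indicators
    S = np.cov(Z, rowvar=False)          # covariance (= correlation) matrix Sigma
    lam, V = np.linalg.eigh(S)           # eigenvalues and eigenvectors of Sigma
    order = np.argsort(lam)[::-1]        # sort eigenvalues in descending order
    lam, V = lam[order], V[:, order]
    F = Z @ V[:, :k]                     # scores on the k retained common factors
    gamma = lam[:k] / lam.sum()          # gamma_i = lambda_i / sum of all lambda
    return F @ gamma                     # Eq. (9): weighted composite Y_j
```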
(3) Scoring method

The scoring method is based on the respondents' answers to five questions: whether they access the Internet on a computer, whether they access the Internet on a mobile device, whether they make online purchases, whether they engage in online learning, and whether they participate in online entertainment. For each question answered "yes", the respondent receives 1 point, and the five scores are summed to obtain the senior citizen's digital literacy level.
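A sketch of the scoring method, with placeholder item names standing in for the actual CFPS questionnaire fields (1 = "yes", 0 = "no"):

```python
import pandas as pd

# Hypothetical names for the five yes/no items described above.
ITEMS = ["net_pc", "net_mobile", "shop_online", "learn_online", "fun_online"]

def scoring_method(df: pd.DataFrame) -> pd.Series:
    """One point per 'yes'; the 0-5 total is the digital literacy level."""
    return df[ITEMS].sum(axis=1)
```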
Based on the CFPS data from 2016, 2018, and 2020, this study constructs a balanced panel and employs a two-way fixed-effects model for the estimation. Based on the definition of the WHO , the sample involved in this study comprises citizens aged 60 and above. After data cleaning and the removal of samples with missing values, the balanced panel comprises 7836 observations, with 2612 per year. The descriptive statistics of the data are presented in Table .
First, in order to examine the impact of digital literacy on the health status of senior citizens, a balanced panel model is established:

$$health_{it} = \beta_0 + \beta_1 digital_{it} + \beta_3 X_{it} + c_i + \theta_t + \varepsilon_{it} \qquad (10)$$

where $health_{it}$ denotes the health status of senior citizen $i$ in year $t$, $digital_{it}$ represents the digital literacy level of senior citizen $i$ in year $t$, $X_{it}$ is a vector of control variables, $c_i$ and $\theta_t$ are individual and time fixed effects, and $\varepsilon_{it}$ is the random error term. Next, to identify the mediating role of social support, the following models are constructed:

$$issupport_{it} = \alpha_0 + \alpha_1 digital_{it} + \alpha_3 X_{it} + c_i + \theta_t + \mu_{it} \qquad (11)$$

$$health_{it} = \lambda_0 + \lambda_1 issupport_{it} + \lambda_2 digital_{it} + \lambda_3 X_{it} + c_i + \theta_t + \eta_{it} \qquad (12)$$

In models (11) and (12), $issupport_{it}$ stands for the intensity of social support. Coefficient $\alpha_1$ captures the effect of digital literacy on social support, while $\lambda_1$ captures the effect of social support on the health status of senior citizens.
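Model (10) can be estimated by double demeaning, which absorbs both sets of fixed effects exactly on a balanced panel such as this one. The sketch below is an illustrative Python version (the authors used Stata); "pid" and "year" are placeholder identifier columns, and standard errors are omitted.

```python
import numpy as np
import pandas as pd

def twoway_fe(df: pd.DataFrame, dep: str, regs: list) -> pd.Series:
    """Two-way fixed-effects slopes for model (10) via double demeaning:
    subtracting individual and year means (and adding back the grand mean)
    absorbs c_i and theta_t on a balanced panel."""
    cols = [dep] + regs
    within = (df[cols]
              - df.groupby("pid")[cols].transform("mean")    # remove c_i
              - df.groupby("year")[cols].transform("mean")   # remove theta_t
              + df[cols].mean())                             # add grand mean back
    y = within[dep].to_numpy()
    X = within[regs].to_numpy()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return pd.Series(beta, index=regs)

# e.g. twoway_fe(df, "health", ["digital", "edu", "income"])  # hypothetical call
```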
Baseline results

Table reports the estimation results of the mixed-effects models and the fixed-effects models; both show that digital literacy has a significant positive impact on the health status of senior citizens. According to the results in Column m4.4, after controlling for individual fixed effects, time fixed effects, and the other control variables, digital literacy significantly improves the health status of the elderly at the 5% level. Specifically, for every one-unit increase in the digital literacy index, the health status of the elderly improves by an average of 0.236 units, suggesting that an increase in digital literacy is beneficial to the health of the elderly, thus validating H1.

Endogeneity tests

The baseline result may be challenged by endogeneity problems arising from omitted variables and reverse causality. Specifically, although econometric models can control for individual characteristic variables such as gender and age, there may still be factors that are difficult to observe or measure accurately, such as personality, preferences, and family culture. These time-invariant unobservable variables cannot be identified in cross-sectional data, leading to endogeneity caused by omitted variables. Moreover, elderly citizens who are in better health may be more likely to access the Internet and thus enjoy greater convenience in obtaining digital information, potentially resulting in a higher level of digital literacy; this raises the issue of reverse causality. This study addresses these endogeneity problems in three ways. First, this study constructs a balanced panel using the CFPS data from 2016, 2018, and 2020, and the baseline regression controls for individual and time fixed effects. The two-way fixed effects model overcomes the limitations of cross-sectional data and, to a certain extent, addresses the endogeneity arising from time-invariant unobservable variables. Second, this study employs the time-lagged explanatory variable for estimation. As shown in Column m5.1 of Table , the estimated coefficient of L.digital is significantly positive at the 5% level, which is consistent with the baseline regression. Third, the instrumental variable (IV) method is employed. As mentioned above, this study uses the digital financial inclusion index (dfiindex) and the total postal and telecommunications business volume (tbtbusiness) to construct the instrumental variables. The total postal and telecommunications business volume of provinces in 2008 and the digital inclusive finance index reflect the level of digital infrastructure and the degree of digital transformation; in turn, these affect the level of digital services enjoyed by the elderly and, consequently, their digital literacy. A higher total postal and telecommunications business volume in earlier years indicates a more developed postal and telecommunications industry and better facilities, leading to better current digital infrastructure, which is conducive to improving the digital literacy of the elderly. Meanwhile, as an early macroeconomic variable, the total postal and telecommunications business volume of provinces in 2008 is unlikely to influence the current health of the elderly, and the health status of the elderly from 2016 to 2020 cannot affect the early postal and telecommunications business volume. Therefore, tbtbusiness satisfies the relevance and exogeneity requirements for an instrumental variable. The digital inclusive finance index published by Peking University is a well-established indicator for measuring the digital environment of a region . A higher level of digital inclusive finance in an earlier period makes it easier for the elderly to access digital technologies and share digital benefits, whereas the current health status of the elderly cannot affect the earlier digital inclusive finance index; so the lagged dfiindex also satisfies the relevance and exogeneity requirements. When employing the IV method, in order to take micro-level heterogeneity into account, the community average of digital literacy (mean_di) is introduced, and two instrumental variables are constructed: IV1 is the product of dfiindex and mean_di, while IV2 is the product of tbtbusiness and mean_di. The results of the 2SLS estimation are listed in Columns m5.2 and m5.3 of Table . The first-stage estimation results indicate a strong positive correlation between the instruments and digital literacy, suggesting that the better the digital environment, the higher the level of digital literacy, which supports the choice of instruments. The F-value of the weak-identification test is much greater than the critical value, further demonstrating the validity of the instruments. The estimated coefficients of digital are significantly positive at the 5% level, again supporting the findings of the baseline regression. Therefore, increasing the digital literacy level is beneficial to the health of the elderly.
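The 2SLS procedure described above follows the usual two stages. A minimal sketch, assuming the fixed effects have already been absorbed (e.g., by demeaning) and omitting the first-stage and weak-identification diagnostics the authors report:

```python
import numpy as np

def tsls(y, X_endog, X_exog, Z):
    """Two-stage least squares. All inputs are 2-D NumPy arrays with rows
    as observations (y may be 1-D): first stage regresses the endogenous
    regressor(s) on instruments plus exogenous controls; second stage
    replaces them with the fitted values."""
    W = np.column_stack([Z, X_exog])                     # first-stage design
    P = W @ np.linalg.lstsq(W, X_endog, rcond=None)[0]   # fitted endogenous vars
    X2 = np.column_stack([P, X_exog])                    # second-stage design
    return np.linalg.lstsq(X2, y, rcond=None)[0]         # 2SLS coefficients
```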
Robustness tests

This study performs robustness tests from two aspects. First, the explanatory variable is replaced. We adopt both the scoring method and the factor analysis method to calculate the digital literacy level of the elderly. In the scoring method, described above, each of the five yes/no questions (computer Internet access, mobile Internet access, online shopping, online learning, and online entertainment) contributes 1 point, and the total score represents the respondent's digital literacy; the results are presented in Column m6.1 of Table . The indicators used to calculate the digital literacy index with factor analysis are the same as those used in the entropy method; the results are reported in Column m6.2 of Table . The estimation results from both replacements are consistent with the baseline regression, indicating that the results are robust. Second, the dependent variable is replaced. We use the respondents' answers to the questionnaire item "Have you felt physically unwell in the past two weeks?" in place of the health status variable. The estimation results are shown in Column m6.3 of Table ; the coefficient of digital remains positive and statistically significant. Moreover, the CFPS also asks citizens aged over 45 seven questions on whether they can independently go out for outdoor activities, eat meals, perform kitchen activities, use public transportation, shop, clean and perform hygiene activities, and wash clothes. This study adopts a scoring approach in which 1 point is given for each "yes" answer, and the scores of these seven questions are summed to replace the original dependent variable; the re-estimated model is shown in Column m6.4 of Table . The estimation results remain consistent with the baseline regression, again verifying H1.

Mechanism analysis

The results above demonstrate that digital literacy promotes the health status of senior citizens. Does social support act as a mediator in this relationship? To identify the mediation mechanism, models (11) and (12) are estimated. Table displays the mediating results of social support: Column m7.1 shows the impact of digital on issupport, and Column m7.2 shows the joint impact of digital and issupport on health. Together, these two columns reveal the mediating role of social support in the relationship between digital literacy and the health of the elderly. The estimated coefficients of digital and issupport are strongly significant. Moreover, the direct effect of digital on health in Column m7.2 (0.219) is lower than the total effect observed in Column m4.4 of Table (0.236). Therefore, social support is a partial mediator in the relationship between digital literacy and health status. The magnitude of the mediating effect is the product of the coefficient $\alpha_1$ in model (11) and the coefficient $\lambda_1$ in model (12), which equals 0.017, accounting for 7.2% of the total effect. This result validates H2: digital literacy improves the health status of senior citizens by enhancing social support.
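The decomposition can be verified with the coefficients reported above; the indirect effect equals the gap between the total and direct effects in a linear mediation setup of this kind.

```python
# Decomposition of the mediating effect using the reported coefficients.
total    = 0.236                   # total effect of digital on health (Column m4.4)
direct   = 0.219                   # direct effect with issupport included (Column m7.2)
indirect = total - direct          # = alpha_1 * lambda_1 = 0.017
print(round(indirect / total, 3))  # 0.072 -> the 7.2% mediated share
```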
Heterogeneous analysis

Since the impact of digital literacy on the health status of the elderly may vary across groups, this study analyzes the heterogeneity of the impact from three perspectives: age, urban–rural type, and education level.

Age heterogeneity

In this section, the respondents are divided into the younger elderly (aged 60–69) and the middle-aged and older elderly (aged 70 and above). Based on the estimation results in Columns m8.1 and m8.2 of Table , the coefficient of digital is significantly positive only in the sample of the younger elderly, indicating that the impact of digital literacy on the health status of senior citizens exhibits age heterogeneity.

Urban–rural heterogeneity

In this section, the sample is divided into urban and rural subsamples based on respondent location to analyze urban–rural differences in the impact of digital literacy on the health status of the elderly. The estimation results in Columns m8.3 and m8.4 of Table show that digital literacy has a significant driving effect on the health of both the urban and the rural elderly, with the impact being stronger among the urban elderly.

Education level heterogeneity

In this section, the sample is divided into illiterate and non-illiterate groups based on respondent education level. As shown in Columns m8.5 and m8.6 of Table , the estimated coefficient of digital is not significant for the illiterate elderly, while it is significantly positive for the non-illiterate elderly.
Discussion

This study utilizes multi-period data from the CFPS database and examines the impact of digital literacy on the health status of senior citizens between 2016 and 2020. The entropy method is employed to measure the digital literacy levels of the elderly. A two-way fixed effects model is constructed, and the instrumental variable method is adopted to mitigate issues of reverse causality and omitted variable bias. Additionally, social support is introduced as a mediator to explore the mechanism through which digital literacy affects the health status of the elderly. This study seeks to provide Chinese evidence for the high-quality development of both the digital society and the aging society, offering insights for developing countries in addressing related issues.
The impact of digital literacy on the health status of senior citizens

The baseline results show that digital literacy contributes to the improvement of elderly health status, and this conclusion remains valid in the endogeneity and robustness checks; H1 is therefore validated. This finding is consistent with those of Rajkhowa and Qaim and Jin and Zhao , who found that a higher digital literacy level enables older people to access health information, enrich their leisure activities, and broaden communication channels with relatives and friends, all of which contribute to the improvement of their health status. Unlike their studies, we adopt an index system and the entropy weight method to measure digital literacy.

The mediating effect of social support

The mechanism analysis shows that digital literacy improves the health status of senior citizens through the channel of social support; H2 is verified. The existing literature found that income levels and online entertainment mediate the relationship between digital literacy and health. Our research reveals that social support also serves as a mediator between digital literacy and the health status of the elderly, thereby deepening the understanding of the influencing channels. A possible explanation is that digital knowledge practices have led to the reconstruction of traditional social relationships. Specifically, digital behaviors such as liking, sharing, and commenting have expanded the communication methods between the elderly and their peers , making it easier for the elderly to form weak social ties with online acquaintances. Consequently, once equipped with a certain level of digital literacy, the social networks of older adults are no longer confined to traditional acquaintance circles . Moreover, digital platforms bring together individuals with similar interests and hobbies, breaking existing social inertia and helping the elderly obtain more social support . Additionally, digital interaction within families, such as children and grandchildren helping older generations improve their digital literacy, fosters intergenerational harmony and enhances mutual assistance and support. This strengthens emotional bonds and psychological compensation, thereby enhancing the resilience and adaptability of family relationships .

The heterogeneous impact of digital literacy on the health status of senior citizens

The heterogeneity analysis shows that the effect of digital literacy on the health status of senior citizens is stronger in the younger age group. For the younger elderly, the technical and learning barriers to digitalization are relatively low, resulting in more frequent use of digital tools and a better connection with social issues. As they grow older, however, their cognitive abilities gradually decline, posing greater difficulties in using digital technologies, so the resulting health improvement effects are limited. In addition, the effect of digital literacy on the health status of senior citizens is more pronounced among the urban elderly. This difference may stem from China's long-standing urban–rural dual structure , under which the aging process is more rapid and the number of disabled elderly is higher in rural areas. Moreover, rural digital infrastructure significantly lags behind that of urban areas, leading to lower digital literacy levels among the rural elderly and a weaker health improvement effect.
Furthermore, the effect of digital literacy on the health status of senior citizens is stronger among the non-illiterate group. A possible explanation is that the illiterate elderly, owing to their limited educational background, face challenges in learning and utilizing digital technologies, making it difficult for them to embrace the convenience and entertainment brought by digital devices. In contrast, the non-illiterate elderly possess sufficient knowledge and stronger learning abilities, so the positive health effects of digital literacy are more pronounced in this group .

Limitations

This study examines the impact of digital literacy on the health status of senior citizens and identifies the mediation mechanism, which enhances our understanding of the positive effects of digitalization on an aging society. Nevertheless, there is still room for improvement. First, the measurement of health status is a complicated process. Owing to data availability, this study employs self-rated questions to measure the health status of the elderly, which may not be as accurate as clinical diagnostic instruments. Future research could be improved by obtaining physical examination data for more precise measurements. In addition, we will further explore the availability of relevant resident health databases; by constructing a comprehensive health measurement index that incorporates multiple dimensions, such as physical examination results and self-rated health, future studies can reduce the subjective bias of relying solely on self-rated health and reveal the health levels of the elderly more comprehensively and accurately. Secondly, this study focuses on the impact of digital literacy on the health status of the elderly without further discussing its influence on health inequalities. Current research has not reached a consensus on whether digitization helps narrow health inequalities among the elderly; whether the development of digitization can assist the most disadvantaged individuals and reduce health inequalities remains a valuable research question and is one of our next research efforts. In future research, we will classify senior citizens according to their digital literacy levels and measure the differences in health distribution across levels, employing the concentration index decomposition method to uncover the impact and mechanisms of digital literacy on health inequality among the elderly. This will provide insights for optimizing the medical security system in aging countries and bridging health disparities. Lastly, this study utilizes micro-survey data and mainly discusses the impact of individual-level digital literacy. Expanding the perspective to the macro level and discussing the health impact of digitization on the elderly would enhance the comprehensiveness of our research. In the future, we will incorporate comparative analyses across different regions and countries, discussing the impact of digitization on the health of the elderly from a broader societal perspective.
Conclusions

Under the background of Healthy Aging and Digital China, it is of great significance to ensure that the elderly can enjoy the dividends of digitalization. Digital literacy plays a crucial role in facilitating healthy aging among the elderly and promoting the Digital China strategy. This study discusses the theoretical mechanisms through which digital literacy affects the health status of senior citizens and then, using the CFPS data from 2016, 2018, and 2020, empirically examines this impact. The theoretical contributions are as follows.
First, this study takes the elderly population as the research subject and explores the impact mechanism of digital literacy on the health status of senior citizens from both theoretical and empirical perspectives. Since the health of the elderly is a critical component of residents' welfare, our study enables the formulation of effective policy recommendations tailored to China's current digital development from a health perspective. Our findings also serve as a scientific reference for other developing countries in their endeavors to improve the health status of senior citizens. Second, this study examines the potential channels through which digital literacy affects the health of the elderly from the perspective of social support. This not only narrows the gaps in the existing research on the micro-level impact of digitalization but also provides insights for improving the health of the elderly.

The main conclusions are as follows. First, digital literacy can improve the health status of senior citizens. This result remains valid after introducing lagged explanatory variables and addressing the endogeneity issues, and its robustness is further confirmed through estimations with alternative explanatory and explained variables. Second, social support acts as a partial mediator in the relationship between digital literacy and the health status of senior citizens: an enhanced digital literacy level strengthens social support, thereby promoting the health of the elderly. Third, the heterogeneity analysis reveals that the effect of digital literacy on the health status of senior citizens varies across age groups, urban–rural types, and education levels; specifically, the positive effects are more pronounced among the younger, urban, and non-illiterate elderly.

Based on the above conclusions, the following policy implications are proposed. First, joint efforts from multiple stakeholders should be made to unlock the positive impact of digital literacy on senior citizens' health. Governments should incorporate the improvement of digital literacy among the elderly into national strategies and encourage participation from all sectors of society in digital literacy training for the elderly to ensure their adaptability to life in the digital age. Additionally, efforts should be strengthened in constructing new types of digital infrastructure, such as big data platforms, to facilitate elderly individuals' access to data elements at a lower cost, thereby improving their digital literacy. At the societal level, greater attention should be paid to the Internet usage of the elderly: for example, digital service volunteer stations for the elderly should be established to address the challenges they face in using mobile devices such as smartphones, community learning classes on smart devices should be designed for the elderly, and separate digital channels should be provided for the elderly in public places such as shopping malls and hospitals. Moreover, every citizen should lend a helping hand to the elderly in their vicinity regarding the use of digital tools. Second, to broaden the ways in which elderly individuals can use digital tools to obtain social support, companies can further promote the elderly-friendly transformation of digital products, such as artificial intelligence functions in smart home appliances.
Enterprises can design more elderly-friendly interactive interfaces and functions, such as a "senior mode" offering a comfortable experience, motion-sensing control, and voice search, tailored to the behavioral habits of the elderly, ensuring that they can handle digital services without barriers. This will enable the elderly to obtain social support more easily, thereby improving their health. Third, the impact of digital literacy on health is relatively weak among the older age groups, rural residents, and those with lower levels of education, so it is imperative to focus on cultivating their digital literacy. Educational institutions should explore the establishment of a hierarchical lifelong education system for digital literacy among the elderly, such as the "Silver Age Digital Classroom" and other digital technology teaching activities. In addition, introducing "new rural elites" is an effective means of improving digital literacy in rural areas; for instance, returning college students and veterans who use digital technology for daily life or entrepreneurship can assist or motivate elderly groups to adopt digital technology and acquire digital skills. Moreover, family members are encouraged to strengthen the digital literacy of the less educated elderly through methods such as "digital feedback" and "intergenerational integration," enabling them to enjoy the health-promoting effects of digital literacy.
Addition of flexible laryngoscopy to anesthesiological parameters improves prediction of difficult intubation in laryngeal surgery
This prospective pilot clinical study included 50 patients over 18 years of age, scheduled for microscopic laryngeal surgery at the Clinic for Otorhinolaryngology, University Clinical Center in Niš, between June and September 2023. The study was approved by the Ethical Committee of the Medical School, University in Niš, Niš, Serbia, and by the Ethical Committee of the University Clinical Center of Niš, Niš, Serbia. Criteria for inclusion in the study were: diagnosis of a lesion of the vocal folds, planned general endotracheal anaesthesia, age over 18 years, and the absence of a tracheostomy. Exclusion criteria were: age younger than 18 years, presence of a tracheostomy cannula, refusal of the patient to participate in our research, inability to understand and/or sign an informed consent form, and urgent surgical interventions. Each patient in the study was informed about our research and signed an informed consent form. Preoperatively, each patient underwent a regular surgical clinical examination, including flexible laryngoscopy. The surgeon preoperatively identified the possibility of difficult intubation based on the flexible laryngoscopy findings and previous experience. Before the surgical intervention, an anesthesiologist took the patient's medical history and performed a detailed airway assessment. The attending anesthesiologist used a specially designed questionnaire to record all the needed parameters and measurements. The patient's general data, such as gender, date of birth, weight in kilograms (kg), and height in centimeters (cm), were entered. From these parameters, the body mass index (BMI) was calculated with the help of an online calculator available at https://www.calculator.net/bmi-calculator.html (the underlying formula is sketched in the code example after this paragraph). Data related to difficult intubation were entered, e.g., presence of stridor, general condition determined by the ASA score, loud snoring, feeling tired during usual activities, apnea during sleep, and hypertension. The ASA score was determined by the official ASA classification used in everyday clinical practice. Prediction of the difficult airway was performed first by observing the patient's anatomical features, e.g., mandibular prognathism, retrognathia, and prominent incisors. Specific measurements and tests were conducted after the patient was seated. The examination began by instructing the patient to open the mouth as wide as possible, and the distance between the upper and lower incisors (inter-incisor gap, IIG) was measured. The patient was then instructed to perform the modified Mallampati test (MMT) by protruding the tongue out of the oral cavity as far as possible while still in the previous position. The interpretation of both tests is given in . The mandibular protrusion test, known as subluxation (S-lux), was performed by instructing the patient to protrude the lower jaw in front of the upper jaw. The results were classified as follows: S-lux > 0 means that the lower incisors can be protruded anterior to the upper incisors; S-lux = 0 means that the lower incisors can be brought edge to edge with the upper incisors; and S-lux < 0 means that the lower incisors cannot be brought edge to edge with the upper incisors. The interpretation of the results is given in . For the second part of the measurement, the patient was instructed to perform maximal neck extension in a sitting position. Then, the thyromental distance (Patil's test) was measured together with the sternomental distance.
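As referenced above, BMI reduces to a one-line formula (weight in kilograms divided by the square of height in meters). The following Python sketch is purely illustrative; the values are hypothetical, not patient data from this study:

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

# Hypothetical example: 80 kg at 175 cm falls in the overweight range.
print(round(bmi(80, 175), 2))  # 26.12
```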
The reclination test was performed in the same sitting position by instructing the patient to open the mouth and hold the upper teeth horizontal; maximal neck extension was then performed, and the angle of deflection of the upper teeth was determined. The interpretation of the reclination test is given in . The length of the mandible, its anterior and posterior depth, neck circumference, and the acromion-acromion distance were measured. All the patients were intubated using direct laryngoscopy with a Macintosh blade, size 4. Intubation difficulty was determined using the Intubation Difficulty Scale (IDS) immediately after intubation in the operating room. The data necessary for determining the IDS and their scoring are given in . By "alternative technique" during intubation, we mean any modification of the intubation process that differs from routine placement of the tube in the trachea; more precisely, it involves modification of the position and curvature of the tube during intubation, change of the patient's position, or use of a bougie, Magill forceps, or fiberoptic bronchoscope. After assessing the difficulty of intubation according to the IDS results, patients were divided into two groups: difficult intubation (DI) and normal intubation (NI). All results for continuous variables are expressed as mean ± SD. Because the variables were of different types, the difference between the two groups was determined using the t-test for independent samples, the Mann-Whitney U test, and the χ² test. A binary logistic regression model was used to assess the interaction between variables. We used C statistics to evaluate the effectiveness of combinations of two or more parameters; the area under the curve (AUC) was determined together with sensitivity and specificity. A P-value below 0.05 was considered statistically significant. All results were processed in SPSS 10.0 for Windows (SPSS Inc., Chicago, IL, USA). We included 50 patients between 33 and 83 years of age (61.26 ± 10.87). The largest group of patients, 18 (36%), was between 51 and 60 years of age. Nineteen (38%) patients were female and 31 (62%) were male. The mean body mass index (BMI) was 26.30 ± 5.51, which is categorized as overweight. The general characteristics of the patients are presented in , while the clinical characteristics essential for the prediction of difficult intubation are presented in . According to the IDS scale, 17 (34%) intubations were difficult. Patients in the DI group were more often male ( P = 0.033) and had apnea during sleep ( P = 0.021). Flexible laryngoscopy provided insight into the postoperative histopathological and site characteristics of biopsied tumors: 20 (40%) showed malignant and 30 (60%) benign characteristics. There was no statistically significant association between histopathological characteristics and intubation difficulty ( P = 0.180). Details are presented in . Patients in the DI group had an IIG below 4 cm, a higher reclination class, a greater neck girth, and a higher MMT class. Statistical details are presented in . Of all the measured data and tests, the following showed statistical significance: IIG, reclination, neck girth, MMT, and flexible laryngoscopy. Flexible laryngoscopy showed the highest level of statistical significance, with P = 0.0001.
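The modeling step described above (binary logistic regression evaluated via the C statistic, i.e., the AUC) can be illustrated with a short sketch. This is not the study's SPSS analysis: the data below are synthetic, and the variable encodings are assumptions chosen only to mirror the predictors named in the text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 50  # cohort size matching the study; the data here are synthetic

# Hypothetical predictors: flexible laryngoscopy (0 = usual, 1 = difficult),
# reclination class (1-4), and neck girth in cm.
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.integers(1, 5, n),
    rng.normal(40.7, 4.0, n),
])
y = rng.integers(0, 2, n)  # difficult intubation by IDS (0 = NI, 1 = DI)

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"C statistic (AUC): {auc:.3f}")
```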
The statistically significant parameters were processed with C statistics, and the AUC curves are presented in . Neck girth was the only scale variable for which a new cut-off value was established: 40.70 cm, with a 95% CI, sensitivity of 82.4%, and specificity of 55.5%. Flexible laryngoscopy identified 13 (26%) patients as possible difficult intubations, compared to the Cormack-Lehane classification results obtained during direct laryngoscopy. Flexible laryngoscopy correctly classified 39 (78%) patients, with χ² = 9.802, df = 1, and P = 0.002. According to these results, flexible laryngoscopy was the best model for predicting difficult intubation. Therefore, we assessed whether this parameter could be combined with the other statistically significant parameters and measurements. A detailed review of all the statistical models is provided in . Among all the combinations of one parameter with flexible laryngoscopy, reclination had a greater impact on the statistical model than any other parameter . We then combined flexible laryngoscopy and reclination with the other parameters, and there were no significant differences between the statistical models, as shown in and . When we combined all the statistically significant parameters, the statistical model indicated that flexible laryngoscopy, reclination, and neck girth contributed to the model to the greatest extent. This model was statistically significant, with χ² = 43.268, P < 0.0001, and an AUC of 0.955 . Considering the specificity of ENT pathology, we expected that our results would differ from those obtained in studies of other surgical specialties. Namely, many parameters used in everyday anesthesiology practice, which were significant in other studies, did not show statistical significance in our research. Initially, it is essential to discuss the discrepancy between the incidence of difficult intubations in the literature and our results. Our study showed that difficult intubation was present in as many as 34% of examined patients, whereas, according to the available literature, the incidence of difficult intubations in ENT surgery is 15.8%, more than two times lower than in our study. The reported incidence covers the entire spectrum of ENT pathology, while our patients underwent microlaryngoscopy exclusively. It should be noted that some authors use only the Cormack-Lehane score to determine difficult intubation, while in our study a more extensive and accurate IDS score was used . An undeniable limitation of our study is the small number of randomly selected patients; the true incidence will be revealed only after the study is further expanded. Another limitation is that, in the field of laryngology, there is no established classification that would define a flexible laryngoscopy finding as indicating potentially difficult or usual intubation. Therefore, we had to rely on the only classification available in practice, i.e., the surgeon's experience. Regarding the general data of the patients, statistical significance was found only for gender, while the other parameters failed to reach statistical significance. Several studies have found that the frequency of difficult intubation is higher in men, which correlates with our results . Wong et al. reported that a history of previous difficult intubations significantly predicts future difficult intubations. In our research, we did not obtain such data, probably because of the absence of such a data registry in our country.
Patients are often not informed about a difficult airway encountered during a previous intubation. Our study did not show statistical significance of the patient's age overall or within the age groups. Oria et al. found that age over 40 years was a predisposing factor for difficult intubation. Such results are explained in the literature by the fact that specific anatomical changes with age lead to the appearance of a difficult airway . The lack of correlation with our results can be explained by the fact that the cause of difficult intubation lies in the pathology of the airway itself and the anatomical changes accompanying it; age is not one of the main predictors of difficult intubation. Of the parameters used in determining BMI, none showed significance for predicting a difficult airway. There are conflicting reports regarding BMI as a predictor of difficult intubation . Moon et al. found that even a group of morbidly obese patients did not have a higher frequency of difficult intubations, only a higher incidence of difficult mask ventilation. In an extensive study, Uribe et al. found that BMI in men was a valuable parameter for predicting difficult intubation; however, only other surgical specialties were included. In their meta-analysis, Wang et al. remained inconclusive regarding the accuracy of this parameter, which correlates with everyday clinical experience. Patients who undergo laryngeal surgery very often suffer from obstructive sleep apnea (OSA), which is a predisposing factor for difficult intubation. The parameters we examined that indicate the risk of existing OSA, such as stridor, loud snoring, fatigue, high blood pressure, and endocrine comorbidities, did not show statistical significance. The only statistically significant parameter was OSA already known to the patient. OSA and increased neck circumference have been shown to be independent parameters for predicting a difficult airway in many studies . Nerurkar et al. reported that neck circumference has an impact on predicting difficult airways in microlaryngoscopy, which correlates with our results. The cut-off value found in their research was 37.5 cm, while in our study it was 40.70 cm. Neither their research nor ours included many patients, and a larger number of patients is necessary to determine a more precise cut-off. Riad et al. stated that setting a cut-off value of 42 cm is essential. Further research is needed to set a more accurate cut-off value. In our study, instead of measuring IIG and interpreting the results as a scale parameter, we used a cut-off value of 4 cm. This parameter was significant in our study and correlates with other studies . The MMT score is a good predictor . However, studies have shown that it cannot be used as an isolated predictor . Bergler et al. concluded that an MMT above 3 significantly predicts difficult intubation in oral surgery, with results that correlate with ours. According to the AUC curves, reclination and flexible laryngoscopy are the only parameters that can be used as independent predictors of difficult intubation in laryngeal surgery. Alp et al. confirmed that reclination can be used as an isolated predictor. Some studies have indicated that flexible laryngoscopy is a strong isolated predictor of difficult intubation . Only the study by Budde et al. showed limited prediction and, according to the authors, a "tendency towards statistical significance" in obese patients.
No previous studies have combined flexible laryngoscopy, as a strong predictor, with anesthesiological parameters; doing so was one of our research goals. The lack of statistical significance of many measurements and clinical assessments of difficult intubation in our study can be explained by the fact that all these parameters are relevant but not sufficient for a precise evaluation of a difficult airway in laryngeal surgery. Even when they indicate the absence of a difficult airway, the anesthesiologist may face a challenge only after placing the laryngoscope and visualizing the airway. For this reason, in laryngotracheal surgery, the surgeon's assessment and a joint preoperative discussion of the airway are extremely important for timely preparation of equipment and of a surgical airway by the surgical team. It is undoubtedly advisable to review preoperatively as many parameters used in routine practice as possible; however, when time is short, it is necessary to focus on specific parameters. Through our research, we have identified significant parameters that, after more extensive research, could be used to develop a score for the preoperative assessment of the airway in laryngeal surgery. The statistically significant parameters in our research are those that reflect greater head and neck extension and greater mandible manipulation when visualizing the glottis. Together with the surgical assessment, these parameters accurately identified difficult airways in laryngeal surgery. Difficult airway assessment could not be performed reliably using anesthesiological parameters alone. Flexible laryngoscopy must be included in the preoperative evaluation for ENT, and especially laryngeal, surgery. Further studies are needed to classify flexible laryngoscopy findings and make them more objective. We have identified parameters that can be used to develop a reliable and accurate score for preoperative difficult airway assessment in laryngeal surgery.
Japanese Society of Medical Oncology/Japan Society of Clinical Oncology/Japanese Society of Pediatric Hematology/Oncology-led clinical recommendations on the diagnosis and use of immunotherapy in patients with high tumor mutational burden tumors
In the field of cancer drug therapy, treatment outcomes and prognosis have improved as effective novel drug therapies have emerged . At the same time, the development of biomarkers that identify, before treatment, the groups in which efficacy is likely has also contributed to improvements in cancer treatment outcomes. Conventional cancer treatment has involved a multifaceted assessment that encompasses the pathological diagnosis of the disease and an evaluation of its stage, the benefits and disadvantages of treatment, and the preferences of the patient. In diagnosing the disease, identifying the primary tumor and determining the tissue type have yielded important information that has been key to establishing a treatment plan. Recent advances in molecular biology have elucidated a variety of biological characteristics of tumors, resulting in the clinical development and regulatory approval of tumor-agnostic drugs that transcend the organ-specific characteristics of the disease. An anti-programmed cell death protein 1 (PD-1) antibody drug, pembrolizumab, for advanced/recurrent deficient DNA mismatch repair (dMMR) solid cancers, and tropomyosin receptor kinase (TRK) inhibitors against neurotrophic receptor tyrosine kinase (NTRK) fusion gene-positive advanced solid cancers, were approved as tumor-agnostic therapies . Moreover, the efficacy of pembrolizumab against tumor mutation burden-high (TMB-H) solid tumors was demonstrated, and the US Food and Drug Administration (FDA) approved it in 2020 . In Japan, pembrolizumab was approved for patients with TMB-H solid tumors, making it the third tumor-agnostic drug approved. This article is a summary of the part describing TMB-H in the "Clinical Practice Guidelines for Tumor-Agnostic Treatments in Adult and Pediatric Patients with Advanced Solid Tumors toward Precision Medicine (in Japanese)". The part regarding dMMR and NTRK fusion has already been reported elsewhere . The present guidelines provide a guide to diagnosis and treatment and should be utilized in clinical practice according to the recommendation levels described, adjusted for individual patients. If appropriate tests are performed on appropriate patients and appropriate treatment is given at the appropriate timing based on the recommendation levels described here, treatment outcomes in patients with solid tumors are expected to improve. The current guidelines systematically describe the items to be considered when treating patients with TMB-H solid tumors, including the timing and methods of TMB testing and the positioning of immunotherapy. In the preparation of these guidelines, clinical questions (CQs) were set, and the literature providing the evidence base for the answers to those questions was collected by hand searches and subjected to a systematic review. In setting the CQs, the working group of the Clinical Practice Guidelines for Tumor-Agnostic Genomic Medicine in Adult and Pediatric Patients with Advanced Solid Tumors (3rd edition) prepared draft CQs and decided which ones would be included in the guidelines. Keywords related to each CQ were selected and sent to the Japan Medical Library Association, which generated queries used to perform comprehensive literature searches.
The PubMed, Ichushi Web, and Cochrane Library databases were used in the searches. Important reports by various academic societies were also collected by hand searches and used in the guidelines. Primary and secondary screenings and systematic reviews were performed by the persons in charge (SM/YN) of the working group of the Clinical Practice Guidelines for Tumor-Agnostic Genomic Medicine in Adult and Pediatric Patients with Advanced Solid Tumors (3rd edition). The recommendation levels specified for the CQs were determined by voting by the committee members (Table ). The levels, which were determined based on factors such as the strength of the evidence and the expected benefits and disadvantages for patients, are as follows: strongly recommended (SR), recommended (R), expert consensus opinion (ECO), and not recommended (NR). The status of regulatory approval and insurance coverage in Japan for the treatments (including indications for testing and treatment) was not considered during the voting, but was indicated in the remarks section as needed. The overall assessments based on the voting were as follows:
(1) SR if ≥ 70% of the votes were for SR;
(2) R if criterion (1) was not met and SR votes + R votes accounted for ≥ 70% of the total;
(3) ECO if criteria (1) and (2) were not met and SR votes + R votes + ECO votes accounted for ≥ 70% of the total; and
(4) NR if NR votes accounted for ≥ 50% of the total, regardless of whether criteria (1), (2), or (3) were met.
If none of criteria (1)–(4) was met, the assessment was "no recommendation level." The recommendations for the CQs include recommendations that are not currently based on strong evidence. As new evidence accumulates, the information and recommendations in these guidelines may change significantly. Although these guidelines will be updated as appropriate, when using a drug clinically, the latest medical information should be reviewed and every effort made to ensure the drug is used properly.
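Because the voting rules above form a precise decision procedure, they can be expressed compactly in code. The following Python sketch is purely illustrative; the vote counts are hypothetical, and rule (4) is checked first because, per the text, it applies regardless of the other criteria:

```python
def recommendation_level(sr: int, r: int, eco: int, nr: int) -> str:
    """Map committee vote counts to a recommendation level per the rules above."""
    total = sr + r + eco + nr
    if nr / total >= 0.5:              # rule (4) applies regardless of (1)-(3)
        return "NR"
    if sr / total >= 0.7:              # rule (1)
        return "SR"
    if (sr + r) / total >= 0.7:        # rule (2)
        return "R"
    if (sr + r + eco) / total >= 0.7:  # rule (3)
        return "ECO"
    return "no recommendation level"

# Hypothetical ballot: 5 SR, 3 R, 1 ECO, 1 NR -> SR+R = 80% >= 70% -> "R"
print(recommendation_level(sr=5, r=3, eco=1, nr=1))
```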
Solid tumors with a high tumor mutation burden (TMB-H)

A characteristic of cancer cells is that they have more genetic mutations than normal cells due to external factors such as exposure to ultraviolet light or smoking, therapeutic interventions such as temozolomide administration, or inborn or acquired genetic causes related to the DNA repair mechanism . The tumor mutation burden (TMB) refers to the quantity of somatic mutations in cancer cells and is expressed in the unit "mut/Mb," which represents mutations per 1 million bases (1 megabase, or Mb). In preclinical studies, it was found that new peptides produced as a result of non-synonymous mutations, among the passenger gene mutations in cancer cells, are presented as neoantigens by the major histocompatibility complex (MHC) on the surface of antigen-presenting cells and may be recognized as non-self by infiltrating immune cells . Next-generation sequencing technology and calculation methods for predicting antigen peptide presentation by MHC have been developed, and the presence of neoantigens recognized by T cells has been reported in high-TMB mouse tumors, which are similar to high-TMB human tumors . Moreover, immunogenicity associated with increased TMB has been confirmed in nonclinical studies, suggesting that these biological characteristics are applicable across cancer types . Furthermore, a review by Schumacher and Schreiber suggested that in tumors with somatic mutations in excess of 10 mut/Mb (equivalent to 150 non-synonymous mutations), neoantigens that are recognized by the immune system may be produced .

TMB testing

TMB has previously been evaluated by whole genome sequencing (WGS) or whole exome sequencing (WES). However, target sequencing panels (gene panel tests) have also recently been found to enable TMB to be assayed with high sensitivity . Because TMB scores obtained by gene panel tests that sequence a TMB analysis region of 1.1 Mb correlate with WES TMB scores, these panel tests can measure TMB accurately; however, a lower correlation has been reported for regions of ≤ 0.5 Mb . The algorithms used in calculating the TMB value (TMB score) are designed to be optimal for each gene panel. This is problematic because the algorithms are the intellectual property of each panel vendor and therefore not openly disclosed, resulting in variability between panels (Table ) . TMB harmonization is currently being pursued in a project led by the Friends of Cancer Research (FoCR), which verifies the correlations between the TMB scores calculated by each gene panel test and WES TMB scores. Although there is variability depending on the type of cancer, good correlations have been reported (Spearman's correlation coefficients of 0.79–0.88). In Japan, testing can be performed as part of comprehensive genomic profiling under national health insurance coverage. TMB measured by the FoundationOne® CDx assay was found to be highly correlated with WES TMB , and a strong correlation was also reported for the NCC Oncopanel (Fig. ). In the future, FoCR plans to retrospectively analyze the clinical specimens of patients administered immune checkpoint inhibitors in clinical studies, with the aim of making TMB testing available in the clinical setting. Pembrolizumab was found to be effective in TMB-H solid tumors in the phase II KEYNOTE-158 study, which used biomarkers to evaluate pembrolizumab efficacy in patients with unresectable advanced or recurrent solid tumors who were refractory or intolerant to prior treatment . In this study, patients with TMB ≥ 10 mut/Mb, as analyzed by the FoundationOne® CDx assay, were defined as TMB-H. Based on the results of this study, the FDA approved pembrolizumab for the treatment of TMB-H solid tumors and FoundationOne® CDx as a companion diagnostic for pembrolizumab. In Japan, FoundationOne® CDx was approved on November 15, 2021 to assist in determining whether a drug is indicated for a solid tumor with a high TMB score. The conventional method of calculating TMB involves an analysis of tumor tissue. Consequently, in cases such as when a tumor is unresectable and only previously collected surgical specimens can be obtained, TMB analysis using FoundationOne® CDx may not reflect tumor status at the point when systemic treatment is administered. Efforts have therefore been made to evaluate TMB by analyzing circulating tumor DNA (ctDNA) from the blood. As compared with tumor tissue analysis, ctDNA analysis requires less time and may detect intratumoral genetic heterogeneity .
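At its core, the TMB score described above is a simple normalization of mutation count to the size of the sequenced region; real assays additionally apply proprietary variant filtering before counting. A minimal sketch with hypothetical numbers:

```python
def tmb_score(somatic_mutations: int, region_size_mb: float) -> float:
    """Tumor mutation burden in mut/Mb: mutation count per megabase sequenced."""
    return somatic_mutations / region_size_mb

# Hypothetical example: 14 qualifying mutations over a 1.1 Mb panel region.
score = tmb_score(somatic_mutations=14, region_size_mb=1.1)
print(f"{score:.1f} mut/Mb -> {'TMB-H' if score >= 10 else 'TMB-L'}")  # 12.7 mut/Mb -> TMB-H
```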
Frequency of TMB-H by cancer type

The frequency of somatic mutations by cancer type is indicated in Fig. . The frequency varies widely depending on the cancer type, ranging from a low of 0.1 mut/Mb to more than 100 mut/Mb (e.g., melanoma, lung squamous cell carcinoma/lung adenocarcinoma), and even within the same cancer type, differences of 1000-fold or more are seen . The incidence of cancers with TMB scores ≥ 10 mut/Mb, as determined using the FoundationOne® CDx assay (see " " for more information), was reported in a review by Chan et al. (Fig. ) . The percentage among the top 30 types of cancer with TMB scores ≥ 10 mut/Mb ranged from approximately 10% to 60%, and such cancers represented 13.3% of solid tumors as a whole. At a joint meeting of the Japanese Society of Medical Oncology (JSMO), European Society for Medical Oncology (ESMO), American Society of Clinical Oncology (ASCO), and Taiwan Oncology Society (TOS) organized by the Japan Society of Clinical Oncology (JSCO), the incidence of TMB-H tumors in the FoundationOne database was reported using a TMB-H cutoff of TMB ≥ 20 mut/Mb (Table ) . The percentages in the 30 cancer types with the highest incidences ranged from 0.93% to 54.60%. TMB-H solid tumors have been reported to have a poor prognosis . Efforts have also been made to evaluate TMB by cancer type using ctDNA analysis. Foundation Medicine Inc. developed an assay to evaluate blood TMB (bTMB) by analyzing ctDNA in blood. Pretherapy baseline bTMB in blood specimens was evaluated in the prospective POPLAR and OAK studies, which compared the efficacy of atezolizumab with that of docetaxel in non-small cell lung cancer . Tissue TMB (tTMB) was also analyzed in these studies using tumor tissue, and the sensitivity and specificity of bTMB relative to tTMB were found to be 64% and 88%, respectively. FoundationOne® Liquid CDx, developed by Foundation Medicine Inc., was approved in Japan in March 2021 for comprehensive genomic profiling of solid tumors using blood specimens. The incidences of tTMB-H (≥ 10 mut/Mb), analyzed in tumor tissue using the FoundationOne® CDx assay, and bTMB-H (≥ 10 mut/Mb), analyzed in blood using the FoundationOne® Liquid CDx assay, have been reported by cancer type (Fig. ) . Sixteen cancer types in 167,332 patients were analyzed. The incidence of tTMB-H was 19%, and the cancer types with the highest incidences of tTMB-H were, in descending order, malignant melanoma (53%), small cell lung cancer (41%), non-small cell lung cancer (40%), bladder cancer (39%), and endometrial cancer (23%). In the bTMB analysis, which examined 16 cancer types in 9312 patients, the incidence of bTMB-H was 13%. The prevalence of bTMB-H by cancer type was correlated with the prevalence of elevated tissue TMB (r = 0.81). While there are some reports of high correlation in lung cancer, there are also reports of low correlation and concordance rates in gastrointestinal cancer, and differences by metastatic organ have also been reported . Further investigation of bTMB is needed.
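The sensitivity and specificity figures quoted above follow from a standard 2×2 comparison of bTMB calls against tTMB as the reference standard. A brief sketch; the counts are hypothetical, scaled only to reproduce the reported 64% and 88%:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts with tTMB-H as the reference standard.
sens, spec = sensitivity_specificity(tp=64, fn=36, tn=88, fp=12)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 64%, 88%
```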
Efficacy of anti-PD-1/PD-L1 antibody drugs against TMB-H solid tumors

In preclinical studies, novel peptides produced by amino acid substitutions resulting from DNA mutations, among the passenger gene mutations of cancer cells, were presented as neoantigens, causing an antitumor immune response . Mouse tumors with high TMB have been reported to have neoantigens that are recognized by T cells . Moreover, immunogenicity associated with increased TMB has been confirmed in nonclinical studies , suggesting that an increase in neoantigens resulting from increased TMB in tumor cells may promote tumor recognition by T cells. Immune checkpoint inhibitors are therefore likely to have an antitumor effect in TMB-H solid tumors by facilitating T-cell activation. The KEYNOTE-028 study was a phase Ib study that examined the safety and efficacy of pembrolizumab in advanced solid tumors positive for PD-L1 expression. As an exploratory endpoint, the study examined the relationship between TMB and PD-L1. WES TMB was analyzed for 16 cancer types in 77 patients (only 1 of whom was MSI-H), and a stronger tumor regression effect and prolongation of PFS were seen in patients with high TMB . TMB was investigated using the MSK-IMPACT platform in 1662 patients who received immune checkpoint inhibitor monotherapy or combination therapy at the Memorial Sloan Kettering Cancer Center in the United States. A comparison, by cancer type, of patients with the highest 20% of TMB scores against the remaining patients showed that OS was significantly longer in the former group (HR: 0.52; p = 1.6 × 10⁻⁶) . In addition to these studies, many others have also found TMB to be a useful factor for predicting the efficacy of immune checkpoint inhibitors. When the objective response rate (ORR) seen with immune checkpoint inhibitor monotherapy (anti-PD-1 or anti-PD-L1 antibody) was plotted against the median TMB for 27 cancer types, a significant correlation between ORR and TMB was observed (Fig. ) . The KEYNOTE-158 study was a multicenter, non-randomized, open-label, multicohort phase II study that evaluated the efficacy and safety of pembrolizumab in patients with unresectable or metastatic solid tumors who were refractory or intolerant to prior treatment. The study evaluated various biomarkers that predict pembrolizumab efficacy in a variety of cancer types. TMB was designated in advance as an exploratory biomarker and was analyzed retrospectively using the FoundationOne® CDx assay. As a post-marketing requirement for FDA approval in the United States, Group M was subsequently added as a prospectively enrolled cohort of patients with TMB-H solid tumors. The primary endpoint of the study was ORR, and the secondary endpoints were duration of response, progression-free survival (PFS), and overall survival (OS). TMB data were obtained for 790 of the 1050 patients in the efficacy analysis set. Using a TMB-H cutoff of ≥ 10 mut/Mb, 102 patients were classified as TMB-H and 688 as TMB-Low (TMB-L, < 10 mut/Mb). Pembrolizumab showed a higher ORR in the TMB-H group than in the TMB-L group (29% vs. 6%). With MSI-H patients and those whose MSI status was unknown excluded, the ORR in the remaining 81 patients in the TMB-H group was 28%, which was comparable. PD-L1 expression was also evaluated in this study. No correlation was seen between TMB score and PD-L1 expression (combined positive score: CPS), and the ORR was 35% for PD-L1-positive patients (CPS ≥ 1) and 21% for PD-L1-negative patients (CPS < 1) in the TMB-H group . Based on the results of this study, the FDA approved pembrolizumab for the treatment of TMB-H solid tumors. The Targeted Agent and Profiling Utilization Registry (TAPUR) study, conducted by ASCO, was a phase II basket study that evaluated the antitumor effects of approved targeted drugs against specific genomic alterations. Results for a TMB-H cohort in the study were also reported. An examination of 27 patients with colorectal cancer with TMB ≥ 9 (25 MSS patients, 2 patients with unknown microsatellite status) showed an antitumor effect, with an ORR of 11% (95% CI 2–29%), median PFS of 9.3 weeks (95% CI 7.3–16.1), and median OS of 51.9 weeks (95% CI 18.7–NR) .
A similar examination was conducted in breast cancer with TMB ≥ 9, and an antitumor effect was also seen, with an ORR of 37% (95% CI 21–50%), median PFS of 10.6 weeks (95% CI 7.7–21.1), and median OS of 30.6 weeks (95% CI 18.3–103.3) . Even after the FDA approved pembrolizumab for TMB-H solid tumors, debate has continued regarding the TMB-H cutoff and differences in efficacy across cancer types. Immune checkpoint inhibitors showed a strong antitumor effect (ORR: 39.8%; 95% CI 34.9–44.8) in TMB-H tumors of cancer types in which tissue-infiltrating CD8 T-cell levels showed a positive correlation with neoantigen levels, such as malignant melanoma, lung cancer, and bladder cancer; in these cancer types, the ORR in TMB-H tumors was significantly higher than in TMB-L tumors [odds ratio (OR): 4.1; 95% CI 2.9–5.8; p < 2 × 10⁻¹⁶]. However, in cancer types for which there was no correlation between CD8 T-cell levels and neoantigen levels, such as breast cancer, prostate cancer, and glioma, the ORR of immune checkpoint inhibitors in TMB-H tumors was 15.3% (95% CI 9.2–23.4; p = 0.95), which was significantly lower than in TMB-L tumors (OR: 0.46; 95% CI 0.24–0.88; p = 0.02) . This suggests that, depending on the cancer type, it may not be possible to predict the efficacy of immune checkpoint inhibitors based on TMB. It has also been suggested that the optimal TMB cutoff may differ depending on the cancer type . In gliomas, temozolomide therapy results in an increase in TMB, although the mechanism of that change is unknown. In an examination of the efficacy of an immune checkpoint inhibitor in 11 patients with TMB-H or dMMR gliomas (5 untreated and 6 previously treated patients), the best response was disease progression in 82%, with no significant difference as compared with TMB-L gliomas . These findings indicate a need for further investigation regarding the optimal method of measuring TMB and the TMB cutoff for each cancer type. Efforts have also been made to evaluate TMB by ctDNA analysis. In 69 patients with solid tumors who were administered an immune checkpoint inhibitor, ctDNA from the blood was analyzed using the Guardant360 assay, a ctDNA testing method; PFS was significantly longer in patients with more than 3 variants of unknown significance (VUS) . Moreover, in the OAK and POPLAR studies, which examined the superiority of atezolizumab versus docetaxel in non-small cell lung cancer, ctDNA analysis using the FoundationOne bTMB assay showed that atezolizumab efficacy was greatest in patients whose bTMB score was ≥ 16 .
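The odds ratios quoted above compare the odds of response between TMB-H and TMB-L groups from a 2×2 table. A minimal sketch; the counts below are hypothetical and chosen only to yield an OR near 4.1:

```python
def odds_ratio(resp_a: int, nonresp_a: int, resp_b: int, nonresp_b: int) -> float:
    """Odds ratio for response in group A versus group B."""
    return (resp_a / nonresp_a) / (resp_b / nonresp_b)

# Hypothetical counts: 40/100 responders in TMB-H vs 14/100 in TMB-L.
print(f"OR = {odds_ratio(40, 60, 14, 86):.2f}")  # OR = 4.10
```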
The clinical recommendations propose the following 7 requirements, organized under 3 CQs, regarding TMB testing performed to select patients who are likely to benefit from anti-PD-1/PD-L1 antibody drugs and regarding the administration of these drugs.
1. For patients with solid tumors who are undergoing standard drug therapy or for whom standard treatment is difficult to administer, other than those with tumors for which immune checkpoint inhibitors can be used clinically irrespective of the TMB score, TMB testing is recommended to determine whether immune checkpoint inhibitors are indicated.
2. For patients with unresectable solid tumors for which immune checkpoint inhibitors can already be used clinically irrespective of the TMB score, TMB testing should be considered to determine whether immune checkpoint inhibitors are indicated.
3. For patients with solid tumors that are curable with local treatment, TMB testing is not recommended to determine whether immune checkpoint inhibitors are indicated.
4. For patients with unresectable solid tumors for which an immune checkpoint inhibitor has already been used, TMB testing is not recommended to determine again whether immune checkpoint inhibitors are indicated.
5. As TMB testing to determine whether immune checkpoint inhibitors are indicated, an NGS test whose analytical validity has been established (by receiving regulatory approval, etc.) is recommended.
6. For unresectable/metastatic/recurrent solid tumors with TMB-H, the use of immune checkpoint inhibitors is recommended.
7. The use of immune checkpoint inhibitors is recommended for unresectable/metastatic/recurrent solid tumors that have progressed after chemotherapy.
Please keep in mind that these clinical recommendations will be revised in a timely manner as cancer treatment advances and new knowledge on biomarkers accumulates. Each CQ is explained in detail below. PubMed was searched using the following queries: "Mutation and Tumor Burden or burden * or TMB," "neoplasm," and "tested or diagnos* or detect*." The same queries were used to search the Cochrane Library. For the search period from January 1980 to January 2021, 585 articles were extracted from PubMed and 26 from the Cochrane Library. In the primary screening, 233 articles were extracted, and 208 were extracted in the secondary screening. A qualitative systematic review of these articles was then performed. The KEYNOTE-158 study examined the efficacy of pembrolizumab in advanced or recurrent solid tumors that had progressed after chemotherapy. The TMB score was measured using the FoundationOne® CDx assay, and ≥ 10 mut/Mb was used as the TMB-H cutoff. The results showed that the ORR of pembrolizumab was higher in the TMB-H group than in the TMB-L group (29% vs. 6%) . Based on the results of this study, the United States FDA granted expedited approval of pembrolizumab for unresectable or metastatic TMB-H (≥ 10 mut/Mb) solid tumors on June 16, 2020. In addition, FoundationOne® CDx was approved as a pembrolizumab companion diagnostic. TMB is therefore considered a valid biomarker for immune checkpoint inhibitor use and is also recommended in Japan. TMB testing is generally considered unnecessary in solid tumors for which immune checkpoint inhibitors can be used irrespective of the TMB score, because determining whether such use is indicated does not depend on the TMB score. However, in solid tumors for which the determination of whether immune checkpoint inhibitors are indicated is based on PD-L1 expression or a biomarker such as dMMR, an immune checkpoint inhibitor may still be effective even when that biomarker test is negative.
In the KEYNOTE-158 study, after the exclusion of MSI-H patients and patients whose MSI status was unknown, the ORR in TMB-H patients was 28%, and efficacy was seen regardless of PD-L1 expression (ORR was 35% in PD-L1-positive patients and 21% in PD-L1-negative patients) . Based on the above findings, in solid tumors for which the determination of whether immune checkpoint inhibitors are indicated is based on a biomarker, TMB testing is recommended if that biomarker test was negative. In malignant melanoma, an anti-PD-1 antibody drug has been shown to be effective as postoperative adjuvant therapy and has been approved (KEYNOTE-054 study , ONO-4538–21 study ). In the multicenter, double-blind, randomized, placebo-controlled phase III PACIFIC study, an anti-PD-L1 antibody drug was administered sequentially in patients with unresectable, locally advanced (Stage III) non-small cell lung cancer that did not progress following curative concurrent chemoradiotherapy (CRT) using a platinum drug; based on the results of that study, anti-PD-L1 antibody therapy received regulatory approval . In the CheckMate-577 study, the efficacy of nivolumab as postoperative adjuvant therapy was shown in Stage II/III esophageal and gastroesophageal junction cancer resected after neoadjuvant chemoradiotherapy . However, because no differences in efficacy according to TMB score were reported in these studies, pretreatment TMB testing is generally unnecessary in these settings. Moreover, because the efficacy of immune checkpoint inhibitors as perioperative therapy has not been established for other solid tumors, TMB testing for treatment selection is generally unnecessary for such cancers if the cancer can be cured with local treatment. Based on the above considerations, TMB testing is currently not recommended to determine whether immune checkpoint inhibitors are indicated for patients with solid tumors that are not locally advanced and have not metastasized. Immune checkpoint inhibitors have been approved for use in some solid tumors irrespective of the TMB score, but the effectiveness of using a different immune checkpoint inhibitor when one has already been administered has not been demonstrated. Therefore, TMB testing is not recommended for the purpose of using an immune checkpoint inhibitor in patients with solid tumors for which an immune checkpoint inhibitor has already been used. PubMed was searched using the following queries: "Mutation and Tumor Burden or burden * or TMB," and "next-generation sequencing or NGS or Whole-exome sequencing or WES." The same queries were used to search the Cochrane Library. For the search period from January 1980 to January 2021, 387 articles were extracted from PubMed and 22 from the Cochrane Library. In the primary screening, 215 articles were extracted, and 204 were extracted in the secondary screening. A qualitative systematic review of these articles was then performed. FoundationOne® CDx was approved in Japan on December 27, 2018 for the purpose of obtaining comprehensive genomic profiles of tumor tissue in patients with solid tumors and of detecting somatic gene alterations in order to determine whether certain molecularly targeted drugs are indicated in such patients. FoundationOne® CDx also provides TMB score information. The KEYNOTE-158 study measured TMB scores using the FoundationOne® CDx assay and examined the efficacy of pembrolizumab in advanced or recurrent solid tumors that had progressed after chemotherapy, with a TMB-H cutoff of ≥ 10 mut/Mb.
The results showed that the ORR of pembrolizumab was higher in the TMB-H group than in the TMB-L group . Based on the results of this study, the FDA granted expedited approval of pembrolizumab for unresectable or metastatic TMB-H (≥ 10 mut/Mb) solid tumors on June 16, 2020. In addition, FoundationOne® CDx was approved as a pembrolizumab companion diagnostic. In Japan, FoundationOne® CDx was approved on November 15, 2021 to assist in determining whether drugs for solid tumors with high TMB scores are indicated. In addition to FoundationOne® CDx, the OncoGuide™ NCC Oncopanel System was approved in Japan as a comprehensive genomic profiling test for tumor tissue in patients with solid tumors. As with the FoundationOne® CDx assay, a strong correlation with WES has been reported for this test , indicating that it can predict the therapeutic benefit of an immune checkpoint inhibitor. However, as of June 2021, there had been no reports of studies examining the efficacy of an immune checkpoint inhibitor using the OncoGuide™ NCC Oncopanel System. The algorithm used to calculate TMB scores varies depending on the gene panel used, and attention must therefore be paid to the resulting variability. The FoCR is currently performing a retrospective analysis of clinical specimens from patients administered immune checkpoint inhibitors in clinical studies, and it is anticipated that harmonised TMB scores obtained with different gene panels will become available in the clinical setting. In addition, the FoundationOne® Liquid CDx Cancer Genome Profile was approved as a comprehensive genomic profiling test for solid tumors using blood specimens on March 22, 2021, and an application for marketing approval of Guardant360 CDx was filed on January 28, 2021. Thus, opportunities to perform measurements in clinical practice are expected to increase. The OAK and POPLAR studies, which examined the superiority of atezolizumab versus docetaxel in non-small cell lung cancer, analyzed blood specimens using a bTMB assay and found that atezolizumab efficacy was greater in patients with bTMB scores ≥ 16 . It is anticipated that the efficacy of atezolizumab will be verified in other cancer types. Based on the above findings, an NGS test whose analytical validity has been established using tissue is recommended as TMB testing to determine whether immune checkpoint inhibitors are indicated. PubMed was searched using the following queries: "Mutation and Tumor Burden or burden * or TMB," "PD-1 or PD-L1 *," and "treat*." The same queries were used to search Cochrane Library. For the search period from January 1980 to January 2021, 323 articles were extracted from PubMed and 10 from Cochrane Library. In the primary screening, 74 articles were extracted, and 71 were extracted in the secondary screening. A qualitative systematic review of these articles was then performed. The KEYNOTE-158 study measured TMB scores using the FoundationOne® CDx assay and examined the efficacy of pembrolizumab in advanced or recurrent solid tumors that progressed after chemotherapy, with a TMB-H cutoff of ≥ 10 mut/Mb. The results showed that the pembrolizumab ORR was higher in the TMB-H group than in the TMB-L group (29% vs. 6%) . Immune checkpoint inhibitors have been shown to have a tumor-agnostic therapeutic effect in TMB-H tumors.
It should be noted, however, that the reported sample sizes have been limited for some cancer types and that immune checkpoint inhibitors have shown no efficacy in some cancer types (see "4 Efficacy of anti-PD-1/PD-L1 antibody drugs against TMB-H solid tumors"). The efficacy of immune checkpoint inhibitors in treating TMB-H solid tumors was shown in the KEYNOTE-158 study, which examined advanced or recurrent solid tumors that progressed after chemotherapy. Therefore, immune checkpoint inhibitor therapy for TMB-H tumors is currently not the first-line therapy of choice. In view of the turnaround time (TAT) required for TMB testing, it is generally considered preferable to start the first-line therapy established for each organ (standard treatment) without waiting for the results of TMB testing. However, TMB is an important biomarker for investigating subsequent therapy, and testing for it and other biomarkers should therefore be considered at an early stage. Immune checkpoint inhibitors significantly prolong patient survival in many types of cancer; however, significant resistance to this therapeutic modality has been reported. Thus, to identify patients who are more likely to benefit from this therapy, various biomarkers, including TMB, have been examined. However, there are some issues related to administering immune checkpoint inhibitors for TMB-H tumors in the clinical setting. In this guideline, the panel sets out requirements for performing TMB testing properly to select patients who are likely to benefit from immune checkpoint inhibitors.
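Because the guideline repeatedly refers to the ≥ 10 mut/Mb cutoff, a minimal sketch of how a panel-based TMB score and TMB-H call could be derived may be helpful. This is an illustrative calculation only, assuming a report that provides a count of eligible somatic mutations and the size of the sequenced region in megabases; the function names, panel size and mutation count below are hypothetical, and approved assays such as FoundationOne® CDx apply their own validated variant filters and algorithms.

TMB_H_CUTOFF = 10.0  # mut/Mb, the cutoff used in KEYNOTE-158 and the FDA approval

def tmb_score(eligible_mutation_count: int, panel_size_mb: float) -> float:
    """Return tumor mutational burden in mutations per megabase."""
    if panel_size_mb <= 0:
        raise ValueError("panel size must be positive")
    return eligible_mutation_count / panel_size_mb

def is_tmb_high(score: float, cutoff: float = TMB_H_CUTOFF) -> bool:
    """Classify a TMB score against the TMB-H cutoff (>= 10 mut/Mb)."""
    return score >= cutoff

if __name__ == "__main__":
    # Hypothetical example: 13 eligible somatic mutations over a 0.8-Mb panel.
    score = tmb_score(13, 0.8)
    print(f"TMB = {score:.2f} mut/Mb; TMB-H: {is_tmb_high(score)}")

As the text notes, the algorithm for counting eligible mutations differs between gene panels, so the same specimen can yield different scores on different assays; this sketch only illustrates the final normalisation and thresholding step.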
Seeking healthcare at their ‘right’ time; the iterative decision process for women with breast cancer
c339a20a-df23-4f33-b815-4db2bf7b7e14
7574193
Patient Education as Topic[mh]
Breast cancer diagnosis at a late stage is a challenge in low- and middle-income countries, with 30–98% of breast cancer cases diagnosed at stage III or IV . In some African countries, 70–80% of breast cancer patients present late [ – ]. Of an estimated 1.38 million women diagnosed with breast cancer annually, almost 50% of cases and 58% of resulting deaths are from developing countries . Among African women, advanced stage at diagnosis, aggressive tumour type, poor differentiation and triple-negative hormone receptor status are factors contributing to these poor outcomes [ , , ]. Furthermore, initiating definitive treatment more than 3 months after patients’ discovery of symptoms contributes to the high mortality rates . At the Komfo Anokye Teaching Hospital (KATH) in Ghana, about 85% of breast cancer patients present with stage III/IV disease . Many women discover their breast symptoms themselves , mostly by chance , or during activities such as bathing, dressing, or breastfeeding . Subsequently, their interpretation of the symptoms is the first step of the health seeking process . Women have delayed seeking care because they initially interpreted their breast symptoms as not being serious [ , – ]. Some have attributed their breast symptoms to hormonal changes , trauma , or breastfeeding . When breast symptoms have not met their expectations of breast cancer, the symptoms can be evaluated as not serious. Women’s expectations of how breast cancer symptoms present vary. A painless pea-sized breast lump , or breast pain, has been considered to be cancer . Some believe breast lumps turn into cancer if often pressed or touched . Some studies suggest the absence of pain has been interpreted as a sign that a breast symptom is not serious [ , , ]. Aside from awareness of breast lumps, poor knowledge about other early signs of breast cancer also contributes significantly to delayed health seeking [ , – ]. Besides symptom appraisal outcomes, poverty [ , – ], psychological factors such as fear and denial [ , , , , ], and the use of complementary and alternative medicine [ , , , , , ] have been identified as contributing to late health seeking behaviours. The impact of socio-demographic and economic factors such as age, education, marital status, area of residence, and income level on breast cancer health seeking behaviour has been reported to be equivocal [ , , – ]. Symptom interpretation is the most important step in the health seeking process for cancer diagnosis and is deemed to account for 60–80% of the health seeking process . Women who delay seeking help continue to actively monitor their symptoms and do seek care when they perceive their symptoms to have become serious [ , , ]. Eventually, there is always a trigger that drives them to seek help, albeit too late in most cases. Such triggers include the onset of pain [ , , , ], increasing severity, and the persistence of the identified symptom such that it interferes with daily activities . What informs the behaviour of women who seek help only when they perceive worsening symptoms? What goes on during that period of active monitoring of their symptoms? There have been great investments in screening, early diagnosis, and management of breast cancer. However, late presentation poses a barrier to realising the possible benefits of the advanced breast cancer screening and management modalities available today. It remains a healthcare challenge that some women still present to healthcare facilities with advanced breast cancer – localised or metastatic.
This study assessed the symptom appraisal and medical health seeking behaviour of women with either locally advanced or metastatic breast cancer attending the breast clinic at KATH.
Study design and setting
We conducted a phenomenological study using a descriptive qualitative design among breast cancer patients accessing care at the breast clinic of the Komfo Anokye Teaching Hospital (KATH), Ghana, from May 2015 to March 2016. Komfo Anokye Teaching Hospital is a 1300-bed tertiary hospital that provides surgical, chemotherapy, and radiotherapy treatment for breast cancer. The Hospital has a dedicated breast care centre where women with breast symptoms are first seen. All decisions on breast cancer treatment are taken by an interdisciplinary tumour board comprising surgeons, oncologists, pathologists, radiologists, nurses, and social workers.
Participants, recruitment and sampling
We purposively recruited fifteen (15) women presenting for the first time to the breast clinic of KATH with breast disease clinically suggestive of Stage III or IV breast cancer as defined by the American Joint Committee on Cancer . The clinical stage was determined by the attending physician based on history, physical examination, and any prior laboratory or radiological investigations the women may have done. These women would have invariably observed overt changes in their breast during the period before seeking care at the KATH breast clinic. Eligibility also required reporting to the KATH breast clinic at least 3 months after identifying the symptom. Age was not a limitation to inclusion. Women who were too frail, such as those who were visibly tired out after the routine clinical consultation and examination, were not approached for interviews. The clinic nurse approached eligible participants face to face and explained the study and its purpose to them using the participant information leaflet. Women who voluntarily expressed their interest to participate were directed to contact the researcher in person. Two eligible women declined participation, one because of how the interview time would affect her travel schedule and the other because she felt anonymity had been breached by the researcher referring to her by name during the interview.
Data collection
Each participant signed a written informed consent form prior to her interview. Each of the in-depth interviews was conducted in a private room in the breast clinic at KATH by AEA, who was a PhD candidate at the time. The interviews were guided by open-ended questions to explore how the women noticed their breast symptoms; how the symptom had evolved since; the experiences that influenced their decision to seek medical care; and the activities they had undertaken until arriving at the breast care centre. In order to promote a dialogue that could explore their responses, the questions were used flexibly and their order adapted and elaborated to suit each interview context . Data saturation was reached by the 14th interview and confirmed with the 15th. The interviews, which were audio-recorded, were conducted in the local dialect of Twi and translated into English during transcription. Each interview lasted about 40–45 min. Back-translation was used reiteratively during the transcription and analysis process to ensure the meaning of the original responses was not lost .
Data analysis
The data were coded and analysed using a deductive thematic approach with a priori themes guided by the Andersen Behavioural Model of Health care utilisation .
The Andersen Behavioural Model describes the determinants of health service use to include individual characteristics such as demographic factors, cultural norms, education, occupation, social relationships, health beliefs, personal and family income/wealth, and individual-perceived need for health care; and health service factors such as cost of and accessibility to care, and patients’ satisfaction with and perception of the care provided.
Fifteen (15) women aged between 24 and 79 years were interviewed. Ten (10) of them had clinical stage III and five (5) had clinical stage IV breast cancer. Four (4) of the women lived in the Ashanti Region (i.e. the same region as the Komfo Anokye Teaching Hospital) while the other eleven (11) lived outside the region. Travel to the clinic was typically by public transport and took anywhere from thirty (30) minutes to ninety (90) minutes for those within the same region, and from two (2) to nine (9) hours for those coming from outside the Ashanti Region. All the participants except for one (Int. 2) lived with some family members, that is, their husband, children, grandchildren, siblings, parents or other extended family members. One (1) participant was a retired civil servant who received a regular pension. Seven (7) of the participants were traders, and three (3) were farmers, all of whom had an irregular income from their occupation. The remaining four (4) were unemployed and relied on family members for remittances. The health seeking journey for the women unfolded in two phases: the personal phase and the health system/facility phase. These phases unfolded either separately or concurrently.
The personal phase of health seeking
Symptom identification
All fifteen (15) women identified their breast change themselves except for one whose lump was noticed by her friend. They were prompted by breast campaign messages or a sense of discomfort/pain to perform breast self-examination (BSE). The first sign identified by most of the interviewees was a lump, whilst others first experienced a breast swelling, or breast pain, as the index symptom. “I saw it myself that the breast looked a bit swollen” [Int. 11] “There was something in it, something hard and they have been announcing that when there is something in your breast go and have it checked” [Int. 4]
Personal beliefs and appraisal
Many of the women thought their first symptom was nothing serious. Their reasons included the small size of the lump that was unapparent to others, and the normal gross appearance of the breast. More importantly, ‘serious signs’ expected to be associated with serious illness, such as pain or interruption of one’s normal activities, were absent. “It wasn’t big and I thought it won’t worry me” [Int. 5] “It was hard but it was not paining me and I could wear a dress and go to the farm” [Int. 1] They appraised their breast symptom regularly, and several factors played a role in deciding what it was, and what action to take. One such factor was common cultural or traditional knowledge of breast illness, lumps, or body swellings and how they are commonly managed. Such swellings were commonly regarded as boils which were expected to discharge pus and subsequently resolve, sometimes with the application of locally prepared topical treatment. Previous experience of similar changes amongst family or friends strengthened this expectation.
“The way it was hurting I was thinking, as for something that is swollen, it will burst and discharge then you are free and that’s all” [Int. 6] “I had seen women with breast disease, some their breast became very huge [and] they used herbal medicine at home and for the majority it resolved” [Int. 10]
Breast cancer campaign messages on symptom appraisal
Nine (9) of the women had been exposed to breast cancer messages via mass media (television, radio, posters) or local church/social groups. They understood that breast lumps could be cancers, and that prompt medical attention should be sought if breast lumps are found. They had also acquired skills to perform routine BSE. However, they did not appear to know that other traditionally known breast symptoms could be cancers, nor how different breast swellings evolved over time. Additionally, they appeared not to know what other signs, aside from breast lumps, were suggestive of breast cancer. “I hear always about breast diseases….as for the breast I did not really see anything in it. It was my armpit that I noticed something” [Int. 9] “Our pastor’s wife is a nurse and taught us periodically to examine our breast. She said breast can develop cancer so you must take the right steps else you can die. I had heard this on radio and TV too.” [Int. 12]
‘The tipping point’ – deciding to seek help
Symptoms that disrupted daily function, affected quality of life, or were perceived to potentially affect quality of life were important triggers to seek help. “I see that the illness keeps getting worse, it is growing bigger, and now my neck is also hurting so bad I just sit till morning comes so I decided to go and see the doctor” [Int. 4]. “So when it started paining me and I could not go to the farm, I could not do anything, that is when I went to tell my brother and the same day we went to the doctor” [Int. 1]
The health system/facility phase
The time from symptom identification to reporting to KATH ranged from 4 to 24 months. Six (6) women had their first contact with a health facility less than 3 months after identifying their symptom. The initial contact for all the women was with a non-specialist physician. The referral to KATH was not straightforward for all the women. Some of them saw up to 3 other physicians, and some travelled to other towns for diagnostic investigations, before presenting to the KATH breast clinic. The time it took to navigate the course from the point of initial healthcare contact to KATH was closely linked to their economic, family and social roles as women. Some economic and sociocultural activities were prioritised ahead of seeking care for the breast. For example, some deferred hospital appointments in order to work for more money; some prioritised treatment for other illnesses perceived to affect their functionality; and others prioritised taking care of an ill child over their own health. “I had heard about breast disease so even though it was not paining me I had to do something about it…..but it was not worrying me and we had entered the Christmas season too so I wanted to work a little more” [Int. 12] “It has been swollen for over a year, I did not do anything about it…. I was seeing the doctor monthly for my hypertension medication” [Int. 6] “The child’s illness was more serious and so I had to take care of him first, I didn’t consider mine as that serious” [Int. 3]
The availability of money to pay for care and diagnostic investigations also significantly influenced the time taken to navigate the referral pathway. “You see, by the time I was discharged about my leg problem, I had no money on me that is why it took me so long” [Int. 5] “By the time we got to the regional hospital, all our money was finished so the doctor asked us to go back home and mobilise some money for the tests.” [Int. 14]. “Well I wanted to come but the money issues, the money issues” [Int. 2] Even when money was available, it was prioritised for other economic, social or family use. “Recently too I had gone to pay school fees for two of my children so the money I had, if I came, (it) was not enough” [Int. 15] The women did not seem to have any problems with their health seeking journey. They believed their decisions and the timing of the actions they took were justified; it could not have been any other way. At the time they had decided to seek medical care, the breast symptom was a singular priority, and nothing was considered a hindrance. “Well the time I felt the pain I did not delay, I went to the hospital. They gave me a referral on Tuesday, only Wednesday passed and I travelled here on Thursday. I am okay with how things have gone” [Int. 15]. “I am pursuing this because of the children, if I die they will become miserable” [Int. 9] “Well if you want good for yourself you must spend money” [Int. 7] “I was not bothered about coming here because I want my good health back” [Int. 3] “The distance is far but if it will help me then there is no problem” [Int. 12] Essentially, while deferring healthcare was related to the ability to perform economic, family and social roles, pursuing healthcare was also for the same economic, family and social reasons or goals.
The personal phase – the prioritization cycle
Breast symptoms that interfere with daily activity or function are a significant factor in deciding that breast disease is serious and needs medical attention . As observed in this study, it is not the physical characteristics of a breast symptom, but how it affects a woman’s function, that provokes health seeking. This suggests that the breast symptom is not appraised as an isolated entity, but with regard to its effects on different aspects of a woman’s functionality. How the breast symptom interrupts economic, family and social function is an important factor in defining disease severity and the need for urgent medical attention. The mere presence of the breast symptom may not be alarming enough until it is perceived as a threat to, or actually impacts, functionality. Breast cancer campaign messages in Ghana generally encourage screening, especially breast self-examination for early signs of breast cancer, and where signs are detected, immediately seeking medical care. The general expectation is that appraisal of a breast cancer symptom should culminate in the decision to seek treatment, and this health seeking action is anticipated to be immediate, without delay. Reasonably, the responsibility to appraise breast symptoms and follow through to seek medical attention cannot be taken away from the individual. This desired health seeking behaviour has been communicated in diverse ways. In spite of this, delayed health seeking persists. However, having knowledge does not necessarily guarantee timely health seeking behaviours. From their exploration of the cultural model of breast cancer among low-income African American women, Barg and Grier attribute this phenomenon to the different cultural beliefs and experiences that exist about breast cancer.
Thus, the intended cognitive and affective response expected from general education on breast cancer may be different from the actual meaning generated during a woman’s interpretation of the message . Even where the intended and actual meanings are congruent, Granek and Fergus , in their discussion of issues of agency and liminality associated with women’s symptom appraisal and help-seeking behaviour upon discovery of a breast symptom, assert that women who are not ready to present their breast symptoms to a physician remain ‘deliberately ignorant’ of them because they have other areas in their life that need attention and that are not hindered by the threat of breast cancer, or the early symptoms of breast disease. Whilst this assertion may be true in some cases, the findings from this study suggest that women are not necessarily deliberately ignorant. Rather, they remain aware of their symptoms but make active choices to prioritise other areas in their life needing attention until such time that seeking care for their breast symptoms is the means to achieve those other previously prioritised choices. In other words, when their perceived priorities are later threatened by the breast symptom, seeking medical attention to treat the breast symptoms becomes a means to ensure that those previous priorities are maintained or restored. The decision-making process of health seeking is thus part of an ongoing iterative priority setting process, the ultimate goal of which is to maintain important economic, family and/or social function. Acting on the breast symptom is thus one means, alongside others (e.g. work, child care), of achieving some economic, social or family goal, such that its place on the priority ladder depends on how directly it is related to achieving these goals. There is an awareness of the breast condition, but this ‘joins the queue’ of other important things that also need to be done. As such, for some women, the timing for health seeking is not delayed. The timing is right: the time when the symptom is beginning to threaten economic/family/social goals; the time when, if the symptom is not dealt with, other priorities cannot be achieved; the time when health seeking for the breast symptom becomes the means by which the other activities can be sustained; the time when, in the regular priority setting agenda, it has become the number one activity to be done in the daily pursuit of economic, family and social function.
The health facility phase
Breast cancer treatment tends to be offered in specialised centres. Referrals are therefore necessary for many women on the health seeking journey. For the women in this study, referrals were not just for treatment but also for investigations to establish a diagnosis. This journey translates into time, money and the use of other resources, competing with daily efforts to maintain their economic, family and social functions. When resources are limited, the decision to commit any of these resources to health seeking is further delayed, especially when the symptom is not perceived as a threat to function. Already existing narratives about the cost burden and time-consuming nature of breast cancer care further worsen this inertia. Even when the health seeking journey has been initiated, activities such as diagnostic investigations may be deferred as economic, family or social goals come up and compete for resources.
Probably, a clear understanding of the false economy of delaying treatment in competition with the very economic, family and social goals they hold so dear could have led to different health seeking choices and this should feature prominently in breast cancer education.
Deciding to seek care and pursue treatment for breast cancer symptoms may be much more complicated than it appears. Economic, family and social function significantly drive the health seeking process at both the personal and health facility phases of health seeking. It may be useful to adapt breast cancer campaign messages delivered by national and local public health agencies to incorporate these functional goals and their role in symptom appraisal and decision making, rather than focus on the breast symptom as an isolated entity. A national document on breast cancer education approved by all stakeholders should be developed to serve as the basis for breast cancer campaign messages delivered on all media and local group platforms, to ensure uniformity and consistency of the information given to women. Further, there is a need for more research to explore how health workers, non-governmental organizations and other stakeholders who are involved in breast cancer campaigns and treatment can provide targeted communication and counselling support to women to undergo breast cancer investigations and treatment.
Limitations
There was likely some recall bias, as the women were recollecting activities that had occurred several weeks to months prior to arriving at the breast clinic. Interview transcripts could not be returned to participants for feedback because most of them could not be traced. This could potentially affect rigour. However, immediately after each interview, a quick recap of the interview was always fed back to the interviewee before closing off the interview. The interviewer’s background as a surgeon who had worked in the breast clinic and with breast cancer patients, and as a researcher with academic interests, may have influenced her qualitative analysis. However, the other authors were at liberty to comment on the qualitative findings.
Dental Calculus Formation Rate: The Role of Salivary Proteome and Metaproteome
e5c84ca9-584a-48e6-9e74-d9fc00fc8dbe
11949622
Biochemistry[mh]
Introduction
Dental calculus is mineralized dental plaque formed on dental and prosthodontic surfaces, and with a prevalence as high as 90% in the adult population, it is the predominant pathological calcification in humans. Calculus is principally composed of inorganic minerals, namely calcium carbonate and calcium phosphate, alongside organic constituents such as proteins and carbohydrates (Akcalı and Lang ). Given its propensity to harbour bacterial colonies adjacent to gingival tissues, dental calculus is a significant contributor to the onset and progression of both gingival and periodontal diseases (Forshaw ). Thus, its efficient control and removal are imperative for optimal periodontal health. It is noteworthy that the calculus formation rate is not uniform across the population and varies widely from individual to individual (Fons‐Badal et al. ). Despite stringent oral hygiene and plaque control measures, certain individuals are predisposed to rapid calculus accrual, necessitating more frequent dental interventions. Little is known about the reasons behind differences in calculus formation rate across the population, though a confluence of factors, encompassing dietary habits, demographics and medical conditions, are postulated contributors. Specifically, diets rich in carbohydrates and lipids have been implicated in augmented calculus deposition, while protein‐rich diets appear to be protective (Hidaka and Oishi ). Demographic factors, including advancing age, male sex and black racial background, have been associated with increased subgingival calculus prevalence. Furthermore, systemic conditions such as chronic kidney disease (Martins et al. ) and medications like beta‐blockers, diuretics, anticholinergics, Synthroid (levothyroxine) and allopurinol have been identified as potential calculus modulators. A shared attribute across these variables is their impact on the composition and properties of saliva. Saliva is composed of both organic and inorganic components, including a variety of electrolytes, proteins, mucins, nitrogenous products and bacteria (Humphrey and Williamson ). Saliva is indispensable for oral health because of its lubricative, buffering and antibacterial functions, and its role in maintaining calcium equilibrium. Elevated salivary pH is associated with an increased calculus index (D'Souza et al. ), and a surge in salivary flow rate augments the susceptibility to plaque mineralization and periodontitis (Rajesh et al. ). Salivary constituents, including calcium, phosphorus and urea, among others, have been investigated for their roles in calculus dynamics (D'Souza et al. ; Fons‐Badal et al. ; Pateel et al. ). An observational study documented that levels of phosphorus and urea were significantly elevated in patients with rapid calculus formation (Fons‐Badal et al. ). However, subsequent research failed to establish a direct correlation between salivary urea and calculus formation rate. Instead, a notable association emerged with increased levels of ureolytic bacteria, which metabolise salivary urea into ammonia. This metabolic shift induces a rise in pH, leading to augmented calcium phosphate saturation, which, in turn, promotes enhanced calculus deposition (D'Souza et al. ). The salivary concentrations of uric acid, calcium, sodium, potassium and chlorine, however, did not demonstrate any significant correlation with calculus formation (Fons‐Badal et al. ). Proteins are the main organic component of saliva (Castagnola et al. ).
There are around 3000 distinct proteins in saliva, some of which, such as cystatins, statherin and acidic proline‐rich proteins (PRPs), are known to play a role in the homeostasis of calcium phosphate (Pateel et al. ). Their calcium‐binding capabilities emanate from specific negatively charged domains that facilitate calcium chelation (Jin and Yip ). Numerous studies have delved into the role of anionic salivary proteins in calculus development; however, results have been inconsistent (Castagnola et al. ; Jin and Yip ; Pateel et al. ). This is probably because these calcium‐regulating proteins appear to have redundancy in their function, so even if some of them are present in diminished concentrations, their functions might be compensated for by other salivary proteins with overlapping functions. This is why, in order to understand the role of salivary proteins in dental calculus, it is best to study them as a whole rather than as individual proteins. The overall calcium‐binding capability of salivary proteins could be investigated by studying their electrochemical charge, gauged as the zeta potential, a metric reflecting the surface charge of proteins in water (Bhattacharjee ). On the other hand, proteomic methods could be used to better investigate associations between specific proteins and calculus formation. Indeed, a canine study revealed a unique salivary proteome associated with the presence or absence of dental calculus in dogs (Bringel et al. ), hinting at similar potential correlations in humans that warrant further investigation. This study aims to investigate the hypothesis that the electrochemical properties of saliva, specifically its overall zeta potential, and its proteomic profile are indicative of an individual's resistance to dental calculus formation. To do this, we analysed the saliva of a group of patients with a history of dental calculus formation to discern the relationship of calculus formation rate with salivary zeta potential and proteome.
Materials and Methods
This observational study follows the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines for transparent and comprehensive reporting of observational research. This study was approved by the Institutional Review Board (A08‐M36‐16B) at McGill University in accordance with the Helsinki Declaration. All participants of this study provided written informed consent voluntarily. Participants were recruited through advertisement at the dental hygiene clinic of CEGEP Garneau (Quebec City, Canada) between April 2017 and March 2018 (Al‐Hashedi et al. ). A certified hygienist oversaw the recruitment and data collection process. Inclusion criteria for participants were as follows: ≥ 18 years of age; compliance with study instructions and timeline; ≥ 20 intact natural teeth, including all lower anterior teeth; a history of calculus formation (at least 1.5 mm wide) on the lingual surfaces of the lower front teeth within 6–9 months after receiving a professional prophylaxis treatment; and being in good general health.
Exclusion criteria included: any physical, psychological or health conditions that could hinder participants' ability to brush their teeth or attend study appointments; recent use of antibiotics or anti‐inflammatory drugs within the month prior to the study; regular use of chlorhexidine oral products; presence of oral prostheses, dental implants or fixed orthodontic appliances that could increase plaque accumulation on the lower anterior teeth; sensitivity to tartar‐control toothpastes; advanced periodontitis, indicated by a Periodontal Screening and Recording (PSR) score of 4 (Landry and Jean ); pregnancy; and inability to return for evaluation or study recalls.
2.1 Clinical Procedures and Evaluation
At the baseline assessment, relevant demographic, dental and medical information, including sex, age, ethnicity, tooth sensitivity, diabetes, hypertension, periodontal conditions and malocclusion, was recorded. To minimise confounding factors related to calculus formation, all participants were provided with a standardised oral hygiene kit that included an off‐the‐shelf toothpaste (Complete Whitening Plus Scope, tartar control; Procter & Gamble, Cincinnati, OH, USA), a toothbrush and dental floss. They also received detailed instructions on the modified Stillman brushing technique (Al‐Hashedi et al. ) and were closely monitored throughout the study to ensure adherence to these guidelines. The following clinical parameters were recorded:
The calculus build‐up on the lingual surfaces of the six lower anterior teeth, measured using the Volpe‐Manhold Index (VMI) (Volpe et al. ).
The plaque accumulation on the labial and lingual surfaces of the teeth, evaluated using the Quigley‐Hein Plaque Index (QHI) (Turesky et al. ).
The condition of the gingiva, including the buccal and lingual marginal gingiva and interdental papillae of all teeth, assessed using the Modified Gingival Index (MGI) (Lobene et al. ).
All patients underwent a standard dental cleaning procedure at the 3‐month mark. They were subsequently recalled twice at 3‐month intervals (months 6 and 9) for re‐assessment of calculus formation (Figure ). At 9 months from baseline, or 6 months after professional cleaning, participants with a VMI score above 7 were categorised as rapid‐forming subjects, while those with a score of 7 or lower were classified as slow‐forming subjects (Blank et al. ). Two experienced dental hygienists underwent training and calibration in the clinical measurement process, resulting in excellent agreement and highly reliable results (Cohen's κ = 0.92–0.96). As an incentive, at the end of the study all participants were offered a complimentary dental cleaning.
2.2 Saliva Collection and Preparation
Unstimulated whole saliva (UWS) samples were collected at the baseline assessment. To minimise potential influences of hunger and circadian variation on saliva composition, sample collection took place in the morning between 9 a.m. and 12 p.m. (Young et al. ). Participants were instructed not to eat, drink or brush their teeth for at least 1 h prior to saliva collection. Before collecting saliva, participants rinsed their mouths thoroughly with de‐ionised distilled water and waited for approximately 5 min to allow saliva to accumulate, thereby reducing the risk of sample dilution and minimising potential contamination from food debris, cigarette residue or airborne particulates.
Participants were then instructed to expectorate saliva into 15-mL test tubes, which were promptly sealed and refrigerated at 4°C. The samples were then centrifuged at 10,000 g for 10 min to ensure the complete removal of food particles and cellular debris (Schipper et al. ). The resulting supernatant was carefully collected into small, sterilised Eppendorf tubes, immediately placed on dry ice and then stored at −80°C until further analyses. The concentrations of calcium and phosphorus in saliva samples were quantified using inductively coupled plasma-optical emission spectroscopy (ICP-OES) (Thermo Scientific iCAP 6500, Cambridge, UK) (details in ) (Vallapragada et al. ). The pH value of each saliva sample was measured using a digital pH meter (Mettler Toledo, OH, USA). The zeta potential of saliva samples was assessed at a temperature of 25°C using electrophoretic light scattering with a Zetasizer Nano-ZS instrument (Malvern Instruments, Version 5.0, QC, Canada) (details in ) (Kaszuba et al. ). Salivary proteins were analysed using liquid chromatography–electrospray ionisation tandem mass spectrometry (LC-ESI-MS/MS) (Shevchenko et al. ). The raw data obtained from the mass spectrometer were converted into *.mgf format (Mascot generic format) for subsequent searching using the Mascot 2.5.1 search engine (Matrix Science). The searches were conducted against a database of human protein sequences (UniProt 2020) and the expanded Human Oral Microbiome Database (eHOMD) based on 16S rRNA gene references ( https://www.homd.org/ ) (Verma et al. ). The database search results were imported into Scaffold 5 (Proteome Software Inc., Portland, OR, USA) for spectral counting, statistical analysis and data visualisation (details in ). Protein identifications were accepted if they had a probability > 99.0% and contained at least two identified unique peptides. Protein probabilities were assigned in Scaffold by the Protein Prophet algorithm (Nesvizhskii et al. ). Spectral-count data for identified peptides were analysed using the built-in capabilities of Scaffold 5. Scaffold 5 was used to filter proteins, retaining those identified in at least two out of three replicates in at least one condition. The spectral counts were processed and normalised within Scaffold 5. As Scaffold 5 handles missing values internally, no external imputation methods, such as those provided by the DEP R package, were necessary. This approach ensures the robustness and reliability of the spectral count data analysis within the specialised framework of Scaffold 5. Instead of a strict correction, we used pathway enrichment analysis as an additional filter to identify meaningful biological trends, ensuring robustness in the presence of potential false positives (Pascovici et al. ). Differential expression analysis was performed using standard t-tests without corrections for multiple comparisons.
However, we emphasise that the reported p values should not be interpreted as indicators of statistical significance but rather as exploratory findings requiring further confirmation. Bioinformatic analysis covering pathway analysis, molecular function, biological processes and cellular components of proteins was presented in charts generated using the PANTHER (Protein Analysis Through Evolutionary Relationships; http://pantherdb.org ; version 17.0) classification system (Abdallah et al. ; Abu Nada et al. ). The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE (Perez-Riverol et al. ) partner repository with the dataset identifier PXD050597. 2.3 Statistical Analysis Statistical analysis was performed using Origin 9.0 (OriginLab, Northampton, MA, USA) and IBM SPSS Statistics 20 (IBM Corporation, Somers, NY, USA) software packages. The normality of data distribution was assessed using the one-sample Kolmogorov–Smirnov and Shapiro–Wilk tests. For comparisons between two groups, appropriate statistical tests such as Fisher's exact test, the Mann–Whitney U test, two-way ANOVA and Pearson's correlation were employed. The level of statistical significance was set at a p value of less than 0.05. Proteomic and metaproteomic analyses were performed based on established protocols (details in ).
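For illustration, the differential-expression filter described above can be sketched in a few lines of Python. This is a minimal sketch rather than the study's actual pipeline (the study used Scaffold 5's built-in analysis); the protein names, group sizes and spectral counts below are invented for demonstration, and numpy/scipy are assumed to be available.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
counts = {                                  # hypothetical normalised spectral counts:
    "MUC5B": (rng.poisson(14, 15), rng.poisson(8, 11)),   # (slow group n=15, rapid group n=11)
    "ALDOA": (rng.poisson(5, 15), rng.poisson(12, 11)),
}

for protein, (slow, rapid) in counts.items():
    res = stats.ttest_ind(rapid, slow)      # two-sample t-test, no multiplicity correction
    log2_fc = np.log2((rapid.mean() + 1) / (slow.mean() + 1))  # +1 avoids division by zero
    if res.pvalue < 0.05 and abs(log2_fc) > np.log2(1.5):
        print(f"{protein}: p = {res.pvalue:.3f}, log2 FC = {log2_fc:+.2f}")

Proteins passing both the p-value and fold-change thresholds would then be carried into the pathway enrichment step, which serves as the additional biological filter described above.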
Results A total of 31 participants were recruited; 5 later dropped out, leaving 26 participants in the study: 3 men (11.5%) and 23 women (88.5%), with a mean age of 37.3 ± 14.5 years. Participants were divided, according to their VMI score after the 6-month post-scaling follow-up, into two groups: a rapid calculus-forming group (rapid group, n = 11) and a slow calculus-forming group (slow group, n = 15). There was no significant difference between the two groups in terms of demographic, medical and oral health characteristics ( p > 0.05) (Table ). 3.1 Physical–Chemical Characteristics of Saliva Chemical and electrochemical characterisation of saliva showed no significant differences between the two groups in terms of pH, protein concentration or phosphorus concentration. However, slow calculus formers presented a significantly more negative zeta potential and a higher calcium concentration than rapid formers (Table ). Also, the VMI score had a significant direct correlation with the zeta potential ( r = 0.47), but not with calcium concentration. Meanwhile, there was a positive correlation between protein and phosphorus concentration ( r = 0.46) (details in Tables and ). 3.2 Proteomic Analysis of Salivary Proteins A total of 895 proteins were identified and quantified in the saliva samples. A comparative analysis of the overlapping proteins, based on total spectrum count ( t-test, p < 0.05), is shown in a Venn diagram (Figure ). Most of the identified proteins (833, 93.1%) were commonly present in both groups; however, 38 proteins (4.2%) showed higher concentrations in the rapid group and 24 (2.7%) were more abundant in the slow group.
The differential protein expression between rapid and slow dental calculus formers, with up-regulated proteins indicated in green and red, is visualised in a volcano plot (Figure ). Exclusive proteins identified at 99% probability in each group are listed in Tables and . The highest enrichment for Gene Ontology Biological Process (GOBP) in the rapid group revealed negative regulation of endopeptidase activity ( n = 9/246), hyaluronan metabolic process ( n = 3/23) and regulation of L-arginine import across the plasma membrane ( n = 2/2), while enrichment for Gene Ontology Molecular Function (GOMF) revealed glycosaminoglycan binding ( n = 5/241), serine-type endopeptidase inhibitor activity ( n = 7/101) and fructose-bisphosphate aldolase activity ( n = 2/3) (Figure ). In contrast, the highest enrichment for GOBP in the slow group revealed innate immune response ( n = 8/827) and positive regulation of B cell activation ( n = 7/170), while enrichment for GOMF revealed antigen binding ( n = 8/193) and immunoglobulin receptor binding ( n = 8/97) (Figure ). Visualisation of the STRING protein–protein interactions revealed 38 nodes and 106 edges for the proteins exclusively detected in the saliva of the rapid group, whereas 14 nodes and 10 edges were shown for the slow group (Figure ). PANTHER pathway analysis showed that, compared with the slow group, the rapid group overexpressed proteins related to cell binding (cytoskeletal regulation by Rho GTPase and the integrin signalling pathway), inflammatory mediation (chemokine and cytokine signalling pathways), neurodegenerative disorders (Alzheimer's disease-presenilin pathway, Huntington's disease and Parkinson's disease) and glycolytic metabolism (fructose-galactose metabolism, glycolysis, pyruvate metabolism and the pentose phosphate pathway) (Figure ). 3.3 Salivary Metaproteomic and Taxonomic Analysis A total of 670 catalogued bacterial proteins were identified in the salivary samples; 599 of them (89.4%) were found in both groups, 70 (10.4%) were found only in the rapid group and one protein (0.1%) was found only in the slow group (Table ). Phylogenetic distribution across both groups, as visualised in the heat tree (Figure ), unveiled the presence of seven bacterial phyla: Actinobacteria , Bacteroidetes , Firmicutes , Fusobacteria , Spirochaetes , Chlorobi and Proteobacteria . Statistical analysis highlighted a significant predominance of the phylum Firmicutes , especially the genus Streptococcus , in the rapid group, and of the genus Rothia in the slow group (Figure ). The complete proteomic and metaproteomic datasets are available in (Table ).
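As a simple illustration of the correlation analysis reported in Section 3.1, the Pearson coefficient between the VMI score and zeta potential can be computed as below. This is a hedged sketch: the paired values are invented (the study's raw data reside in the deposited datasets), and scipy is assumed to be available.

from scipy.stats import pearsonr

vmi  = [2.0, 3.5, 4.0, 6.5, 7.5, 8.0, 9.5, 11.0]               # Volpe-Manhold Index
zeta = [-18.2, -16.5, -15.9, -13.0, -12.1, -11.4, -9.8, -8.5]  # zeta potential, mV

r, p = pearsonr(vmi, zeta)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

A positive r in such data mirrors the reported direct correlation: the less negative the zeta potential, the higher the calculus build-up.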
Discussion Our study showed that the saliva of rapid calculus-forming patients differs from that of slow-forming patients in terms of elemental, electrochemical, proteomic and metaproteomic characteristics. These differences were most obvious for salivary zeta potential, calcium concentration, proteome and bacterial metaproteome. Our results showed that the salivary zeta potential was more negative in the slow group, suggesting a possible correlation between calculus formation and this parameter. The zeta potential is an indicator of the overall electrical charge of particles (e.g., proteins) in suspensions (e.g., saliva).
Thus, our findings suggest that salivary negative charges could play a role in preventing calculus formation, probably because negatively charged proteins can chelate calcium ions, thus increasing their solubility and inhibiting their precipitation. Moreover, repulsive interactions with negatively charged surfaces of the oral cavity (i.e., teeth and the salivary pellicle) could prevent adherence and colonisation of bacteria and dental plaque (Selvamani ), thereby preventing calculus formation. Interestingly, salivary proteomic analysis revealed that slow calculus-forming patients had higher concentrations of the negatively charged protein mucin MUC5B (Mendes et al. ). Thus, it could be speculated that the association between salivary zeta potential and calculus formation rate could be traced to differences in this protein, although future studies would be needed to investigate this hypothesis. Calcium precipitation is a key feature of dental calculus formation. However, we found that individuals with slower calculus formation had higher concentrations of calcium in saliva. This counterintuitive observation suggests that calculus deposition is more likely to depend on the solubility, rather than the concentration, of calcium in saliva. As described above, slow calculus-forming patients had higher concentrations of negatively charged molecules, and this can increase calcium solubility. Indeed, negatively charged salivary proteins are known to bind calcium ions in saliva and thus prevent their deposition on tooth surfaces (Pateel et al. ). This is probably why patients with a more negative salivary zeta potential had slower calculus formation despite having higher concentrations of calcium. However, further investigations are essential to better understand these interconnections. Our approach, which omits corrections for multiple comparisons, reflects the exploratory focus of this study and acknowledges the trade-off between identifying potential true positives and controlling false positives. To address this, we relied on pathway enrichment analysis to identify biologically meaningful patterns. Nevertheless, we avoid referring to our findings as statistically significant; instead, we describe them as meeting a predefined p-value cutoff of 0.05. Future studies with larger sample sizes and stricter statistical corrections will be necessary to validate these findings. Salivary proteomic analysis revealed several pathways associated with calculus formation rate. One example was proteins linked to 'serine-type endopeptidase inhibitor activity'. These inflammatory pathways are known to surge in gingivitis and wane in advanced periodontitis (Afacan et al. ); thus, their upregulation may indicate an attempt to counterbalance the inflammatory processes associated with calculus formation. The rapid group also showed upregulation of the 'hyaluronan metabolic process' and 'glycosaminoglycan binding', which are known to bolster the reparative potential of saliva and are implicated in dental calculus formation (Pogrel et al. ). In contrast, the slow group presented upregulation of proteins related to immunomodulation, such as those involved in the 'innate immune response', 'positive regulation of B cell activation', 'antigen binding' and 'immunoglobulin receptor binding'. These proteins are essential for oral health because they prevent adherence, colonisation and penetration of pathogens into the oral mucosa (Cekici et al. ).
The PANTHER analysis also revealed certain pathways whose expression correlated with calculus deposition rates (Tables and ). For example, the rapid group displayed increased expression of cell-binding and cytoskeletal regulation pathways, including the Rho GTPase and integrin signalling pathways. These pathways underpin cellular processes such as adhesion, migration and cytoskeletal organisation, and have been implicated in modulating the effects of TGFβ-1 (Giancotti ). This cytokine, which is known to play a role in periodontal diseases (Lee et al. ) and to act on periodontal tissues through Rho GTPase-dependent pathways (Wang et al. ), was also observed to be overexpressed in the saliva of dogs with pronounced dental calculus (Bringel et al. ). The rapid group also showed overexpression of inflammatory pathways, which is consistent with the well-established link between salivary cytokines and dental plaque and periodontal diseases (Kurgan and Kantarci ; Tang et al. ). Our analysis of the rapid group also revealed overexpression of pathways related to neurodegenerative diseases such as Alzheimer's disease, Huntington's disease and Parkinson's disease. This observation is in agreement with the established literature on the nexus between periodontitis and these conditions (Alvarenga et al. ). Additionally, pathways related to carbohydrate processing, such as the glycolysis, fructose-galactose, pyruvate and pentose phosphate pathways, were up-regulated in the rapid group (Chandel ). Surplus carbohydrates in the oral environment foster conditions favourable to pathogenic bacteria associated with oral diseases (Moye et al. ). Thus, the metabolic up-regulation identified in the rapid group might reflect specific metabolic shifts related to calculus formation. Metaproteomic analysis revealed differences in the microbiome of the two groups. The rapid group exhibited increased concentrations of Streptococcus spp., a group of early colonising bacteria that play a pivotal role in dental plaque formation and have been implicated in dental calculus development (Baris et al. ). Surprisingly, notorious periodontopathogens like Aggregatibacter actinomycetemcomitans , Porphyromonas gingivalis and Fusobacterium nucleatum were not associated with calculus formation (Karaaslan et al. ). Interestingly, the genus Rothia was exclusively present in the slow group. Rothia spp. are conventional residents of the oral environment and are frequently observed in individuals devoid of oral disease (Stephen et al. ). These bacteria can convert salivary nitrate to nitrite and nitric oxide (Mashimo et al. ), an effective antimicrobial agent that could curb plaque accumulation and mitigate gingival inflammation (Rosier et al. ). The findings of this study contribute to a better understanding of the underlying mechanisms of dental calculus formation and point to potential biomarkers and therapeutic targets for the diagnosis, prevention and management of pathological dental calcifications and related oral health issues such as gingivitis and periodontitis. However, our study has some important limitations that need to be considered. One limitation is the male-to-female imbalance in our cohort. Most of our volunteers were female, and this imbalance could impact the generalisability of the findings. Therefore, further studies with a more balanced gender ratio and a larger sample size would be beneficial to confirm the findings and enhance their applicability.
Nonetheless, as shown in Table , there was no significant difference in gender distribution between the rapid and slow groups ( p = 0.56), so the findings observed in our study remain valid despite the predominance of females in the cohort. In addition, a p-value cut-off of 0.05 was used for the proteomic differential expression analysis. Given the exploratory focus of our study, we consider this more lenient threshold justified to ensure that potentially important findings are not prematurely dismissed. Moreover, our study identified several bacteria, proteins, and signalling and metabolic pathways that seem to be associated with calculus formation. However, the proteomic analysis used has limitations in terms of detecting and quantifying proteins; thus, future studies specifically designed to analyse the candidates identified here would be needed to further confirm our findings. Another major impediment in calculus research pertains to the wide range of potential confounding factors, such as variations in patient demographics, medical history, oral hygiene and the presence of pre-existing calculus deposits. To address this, our cohort study ensured that participants received standard oral hygiene guidelines and kits, and a complete dental cleaning at study entry. This allowed us to monitor calculus growth precisely in a controlled set-up. Moreover, analysis of patient history revealed no significant differences between groups in terms of age, sex, diabetes, hypertension, periodontitis or ethnicity, suggesting negligible confounding influences from these variables. Participant oral hygiene was closely monitored and controlled by clinical indices at different time points, such as the Volpe-Manhold Index and the Modified Gingival Index. The validity of our oral hygiene standardisation was confirmed by the fact that no significant difference was observed between our groups in terms of gingival and plaque indices (Table ). Existing literature provides limited evidence on how standardised oral hygiene might affect the salivary zeta potential, elemental composition, or the proteome and metaproteome (Belstrøm et al. ; Huang et al. ; Justino et al. ), suggesting these factors are largely independent. However, it is still possible that individual variations in oral hygiene adherence or response to the standardised protocols influenced the results. This highlights the importance of considering oral hygiene as a variable that might interact with other factors influencing calculus formation. Lastly, the decision to omit missing-value imputation was made to minimise artificial biases. However, this approach may have contributed to edge effects, particularly for low-abundance bacterial taxa, potentially exaggerating fold changes. Future studies should explore robust imputation or filtering strategies to address these limitations. Despite the established roles of salivary proteins such as cystatins and proline-rich proteins in calcium and phosphate homeostasis, no significant differences in their expression were observed between the two groups. This may suggest that these proteins are not critically involved in calculus formation, as initially hypothesised. Alternatively, the lack of significant variation may be attributed to the functional redundancy among salivary proteins, where physiological functions are often mediated by a network of proteins with shared functional groups rather than being reliant on the activity of a single protein.
Conclusion This study shows that calculus formation rate could be associated with various characteristics of saliva, such as zeta potential, mineral composition, proteome and bacterial metaproteome. In-depth proteomic analysis revealed several metabolic and signalling pathways, as well as salivary bacteria, that appear to be associated with calculus formation. Wenji Cai contributed to the conception, design, acquisition, analysis and interpretation of the study, drafted the manuscript and critically revised it. Nadia Dubreuil contributed to the design, acquisition and analysis, and critically revised the manuscript. Lina Abu-Nada contributed to data analysis and interpretation, drafted the manuscript and critically revised it. Wen Bo Sam Zhou contributed to data analysis and interpretation and critically revised the manuscript. Tayebeh Basiri contributed to data acquisition and analysis and critically revised the manuscript. Amir Hadad contributed to data acquisition and analysis and critically revised the manuscript. Priti Charde contributed to data interpretation, drafted the manuscript and critically revised it. Maxime Ducret contributed to the conception, design, acquisition, analysis and interpretation of the study, drafted the manuscript and critically revised it. Faleh Tamimi contributed to the conception, design, acquisition, analysis and interpretation of the study, drafted the manuscript and critically revised it. All authors gave their final approval and agreed to be accountable for all aspects of the work. The authors declare no conflicts of interest.
Correlation of magnetic resonance images with neuropathology of irreversible metronidazole-induced encephalopathy: an autopsy case report
cff956a7-193e-49d2-8243-4823d53d7fd2
9753291
Forensic Medicine[mh]
Metronidazole is one of the mainstay antibiotics used for anaerobic infections, exhibiting excellent penetration into tissues including the central nervous system (CNS), abscesses, bile and peritoneal fluid . Although well tolerated, metronidazole can be associated with a serious adverse effect on the CNS known as metronidazole-induced encephalopathy (MIE), especially when patients with pre-existing liver disease receive prolonged therapy or large doses of metronidazole . Patients with MIE may develop various types of neurological symptoms or signs including dysarthria (63%), gait instability (55%), limb dyscoordination (53%) and altered mental status (41%) . Magnetic resonance imaging (MRI) of the brain shows characteristic symmetrical hyperintensity in the dentate nuclei (90%) and corpus callosum (44%) on T2-weighted images or fluid-attenuated inversion recovery (FLAIR) . T2 hyperintense lesions in the splenium of the corpus callosum can be found in many conditions [e.g. epilepsy, acute infectious encephalitis, demyelinating diseases, osmotic myelinolysis and acute toxic encephalopathy (MIE)]. Conversely, those in the bilateral dentate nuclei are of great diagnostic value because they are seen in only a few diseases including methyl bromide intoxication, maple syrup urine disease and enteroviral encephalomyelitis in addition to MIE . While these clinical and radiographic features are reversible in most patients with MIE after the withdrawal of metronidazole, a small proportion of patients with MIE can still have residual neurological symptoms and radiographic abnormalities . Several research groups have reported that metronidazole can also cause polyneuropathy due to a combination of demyelination and axonal degeneration of peripheral nerves, with a predilection for large sensory fibers . However, the pathological signature of irreversible MIE remains unknown. Here we report the clinical, radiographic and histopathological features of MIE in a 74-year-old Japanese woman who developed the condition during the course of pancreatic neuroendocrine tumour (P-NET) with metastatic tumours in the liver. A Japanese woman with no clinical history of methyl bromide intoxication, maple syrup urine disease or cognitive impairment was diagnosed as having P-NET and underwent total pancreatectomy with splenectomy and partial resection of the liver (S6) at the age of 70 years. She developed liver metastases six months after the surgery and was administered treatments for P-NET [a synthetic analogue of somatostatin (lanreotide), an inhibitor of mammalian target of rapamycin (everolimus), and a multi-targeted receptor tyrosine kinase inhibitor (sunitinib)]. At the age of 72 years, she developed a hepatic abscess 20 mm in diameter, which caused intraabdominal bleeding. Although we performed abscess drainage, a residual abscess remained and was treated medically: we administered metronidazole (1.5 g/day) in combination with tazobactam/piperacillin or cefmetazole. However, after 79 days of metronidazole treatment (cumulative dose 118.5 g), the patient developed transient dysarthria followed by hand tremor and altered mental status [Mini-Mental State Examination (MMSE) score 11/30]. Blood examinations showed normal liver function at the time of onset. Brain MRI at the time of onset demonstrated hyperintensities in the deep white matter of the bilateral parietal lobes and splenium of the corpus callosum on diffusion-weighted imaging (DWI), and in the dentate nuclei on FLAIR images (Fig. a-f).
The apparent diffusion coefficient (ADC) was reduced in the corresponding regions of the parietal lobes and corpus callosum, but not in the dentate nuclei (Fig. g-i). Withdrawal of metronidazole led to improvement of the hand tremor and resolution of the hyperintensity in the dentate nuclei on FLAIR, thus allowing a diagnosis of MIE to be made. Although the blood thiamine level was within normal limits (27 ng/mL), we also administered thiamine intravenously. Despite these treatments, her cognition remained affected 6 and 12 weeks after drug withdrawal (MMSE scores 19/30 and 21/30, respectively, the former of which showed severe impairment of orientation and attention). Follow-up MRI of the brain after two years demonstrated widespread hyperintensities in the deep white matter including the parietal lobes and the splenium of the corpus callosum on FLAIR (Fig. j-o). She died of P-NET at the age of 74 years. Histopathologic examination of the liver showed that the tumour cells had a diffuse alveolar pattern with a macrovascular stroma and were immunoreactive for chromogranin A and synaptophysin, allowing a pathological diagnosis of NET G3 (Ki-67 > 20%). Klüver-Barrera staining of the brain revealed severe demyelination with foamy macrophages in the splenium of the corpus callosum and deep white matter of the parietal lobes (Fig. a-c). Immunohistochemistry with an antibody against neurofilament (clone 2F11; Dako Cytomation, Glostrup, Denmark; 1:200) also confirmed moderate axonal loss with axonal swellings in the same region (Fig. d). On the other hand, only minute loss of neurons in the dentate nuclei was noted, with preservation of the superior cerebellar peduncles (data not shown). There were no tumour cells, inflammatory cell infiltration suggestive of viral encephalomyelitis, petechial haemorrhages or haemosiderin-laden macrophages in the brain. To our knowledge, this is the first autopsy case of irreversible MIE to have been reported. In our patient, severe demyelination with moderate axonal degeneration in the affected regions was a cardinal feature, similar to that in metronidazole-induced peripheral neuropathy . Interestingly, the regions showing hyperintensities on DWI with reduced ADC values corresponded to those with demyelination and axonal degeneration. By contrast, the abnormal signal intensity of the dentate nuclei disappeared after metronidazole withdrawal, and accordingly the pathological changes in the dentate nuclei were largely resolved. These radiographic and pathological features may well explain the persistent cognitive impairment even after drug withdrawal, in contrast to the improvement of the hand tremor. Sørensen et al. performed a systematic literature review of 112 papers comprising 136 patients with MIE. They reported the cumulative dose of metronidazole in the 110 MIE cases for which it was available (lower quartile 36 g, median 65.4 g, upper quartile 110.8 g) and the total duration of treatment in 125 cases (lower quartile 19.5 days, median 35 days, upper quartile 63 days) . They further examined these factors in six MIE cases with severe residual neurological symptoms . Interestingly, similarly to our patient, all of them showed abnormal signal intensity in the cerebral white matter on T2/FLAIR, DWI or ADC. However, the duration of treatment and cumulative dose of metronidazole varied among cases (duration of treatment: 6 to 180 days; cumulative dose: 33 to 250 g) .
Although these factors in our patient were beyond the upper quartiles, the findings in the six patients with residual neurological symptoms also suggest that additional factors are associated with the occurrence of MIE. The mechanism of metronidazole-induced neurotoxicity remains unclear. In an early study reported by Rogulja et al., a CNS Wernicke-like picture was found in rats treated with metronidazole (800 mg/kg/day) . Recently, Hassan et al. have shown that metronidazole causes thiamine deficiency and oxidative stress in rats; haematoxylin and eosin staining revealed some degeneration of Purkinje cells in the cerebellum . These findings suggest an association of metronidazole with the metabolism of thiamine in experimental models of MIE. On the other hand, acute Wernicke's encephalopathy in the human brain is characterized by the presence of petechial haemorrhages in subcortical regions around the third and fourth ventricles. In the chronic stage, rarefaction of the mammillary bodies with haemosiderin-laden macrophages is commonly observed . In our patient, however, neither extravasation of erythrocytes nor haemosiderin-laden macrophages were evident in the affected regions, indicating some differences in the mechanism of MIE between humans and experimental models. On MRI of the brain, the callosal and white matter lesions showed diffusion restriction. Thus, cytotoxic oedema may well have been triggered by the toxic effect of metronidazole, although pathological examination could not identify the direct cause of the demyelination and axonal loss. In conclusion, we have reported the clinical, radiographic and pathological features of irreversible MIE. Hyperintensities on brain DWI with reduced ADC values in patients with MIE may indicate an irreversible pathological change involving severe demyelination and axonal degeneration, and therefore a poor clinical prognosis.
Congenitally missing permanent canines in a sample of Chinese population: a retrospective study
fc6e8c34-4af0-4eda-92c2-20b4b6a5bdde
11580490
Dentistry[mh]
Congenitally missing teeth, which refer to teeth that fail to develop or form during the process of odontogenesis , have a prevalence ranging from 4.4 to 13.4% . These dental anomalies can be divided into isolated and syndromic types on the basis of their association with systemic diseases . Research on this issue is crucial, as it can provide more direct guidance to improve patients' oral health and promote standardized treatment approaches. Canines, commonly known as "fang teeth" or "eye teeth", are located at the front corners of the mouth and have relatively thick roots, the greatest length, and the longest retention time in the oral cavity, serving functions such as tearing food, guiding occlusion, and providing support to the facial soft tissue. Permanent canines can be used as abutments and play an important role in prosthodontic treatment. Congenitally missing permanent canines (CMPC) have a significant impact on function and aesthetics . Patients without complaints at the time of presentation usually have persistent primary canines that mask the problem . CMPC can lead to several developmental abnormalities in the maxillary bone, including hypoplasia and asymmetry. This condition often results in insufficient support for facial soft tissues, which may cause facial collapse and contribute to an aged appearance. Additionally, CMPC are associated with various malocclusion conditions, such as dental crowding, altered occlusion, and functional impairments. Furthermore, the condition may induce resorption and morphological alterations in the alveolar bone of the affected area. The etiology of CMPC is multifactorial and involves genetic regulation and environmental factors, with the former playing a more important role . Genetic influence often appears as a familial trait involving multiple genes such as PAX9, MSX1, AXIN2, and EDA, which play roles in signalling pathways and signal transduction cascades . Environmental factors, including early childhood radiation therapy, maternal rubella during pregnancy, and thalidomide exposure, can lead to tooth agenesis . The reported prevalence of CMPC varies widely, such as 0.04 ~ 0.14% in America , 0.18% in Japan , 0.23% in Sweden , 0.29% in Hungary , 0.45% in Hong Kong, China , 0.51% in China , and 0.76% in Israel . CMPC are rare, and existing studies on this topic are predominantly case reports, with limited large-sample studies , particularly in China. Previous studies often combined data from both children and adults in their analysis , which may obscure age-specific trends. Furthermore, the complaints differed between patients who visited before and after the average age of canine eruption, suggesting that the prevalence of CMPC may vary between different age groups. Dental anomalies are associated with CMPC . Microdontic teeth often appear conical and deviate from normal tooth morphology. Research indicates that mutations in genes responsible for tooth germ development and size, such as PAX9 and MSX1, can lead to congenital tooth absence or microdontia. This retrospective study aimed to investigate the prevalence and distribution of CMPC on panoramic radiographs (PR) in a Chinese outpatient population under 18 years of age, and to compare the prevalence of CMPC between two age groups and between genders. Additionally, the presence of concomitant dental anomalies such as persistent primary canines, congenitally missing other permanent teeth, and microdontia was analysed.
Sample size calculation To investigate the prevalence of CMPC, the sample size for this study was estimated using the formula [12pt]{minimal} $$n={(\frac{{Z}_{\alpha /2}}{\delta })}^{2}\pi (1-\pi )$$ , where n represents the required sample size, α is the significance level set at 0.05, δ is the allowable error set at 0.0015, and π is the prevalence of CMPC. A previous report indicated that the domestic prevalence of CMPC was 0.51%. Applying the formula, a sample size of 8664 individuals was needed. Considering a 10% dropout rate, a minimum of 9627 individuals needed to be examined. Study subjects PR images of a total of 10,447 patients seen in the outpatient clinics of Beijing Children's Hospital, China, between August 2021 and December 2023 were selected for the study, without consideration of gender. Repeated visits by the same patient were counted as a single case. Panoramic radiographs were obtained using a Cranex D panoramic unit (Soredex, Tuusula, Finland) operating at 70 kV and 14 mA with an exposure time of 12 s. The inclusion criteria were patients of Chinese origin aged ≤ 18 years with good-quality PR images devoid of distortion, facilitating the identification of patients with missing teeth. In cases where documentation was lacking, guardians were contacted by phone to ascertain the cause of the missing permanent canine. The exclusion criteria were as follows: incomplete information; history of permanent canine extraction for orthodontic reasons, trauma, jaw cysts or other pathologic lesions; syndromes or fusion between the permanent lateral incisor and canine; and distorted or blurred PR images. Procedures Two experienced clinicians reviewed the 10,447 panoramic radiograph images and made the diagnoses of CMPC. If the two clinicians agreed on the diagnosis of a congenitally missing permanent canine, the diagnosis was accepted; if not, a third experienced clinician was consulted and their diagnosis recorded. If the diagnoses of the three clinicians were inconsistent, they consulted together to reach a consensus or the case was excluded. Microdontia was likewise diagnosed by two clinicians, with any disagreements resolved by a third experienced clinician. The diagnostic criteria included a tooth that is smaller than normal, deviates from the normal tooth morphology, and exhibits a tapering coronal form, where the incisal mesio-distal width of the crown is narrower than the cervical width . Data collection For patients diagnosed with CMPC, the age, gender, and the location and number of missing permanent canines were recorded. Additionally, the number of missing primary canines, the location and number of congenitally missing other permanent teeth, and the presence of concomitant anomalies such as persistent primary canines, supernumerary teeth and microdontia were recorded. Based on the average age of permanent canine eruption, all patients were divided into two age groups: group A (< 121 months of age) and group B (≥ 121 months of age). The number of patients with CMPC in each group was recorded. Statistical analysis Qualitative data were presented as counts (percentages) and analysed using the χ² test in SPSS (SPSS Inc., Chicago, IL) ( P < 0.05).
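As a quick sanity check, the sample-size calculation described above can be reproduced in a few lines of Python. This is only an illustrative sketch using the stated values (Z = 1.96 for α = 0.05, δ = 0.0015, π = 0.0051); it is not part of the study's actual workflow.

import math

z, delta, pi = 1.96, 0.0015, 0.0051            # critical value, allowable error, prevalence
n = math.ceil((z / delta) ** 2 * pi * (1 - pi))
print(n)                                        # 8664, matching the figure above
print(math.ceil(n / 0.9))                       # 9627, after allowing for a 10% dropout rate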
Prevalence and distribution of congenitally missing permanent canines Of the 10,447 patients (5842 males and 4605 females; male-to-female ratio 1.27:1), the overall prevalence of CMPC was 0.69% (72/10,447), and the average age of affected patients was 9.2 years. Among the 72 patients, 40 males and 32 females were documented (male-to-female ratio 1.25:1).
The prevalence of CMPC was lower in males (0.68%; 40/5842) than in females (0.69%; 32/4605), but the difference was not statistically significant (χ²=0.004, p = 0.950) (Table ). 66 patients (91.67%) had maxillary permanent canines affected, 5 patients (6.94%) had mandibular canines affected, and 1 patient (1.39%) had both maxillary and mandibular canines affected (Table ). The prevalence of CMPC in group B (1.08%, 26/2400) was significantly greater than that in group A (0.57%, 46/8047) (χ²=7.072, p = 0.008) (Table ). A total of 104 congenitally missing permanent canines were found (96 in the maxilla, 8 in the mandible); CMPC were significantly more likely to occur in the maxilla (χ²=74.647, p < 0.001). 53 permanent canines were congenitally missing on the left side and 51 on the right side, a difference that was not statistically significant (χ²=0.039, p = 0.844) (Table ). The number of congenitally missing permanent canines One permanent canine was congenitally missing in 40 patients (55.56%, 40/72) (Fig. ). 32 patients (44.44%, 32/72) had two congenitally missing permanent canines, all but one exhibiting bilateral absence (Figs. , and ). No cases with three or four congenitally missing permanent canines were found. In this study, 104 congenitally missing canines were observed; for 93 of these (89.42%, 93/104) the corresponding primary canines were present, while 11 were missing. Congenitally missing permanent canines and other concomitant dental anomalies Among patients with CMPC, 58.33% (42/72) exhibited only permanent canine loss, with no other dental anomalies apart from persistent primary canines, whereas 30.56% (22/72) had other congenitally missing permanent teeth. Among patients with CMPC, a total of 58 non-canine permanent teeth were missing: 23 (39.66%) were second premolars, followed by 15 (25.86%) first premolars (Fig. ), 13 (22.41%) lateral incisors, and 7 (12.07%) mandibular central incisors. 3 patients presented with congenital absence of maxillary permanent canines together with supernumerary teeth (Fig. ). 12 maxillary permanent lateral incisors exhibited microdontia in 7 patients, and impacted permanent canines could also be observed (Fig. ) (Table ).
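For transparency, the 2 × 2 age-group comparison reported above can be reproduced with a chi-square test. The sketch below uses the counts from the text and assumes scipy is available; correction=False matches the uncorrected χ² value reported.

from scipy.stats import chi2_contingency

table = [[46, 8047 - 46],   # group A (< 121 months): CMPC, no CMPC
         [26, 2400 - 26]]   # group B (>= 121 months): CMPC, no CMPC
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")   # chi2 ~ 7.07, p ~ 0.008, as reported

The maxilla-versus-mandible comparison (96 vs. 8 of 104 canines) is a goodness-of-fit question instead; a call such as scipy.stats.chisquare([96, 8]) gives χ² ≈ 74.5, close to the 74.647 reported.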
This study focused on isolated congenitally missing permanent canines, a complex condition influenced by polygenic inheritance as well as environmental and epigenetic factors. The genetic patterns of congenital tooth agenesis may include autosomal dominant, autosomal recessive, or X-linked inheritance . Research indicated that EDARADD (c.308 C > T, p.Ser103Phe) and COL5A1 (c.1588G > A, p.Gly530Ser) were specifically associated with canine agenesis. A study on a family with congenital maxillary canine agenesis identified ITGAV as a potential pathogenic gene. Meanwhile, some studies suggested that the absence of maxillary canines was associated with the WNT10A gene . In this study, the overall prevalence of CMPC was 0.69%. In the younger age group, the prevalence of CMPC was 0.57%, aligning closely with the findings of Qiu (0.51%) and Davis (0.45%) in China. A study has also reported CMPC in an orthodontic population in Israel, with a prevalence of 0.76%. It is speculated that racial factors significantly influence this process. Research findings indicate that racial differences exist in the occurrence of congenitally missing teeth. Genetic processes such as genetic drift, gene flow, and natural selection lead to variations in genomic compositions among different races and populations . The variations from other studies may also be attributed to differences in regional populations, sample sizes, and sample selection. Samples from specialist hospitals were more likely to detect missing teeth. Different complaints for dental visits before and after the expected eruption age of permanent canines should also be considered: younger children were less likely to present with a chief complaint of non-eruption of canines, so their data may better indicate the true prevalence of CMPC. Research has shown that mandibular canines erupt at approximately 9 years of age, whereas maxillary canines erupt at approximately 10 years of age, with an average of approximately 10.1 years, or 121 months. This study therefore categorized participants into two groups based on the age of 121 months.
The significantly greater prevalence of CMPC in older children (1.08%), twice that in younger children (0.57%), may be attributable to parents observing that their children's canines had not erupted and prompting them to seek medical attention. The result in the older age group (1.08%) was similar to that (0.76%) in orthodontic patients reported by Finkelstein . Previous research has indicated a female predominance in hypodontia . However, this study demonstrated no gender predominance for CMPC, aligning with earlier findings . Nonetheless, one study suggested that women were more affected by CMPC and that CMPC differs from hypodontia in its gender tendencies . No significant difference was found in the prevalence of CMPC between the left and right sides, which is consistent with previous research . CMPC are more frequently found in the maxilla, which is also consistent with the findings of previous studies . Some studies have reported a greater prevalence of congenitally missing left maxillary canines. Cleft lip and alveolar anomalies are more likely to affect the maxilla and may be associated with dental abnormalities ; it is suspected that CMPC are more prevalent in the maxilla for similar reasons. However, right mandibular dominance has also been reported in CMPC, which remains unexplained . In this study, across the 4 quadrants, maxillary canines were more commonly affected than mandibular canines, with no other site dominance observed. CMPC typically involved the absence of one or two permanent canines with persistent primary canines; no cases with three or more missing permanent canines were found in this study. However, previous studies have reported cases in which three to four permanent canines were congenitally missing . This discrepancy may be attributed to the extremely low prevalence of congenitally missing mandibular permanent canines. This study found that 89.42% (93/104) of the missing permanent canines had visible primary canines in patients with CMPC, aligning with Qiu's research . The absence of permanent successors delays normal resorption of primary tooth roots, which can result in primary teeth being retained for 40 or 50 years . The presence of associations among various tooth anomalies is significant in clinical practice , as the early detection of one anomaly may suggest an increased risk for additional anomalies . A total of 58.33% of patients with CMPC had no other dental abnormalities, suggesting that CMPC tends to occur in isolation. Among the 58 concomitant missing permanent teeth, second premolars (39.66%) were most commonly observed, followed by first premolars (25.86%) and lateral incisors (22.41%). These characteristics were consistent with those observed when these anomalies occur independently. 7 patients with CMPC exhibited microdontia of the maxillary lateral incisors; microdontia and hypodontia exhibit significant and intimate genetic associations . Additionally, a rare combination of abnormal tooth numbers was observed in 3 patients with CMPC, who also presented with supernumerary teeth in the maxilla. Disturbances in the differentiation, migration, and proliferation of neural crest cells are associated with interactions between epithelial and mesenchymal cells during the initiation of odontogenesis and may be responsible for “concomitant hypo-hyperdontia” . In multistage restorative therapy for CMPC, many factors should be considered, such as early diagnosis of agenesis, malocclusion, and the facial skeleton. Ephraim et al.
emphasized the impact of the congenital absence of teeth on the developing dentition, underscoring the significance of early diagnosis in the mixed dentition to prevent malocclusion from evolving. Primary canines, when not replaced by their permanent successors, usually exhibit minimal or no root resorption, which helps preserve the alveolar bone for future prosthetic rehabilitation . Therefore, preservation of primary canines is recommended whenever feasible. However, despite their prolonged retention time, primary canines still face the issues of root resorption and tooth loss. Consulting an orthodontist and a restorative dentist is necessary to prevent difficulties caused by space loss and jaw atrophy resulting from premature primary canine loss in the developing dentition. To optimize the restoration process, clinicians should focus on minimizing or consolidating the edentulous space and reducing the required number of implants. For instance, guided eruption and orthodontic treatment for premolar substitution can be beneficial in cases where permanent maxillary canines are missing . One study reported the successful replacement of primary canines with implants . Mini-implants can be utilized for the temporary restoration of missing permanent teeth in adolescent patients . This alternative approach facilitates vertical development of the alveolar process and maintains bone density and alveolar morphology, obviating the need for additional surgical procedures for dental implantation and leading to favourable long-term outcomes. One limitation of this study is that it might not accurately represent the prevalence and distribution of CMPC in the Chinese population as a whole; hence, the generalizability of the results is limited. To validate the prevalence observed in this study, multicenter research utilizing electronic health record (EHR) systems or other comprehensive medical databases is recommended. Additionally, long-term follow-up to assess the absence of second permanent molars and other dental anomalies, such as taurodontism, is recommended. Further genetic investigations into the pathogenic genes associated with CMPC in affected patients may provide additional insights. This study provides valuable data on CMPC and their co-occurrence with other dental anomalies in the Chinese population. The prevalence of CMPC was 0.69%. CMPC were more likely to occur in the maxilla with persistent primary canines, showing no gender or side trends. Early panoramic radiography is recommended to facilitate early diagnosis, intervention, and referral for treatment, thereby minimizing or preventing the functional and aesthetic complications associated with CMPC.
Use of Ultra-Hydrophilic Absorbable Polysaccharide for Bleeding Control in Cardiothoracic Surgical Procedures
4d6082be-09a7-4c01-9dd5-080e4a0971e6
11857388
Surgical Procedures, Operative[mh]
Perioperative bleeding control is a crucial issue during cardiac procedures. To prevent or control bleeding, sutures and electrocautery are primarily used alongside precise surgical techniques. Nevertheless, significant intraoperative bleeding can lead to prolonged operating room times, increased reliance on blood transfusions, and, in severe cases, a heightened risk of adverse events or mortality . When conventional hemostatic techniques with sutures or electrocautery are ineffective, various hemostatic agents can be used as an adjunct to control or reduce bleeding in cardiothoracic procedures . However, it is crucial to acknowledge that some of these hemostatic products may carry the risk of adverse effects . Topical hemostatic agents are typically applied directly to the bleeding site. This study focuses on the clinical application and outcomes associated with the hemostat powder, a plant-derived microporous polysaccharide powder used as a topical agent to control perioperative bleeding during cardiothoracic surgeries. 2.1. Patient Population The data were collected during routine internal quality control checks to assess the efficacy and safety of medical products in patients who underwent cardiothoracic surgery at our institution between January 2012 and January 2015. Medical products for bleeding control were used for complex cardiothoracic surgery or surgery with prolonged hemostasis at the surgeon’s discretion. Of 65 patients in the database, the hemostat powder was used to control bleeding in 42 patients. These patients were compared to 23 patients who received other methods of bleeding control (e.g., Ostene ® (Baxter, Unterschleißheim, Germany) for hemostasis on the sternum after sternotomy, oxidized regenerated cellulose, Tachosil ® (Takeda Austria GmbH, Linz, Austria), or fibrin glue). The other products were also used where Starsil ® Hemostat (Hemostat Medical GmbH, Velen, Germany) was not recommended according to the instructions for use (IFU). Only patients with routine indications were included in the data analysis. Data from emergency patients, patients with preoperatively proven bacterial colonization in the preoperative control, or patients with preoperative infections were excluded. Demographic patient data are listed in . There were no significant changes in technique or protocol in the performance of the operations during the study period. The main focus of this study was the safety of use, the performance of the hemostat powder for local hemostasis, and the surgeons’ satisfaction with the product. Surgeons rated the performance of the hemostat powder, e.g., for bleeding control and in terms of the handling of the device, using a visual analog scale (VAS) from 1 to 10, with 1 indicating very poor and 10 perfect product performance. Bleeding control was rated satisfactory if the bleeding stopped within 2 min, as described in the IFU. Other study endpoints were laboratory values such as hemoglobin, leucocytes, C-reactive protein, creatinine, and blood glucose. In addition, the temperature of patients was documented. Preoperative values in were obtained one day before surgery or on admission. The maximum/minimum values were taken during the inpatient stay (day 3–day 8), and the “discharge” values were determined within the last 24 h before discharge. 2.2. Surgical Technique All operations were performed through full median sternotomy. Heparinization was instituted in all operations.
For cardiac operations performed with cardiopulmonary bypass (CPB), an activated clotting time (ACT) of >600 s was targeted. For off-pump coronary artery bypass (OPCAB) procedures and others without CPB, the ACT had to be >300 s. After weaning from extracorporeal circulation or completion of the anastomoses in the OPCAB operations, heparin was antagonized with protamine sulfate. Further hemostasis of active bleeding sources with sutures and/or electrocautery followed. Patients treated with the hemostat powder to control bleeding received up to 10 g of the hemostatic powder. The hemostat powder was applied to the bleeding sources according to the IFU provided by the manufacturer ( , and ). The number of applications and the amount of the hemostat powder used depended on the level of bleeding and were the surgeon’s decision. Before sternal closure, residual powder was applied to both sides of the sternum . The sternum was closed with sternal wires, and the subcutaneous fat and skin with absorbable sutures. 2.3. Study Product Starsil ® Hemostat (Hemostat Medical GmbH, Velen, Germany) is a class III medical device intended to be used in surgery as an adjunctive hemostat to control capillary, arterial, or venous bleeding in situations where the use of ligatures, pressure, or other conventional methods proves inadequate or impracticable. The device is also indicated for adhesion prevention in cavities covered by mesothelium. Starsil ® Hemostat consists of 5 g of a purified plant-based absorbable polysaccharide that can be administered to the entire operation area. The powder is available off-the-shelf without any further preparation. To obtain hemostasis, it can be applied directly to a bleeding wound. The hemostatic effect results from rapid dehydration and subsequent concentration of blood components such as red blood cells, platelets, and serum proteins (thrombin, fibrinogen, etc.), thus accelerating the clotting cascade. As a result, a gelled adhesive matrix is produced. Normal platelet activation and fibrin deposition produce a clot that functions as a mechanical barrier and limits further bleeding. The absorption of the particles is achieved within approximately 48 to 72 h. The hemostat powder is biocompatible, non-pyrogenic, and contains no allo- or xenogenic additives. 2.4. Statistical Methods Data were retrospectively entered into a computerized database and analyzed with SPSS software version 11.0.1 for Windows under the guidance of a statistician. Continuous data are presented as the mean ± standard deviation within the population. The α-level is 5%, whereby values with p < 0.05 are considered significant.
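As an illustration of the analysis described above, the group comparisons reported in the next section (mean ± standard deviation with p-values) can be sketched as follows. This is not the authors' SPSS workflow; the manuscript does not name the specific test, so an unpaired Welch t-test is assumed here, and the arrays are hypothetical placeholders rather than study data.

```python
# Illustrative sketch only: the paper reports mean ± SD and p-values from SPSS
# but does not name the test; an unpaired Welch t-test is assumed here.
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical placeholders for per-patient values (not the study data).
starsil_group = np.array([12.1, 13.0, 11.8, 12.9, 12.4])   # e.g., hospital days
control_group = np.array([13.5, 12.8, 13.9, 12.6, 13.2])

def compare(a: np.ndarray, b: np.ndarray, alpha: float = 0.05) -> None:
    """Print mean ± SD per group and a Welch t-test at the paper's α = 5%."""
    t, p = ttest_ind(a, b, equal_var=False)  # Welch: unequal variances allowed
    print(f"group 1: {a.mean():.1f} ± {a.std(ddof=1):.1f}")
    print(f"group 2: {b.mean():.1f} ± {b.std(ddof=1):.1f}")
    print(f"t = {t:.3f}, p = {p:.3f} -> {'significant' if p < alpha else 'not significant'}")

compare(starsil_group, control_group)
```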
Outcome Laboratory parameters were collected from all patients . The average age of the entire patient group was 70.2 years. At the time of surgery, the patients treated with the hemostat powder were between 47 and 84 years old, and those without were 58–84 years old. The average hospital stay was 12.9 days. The patients in whom the hemostat powder was used during surgery did not have significantly shorter hospitalization times (12.6 ± 2.0 vs. 13.1 ± 2.7 days; p = 0.933). There were no intraoperative or postoperative deaths in either group. There was one reoperation due to postoperative bleeding in each group. In the Starsil group, the identified cause was bleeding due to inadequate coagulation of a subcostal artery in the mammary artery bed. In the control group, a side branch of the mammary artery graft had to be clipped to stop the bleeding. In both groups, there were no deep or major sternal wound infections that required re-intervention or led to prolonged hospitalization. All but two patients were routinely discharged from the hospital to a rehabilitation center; two patients refused rehabilitation. In the Starsil group, 97.6% ( n = 41 patients) were followed up in our medical center during the first six months; in the group without the hemostat powder, the figure was 95.6% ( n = 22 patients). One patient in each group preferred follow-up with their family doctor rather than at our center. During the follow-up of the patients up to 6 months postoperatively, none of the patients had died. No adverse events attributable to Starsil ® Hemostat or to any other medical product were reported during the postoperative follow-up period. As seen in , Starsil ® Hemostat had no significant impact on the laboratory parameters. Hemoglobin and creatinine did not differ significantly between groups. Preoperative blood glucose (mg/dl) was significantly higher in the group that underwent surgery without the hemostat powder (108.9 ± 31.0 vs. 144.0 ± 24.2; p = 0.008). The postoperative maximum value was also significantly higher in the group without the hemostat powder (134.0 ± 18.5 vs. 177.5 ± 22.6; p = 0.003). Before discharge, the difference was no longer significant (103.9 ± 25.23 vs. 130.4 ± 25.2; p = 0.055). The infection parameters obtained perioperatively are listed in . In detail, the preoperative CRP was slightly, but not significantly, higher in the Starsil group than in the patients in whom Starsil was not used (0.83 ± 0.45 vs. 0.62 ± 0.28; p = 0.458). The same applies to the maximum CRP.
It was not significantly higher in the Starsil group (19.63 ± 8.09 vs. 19.13 ± 4.13, p = 0.924). Even before discharge, the value was not significantly higher in the Starsil group (4.9 ± 2.38 vs. 3.63 ± 1.08, p = 0.286). For the leukocytes , the preoperative average values were within the normal range and not significantly different between the groups (7.25 ± 0.58 vs. 7.86 ± 0.84; p = 0.206). The postoperative maximum leukocyte values were not significantly lower in the Starsil group (12.85 ± 1.38 vs. 15.12 ± 2.96; p = 0.109) and were back in the normal range in both groups before discharge (7.91 ± 0.72 vs. 7.70 ± 0.84; p = 0.704). The postoperative maximum body temperature was not significantly higher in the Starsil group than in the control group (37.3 ± 0.33 vs. 36.8 ± 0.38; p = 0.075). Before discharge, body temperature was back in the normal range in both groups (36.8 ± 0.09 vs. 36.6 ± 0.22; p = 0.175). In the Starsil group, satisfactory bleeding control was achieved in all but one case; in that case, the surgeon rated the result “non-satisfactory” despite a second application. The hemostat led to satisfactory bleeding control within 2 min in 88% of the patients with a single 5 g unit. Five patients needed a second application of the hemostat, using another 5 g unit, for bleeding control. On average, bleeding control was achieved in 94 ± 56 s. Overall, the surgeons’ satisfaction according to the VAS was 8.3 ± 1.2 .
The primary aim of this study was to evaluate the safety and efficacy of Starsil ® Hemostat based on the data collected for our clinic’s quality assurance program. Hemostasis was achieved within the time of less than 2 min predicted by the IFU in most cases and with high satisfaction among the surgeons. This finding is supported by other studies using polysaccharide hemostatic powders. Li reported achieving hemostasis in a mean of 110 s. No adverse events occurred in their study or in ours. The laboratory parameters in our study remained within the expected range. Shifts to abnormal values were typical for the surgical interventions performed and showed no significant differences compared to the control group, except for blood glucose, whose values were already higher in the control group preoperatively. Limitations of the study are mainly its retrospective character, a highly selected and heterogeneous patient group, the short follow-up duration, and the small study population. In addition, due to the blinded data set, potentially essential data such as the number of required blood units could not be collected retrospectively. However, bleeding control is an integral part of all cardiac surgeries, and one of the study’s primary objectives was to ensure the safety of this medical product. In the context of the available literature , in which the hemostat powder was used in cardiac surgery and other procedures, the present study supports using the polysaccharide powder as a safe adjunct to surgical bleeding control. Bruckner has demonstrated that the use of starch powder results in significant improvements when performing complex cardiothoracic procedures .
In his retrospective study, he observed 240 patients from January 2009 to January 2013 with ( n = 103) or without ( n = 137) the use of a polysaccharide absorbable hemostat powder. Its use led to a significant reduction in hemostasis time compared to an untreated control group (hemostat 93.4 ± 41 min vs. control 107.6 ± 56 min; p = 0.02), as well as reductions in postoperative chest tube output and the need for postoperative blood transfusion. Furthermore, a non-significantly shorter ICU length of stay ( p = 0.08) was observed in the observation group. Although 30-day mortality was not significantly different, these results confirm the clinical benefits of starch powders and also indicate potential economic benefits through reduced blood transfusions and length of stay. Boucher reported that the economic impact of bleeding is significant, as it is directly related to increased resource consumption in hospitals. The extent of bleeding correlates directly with increased costs, e.g., for surgical re-interventions. For instance, the costs in cardiac surgery can rise dramatically, by as much as EUR 30,000. Surgeons show high acceptance of hemostatic powders as an adjunct in hemostasis. A European multidisciplinary expert group in hemostasis and hemostatics confirmed the positive benefits of hemostatic powders in surgical practice in a consensus paper . Questionnaires were answered by 79 high-volume surgeons from various surgical fields regarding the use of hemostatic powders, and 95% of statements regarding hemostatic powders were rated as important or very important. However, this paper also confirmed that the hemostat should not be a substitute for good surgical technique and the appropriate use of electrocautery, sutures, clips, and staples. The proper use and handling of hemostatic products may effectively support bleeding control during surgery, as recently confirmed in the Cochrane Database of Systematic Reviews . The meta-analysis included 24 trials with 2376 participants undergoing vascular surgery. The sealant used demonstrated a reduction in time to hemostasis. In our study and in the literature, almost no adverse events were reported with starch powders. However, it is crucial to use the right hemostatic agent for the appropriate surgical procedure in an appropriately timed fashion, not only to improve clinical outcomes and avoid adverse events, but also to limit the overall cost of treatment. Furthermore, in a study with the hemostatic polysaccharide Perclot, adverse events such as CRP increase and pain were reported when using the powder during cardiac rhythm device implantation. However, that study has some shortcomings: the use of Perclot was described only rudimentarily in the manuscript, the important step of deactivating excess Perclot with saline after hemostasis was not carried out, and no specific product training was provided, although the instructions for use make specific reference to these points. Powder that has not been deactivated is often still very active and can, e.g., lead to pH shifts and dehydration of the wound, resulting in aggressive tissue reactions. In contrast, none of these adverse events were observed in another study by House . In that prospective randomized multicenter study (19 centers) with 324 patients undergoing open elective cardiac, general, or urologic surgery, hemostasis was performed with two starch powders (Arista ® and Perclot ® ) equivalent to Starsil ® . In this study, 161 patients were treated with Perclot ® and 163 with Arista ® and monitored for up to six weeks postoperatively.
No disadvantage of Perclot ® compared to Arista ® was found, and no safety concerns were identified. This impressively demonstrates how crucial the correct indication for the product group and the appropriate application according to the IFU are. In our study, other hemostatic products were used in the control group, and no product-related adverse events were reported with them either. The literature highlights their effectiveness in reducing blood loss and hematoma formation; however, significant risks and side effects are also noted. For instance, in a review, Masoudi discusses the complications associated with oxidized regenerated cellulose, which continue to emerge; above all, these include granulomas, abscesses, hematomas, cysts, hemorrhagic complications, and masses misdiagnosed as tumors, as well as pain and infections. Other frequently used hemostats include products made from porcine gelatin, bovine collagen, human-pooled plasma thrombin, or mixtures of bovine gelatin and human-pooled plasma thrombin, among others. While their hemostatic efficacy is well established, they are associated with considerable costs and serious complications, such as anaphylactic reactions, infection nidus formation, compression of local structures (e.g., nerves or vessels), delayed wound healing, product dislodgement, granulomas, inflammation, or neurotoxicity . In cardiac surgery, one of the most commonly used hemostats is Tachosil ® (Takeda Pharmaceutical Company Limited, Tokyo, Japan), which is composed of human fibrinogen and thrombin combined with equine collagen. It is a porous patch that is activated with saline and adheres effectively. Few side effects have been reported, including hypersensitivity to human proteins and the risk of ileus in abdominal surgeries. Unfortunately, it is expensive and limited to local applications. Moreover, its application in hard-to-reach areas is challenging, and it is nearly impossible to use in thoracoscopic procedures, making it unsuitable for minimally invasive surgeries. In contrast, Starsil ® Hemostat powder can be applied over larger areas of the surgical field, particularly in difficult-to-access regions or through endoscopic instruments. Starsil ® Hemostat is a safe and effective adjunct for hemostasis during cardiac surgery. There were no unusual perioperative changes in laboratory parameters or body temperature, and no adverse in-hospital or post-discharge events were attributed to the use of the powder. This hemostatic powder may hasten hemostasis, decrease transfusion requirements, and reduce operation time. Further and more extensive studies are needed to confirm this.
Chinese Guidelines for the Diagnosis and Management of Tumefactive Demyelinating Lesions of Central Nervous System
1cb79a18-ba82-4977-8931-966efaa5277a
5547837
Physiology[mh]
CLINICAL CHARACTERISTICS Onset of the disorder There is a lack of data about the prevalence and incidence of TDLs. The onset of TDLs is usually acute or subacute, with fewer cases having a chronic onset. Patients rarely have prodromal symptoms, though some may have a history of vaccination or a cold. There is no gender predominance. Onset occurs mainly in young- and middle-aged patients, although TDLs may occur at any age. The average age of onset is 35 years in the cases reported in China, while the age of onset is older in other countries; for the 15 cases reported by Kim et al ., the mean age of onset was 42 years. Disease course Previously, experts proposed that TDLs were an intermediate type between MS and acute disseminated encephalomyelitis (ADEM). Adolescent ADEM may have comorbid TDLs. Poser and Brinar believed that TDL is a phenotypic variant of classic MS, as did Lolekha and Kulkantrakorn. Recently, studies in China and other countries showed that most TDLs were monophasic, and some cases may transform to relapsing-remitting MS (RRMS) or take the form of recurrent TDLs. Very few cases of TDLs may overlap with neuromyelitis optica spectrum disorders (NMOSDs). Symptoms and signs Most of the lesions of TDLs are in the brain, with fewer in the spinal cord. In comparison with glioma, most patients with TDLs have more severe symptoms and signs. Less frequently, however, patients with TDLs may have large lesions on neuroimaging with relatively mild symptoms and signs, similar to glioma. Patients with TDLs often present with headaches, slurred speech, and weakness of the limbs. Some patients may have cognitive and psychiatric symptoms such as memory loss, retardation, and apathy, which are easily neglected by patients and family members. The symptoms usually deteriorate, or additional symptoms appear, with the progression of the disorder, and visual impairment may occur. The symptoms and signs of TDLs are related to the location and extent of the lesions, and symptoms may deteriorate or new symptoms may occur during an attack; seizures occur less frequently, being more typical of glioma. Disseminated or multiple lesions of TDLs may affect cognition and even cause incontinence of urine and stool. The white matter of the brain is most frequently involved in TDLs, and cortical and subcortical areas can also be involved. The lesions can be single or multiple, are often bilateral, and rarely the spinal cord is involved simultaneously. The frontal lobes are most frequently involved; the temporal and parietal lobes, basal ganglia, corpus callosum, and centrum semiovale can also be involved.
AUXILIARY EXAMINATION Cerebrospinal fluid and blood tests Cerebrospinal fluid (CSF) tests: Intracranial pressure is usually normal or slightly elevated, and protein levels are normal or slightly to moderately elevated. Cell count is usually within the normal reference range. Some cases may have mildly or strongly positive oligoclonal bands (OB). Levels of myelin basic protein (MBP) or the IgG index may be elevated. Persistently positive OB on dynamic observation may indicate the possibility of MS transformation. Serum test: NMOSD transformation may occur in very few cases of TDLs, with seropositive AQP4 antibody. Cases with positive extractable nuclear antigen antibodies tend to be more likely to relapse. Electrophysiology Electrophysiological studies are not specific for TDLs, but visual, brainstem, and somatosensory evoked potentials, serving as subclinical evidence, can be used for localizing the lesions and determining the extent of TDLs. Neuroimaging Lesions of TDLs can be classified into three types based on morphological features on neuroimaging [Figures – ]: (1) diffuse infiltrating lesions, with unclear margins and uneven enhancement, taking a diffusely infiltrating growth pattern on T2-weighted images (T2WI) [Figure and ]; (2) ring-shaped lesions: round or round-like lesions with closed-ring- or open-ring-shaped enhancement; and (3) megacystic lesions: hypointense on T1-weighted images (T1WI) and hyperintense on T2WI, with a clear margin and ring-shaped enhancement .
Head computed tomography On plain head computed tomography (CT), most TDLs are hypodense , and a few lesions are isodense , without obvious enhancement on contrast-enhanced imaging. Brain magnetic resonance imaging Plain magnetic resonance imaging (MRI): Lesions of TDLs usually show hypointensity on T1WI and hyperintensity on T2WI on plain MRI and are larger than they appear on CT. In general, in 70–100% of patients with TDLs, the lesions show hyperintensity on T2WI with clear margins , and hypointensity on T2WI may exist at the margin of some lesions . Most lesions of TDLs have mass effect [Figures , , , and ], though less severe than that of brain tumor, and edema around the lesions is often observed. In the acute or subacute phase, the edema is mainly cytotoxic and shows high signal on diffusion-weighted imaging (DWI) . The lesions may decrease or diminish within several weeks after treatment with steroids. Contrast-enhanced brain MRI: Due to the breakdown of the blood-brain barrier, in the acute and subacute phases of TDLs, various patterns such as nodular-, closed-ring-, open-ring-, and flame-shaped enhancement are noted on gadolinium-diethylenetriaminepentaacetic acid contrast-enhanced MRI. Open-ring-shaped enhancement (also named “C-shaped” enhancement, ), i.e., perilesional discontinuous semi-ring-shaped enhancement, is most characteristic. In addition, for some lesions of TDLs, the dilated venules show a “comb” structure [Figures and ], which is perpendicular to the lateral ventricles and often observed in the acute or subacute phase; this imaging feature is relatively specific for TDLs and is not observed in brain tumor. In China, a study of 60 cases with TDLs showed that the lesions of TDLs were dynamic, consistent with disease progression: (a) in the acute phase (≤3 weeks after onset), the lesions showed patchy or nodular enhancement ; (b) in the subacute phase (4–6 weeks after onset), the lesions gradually evolved to “open-ring”-, “closed-ring”-, or “rosette”-shaped enhancement, combined with patchy enhancement ; (c) in the chronic phase (>7 weeks after onset), the lesions still demonstrated “open-ring”- or “closed-ring”-shaped enhancement, and the enhancement gradually became weakly patchy or vanished. Magnetic resonance spectroscopy Magnetic resonance spectroscopy (MRS) may reflect the metabolism in the tissues of a lesion and is valuable for the differential diagnosis of TDLs with PCNSL. TDLs on MRS show the following: elevation of the choline (Cho) peak, reduction of the N-acetylaspartate (NAA) peak, and, in most lesions, some elevation of the lactate peak . Perfusion-weighted imaging Perfusion-weighted imaging can be used for the differential diagnosis of TDLs with brain tumors. Hyperperfusion is usually observed in glioma [Figure and ], which does not occur in TDLs [Figure and ].
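Because the stage-dependent enhancement pattern above is essentially a lookup from time since onset to expected appearance, it can be summarized in a short sketch. This is an illustrative restatement of the staging reported in the 60-case study, not a validated clinical tool; the function name and return strings are invented for the example.

```python
# Illustrative sketch of the stage-dependent enhancement patterns described
# in the 60-case Chinese TDL study; not a validated clinical tool.

def expected_enhancement(weeks_since_onset: float) -> str:
    """Map time since onset to the contrast-enhancement pattern typically seen."""
    if weeks_since_onset <= 3:
        # Acute phase: patchy or nodular enhancement.
        return "acute: patchy or nodular enhancement"
    if weeks_since_onset <= 6:
        # Subacute phase: evolution toward ring-like patterns.
        return "subacute: open-ring/closed-ring/rosette enhancement with patchy areas"
    # Chronic phase: ring patterns persist, then fade to weak patchy enhancement.
    return "chronic: ring-shaped enhancement fading to weak patchy or none"

for w in (2, 5, 10):
    print(f"{w} weeks -> {expected_enhancement(w)}")
```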
PATHOLOGY The lesions in TDLs involve mainly the white matter, and cortical and subcortical areas can also be involved . Pathological features of TDLs are as follows: (1) loss of tissue structure and demyelination are detected using hematoxylin and eosin (HE) and myelin staining, respectively; (2) axonal and immunohistochemical neurofilament staining reveal axon preservation in the area of demyelination; (3) HE and immunohistochemical staining for CD68 demonstrate phagocytosis of myelin by macrophages within the lesions; in the acute phase of the disorder, Luxol fast blue staining reveals that macrophages are filled with myelin debris in the cytoplasm; (4) perivascular lymphocytic “cuffing” and infiltration can be observed in and around the lesions, and the lymphocytes are mainly T-cells; (5) HE and immunohistochemical staining for glial fibrillary acidic protein (GFAP) demonstrate various degrees of astrocytosis in the lesions, and the astrocytes have prominent cytoplasm, eccentric nuclei, and multiple star-shaped processes on GFAP or Holzer staining; (6) in most lesions, scattered Creutzfeldt cells (eccentric, enlarged astrocytes) can be observed, characterized by abundant cytoplasm, weak staining, loss of the nuclear membrane, and irregular chromatin, so-called “aborted karyokinesis”; such lesions are easily misdiagnosed as glioma. The Creutzfeldt cell is not by itself diagnostic of TDLs but is helpful for the diagnosis in combination with pathological demyelination; (7) the pathological features change with the clinical course of TDLs. In the acute phase (≤3 weeks after onset), the picture is consistent with the acute stage of inflammatory responses: inflammatory activity is prominent in the lesions, with massive loss of myelin and various degrees of axonal swelling. In the subacute phase (4–6 weeks after onset), the picture is consistent with the chronic stage of inflammatory responses: clear lesion margins, relative axonal preservation, and macrophages containing myelin debris aggregated in a radial pattern around the lesions. In the chronic phase (>7 weeks after onset), smoldering or absent inflammatory activity is the main pathological feature: partial remyelination in the lesions, no active inflammation and few inflammatory cells in the core of the lesions, macrophages and microglia in the peripheral area of the lesions, and degraded myelin seldom found in these cells. Gradual remyelination is the main picture in the inflammation-inactive stage of the lesions [ Supplementary Material 1 ]. Supplementary Material 1: Commonly used staining for tumefactive demyelinating lesions in pathological studies. Caution (1) Limitations of brain biopsy and pathological study should be considered. Some patients with brain tumors may be misdiagnosed as having TDLs due to atypical pathological features. As exacerbation of the disease occurs during follow-up, such patients are finally diagnosed with brain tumors after repeated or even multiple brain biopsies. Thus, for patients with atypical pathological or neuroimaging findings, repeat pathological and neuroimaging studies are important.
(2) Studies in other countries have found that prebiopsy steroid use is one of the common factors causing atypical pathology, especially for PCNSL, so prebiopsy steroids should be avoided. (3) The location of the biopsy is another factor that may determine the yield of the pathological study; thus, sampling from the lesion area with strong contrast enhancement on MRI is appropriate because it reflects the immune activity in the lesion. DIAGNOSTIC CRITERIA Based on the clinical presentation and the results of laboratory workup, neuroimaging, and pathological studies, the diagnosis of TDLs comprises basic, supportive, warning, and exclusion items. Three categories of TDL diagnosis are recommended [ Supplementary Material 2 ]: Supplementary Material 2: Flowchart for the diagnosis of tumefactive demyelinating lesions. Pathologically definite TDLs: typical pathology of TDLs, without other findings to exclude the diagnosis. Clinically definite TDLs: must satisfy the following: (1) no evidence exists for excluding the diagnosis; (2) all basic diagnostic criteria are met; (3) four out of six of the supportive items are met; (4) no warning items exist. Clinically probable TDLs: the following must be satisfied: (1) no exclusion items exist; (2) all basic diagnostic criteria are satisfied; (3) at least four of the six supportive items are satisfied; (4) warning item(s) exist but can be countered by supportive items: (a) if one warning item exists, at least one supportive item should exist; (b) if two warning items exist, at least two supportive items should exist; and (c) more than two warning items are not allowed. Details of the diagnostic criteria Basic criteria Persistent symptoms and signs >24 h, with progression over a period of time, with or without neurological deficits. Brain MRI (≥1.5T): one or multiple lesions, at least one lesion with mass effect, with or without edema, and the size of the lesion in the long dimension ≥2 cm. Mass effect rating scale: (a) mild: sulcal effacement; (b) moderate: ventricular compression; (c) severe: midline shift, uncal herniation, or subfalcine herniation. Edema rating scale: (a) mild: <1 cm; (b) moderate: 1–3 cm; (c) severe: >3 cm. Mainly the white matter is involved. Hypodense or isodense lesion on head CT. The patient's clinical presentation and the results of laboratory workup and neuroimaging cannot be explained by other intracranial lesions.
Supportive items For the clinical symptoms and signs, three of the following four items need to be satisfied: (1) onset in young adults or adults; (2) acute or subacute onset; (3) headache as the initial symptom; (4) the severity of the disease is consistent with the neuroimaging findings (for some infectious diseases, the clinical symptoms and signs are more obvious than the neuroimaging findings, while glioma is the opposite). For laboratory workup, three of the following five items need to be satisfied: (1) normal or mildly elevated intracranial pressure (usually ≤240 mmH 2 O); (2) normal or mildly elevated cell count (usually ≤50/mm 3 ); (3) normal or mildly to moderately elevated protein in CSF (usually ≤10,000 mg/L); (4) positive CSF-OB and/or elevated MBP; (5) positive serum AQP4. For neuroimaging, one of the following two items needs to be satisfied: (1) multiple foci, but not miliary, with both hemispheres involved; (2) clear margin of the lesion (sometimes a hypointense margin on T2WI). Dynamics of the lesions on contrast-enhanced MRI across different clinical stages (≤3 weeks, 4–6 weeks, and >7 weeks): the same lesion shows “nodular”- or “patchy”- to “circular”- (“open-ring”-, “rosette”-, “flame”-) shaped enhancement, and then the enhancement gradually fades. A lesion with “ring”-shaped enhancement is detected on contrast-enhanced MRI, with the following features: the “ring” is not continuous, with one or multiple openings, thus showing “open-ring”-, “C”-, or inverse “C”-shaped enhancement. Positive “comb” sign: “comb”-shaped dilated venules within the paraventricular lesions on contrast-enhanced MRI. Warning items The diagnosis of TDLs is less likely if any of the following exist: One of the following clinical features: (1) age of onset >60 years; (2) insidious onset, with a disease course longer than 1 year; (3) more severe findings on neuroimaging with less severe symptoms and signs; (4) meningeal irritation signs; (5) fever lasting >24 h without other known etiologies. Seizures as initial symptoms. Lesion with a vague margin on T1WI and/or T2WI. Bleeds and necrosis within the lesion, or hypointense or mixed hyper- and hypo-intense signals in the lesion on DWI. Contrast-enhanced MRI: a lesion with a regular appearance, smooth outer wall, and closed “ring” shape. MRS: Cho/NAA ≥2 or a high lipid peak in the region of interest. Relapse of the disease within 3 months after treatment with high-dose steroids. Exclusion criteria Tumor cells in the CSF study. Hyperdense lesion on head CT (calcification, bleeds, and spongiform vascular malformation are not included). Contrast-enhanced MRI: (1) typical findings of PCNSL: even patchy-shaped enhancement, “notch” or “closed fist” signs; (2) typical findings of glioma: basilar artery “embedding” sign; (3) other typical findings of tumor or nontumor lesions. Arterial spin labeling (ASL) or perfusion-weighted imaging (PWI): obvious hyperperfusion in a lesion. Positron emission tomography-CT: high metabolism in a lesion. A definite diagnosis of a non-inflammatory-demyelinating disease, e.g., tumor, infection, or angiitis.
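The three diagnostic categories above amount to a small decision procedure over the item groups, which can be sketched as follows. This is an illustrative reading of the criteria, not an endorsed implementation; the class and function names are invented for the example, and real use would require evaluating the full item definitions given above.

```python
# Illustrative sketch of the three diagnostic categories; the names are
# invented for the example and this is not a validated clinical tool.
from dataclasses import dataclass

@dataclass
class TdlFindings:
    exclusion_items: int      # count of exclusion criteria present
    basic_criteria_met: bool  # all basic criteria satisfied
    supportive_items: int     # count of supportive items met (out of six)
    warning_items: int        # count of warning items present
    typical_pathology: bool   # biopsy shows typical TDL pathology

def classify_tdl(f: TdlFindings) -> str:
    if f.typical_pathology and f.exclusion_items == 0:
        return "pathologically definite TDL"
    if f.exclusion_items > 0 or not f.basic_criteria_met or f.supportive_items < 4:
        return "TDL not supported"
    if f.warning_items == 0:
        return "clinically definite TDL"
    # Warning items must be countered one-for-one by supportive items,
    # and more than two warning items are not allowed.
    if f.warning_items <= 2 and f.supportive_items >= f.warning_items:
        return "clinically probable TDL"
    return "TDL not supported"

print(classify_tdl(TdlFindings(0, True, 5, 1, False)))  # clinically probable TDL
```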
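The three diagnostic categories above amount to a small decision procedure. The following is a minimal, non-clinical Python sketch of that logic, intended only to illustrate the consensus text: the class and field names are ours, and the literal reading of the warning-item countering rule (each warning item countered one-for-one by supportive items) is an assumption, not a validated tool.

from dataclasses import dataclass

@dataclass
class TDLWorkup:
    typical_pathology: bool   # typical TDL pathology on biopsy
    exclusion_items: int      # number of exclusion items present
    basic_criteria_met: bool  # all basic diagnostic criteria satisfied
    supportive_items: int     # supportive items satisfied (0-6)
    warning_items: int        # number of warning items present

def classify_tdl(w: TDLWorkup) -> str:
    if w.exclusion_items > 0:
        return "not TDLs (exclusion item present)"
    if w.typical_pathology:
        return "pathologically definite TDLs"
    if not w.basic_criteria_met or w.supportive_items < 4:
        return "diagnostic criteria not met"
    if w.warning_items == 0:
        return "clinically definite TDLs"
    # Each warning item must be countered by at least one supportive item,
    # and more than two warning items are not allowed (literal reading of
    # the consensus text; under this reading the required four supportive
    # items already counter up to two warning items).
    if w.warning_items <= 2 and w.supportive_items >= w.warning_items:
        return "clinically probable TDLs"
    return "diagnostic criteria not met"

print(classify_tdl(TDLWorkup(False, 0, True, 5, 1)))  # clinically probable TDLs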
DIFFERENTIAL DIAGNOSIS

Astrocytoma

(1) Clinical characteristics: in astrocytoma, the mass effect is prominent on neuroimaging, while the symptoms are relatively mild in comparison with TDLs. The reason is that glioma cells grow slowly along and between nerve fibers and cause little damage to neurons and their fibers. Statistical analysis showed that about 25% of patients with TDLs present with headaches and are easily misdiagnosed as having brain tumors, and 20% of patients with astrocytoma have seizures as the initial presentation, while seizures as the initial presentation of TDLs have seldom been reported. (2) Head CT: more than half of astrocytomas show hyperdense or isodense lesions, while over 98% of TDLs show hypodense lesions, which is significant for the differential diagnosis. (3) Plain brain MRI: in comparison with TDLs [Figures and ], astrocytoma shows slightly hypointense or isointense signals on T1WI and a vague margin on T2WI, and the lesion has a prominent mass effect, significant perilesional edema, and midline shift even when the tumor is not large; some astrocytoma lesions show increasingly high signals on DWI as the disease progresses. In high-grade astrocytoma with necrosis, bleeding, and cysts, low or mixed signals can be observed within the high signal on DWI, while the DWI signals of TDLs gradually weaken as the disorder progresses. (4) Contrast-enhanced MRI: astrocytoma shows diverse patterns of enhancement, mainly nodular, massive, or foggy, depending on the pathological type or WHO grade, and glioblastoma more easily becomes cystic, hemorrhagic, and necrotic on neuroimaging. (5) Functional MRI (fMRI): fMRI, including MRS and ASL, can be used for the differential diagnosis. Glioblastoma may have a high lipid peak, and astrocytoma may have Cho/NAA ≥2; thus, significantly elevated lipid and Cho/NAA levels are clinically meaningful. Some glioma foci show hyperperfusion on PWI or ASL, which is more obvious for high-grade glioma [Figure and ], while TDL lesions frequently show isoperfusion or mild hypoperfusion. (6) Specific neuroimaging signs: (a) the “comb” sign on contrast-enhanced MRI [Figures and ] is relatively specific for TDLs; (b) the “wrapped basilar artery” sign in the pons is highly indicative of astrocytoma.

Primary central nervous system lymphoma

(1) Clinical features: patients with PCNSL usually present with cognitive impairment, including memory loss, as the initial symptom, and some may present with vision decline. However, patients with TDLs often present with headache as the initial symptom, with few patients having accompanying vision decline. (2) Head CT: most PCNSLs show hyperdense or isodense lesions; a few PCNSLs have hypodense lesions on head CT in the early stage, and the lesions can gradually become hyperdense with disease progression.
Endocentric (globular) enhancement is usually seen on contrast-enhanced CT. (3) Brain MRI: in comparison with PCNSL, most TDL lesions have clear margins on T2WI, with a relatively limited area involved and less severe mass effect; PCNSL lesions usually show a high signal on DWI, and the signal becomes even higher as the disorder progresses. (4) Contrast-enhanced MRI: PCNSL lesions show evenly enhanced patchy or globular signals, and a “notch” sign, “angle” sign, or raindrop-shaped appearance can be observed, which differ from the “comb” sign and the dynamic changes of TDL lesions. (5) fMRI: in comparison with TDL lesions, PCNSL lesions usually show Cho/NAA ≥2 and a high lipid peak, which are important characteristics for the differential diagnosis of the two disorders.

Primary angiitis of central nervous system

PACNS is an idiopathic inflammatory disorder of small arterioles, originating in the CNS and characterized by multiple lesions with mass effect. The clinical and neuroimaging presentations of PACNS are difficult to distinguish from those of TDLs, and the two are easily misdiagnosed as each other. The pathological findings of PACNS may also be atypical and easily misdiagnosed as TDLs. In comparison with TDLs, some characteristics of PACNS can be used for the differential diagnosis: (1) the onset is usually acute, the lesions are closer to the cortex, and seizures are more common; (2) because the cortex is more often involved, gyrus-shaped enhancement is observed on contrast-enhanced MRI, and sometimes the midline structures can be involved, often bilaterally; (3) perilesional edema and mass effect are less severe than in TDLs; (4) laboratory workups: reports from other countries show that in some 30% of PACNS cases, mild to moderate elevation of platelets can be observed, and p-ANCA and c-ANCA are positive, which is somewhat valuable for the differential diagnosis; (5) in some patients in the acute and subacute phases of PACNS, necrosis with bleeding may occur; hyperintensity on T1WI, hypointensity on T2WI, and low or mixed signals on DWI can be observed, and the bleeding can be confirmed by SWI; (6) the response of PACNS to steroids is relatively slow, and the enhancement of the lesion on MRI is less likely to decrease quickly with steroid treatment; (7) based on pathological features, PACNS can be classified into lymphocytic infiltrate, granulomatous, and acute necrotic types; microscopically, inflammatory cell infiltration and necrosis around blood vessels and occlusion of the involved vessels can be observed, which is distinguishable from TDLs.

Other

Germinoma and metastatic brain tumors can also show hyperdensity on head CT. Other MRI signs can also be detected for germinoma: in basal ganglia germinoma, atrophy of the ipsilateral cerebral peduncle and shift of the lateral ventricle toward the tumor can be observed. In addition, germinoma has an early age of onset, with male predominance. Metastatic brain tumors are usually secondary to pulmonary or breast cancers; multiple lesions are usually seen, located in the subcortical area, which has an abundant blood supply. Some lesions may show circular-shaped enhancement, and others may show cystic enhancement. The age of onset and sex predominance are related to the primary tumors.
TREATMENT

(1) For pathologically and clinically definite TDLs, treatment can be initiated. (2) For clinically probable TDLs, biopsy is recommended depending on the location of the lesion and the assessed risk of the procedure. If the pathological findings are atypical and the diagnosis of TDLs cannot be made, the biopsy and pathological study can be repeated after identifying the reason for the unsuccessful study, and the treatment plan can be worked out according to the pathological findings. (3) If the patient cannot be diagnosed based on the pathological findings and repeat biopsy cannot be done for various reasons, steroids can be given if there are no contraindications. The patient should be reassessed with contrast-enhanced MRI after steroids. If the lesions resolve completely or mostly, glioma is very unlikely. The patient needs to be followed up on a regular basis, and if relapse or exacerbation occurs within half a year, lymphoma should be considered.

TDL is a special type of demyelinating disease of the CNS. Based on recent reports of its prognosis, most TDLs are monophasic and few cases relapse; some cases may overlap with MS and NMOSD. For relapsing TDLs, treatment similar to that for MS and NMOSD can be initiated, including management in the acute and remitting phases (disease-modifying management), neurotrophic treatment, symptomatic treatment, rehabilitation, and counseling for daily living. Because most TDL cases are monophasic and unlikely to relapse, and the lesions are relatively large, the treatment of this disorder is unique: it differs both from the “sustained low-dose steroid” treatment for NMOSD and from the “short-term steroid” treatment for MS. Because the treatments of NMOSD and MS differ so markedly, serum AQP4 antibodies should be determined first. Positive AQP4 antibodies may indicate transformation of TDLs to NMOSD, and TDL patients with positive AQP4 may have a high relapse rate and relatively more obvious neurological deficits. In the acute and/or relapse phases, TDLs can be treated according to the 2016 “China NMOSD diagnosis and management guidelines”; if the AQP4 antibodies are negative, the recommended management is as follows.

Treatment of tumefactive demyelinating lesions in the acute phase

Therapeutic goal: alleviate the symptoms of the acute phase, shorten the disease course, improve the neurological deficits, reduce or even resolve the lesions to reach remission or cure on neuroimaging, and prevent complications.

Indication: first attack of TDLs, or a new attack with objective neurological deficits.

Medications and usage:

Steroids

As the first-choice treatment, steroids can alleviate the symptoms of the acute phase of TDLs and reduce lesion size and enhancement on imaging. However, in comparison with MS, TDL lesions are larger and the symptoms more severe; thus, the taper after pulse steroid treatment may need to last longer to avoid relapse or exacerbation of the disorder. The principle of steroid treatment is pulse therapy at high dose followed by a slow taper.
Approach: (a) Adults: methylprednisolone 1000 mg/d, intravenously (IV) over 3–4 h, for 3–5 days, then taper, cutting the dose in half at each step, with each dose continued for 2–3 days; when the dose has been cut down to 120 mg/d and then 80 mg/d, switch to oral prednisone 40 mg/d for 3 days, then taper to 32 mg/d for 3 days and 28 mg/d for 3 days, after which one tablet is cut down each week until discontinuation (a sketch of this schedule follows this section). (b) Children: methylprednisolone 20–30 mg·kg⁻¹·d⁻¹ (≤1000 mg), IV over 3–4 h, for 5 days. In consideration of adverse effects in children, short-term use is recommended. If full resolution of the disease is reached, oral prednisone 1 mg·kg⁻¹·d⁻¹ can be started and then cut down by 5 mg every other day until discontinuation; if the symptoms alleviate slowly, the dose can be cut in half every 2–3 days. When the dose of methylprednisolone tapers to 80 mg/d, oral prednisone is used at the same doses as above. Most TDLs are responsive to steroids and can resolve with pulse IV methylprednisolone followed by an oral prednisone taper; during the steroid taper, if the patient has a rebound of symptoms or new symptoms, another round of pulse IV methylprednisolone can be tried, or one course of intravenous immunoglobulin can be given (for more details, see below).

Precautions: (a) Steroids should be given in the morning, which is consistent with the rhythm of endogenous steroid secretion and reduces inhibition of the hypothalamic-pituitary-adrenal axis. (b) High-dose steroids can cause cardiac arrhythmia, so IV steroids should not be given too quickly; the infusion should be finished in 3–4 h. Steroids should be held, and timely effective measures provided, if arrhythmia occurs. (c) Other adverse effects include hypokalemia, hyperglycemia, high blood pressure, dyslipidemia, upper gastrointestinal bleeding, osteoporosis, and femoral head necrosis. Simultaneous use of a proton pump inhibitor and supplementation of potassium, calcium, and vitamins should be considered. In addition, high-dose steroids can cause insomnia, which can be managed with zolpidem. (d) For patients with suspected PCNSL, steroids should be avoided before biopsy because they can lead to atypical transformation on neuroimaging and pathology, complicating the diagnosis.

Combined use of steroids and immunosuppressants

For patients who do not respond to steroids, immunosuppressants such as azathioprine, cyclophosphamide, mycophenolate mofetil, methotrexate, and tacrolimus may be considered, although no evidence supports their use for TDLs. For detailed usage and cautions for these medications, refer to the 2016 “China NMOSD diagnosis and management guidelines.”

Intravenous immunoglobulin

No evidence supports its use for TDLs. It may be suitable for patients with positive serum AQP4 antibodies or patients who cannot be treated with steroids, do not respond to steroids, or are not suitable for immunosuppressants, i.e., during pregnancy or breastfeeding, and in children. The recommended dose is 0.4 g·kg⁻¹·d⁻¹, IV, for 5 days as one course of treatment.
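The adult taper described above is essentially a step-by-step dose schedule, so it can be written down mechanically. The Python sketch below expands it into a list of steps under stated assumptions: the intermediate IV doses between 1000 and 120 mg/d (the text only says to cut the dose in half each time, for 2–3 days per step) and the 4 mg tablet size used for the weekly oral reduction are our guesses. It is an illustration of the schedule's shape, not a prescription.

def adult_taper(tablet_mg: float = 4.0):
    # IV methylprednisolone: pulse, then halve every ~3 days; 500 and 250 mg/d
    # and all step durations are assumed midpoints, not stated in the text.
    schedule = [("IV methylprednisolone %d mg/d" % dose, days)
                for dose, days in [(1000, 4), (500, 3), (250, 3), (120, 3), (80, 3)]]
    # Switch to oral prednisone at the doses named in the text.
    schedule += [("oral prednisone %d mg/d" % dose, days)
                 for dose, days in [(40, 3), (32, 3), (28, 3)]]
    # Then one tablet less each week until discontinuation (tablet size assumed).
    dose = 28 - tablet_mg
    while dose > 0:
        schedule.append(("oral prednisone %g mg/d" % dose, 7))
        dose -= tablet_mg
    return schedule

for step, days in adult_taper():
    print(f"{step} for {days} day(s)")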
Maintenance management for relapsing tumefactive demyelinating lesions

Therapeutic goal: slow the progression and prevent relapse of the disorder.

For patients whose symptoms are consistent with MS (i.e., disseminated in time and space), management includes immunosuppressants and disease-modifying therapy (DMT); for patients whose diagnosis is not consistent with MS or NMOSD, treatment with an immunosuppressant is an option even though evidence is lacking.

Main DMTs: The US Food and Drug Administration has approved 10 DMTs for MS: (a) first line: Betaseron (interferon β-1b), Extavia (interferon β-1b), Rebif (interferon β-1a), Avonex (interferon β-1a), glatiramer acetate, dimethyl fumarate, and teriflunomide; (b) second line: natalizumab; (c) third line: mitoxantrone. Currently, the DMTs approved by the China Food and Drug Administration are Betaferon and Rebif. Cases of TDLs induced by fingolimod have been reported in other countries; thus, it should be used with caution for the treatment of TDLs. Although evidence for treating TDLs with DMT is lacking, several international studies have confirmed the efficacy (level A evidence) of Betaseron and Rebif for MS. Compared with placebo, Betaseron and Rebif both can: (a) reduce the rate of conversion of clinically isolated syndrome to clinically definite MS; (b) significantly reduce the number and volume of active lesions on MRI T2WI; (c) reduce the recurrence rate of RRMS by 34% and reduce the number and volume of new lesions; and (d) restrain the progression of disability in patients with MS.

Recommendations: For relapsing TDLs with negative serum AQP4-IgG, DMT can be used. For approaches and precautions, see the 2014 “Diagnosis and treatment of MS experts consensus China.”

Immunosuppressants: these medications can be considered third line if the TDLs meet the diagnostic criteria of MS, while for TDLs not fulfilling the criteria of MS or NMOSD, they can be used as first line. Azathioprine, cyclophosphamide, and mycophenolate are commonly used. For detailed information on approaches and precautions, see the 2016 “China NMOSD diagnosis and treatment guidelines.”

Neuroprotection

B vitamins, including vitamin B1, mecobalamin, vitamin B complex, and folic acid, can be used at conventional doses. In addition, nerve growth factor, ganglioside, and citicoline can also be tried.

Symptomatic management

Depression/anxiety: recommended medications include selective serotonin reuptake inhibitors (SSRIs), serotonin-norepinephrine reuptake inhibitors (SNRIs), noradrenergic and specific serotonergic antidepressants (NaSSAs), and serotonin 1A receptor agonists, e.g., tandospirone.

Cognitive impairment: a cholinesterase inhibitor can be used.

Headache: headache is one of the common initial symptoms. For headaches related to intracranial hypertension, mannitol or fructose-added glycerol solution can be used. Other analgesic medications can be used for the management of headaches.

Painful spasticity: carbamazepine, oxcarbazepine, pregabalin, gabapentin, baclofen, and tizanidine can be chosen for the management of pain related to spasticity.

Chronic pain and paresthesia: pregabalin; antidepressants/anxiolytics such as amitriptyline, a 5-HT1A receptor agonist, an SSRI, SNRI, or NaSSA; and tizanidine can be used as pharmacotherapy. Psychotherapy can also be used as a supplementary choice.

Fatigue and lethargy: modafinil and amantadine can be used.
Dysfunction of bowel and bladder: (a) for urinary incontinence, imipramine, oxybutynin, prazosin, and tamsulosin can be used; (b) catheterization can also be used for urinary incontinence; (c) constipation: laxatives can be used, and severe cases can be managed with enemas; (d) sexual dysfunction can be managed with medications that improve function.

Rehabilitation and counseling on activities of daily living

After the acute phase of TDLs, some residual neurological deficits may exist, so rehabilitation (rehab) is important. Rehab for the extremities, language, and swallowing should be started early under the instruction of a rehab therapist. Patients should avoid factors that may trigger relapse of inflammatory demyelinating disorders of the nervous system, such as hot baths, exposure to high temperatures under sunshine, cigarette smoking, and vaccination. At the same time, patients should keep a stable and happy mood, maintain a healthy life pattern, and take mild to moderate exercise. Vitamin D supplementation can be given. Doctors should provide counseling for patients with relapsing TDLs, including counseling on marriage and pregnancy for female patients.
PROGNOSIS AND FOLLOW-UPS

Although no large series has been studied, the prognosis of TDLs is good in the limited available data. Liu et al. followed 60 cases of TDLs for 3–6 years and found that most had a good prognosis; only two patients died, of causes unrelated to TDLs. Most TDLs are monophasic, and some cases may relapse. Some may transform to MS or overlap with NMO, with transformation to MS being the most common. Similar results have been reported in other countries. The difference between our report and reports from other countries is that the frequency of relapse of TDLs was lower in ours.
In our follow-up data, the maximum number of relapses was three, and the main findings were small patchy signals (more like MS), with few large TDL lesions. We found that some cases of TDLs were misdiagnosed even on pathological study. Some patients improved initially after treatment with steroids but then relapsed and deteriorated, with glioma or PCNSL finally diagnosed after surgery and repeat pathological studies (some of these cases showed hypodense lesions on CT in the early stage that later turned hyperdense). Thus, the following follow-up strategies are recommended: (1) follow-up with telephone interviews for all patients with TDLs (within the first 3 years after diagnosis: for pathologically definite TDLs, at least once a year; for clinically definite TDLs, at least once every 6 months; and for clinically probable TDLs, once every 3 months); (2) for those with relapsing TDLs, contrast-enhanced brain MRI should be repeated every 3–6 months; (3) for patients in whom the lesion reappears or becomes larger, head CT is recommended, and repeat biopsy may be necessary.

Committee of specialists (in order of the first letter of the family name): Zhong-Ping An (Tianjin Huanhu Hospital); Bi-Tao Bu (Tongji Hospital of Tongji Medical University, Huazhong Technology University); Li-Li Zeng (Shanghai Ruijin Hospital); Xiang-Jun Chen (Huashan Hospital of Fudan University); Jiang Cheng (General Hospital of Ningxia Medical University); Qi Cheng (Ruijin Hospital of Shanghai Jiao Tong University); Lan Chu (Affiliated Hospital of Guiyang Medical College); Hui-Qing Dong (Xuanwu Hospital of Capital Medical University); Yan-Hui Du (General Hospital of Ningxia Medical University); Rui-Sheng Duan (Qianfoshan Hospital of Shandong University); Cong Gao (The Second Affiliated Hospital of Guangzhou Medical University); Feng Gao (The First Hospital of Beijing University); Yang-Tai Guan (Renji Hospital of Shanghai Jiao Tong University); Li Guo (The Second Hospital of Hebei Medical University); Xue-Qiang Hu (The Third Affiliated Hospital of Sun Yat-sen University); De-Hui Huang (The PLA General Hospital); Wei-Zhong Ji (Qinghai Provincial People's Hospital); Tao Jin (The First Hospital of Jilin University); Jun Jing (Beijing Tongren Hospital of Capital Medical University); De-Hong Lu (Department of Pathology, Xuanwu Hospital, Capital Medical University); Hai-Feng Li (Qilu Hospital of Shandong University); Hong-Zeng Li (Tangdu Hospital of the Fourth Military Medical University); Jian-Guo Liu (Navy General Hospital); Ze-Yu Li (The Affiliated Hospital of Inner Mongolia Medical University); Zhu-Yi Li (Tangdu Hospital of the Fourth Military Medical University); Xiao-Ping Liao (Hainan Medical College); Guang-Zhi Liu (Peking University People's Hospital); Wei-Bin Liu (The First Affiliated Hospital of Sun Yat-sen University); Lin Ma (Department of Radiology, The PLA General Hospital); Xue-An Muo (The Institute of Neurology, Guangxi Medical University); Xiao-Kun Qi (Navy General Hospital); Xin-Yue Qin (The First Affiliated Hospital of Chongqing Medical University); Wei Qiu (The Third Affiliated Hospital of Sun Yat-sen University); Hong-Dang Qu (The Affiliated Hospital of Bengbu Medical College); Fu-Dong Shi (The General Hospital of Tianjin Medical University); Hong-Hao Wang (Nanfang Hospital of Southern Medical University); Jia-Wei Wang (Tongren Hospital of the Capital Medical University); Jin-Cun Wang (Xijing Hospital of the Fourth Military Medical University); Li-Hua Wang (The Second Affiliated Hospital of
Harbin Medical University); Man-Xia Wang (The Second Affiliated Hospital of Lanzhou University); Wei-Zhi Wang (The Second Affiliated Hospital of Harbin Medical University); Yong-Gang Wang (Renji Hospital of Shanghai Jiao Tong University); Dong-Ning Wei (The 309th Hospital of the Chinese People's Liberation Army); Wei-Ping Wu (South Building of Chinese PLA General Hospital); Xiao-Mu Wu (Jiangxi Provincial People's Hospital); Bao-Guo Xiao (The Institute of Neurology, Huashan Hospital of Shanghai Fudan University); Yan Xu (Peking Union Medical College Hospital); Zhu Xu (The Affiliated Hospital of Guizhou Medical University); Xian-Hao Xu (Beijing Hospital); Gang Yu (The First Affiliated Hospital of Chongqing Medical University); Hua Zhang (Beijing Hospital); Mei-Ni Zhang (The First Hospital of Shanxi Medical University); Xing-Hu Zhang (The Affiliated Tiantan Hospital of Capital University); Xu Zhang (The First Affiliated Hospital of Wenzhou Medical University); Yu-Wu Zhao (The Sixth People's Hospital of Shanghai Jiao Tong University); Kui-Hong Zheng (Department of Radiology, Navy General Hospital); Xue-Ping Zheng (The Affiliated Hospital of Qingdao University); Hong-Yu Zhou (West China Hospital of Sichuan University); Wen-Bin Zhou (Xiangya Hospital of Central South University); Ming Ren (Xuanwu Hospital of Capital Medical University).

Financial support and sponsorship

This study was supported by grants from the Biological Medicine and Life Sciences Innovation and Cultivation of Research Projects of the Beijing Municipal Science and Technology Commission (No. Z151100003915113) and the Capital Foundation of Medical Developments (No. 2009-2054).

Conflicts of interest

There are no conflicts of interest.
Novel insights into antioxidant status, gene expression, and immunohistochemistry in an animal model infected with camel-derived Trypanosoma evansi and Theileria annulata
Camels can survive under the harsh climatic conditions of the desert and contribute significantly to the socioeconomic uplift of a country, both as draft animals and as a protein source, as well as for various basic livelihood demands such as milk, meat, racing, riding, and packing. Although camels are known to adapt to adverse desert climates, they may suffer from numerous parasitic infections, which severely compromise their health and productivity. Hemoprotozoan diseases such as anaplasmosis, babesiosis, trypanosomiasis, and theileriosis adversely affect infected camels, causing massive economic losses by affecting the quality of milk, meat, and other animal byproducts. In Egypt, the two main vector-borne hemoprotozoans that typically manifest as chronic diseases in camels are tropical theileriosis (caused by Theileria annulata) and surra (caused by Trypanosoma evansi). The prevalence of T. evansi in Egyptian camels ranges from 22.7% to 50.51%. T. evansi infections lead to significant productivity losses and can be lethal if not quickly identified and treated. In Egypt, T. annulata is responsible for the subclinical form of the infection, but it causes significant alterations in blood and lipid profiles. T. annulata has been reported to infect 21.1% of camels, rising to 38% in some reports. Hemoprotozoan diseases may drastically impair the normal functioning of vital body organs. Infection by T. evansi and T. annulata results in significant financial losses due to the severe clinical disease in infected and carrier camels. Camels naturally tolerate a certain level of parasitemia, particularly in chronic cases, and the parasite may disappear from the blood while persisting in the bone marrow or other hemopoietic organs of the infected camel. Parasitological techniques, including direct microscopic observation of parasites in stained fixed blood films, are considered easy and inexpensive but lack sensitivity. However, during the acute stage of illness, when high parasite counts occur, microscopic diagnosis is more reliable. Serological methods lack sensitivity and specificity, as they rely on detecting antibodies against the parasite or parasite antigens in animal blood. Among these techniques, diagnosis through the detection of specific parasite DNA using the polymerase chain reaction (PCR) is considered highly useful for early diagnosis, even during the prepatent period or the chronic late phase of infection. Factors such as the amount and quality of DNA in the samples must be considered to maximize the effectiveness of PCR diagnosis. Parasitic infections can induce oxidative stress through the production of reactive oxygen species (ROS) by the host, primarily as a defense against pathogen invasion. The host's antioxidant defense systems are sometimes activated, but oxidative stress occurs when ROS levels overwhelm these systems. Excessive ROS production leads to oxidative damage to key cellular components: DNA, lipids, and proteins. ROS-induced lipid peroxidation compromises cell membrane integrity, while oxidative DNA damage can result in mutations, replication errors, and cell death. Oxidative protein damage impairs cellular functions. In the case of theileriosis, studies have demonstrated reduced antioxidant levels and elevated oxidative stress markers in affected animals. A cell-mediated immune response is the most significant factor in controlling intracellular parasites, as it prevents their proliferation within the host, which may otherwise trigger disease.
In addition to oxidative stress, the immune response plays a critical role in controlling intracellular parasites. A robust cell-mediated immune response is essential to prevent parasite proliferation and subsequent disease. Key cytokines, such as interferon-gamma (IFNγ), enhance host immunity by promoting parasiticidal activity, while interleukin-1 beta (IL-1β) has been shown to inhibit parasite replication in vitro. On the other hand, transforming growth factor 1 beta (TGF-1β), a regulatory cytokine, is less effective in acute trypanosomiasis. TGF-1β, produced by various immune cells, activates immune responses following microbial invasion and induces cytotoxicity against tumor cells.

Sampling and investigating camels pose significant challenges due to the nature of camel farming practices, which complicate rearing, handling, and restraint. Additionally, camels are predominantly distributed in tropical, developing regions, contributing to the limited understanding of camel parasite genotyping, cytokine production, and oxidative stress parameters during infection. This gap in knowledge underscores the need for studies like the present one. Therefore, the present study aimed to use molecular methods to identify two important blood protozoa infecting camels in Egypt (T. evansi and T. annulata). These parasites were subsequently used to induce experimental infections in mice. The study evaluated the expression of immune response genes (IFNγ, TGF-1β, and IL-1β) and oxidative stress parameters [superoxide dismutase (SOD), glutathione peroxidase (GPX), and catalase (CAT)] in both naturally infected camels and experimentally infected mice. Furthermore, the correlation between histopathological alterations and inflammatory reactions in the liver, spleen, and kidney, along with the immunohistochemical expression of caspase-3, proliferating cell nuclear antigen (PCNA), and TNF, was investigated to assess the impact of these infections.

Selection of Trypanosoma- and Theileria-infected blood samples

From April to September 2023, 190 fresh noncoagulated blood samples were collected in sterile dipotassium EDTA-coated vacutainer tubes from the jugular veins of one-humped camels (Camelus dromedarius) directly before slaughter at El-Basatin Abattoir, Cairo, Egypt. The samples were transported to the Parasitology Laboratory, Faculty of Veterinary Medicine, Cairo University, in an icebox. Each sample was identified and divided into two parts, one for molecular identification and the other for experimental infection. Giemsa-stained thin blood films were prepared and examined under a light microscope to assess the level of infection. Blood samples with considerable parasitemia from Trypanosoma or Theileria, without other parasitic infections, were inoculated into mice as an animal model to meet the study's objectives.

Stained thin blood film preparation

Giemsa-stained thin blood films were prepared from each noncoagulated camel blood sample using the slide-to-slide method. Trypanosoma organisms between red blood cells (RBCs), as well as Theileria spp. gametocytes within RBCs or schizonts inside circulating lymphocytes, were identified using an oil immersion lens (×1000). The level of parasitemia was calculated after investigating ten separate fields per sample (a minimal sketch of this calculation follows this subsection). The parasites were identified according to previous methods.
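As a small illustration of the parasitemia estimate mentioned above, the sketch below averages hypothetical per-field counts over the ten oil-immersion fields. The counting unit (parasites per field) and the function name are our assumptions, since the source does not specify the exact formula.

def parasitemia_level(counts_per_field):
    # Mean count over the ten separate fields examined per sample; the
    # per-field counts below are invented for illustration only.
    if len(counts_per_field) != 10:
        raise ValueError("expected counts from ten separate fields")
    return sum(counts_per_field) / len(counts_per_field)

fields = [12, 9, 15, 11, 8, 14, 10, 13, 9, 12]
print(f"mean parasites per field: {parasitemia_level(fields):.1f}")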
Molecular identification of the diagnosed parasites

DNA extraction, PCR, and sequencing

Accurate identification of the Trypanosoma and Theileria species in infected camel blood samples was achieved by phylogenetic analysis. Genomic DNA was extracted using the QIAamp DNA micro kit (Qiagen, USA), following the manufacturer's instructions. Samples from infected camel blood were genotyped using multiplex PCR (mPCR) against reference samples at the Central Laboratory for Evaluation of Veterinary Biologics, Agriculture Research Centre, Cairo, Egypt, following previous methods. According to previous studies, the following primer sets were used for mPCR: T. annulata: 5′ ACTTTGGCCGTAATGTTAAAC 3′ and 5′ CTCTGGACCAACTGTTTGG 3′; T. evansi: 5′ TGCTTGTTTCAAGGACTTAGCCA 3′ and 5′ CGCTGACTGAGAGAATCACGGTT 3′. The reaction was performed using Emerald Amp GT PCR master mix (Takara, Japan) in a 25 µL reaction mixture, which included 10 pmol of each primer (Metabion, Germany) and 5 µL of sample DNA as template. The thermal profile included 30 cycles of denaturation (95 °C for 50 s), primer annealing (50 °C for 50 s), and extension (65 °C for 1 min), with a final extension of 10 min at 72 °C. The PCR product was electrophoresed on a 1.5% agarose gel with 10 µL/mL SYBR Safe (Thermo Scientific) in Tris-acetate EDTA buffer at 100 V for 45 min and photographed under an ultraviolet transilluminator (ImageQuant LAS 4000, GE Healthcare Life Sciences, Hammersmith, UK). Positive samples were sequenced for the small subunit ribosomal RNA gene (18S rRNA) at Macrogen Europe (The Netherlands), and the output was compared with sequences in GenBank using BLAST (http://blast.ncbi.nlm.nih.gov/).

The tested animal models

Experimental animals

The study utilized 90 inbred Swiss albino mice obtained from the Department of Animal and Poultry Management and Behavior, Faculty of Veterinary Medicine, Cairo University. The mice were 6–8 weeks old and weighed 26–32 g. They were housed in separate, air-conditioned cages and provided unlimited access to filtered water and pelleted rodent feed. Humidity and room temperature were maintained at 50–70% and 25 ± 2 °C, respectively. During the 15-day acclimatization period, the animals appeared healthy, with no blood or enteric parasites detected by blood smears or coproparasitological investigation.

Animal infection

The 90 Swiss albino mice were divided into three equal groups (A, B, and C). Group A served as the control group, group B was inoculated with T. evansi-infected camel blood, and group C was inoculated with T. annulata-infected camel blood. Each mouse in the control group (A) received sterile PBS intraperitoneally. According to a previous study, 30 mice per isolate (groups B and C) were injected intraperitoneally with 0.5 mL of infected blood containing T. evansi (5 × 10⁵ trypanosomes/mL) or T. annulata (1 × 10⁷ Theileria-infected RBCs) to assess the immunological, hematological, and histopathological changes induced in mouse tissues by infection with T. evansi or T. annulata (the per-mouse inoculum arithmetic is sketched below). Daily blood examinations were performed to monitor infection status. Parasitemia was checked daily until infection was confirmed, and the overall death rate was recorded until day 30 postinfection (PI). Every 5 days, a blood film from the tail vein of five different mice from groups B and C was stained with Giemsa to determine the parasite count. All animals were sacrificed on day 30 PI for histopathological analysis.
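As a worked check of the inoculum arithmetic above: each mouse received 0.5 mL intraperitoneally, so group B received about 2.5 × 10⁵ trypanosomes per mouse. For group C the text gives 1 × 10⁷ Theileria-infected RBCs without an explicit unit; the sketch below assumes, for illustration only, that this figure is also per mL.

DOSE_ML = 0.5
groups = {
    "B (T. evansi)": 5e5,        # trypanosomes per mL, as stated in the text
    "C (T. annulata)": 1e7,      # infected RBCs per mL (unit is our assumption)
}
for group, conc_per_ml in groups.items():
    # organisms delivered per mouse = concentration x injected volume
    print(f"group {group}: {conc_per_ml * DOSE_ML:.2e} organisms per mouse")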
Oxidative stress markers

Camel and mouse blood was collected in tubes containing 0.5 mg/mL EDTA and stored at −20 °C until use. The blood samples were centrifuged for 10 min at 2000 rpm for the estimation of oxidative parameters. The buffy coat and plasma were removed, and the packed erythrocyte pellet was diluted in an EDTA-mercaptoethanol stabilizing solution at a 1:9 (v/v) ratio. The pellet was stored at 4 °C for further analysis. The activities of SOD, GPX, and CAT were measured using this 10% packed-erythrocyte suspension. All oxidative parameter tests were completed within 2 hours of sample collection.

Measurement of oxidative stress markers

Oxidative stress markers were assessed using specialized kits to measure SOD, GPX, and CAT activity in both positive and negative blood samples.

Assessment of cytokine expression

RNA extraction

Total RNA was extracted from the blood of naturally infected camels and from the liver, kidney, and spleen of mice experimentally infected with Trypanosoma and Theileria using 8.5 µL of sterile TRIzol reagent (Invitrogen Life Technologies, Carlsbad) and 10 pmol of primers (Metabion International AG); two microliters of template were used, following the manufacturer's instructions. The RNA was diluted in RNase-free water and stored at −80 °C for gene expression analysis. The concentration and purity of the RNA were determined using a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies Inc., Delaware, USA).

Real-time PCR (RT-PCR)

The cytokine genes IFNγ, IL-1β, and TGF-1β were quantified using real-time PCR, with beta-actin serving as the reference gene for normalization. The primers listed in Table were used for real-time RT-PCR on the Cepheid SmartCycler II (Sunnyvale, CA, USA). Samples were tested using SYBR Green PCR master mix (Applied Biosystems, USA), 0.5 µL of each primer (10 pmol), 1 µL of cDNA (400 ng), and 10.5 µL of RNase-free water. Positive and negative controls were included for each gene of interest. For the β-actin, IFNγ, TGF-1β, and IL-1β genes, amplification included an initial incubation at 95 °C for 5 min, followed by 40 cycles of denaturation at 95 °C for 30 s, annealing at 60 °C for 30 s, and extension at 72 °C for 30 s.

Histopathology and immunohistochemistry

Formalin-fixed tissues from the liver, spleen, and kidney of experimentally infected mice were trimmed, embedded in paraffin, and stained with hematoxylin and eosin (H&E) as described previously. Tissues were examined under a light microscope (Olympus BX43) connected to a digital camera (DP27) and Cell Sens Dimensions software. Degeneration, necrosis, and inflammation were scored as follows: 0 = normal; 1 = <25%; 2 = 25–50%; 3 = 50–75%; and 4 = >75%. The total lesion score for each mouse was calculated from six fields at 200× magnification. Following a standardized protocol, immunohistochemical staining for caspase-3, PCNA, and TNFα was performed using the avidin-biotin-peroxidase complex method. The area percentage of positive expression was calculated using ImageJ software (six fields/mouse at 200×).

Statistical analysis

Data were expressed as mean ± standard error. Statistical analyses were performed using SPSS version 28 (SPSS Inc., Chicago, IL, USA). A t-test for independent samples was applied, and a P-value of ≤0.05 was considered statistically significant. A minimal sketch of the lesion-score mapping and the group comparison is given below.
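To make the quantification and statistics concrete, the sketch below combines the 0–4 lesion-score mapping with a relative-expression calculation and an independent-samples t-test. The source states that beta-actin was the normalization reference but does not name the quantification formula, so the standard Livak 2^-ΔΔCt method is assumed here; all Ct values, the boundary handling at exactly 25/50/75%, and the function names are hypothetical illustrations.

import numpy as np
from scipy import stats

def lesion_score(pct: float) -> int:
    # Map an affected-area percentage to the 0-4 scale described above;
    # handling of the exact 25/50/75% boundaries is our assumption.
    if pct <= 0:
        return 0
    if pct < 25:
        return 1
    if pct <= 50:
        return 2
    if pct <= 75:
        return 3
    return 4

def fold_change(ct_target, ct_ref, mean_dct_control):
    # Relative expression (2^-ddCt) of one sample against the control group,
    # normalized to the beta-actin reference Ct.
    return 2.0 ** -((ct_target - ct_ref) - mean_dct_control)

# Hypothetical IFN-gamma / beta-actin Ct pairs for control and infected mice.
control_ct = [(28.1, 18.0), (27.9, 18.1), (28.0, 17.9)]
infected_ct = [(25.2, 18.0), (25.6, 18.2), (24.9, 17.8)]
mean_dct_control = np.mean([t - r for t, r in control_ct])

control = [fold_change(t, r, mean_dct_control) for t, r in control_ct]
infected = [fold_change(t, r, mean_dct_control) for t, r in infected_ct]

t_stat, p_value = stats.ttest_ind(infected, control)  # independent-samples t-test
print(f"mean IFN-gamma fold change (infected): {np.mean(infected):.2f}, P = {p_value:.4f}")
print(f"lesion score for 62% affected area: {lesion_score(62)}")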
Experimental animals
The study utilized 90 inbred Swiss albino mice obtained from the Department of Animal and Poultry Management and Behavior, Faculty of Veterinary Medicine, Cairo University. The mice were between 6 and 8 weeks old, weighing between 26 and 32 g. They were housed in separate cages with air conditioning and provided unlimited access to filtered water and pellet rodent feed . Humidity and room temperature were maintained at 50–70% and 25 ± 2 °C, respectively. During the 15-day acclimatization period, the animals appeared healthy, with no blood or enteric parasites detected through blood smears or copro-parasitological investigation .
The 90 Swiss albino mice were divided into three equal groups (A, B, and C). Group A served as the control group, while group B was inoculated with T. evansi -infected camel blood, and group C was inoculated with T. annulata -infected camel blood. Each mouse in the control group (A) received sterile PBS intraperitoneally. According to a previous study , 30 mice per isolate (groups B and C) were injected intraperitoneally with 0.5 mL of infected blood containing T. evansi (5 × 10 5 trypanosomes/mL) or T. annulata (1 × 10 7 Theileria -infected RBCs) to assess the immunological, hematological, and histopathological changes induced in mouse tissue by infection with T. evansi or T. annulata . Blood was examined daily to monitor infection status, and parasitemia was checked until infection was confirmed; the overall death rate was recorded until day 30 postinfection (PI). Every 5 days, a blood film from the tail vein of five different mice from groups B and C was stained with Giemsa to determine the parasite count. All animals were sacrificed on day 30 PI for histopathological analysis.
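As a small worked example of preparing the inocula described above (0.5 mL of blood adjusted to 5 × 10^5 trypanosomes/mL), this sketch computes the dilution needed from a hypothetical haemocytometer count; the stock count and function name are illustrative assumptions, not values from the study.

```python
def dilution_factor(stock_per_ml, target_per_ml):
    """Fold dilution needed to bring donor blood to the target parasite concentration."""
    if target_per_ml > stock_per_ml:
        raise ValueError("stock is too dilute; use undiluted blood or a larger inoculum")
    return stock_per_ml / target_per_ml

# Hypothetical haemocytometer count of donor camel blood: 4.2e6 trypanosomes/mL
stock = 4.2e6
factor = dilution_factor(stock, target_per_ml=5e5)
dose = 5e5 * 0.5   # parasites delivered per 0.5 mL inoculum
print(f"dilute 1:{factor:.1f} in PBS; each 0.5 mL inoculum delivers {dose:.0f} trypanosomes")
```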
Microscopic examination of blood smears
Microscopic examination of Giemsa-stained thin blood films from all investigated animals revealed infection with T. evansi between the RBCs. The parasite exhibited a typical trypanosome spindle shape, with a central nucleus, subterminal kinetoplast, long free flagellum, and well-developed undulating membrane. The parasite was diagnosed in 33 samples (17.37%), with mixed infections of Anaplasma and Theileria identified in 14 samples (7.37%). Additionally, 22 samples (11.58%) were infected with Theileria spp., whose schizonts were observed in circulating lymphocytes as spherical bodies with chromatin granules and blue cytoplasm adjacent to the kidney-shaped lymphocyte nucleus. Comma-shaped gametocytes were detected in some RBCs.
Genotypic detection of T. evansi and T. annulata
The 18S rRNA gene was successfully amplified in all positive camel blood samples. The amplified products of 18S rRNA were 340 bp and 315 bp for T. evansi and T. annulata , respectively. BLAST analysis of the sequences confirmed the absolute similarity among all tested samples, identifying them as T. evansi and T. annulata . The sequences were submitted to GenBank, with accession numbers OR116429 and OR103130 for T. evansi and T. annulata , respectively. The 18S rRNA sequence of T. evansi showed 100% nucleotide similarity with accession number MT490639 and 99.17% similarity with MF737081 and LC546902. In the case of T. annulata , the 18S rRNA sequence displayed 100% nucleotide similarity with MT341857, MK415058, and AY524666.
Oxidative stress and cytokine gene expression in naturally infected camels and experimentally infected mice with T. evansi and T. annulata
In naturally infected camels
The biochemical profile of antioxidant and inflammatory markers in naturally infected camels, compared with apparently healthy ones, is shown in Fig. . There was a significant increase ( P ≤ 0.05) in the concentration of the investigated oxidative stress markers (CAT, SOD, and GPX) in naturally infected camels in comparison with the control noninfected camels. The elevation was greater in camels infected with T. evansi than in those infected with T. annulata (Fig. A). At the same time, there was a significant increase ( P ≤ 0.05) in the level of immunogenic cytokines (IFNγ, TGF-1β and IL-1β) in camels naturally infected with T. evansi and T. annulata in comparison with the control camels; this increase was greater in animals infected with T. annulata than in those infected with T. evansi (Fig. B).
Experimentally infected mice
After confirming parasitemia in inoculated mice, oxidative stress markers were assessed in each group. The results showed that levels of CAT, GPX, and SOD were higher in both Trypanosoma - and Theileria -infected groups compared with the healthy control group (Fig. A). Additionally, cytokine gene expression levels (IFNγ, TGF-1β, and IL-1β) were elevated in infected mice (Fig. B–D). The observed alterations in oxidative stress and gene expression parameters suggest a connection to the pathophysiology of mice infected with T. evansi and T. annulata .
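For context, the sample-level prevalences reported above (33/190, 14/190 and 22/190) can be accompanied by Wilson 95% confidence intervals; the paper does not report these, so the sketch below is a supplementary illustration rather than part of the original analysis.

```python
from math import sqrt

def wilson_ci(positives, n, z=1.96):
    """Wilson score 95% confidence interval for a prevalence estimate."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

for label, k in [("T. evansi", 33), ("mixed Anaplasma/Theileria", 14), ("Theileria spp.", 22)]:
    p, lo, hi = wilson_ci(k, 190)
    print(f"{label}: {100*p:.2f}% (95% CI {100*lo:.2f}-{100*hi:.2f}%)")
```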
Histopathology and immunohistochemistry
Microscopic examination of the control group showed normal liver, kidney, and spleen histological architecture, with mild splenic extramedullary hematopoiesis and megakaryocytes. In contrast, histopathological examination of the liver in T. evansi - and T. annulata -infected groups revealed significant alterations, including diffuse vacuolization of the hepatocellular cytoplasm, hepatocellular necrosis, and mononuclear cell infiltration extending into portal areas. Examination of kidney tissues revealed necrobiotic changes in the renal tubular epithelium, while the spleen showed congestion, increased extramedullary hematopoiesis, megakaryopoiesis, and proliferation of erythroid elements (Fig. ). Immunohistochemistry analysis demonstrated significant upregulation of caspase-3, PCNA, and TNF expression in hepatic, renal, and splenic tissues from infected groups compared to control mice (Figs. , , and ).
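The lesion scores and immunohistochemical comparisons above rest on an area-percentage readout computed in ImageJ (six 200× fields per mouse). The sketch below mimics that readout on mock data, assuming a pre-segmented DAB channel and a simple intensity threshold; all array contents, the threshold and the function name are illustrative.

```python
import numpy as np

def positive_area_percent(dab_channel, threshold=0.3):
    """Percent of pixels above threshold in a deconvolved DAB channel (one field)."""
    mask = np.asarray(dab_channel) > threshold
    return 100.0 * mask.mean()

rng = np.random.default_rng(0)
fields = [rng.random((512, 512)) * 0.6 for _ in range(6)]   # six mock 200x fields per mouse
per_field = [positive_area_percent(f) for f in fields]
print(f"mean positive area: {np.mean(per_field):.1f}% +/- "
      f"{np.std(per_field, ddof=1) / np.sqrt(len(per_field)):.1f} (SE)")
```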
Camelus dromedarius (one-humped camels) contribute significantly to the socioeconomic development of numerous countries. However, camel meat and interactions with camels pose substantial zoonotic disease risks owing to their susceptibility to various infections . Hemoprotozoan diseases such as anaplasmosis, babesiosis, trypanosomiasis, and theileriosis negatively impact the productivity, growth, and performance of animals and humans . These diseases cause massive economic losses by affecting the quality of milk, meat, and other animal byproducts . Phylogenetic analyses and the taxonomic identification of Theileria spp. and trypanosomes depend on molecular analysis of ribosomal gene sequences . The DNA sequence of the nuclear small subunit (18S) rRNA has become the most widely accepted basis for rapid diagnosis and classification of different protozoan parasites . This study determined the phylogenies and relatedness of the Egyptian isolates using phylogenetic trees based on 18S rRNA analysis. The Egyptian isolates grouped with T. evansi and T. annulata sequences obtained from GenBank based on their 18S rRNA sequences. The 18S region-based results are consistent with previous studies that isolated T. evansi from Egypt, Saudi Arabia, and Paraguay, while the T. annulata findings align with studies on isolates from China, Italy, and Turkey .
Concerning the effect of infection on variations in antioxidant and inflammatory markers in naturally or experimentally infected animals, it is known that the immune system is stimulated by increasing exposure to infection, in proportion to the tissue damage that the infection causes . The immune system defends the body against bacterial and parasitic invaders, and small proinflammatory cytokines—mostly glycoproteins—are key mediators that regulate the interactions and communication between different immune cells. In the current study, cytokine production was triggered by Trypanosoma and Theileria infection, initiating an immune response. Proinflammatory cytokines are produced by various immune cells in response to the presence, endurance, mobility, and proliferation of protozoa in an animal’s body and the associated injured tissues . Oxidative stress occurs when the body’s defense mechanisms against free radicals and reactive oxygen species (ROS) are overwhelmed. In the current study, infected camels had significantly higher levels of CAT, SOD, and GPX. These results agree with previous publications . Regarding the variation in the elevated levels of the estimated parameters between infections with T. evansi and T. annulata in the naturally infected camels in this study, the authors attribute this to factors such as the general health conditions and immune status of the animals, as well as the level of parasitemia. Furthermore, other inapparent infections may also contribute to these variations. Lipid peroxidation and oxidative reactions play a role in the pathophysiology of anemia. According to our findings, CAT, GPX, and SOD levels were higher in experimentally infected mice compared with controls. These results align with previous studies , which found increased CAT and SOD activity in the hearts of T. evansi -infected mice. In contrast to our results, a previous study reported increased CAT activity and decreased SOD activity in sheep naturally infected with T. annulata , which may indicate weakened CAT antioxidant capacity. Numerous studies have reported that SOD demonstrates the highest catalytic efficiency and resistance to oxidative stress among known enzymes . However, significant reductions in SOD activity have been observed in cattle affected by theileriosis . Additionally, elevated GPX activity has been observed in the blood of rats infected with T. evansi . Conversely, our findings contradict another study , which found a decrease in GPX activity in the whole blood of rats infected with trypanosomosis. Cytokines play an essential role in regulating humoral and cellular immune responses. While cytokine expression, particularly during the subclinical stage, may provide insights into the immune response against Trypanosoma and Theileria infections, it may not be the most reliable standalone diagnostic tool, because cytokine levels vary with host factors and lack specificity. Therefore, cytokine profiling is better used as a complementary approach alongside conventional diagnostic methods, such as PCR or serology, to enhance diagnostic accuracy. To the best of our knowledge, this is the first study in Egypt to address the gene expression of IFNγ, TGF-1β, and IL-1β in both naturally infected camels and mice experimentally infected with T. evansi and T. annulata as an animal model for mammals, in order to simplify the diagnosis of intracellular parasites.
According to our findings, levels of IFNγ, TGF-1β, and IL-1β in both naturally infected camels and experimentally infected mice were elevated compared with controls. These results align with previous studies, which reported that the inflammatory response and parasitemia occurring in infected animals may be linked to this increase . IFNγ is believed to be the first inflammatory cytokine, crucial in triggering macrophage activation when the parasite stimulates the cells with antigen. Activated macrophages induce TGF-1β and IL-1β . Conversely, earlier research on intracellular parasites revealed lower levels of IFNγ mRNA expression and higher levels of TGF-1β and IL-1β expression compared with healthy animals . Additionally, IL-1β inhibits the in vitro proliferation of parasites. A positive correlation between TGF-1β and IL-1β has been observed, indicating that IL-1β may be upregulated in response to TGF-1β upregulation . This increase in pathogenicity and immune suppression exacerbates the overall health burden of infected camels, reducing productivity and increasing susceptibility to other pathogens. Histopathological examination revealed significant damage to the liver, kidney, and spleen of the infected groups, attributed to ROS production and hypersensitivity to infective agents . Furthermore, parasitic toxin release causes degeneration and necrosis of tissues . Immunohistochemical analysis showed increased caspase-3 upregulation in the infected groups compared with controls, indicating its involvement in apoptosis. This finding explains the degeneration and apoptosis observed in the kidney, liver, and spleen on histopathological analysis. Cellular apoptosis in infected groups may result from toxic byproducts inducing hypoxia and inflammation . TNF expression was elevated in the liver, kidney, and spleen of the infected groups. T. evansi and T. annulata have been reported to induce macrophage activation, which stimulates multiple inflammatory cytokines such as TNF, IL-2, IL-4, IL-6, and IL-12 . PCNA, a cell proliferation marker, is typically expressed in hepatocytes, renal tubular cells, and splenocytes during regeneration . The increased PCNA expression in the infected mice tissues indicates the initiation of regeneration. Moreover, a positive correlation was found between apoptotic markers such as BAX, caspase-3, and PCNA, as previously reported . This study effectively identified and genotyped two economically important blood protozoa, T. evansi and T. annulata , from camels in Egypt, and the relevant sequences were deposited in GenBank. Furthermore, the experimental animal model provided valuable insights into the immunological response, oxidative stress, and histopathological alterations caused by these parasites, with results comparable to those in naturally infected camels. These findings emphasize the model’s potential for studying parasite–host interactions and immune responses, which will help us understand the pathogenic mechanisms of T. evansi and T. annulata infections. This model could be beneficial in future studies on disease control and treatment interventions.
Supplementary Material 1.
Detection of blood aspiration in deadly head gunshots comparing postmortem computed tomography (PMCT) and autopsy
One of the major issues in forensic practice is to determine whether an injury was inflicted during life or after death . The reliability of the four basic vital reactions needs to be carefully assessed, since several confounders, e.g., petechial bleeding caused by hypostasis, may potentially alter the final aspect . Breathing is triggered exclusively by the central nervous system, depending on its activity. Thus, because breathing stops along with circulatory arrest or brain death, signs of vitality based on intact respiration can be attributed with high reliability . Therefore, one of the major pulmonary vital reactions is aspiration. Detecting signs of aspiration is of high forensic relevance because it provides information on whether an injury occurred pre- or postmortem and/or whether it was the primary or a contributing factor in the cause of death. In particular, blood aspiration with blood found deep in the bronchial tree is accepted as a sign of vitality. However, blood resulting from, e.g., resuscitation maneuvers, as well as perimortem and even postmortem flow of material into the respiratory tract, should be excluded first in this context . The typical macroscopic appearance of blood aspiration at forensic autopsy comprises red rounded areas on the surface of the lungs and on the cut surfaces. These macroscopic findings are then confirmed by the microscopic demonstration of small airways filled with blood. In addition, the major airways are examined for the presence of intraluminal blood . While autopsy and its traditional subspecialties such as histopathology represent the current gold standard for differentiating pre- and postmortem injuries, postmortem computed tomography (PMCT) is gaining more and more relevance in forensic medicine . In ballistic analysis, PMCT presents a non-invasive yet rather effective imaging technique, especially for the reconstruction of projectile tracks . Beyond its advantages in ballistic analysis, PMCT offers further positive characteristics compared with autopsy in terms of the detection of intracorporeal gas, fractures and/or foreign bodies. Recently, Filograna et al. provided first evidence of the applicability of PMCT to the detection of blood aspiration in a large series of deaths from different causes . Therefore, the aim of our study was to analyze the reliability of PMCT versus autopsy in detecting signs of blood aspiration in a distinct group of patients following deadly gunshot injuries to the head, mouth or floor of the mouth.
Subjects
The study protocol was approved by the University’s board of ethics (Reference nr. 151/08). For the presented retrospective study, all whole-body PMCTs with suspected deadly head gunshot, including gunshots to the mouth and floor of the mouth, between October 2008 and April 2011 were enrolled. All cases with additional chest trauma were excluded afterwards to rule out retrograde blood aspiration . Moreover, all cases with any kind of documented or suspected resuscitation maneuvers were excluded. According to our study protocol, PMCT was performed in a first step, followed by autopsy and forensic analysis in a second step. Date, potential cause of death and position after death were available to the reading radiologist during image analysis. All autopsy findings were blinded to the radiologist, and PMCT findings were blinded to the forensic pathologists. In a final step, the PMCT and autopsy analyses were compared for signs and extent of blood aspiration.
Postmortem computed tomography (PMCT) imaging and analysis
PMCT was performed in a standardized manner with the corpses lying in a supine position. All corpses remained within the body bags in which they had been placed after being found at the scene. A native CT scan, i.e., without administration of contrast agent, was performed on a 64-slice scanner (Brilliance 64, Philips, Amsterdam, Netherlands; Discovery 750 HD, GE Healthcare, Massachusetts, USA). First, a scout scan of the head and cervical spine was performed, followed by the CT itself with axial reformats of 3.75 mm thickness. The CT scan of the thoracic and abdominal cavity, including the pelvis and parts of the lower extremities, up to a maximum scan length of 200 cm (GE) and 180 cm (Philips), respectively, was reformatted in 1.25-mm axial slices. Following the scan procedures, PMCT data were transferred to the PACS (picture archiving and communication system) for storage and further evaluation. PMCT data were read and evaluated by one board-certified radiologist with expert experience in forensic radiology and one radiology resident with novice experience. Findings were stated in a consensus reading of both. PMCT analysis comprised the evaluation of the airways as well as of the lungs. Criteria for blood assessment in the trachea were, on the one hand, the observation of sedimentation and/or potential luminal airway occlusion and, on the other hand, the measurement of Hounsfield units (HU) for the presence of blood-like density values; a range from 20 to 90 HU was considered, since blood can sediment and exhibit densities differing from typical blood HU values. Signs of aspiration within the major airways were classified using a four-level scale (see Table ). Findings such as round, possibly converging spots with irregular margins and ground glass opacities (ggo) in the lung parenchyma with blood-like density values of 50–70 Hounsfield units (HU) were considered suggestive of blood aspiration on PMCT . The level of blood aspiration within the lung parenchyma was defined according to Table (see Fig. ).
Autopsy and analysis
Following PMCT, the corpses underwent autopsy at our University Institute of Forensic Medicine. Board-certified forensic pathologists performed all autopsies according to the standards of the German Government’s guidelines (§87, 89 German Code of Criminal Procedure). Standard autopsy in lethal ballistic injury comprises, first, the inspection of the entire skin looking for the entrance and exit wounds; second, the opening of all three body cavities (skull, thorax and abdomen); and third, the dedicated examination of all internal organs. The respiratory tract was resected en bloc by transecting the trachea right below the glottis. In the following, the airways and pulmonary blood vessels were inspected and dissected. Then the lung surface was inspected, and the lung parenchyma was cut into slices of approximately 1.5 cm thickness to assess for potential signs of aspiration from the apex to the base. Tissue samples from all organs were harvested for histopathology. Autopsy reports were reviewed for the description of blood or blood-like fluid in the larynx, trachea and main bronchi, considered the major airways, and in the small bronchi, but also for the description of red rounded spots with irregular margins and a diffuse pattern on the lungs’ surface or in the cutting area. Parallel to the PMCT evaluation, aspiration within the major airways was classified using the four-level scale, and the degree of blood aspiration within the lung parenchyma was graded according to Tables and . Diagnosis of blood aspiration of the lung was confirmed by histology.
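To make the density criteria concrete (20–90 HU for blood in the trachea, 50–70 HU for parenchymal ggos), the sketch below flags blood-like voxels within a segmented region of a CT volume. The arrays, mask handling and function name are illustrative; the study readings were performed visually on a PACS workstation, not by script.

```python
import numpy as np

def blood_like_fraction(hu_volume, region_mask, lo=20, hi=90):
    """Fraction of voxels inside `region_mask` whose density falls in [lo, hi] HU."""
    vox = np.asarray(hu_volume)[np.asarray(region_mask, dtype=bool)]
    return float(np.mean((vox >= lo) & (vox <= hi))) if vox.size else 0.0

rng = np.random.default_rng(1)
ct = rng.normal(-700, 150, size=(40, 64, 64))             # mock air-filled lung background
ct[20:24, 30:40, 30:40] = rng.normal(60, 8, (4, 10, 10))  # mock blood-dense pocket
mask = np.ones_like(ct, dtype=bool)                       # stand-in for an airway/lung segmentation
print(f"blood-like voxel fraction: {blood_like_fraction(ct, mask, 20, 90):.4f}")
```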
General data
Between October 2008 and April 2011, overall 57 cases of head gunshot-related death underwent PMCT and autopsy. Sixteen head shot cases had sustained an additional gunshot to the chest and/or thoracic spine, so they were excluded from the study. Consequently, 41 PMCTs of 7 women (17.1%) and 34 men (82.9%) with a median age at death of 57 years (range: 14–89 years) were enrolled. PMCT and autopsy were performed at a median of 26 h (range 12–30 h) and 50 h (range 30–56 h) after death, respectively. The pattern of gunshots comprised 20 gunshots to the head (48.8%), four gunshots through the floor of the mouth (9.8%) and 17 gunshots into the mouth (41.5%) (see Table ).
Major airways
The status of the airways regarding present signs of aspiration was rated as equal in autopsy and PMCT in 22 of the 41 enrolled cases. In 11 cases, a gradual difference was found, whereas in 8 cases a total difference resulted. In autopsy, 10 cases showed no signs of blood aspiration in the airways, whereas 31 cases were reported with blood in the airways. PMCT revealed 26 cases with material of blood-equivalent density in the airways and 15 cases without signs of blood aspiration in the major airways (see Table ).
Lungs
In 25 cases, blood-like content within the major airways was described by both PMCT and autopsy (Fig. ). In 18 of these 25 cases, significant signs of aspiration were also detected in the lungs by both PMCT and autopsy.
PMCT versus autopsy
29 (70.7%) of the 41 enrolled cases presented equal signs of aspiration on PMCT and autopsy. In 6 of these 29 cases, no signs of aspiration (level 0) were reported. In one case, level 1 aspiration was assessed (see Fig. ); in two cases, level 2; and in 18 cases, level 3 (see Figs. , ). In 5 cases (12.2%), a one-level difference between PMCT and the autopsy reports was found. 7 cases (17.1%) were evaluated with more than a one-level difference between the gold standard and PMCT: 5 (12.2%) of these 7 cases were graded with a difference of three levels between autopsy and PMCT, and 2 (4.9%) with a two-level difference. In the two-level cases, the autopsy reports described no signs of aspiration whereas PMCT described level 2 aspiration (A: 0, PMCT: 2). Two cases were graded as level 0 in autopsy but as level 3 on PMCT (Fig. ) (A: 0, PMCT: 3).
Three cases were graded as level 3 in autopsy but evaluated as level 0 on PMCT (see Table ).
In the current literature, the use of postmortem CT in relation to the gold-standard autopsy is constantly discussed . However, to the best of our knowledge, no study exists on the potential of PMCT for the detection of blood aspiration in deadly gunshot injuries to the head, mouth or floor of the mouth in comparison with the gold-standard autopsy. The present study showed, in 29 of 41 enrolled cases, the same location and extent of blood aspiration in the major airways and in the lungs on PMCT and in conventional autopsy. In addition, the evaluation of the status of the major airways and lungs revealed almost identical results when comparing PMCT with autopsy. Thus, the presented results provide evidence that PMCT might potentially help in the detection of blood aspiration in cases of deadly head gunshots. Breathing may terminate along with circulatory arrest and/or brain death, so the detection of blood aspiration can support forensic pathologists in assessing whether an injury occurred pre-, peri- or postmortem. In addition, the aspiration of a large amount of blood into the deep respiratory tract might help in determining the cause of death.
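The grade-by-grade comparison above is essentially an inter-method agreement problem, for which a weighted Cohen's kappa is one standard summary. The sketch below computes it on hypothetical paired grades; the study itself reports raw level differences only, so this is a supplementary illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired aspiration grades (0-3) for autopsy vs. PMCT, one pair per case
autopsy = [0, 0, 3, 3, 3, 2, 1, 0, 3, 2, 3, 0, 3, 3, 0]
pmct    = [0, 2, 3, 3, 0, 2, 1, 0, 3, 1, 3, 3, 3, 2, 0]

# Quadratic weights penalize three-level disagreements far more than one-level ones
kappa = cohen_kappa_score(autopsy, pmct, weights="quadratic")
print(f"quadratically weighted kappa: {kappa:.2f}")
```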
PMCT in general
In general, modern cross-sectional imaging techniques are already considered reliable examinations complementary to conventional forensic techniques, especially autopsy . In the context of gunshot-related death, PMCT presents a non-invasive but effective imaging technique to localize gunshot wound tracks and support the findings of autopsy in gunshot victims . Furthermore, PMCT is useful in traumatic death, allowing for an immediate identification of causes of death and providing detailed information on bony lesions, brain injuries and gas formations .
PMCT and blood aspiration
Reviewing the current literature, the detection of blood aspiration by PMCT has only been discussed by Filograna et al. . They reported cases on a retrospective basis regarding signs of aspiration at autopsy, and PMCT data were afterwards retrospectively analyzed for signs of blood aspiration. In general, the results of these studies are comparable to our results regarding the detection rates of aspiration on PMCT. Therefore, in the presented study, a rather prospective way of assessing signs of blood aspiration on PMCT was chosen and compared with autopsy as the gold standard. None of our cases were preselected in terms of aspiration signs, in order to evaluate the pure detection rates of aspiration on PMCT. In the presented study, a semi-quantitative scale was used to rate blood aspiration. Other authors used a scale differentiating between the quantity, position and consolidation of ground glass opacities and the presence of damage to the lungs, with an evaluation differentiating between yes/no, scarce and many ground glass opacities and consolidations . To simplify the evaluation procedure, we used a scale geared to the autopsy reports, meaning that PMCT findings were graded in levels based on the quantity and characteristics of ground glass opacities (ggo) inside the lungs and on the content of the major airways. Every semi-quantitative scale suffers from the problem that it depends on the interpreter and their experience in evaluating such data. Thus, the availability of objective scales would be of great value for PMCT aspiration detection. In this context, one potential option might be the definition of CT images with signs of aspiration, counting the aspiration spots, measuring their density in HU and evaluating the confluence of these opacities . Regarding the results of the presented study, the same degree of aspiration resulted on PMCT and at autopsy for 70% of the included cases. Five cases showed only a one-level difference between both methods, whereas five cases had a difference of three levels, meaning that aspiration was depicted by PMCT within one level of autopsy in over 82% of cases. In 3 of these 5 cases, autopsy described severe signs of aspiration compared with the PMCT findings, and in the remaining 2 cases PMCT showed more signs of aspiration than the gold standard. Possible reasons for these grading differences might be that other pathologic findings, such as incipient lung edema or posttraumatic changes, mimic ggos on PMCT and were thus misinterpreted as signs of aspiration. In three of these five cases, the differentiation between clouded spots on PMCT and ggos due to aspiration was nearly impossible. This might be due to sedimentation or decomposition processes in the base of the lungs.
Another reason for autopsy rating aspiration levels higher than PMCT might be that aspiration was not adequately assessed and reported at autopsy. Autopsy reports were evaluated retrospectively, whereas the PMCT data sets were evaluated prospectively, so there was no possibility to re-evaluate the findings at autopsy. Also on PMCT, findings such as gastric content might mimic ggos, which might explain some cases of misinterpretation. However, measuring the density of changes in the lung parenchyma provides some help in differentiating blood from, e.g., gastric content. Since, to the best of our knowledge, no study with a comparable design exists in the current literature, a comparison with the actual literature is rather difficult. Overall, reviewing the subject of our study, it is safe to say that PMCT can help to find and quantify aspiration in cases of gunshot-related death to the head, mouth or floor of the mouth. To confirm PMCT as a standard procedure for aspiration detection in forensic cases, it is necessary to further investigate and evaluate an objective scale in order to set standards and define the limits and advantages of PMCT in this matter . In this context, one needs to be aware of the fact that blood may reach the airways not only in an antegrade way, for example following traumatic skull base fractures, but also in a retrograde way, e.g., following injury to the lung parenchyma beneath the bronchial tree. However, it should also be mentioned that such retrograde aspiration is far less frequent than antegrade “typical” aspiration and is usually located close to its origin, i.e., the lung injury .
Strength and limitations of PMCT
In general, PMCT presents a non-invasive, valuable imaging technique with only a few decades of experience, whereas autopsy draws on hundreds of years of experience as well as an unlimited field of examination. In this context, it should be mentioned that several body areas, e.g., the maxillofacial region, are very hard to access at autopsy, and often only in a destructive way, so that pathologies in these regions are examined at autopsy only if indirect signs are present, whereas they are easily examined and evaluated by PMCT . Another strength of PMCT is the fact that the examination itself is performed faster than autopsy . In addition, the original CT data are documented, stored in the archives and can be re-evaluated if necessary. A limitation of PMCT in general is the high expense of purchasing a CT scanner. In addition, the expense of a whole-body PMCT is relatively high compared with that of autopsy, although the information derived from PMCT is substantial, especially in the context of the detection of blood and aspiration, helping to a great extent in solving the puzzle of whether the injury happened pre-, peri- or postmortem . Regarding the presented study, one strength of PMCT is the fact that it easily provides a visualization of the entire lung parenchyma, with the consequence of avoiding an underestimation of pulmonary aspiration at autopsy. However, the number of enrolled cases is relatively low, so a prospective study is recommended to confirm the results of our imaging analysis in a larger patient cohort.
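The objective scale proposed above—counting aspiration spots, measuring their HU density and evaluating their confluence—could be prototyped along the following lines. The mock volume, thresholds and confluence definition are illustrative assumptions, not a validated method.

```python
import numpy as np
from scipy import ndimage

def ggo_spot_stats(hu_volume, lung_mask, lo=50, hi=70):
    """Count connected blood-dense spots in the lung, with mean HU and largest-spot share."""
    blood = (hu_volume >= lo) & (hu_volume <= hi) & lung_mask
    labels, n_spots = ndimage.label(blood)
    if n_spots == 0:
        return 0, float("nan"), 0.0
    sizes = ndimage.sum(blood, labels, index=range(1, n_spots + 1))
    mean_hu = hu_volume[blood].mean()
    confluence = sizes.max() / sizes.sum()   # share of blood-dense voxels in the largest spot
    return n_spots, float(mean_hu), float(confluence)

rng = np.random.default_rng(3)
vol = rng.normal(-750, 100, (32, 64, 64))    # mock aerated lung
vol[10:13, 20:26, 20:26] = 60.0              # one mock confluent opacity
mask = np.ones_like(vol, dtype=bool)         # stand-in for a lung segmentation
print(ggo_spot_stats(vol, mask))
```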
In conclusion, it seems reasonable to suggest performing PMCT in addition to traditional postmortem examination, i.e., autopsy, in cases of suspected blood aspiration, in order to rule out false-negative cases, since PMCT of the lungs and airways alone does not provide enough information to reliably differentiate between lung findings caused by blood aspiration and those due to other causes. CT provides an easy visualization of the entire lung parenchyma, with the possibility of avoiding an underestimation of the severity of aspiration at autopsy. Thus, CT imaging should be considered a complementary tool to conventional autopsy when assessing blood aspiration as a sign of vital reaction or as the cause of death.
However, the adequate use of PMCT in this context needs further evaluation, and an objective scale for aspiration detection on PMCT needs to be defined in future studies.
Longitudinal serum proteome mapping reveals biomarkers for healthy ageing and related cardiometabolic diseases
Ageing is a ubiquitous process characterized by progressive degeneration at the molecular, cellular and organ levels, leading to functional decline and heightened susceptibility to diseases. Although physiological measurements, functional tests and sophisticated omics approaches have been used to predict chronological age , , comprehending the intricate biology of ageing remains challenging. In addition to the identification of reliable ageing biomarkers, gaining deeper insights into their involvement in ageing-related pathologies would contribute substantially to the advancement of clinical interventions aimed at extending a healthy lifespan. Protein homeostasis has a critical role in supporting a naturally long lifespan, and loss of proteostasis has been recognized as a key hallmark of ageing , . Perturbations in proteome-wide homeostasis across diverse tissues have been closely associated with accelerated ageing . Blood, a reservoir of proteins from cells, extracellular fluids, tissues and organs, represents an ideal platform for identifying proteomic biomarkers indicative of various health conditions – . Previous cross-sectional studies using mass spectrometry (MS)-based, antibody-based (Olink) or modified aptamer-based (SomaScan) proteomics technologies have identified hundreds of blood proteins that are correlated with age – . Furthermore, several longitudinal studies have identified personal proteomic and other omics markers as well as lipid–cytokine networks associated with ageing within small cohorts over time frames of 2–6 years. However, these findings could potentially be limited by small sample sizes and short follow-up durations. Notably, a recent large-scale study across five independent cohorts has developed cross-sectional plasma proteomic signatures for organ ageing that are capable of forecasting long-term risks of heart failure, Alzheimer’s disease and mortality . Nevertheless, there remains a significant scarcity of longitudinal evidence for ageing-related proteomic biomarkers and their associations with various health outcomes. Here, we perform longitudinal proteomic profiling of 7,565 serum samples from a cohort of 3,796 middle-aged and elderly participants over a 9-year follow-up period. We capture the trajectories of serum proteome changes during follow-up visits and identify 86 ageing-related proteins based on longitudinal proteomic data. To gain insights into the biological significance and the clinical implications of these ageing-related proteins, we explore the biological pathways and functional networks represented by these proteins and further investigate the longitudinal association of ageing-related proteins with 32 clinical measures as well as 14 ageing-related chronic diseases. Using a machine-learning algorithm, we identify 22 of the 86 ageing-related proteins that discriminate the healthy status of the participants. We then establish a proteomic healthy ageing score (PHAS) based on the 22 proteins that can predict the incidence of cardiometabolic diseases during the ageing process. We further reveal the potential intrinsic, lifestyle, dietary and multi-omics determinants of the PHAS, highlighting the important role of the gut microbiota in the modulation of the PHAS. The integration of longitudinal serum proteome and clinical data reveals potential new interventional or therapeutic targets for treating ageing-related cardiometabolic diseases and for monitoring healthy ageing status in the general population.
To maximize the resource value of our study, we share our results through an open-access web server available at https://omics.lab.westlake.edu.cn/resource/aging.html.
Serum proteome profiling of longitudinal cohorts
The present study used data from the Guangzhou Nutrition and Health Study (GNHS), which included 3,796 individuals with 7,565 serum samples across three time points over a 9-year follow-up period. We separated these 3,796 participants into two subcohorts: the discovery cohort, which comprised 4,637 serum samples from 1,939 participants of a multi-omics substudy within GNHS, and the validation cohort, which included 2,928 serum samples from the remaining 1,857 participants (Fig. and Extended Data Fig. ). Additionally, we included an external validation cohort of 124 participants, with 200 serum samples collected at two cohort visits during a 4-year follow-up period (Fig. and Extended Data Fig. ). Baseline characteristics of participants were similar among the included cohorts, except that participants in the external validation cohort were over one decade older (median age, 70 years) than participants in either the GNHS discovery cohort (median age, 57.6 years) or the GNHS validation cohort (median age, 57.1 years) (Supplementary Table ). We used MS-based proteomics technology for the measurement of the serum proteome of study participants. Using DIA-NN software (v.1.8) and a spectral library containing 5,102 peptides and 819 unique human proteins reviewed by Swiss-Prot, we quantified a total of 438 proteins in the GNHS discovery cohort, 413 proteins in the GNHS validation cohort and 432 proteins in the external validation cohort (see 'Serum proteome profiling' in the Methods and Supplementary Table ).
Longitudinal trajectories of serum proteome during follow-up
To track the global proteomic trajectories during the ageing process, we analysed data from 1,018 participants in the GNHS discovery cohort with serum proteome measurements across three time points (Fig. ). k-means clustering was used to classify the 438 serum proteins based on their z-scored mean levels across the three time points (Fig. ). We identified four distinct trajectory clusters: cluster 1 with 32 proteins exhibiting a sharp increase, cluster 2 with 124 proteins showing a slight increase, cluster 3 with 179 proteins remaining relatively constant and cluster 4 with 103 proteins exhibiting a decline over time (Fig. and Supplementary Table ). We used a linear mixed model to evaluate the statistical significance of the trends of protein trajectories and found that the levels of 7, 37, 34 and 62 proteins from each of the four clusters were significantly changed during the follow-up period (false discovery rate (FDR) < 0.05) (Fig. and Supplementary Table ). We then explored whether the protein trajectories in the four clusters were consistent across different subgroups of participants stratified by sex and baseline age (>60 or ≤60 years). We set the age of 60 as the cutoff point, as a previous study has shown that changes in blood proteins often peak at this threshold. The protein trajectories in clusters 1 and 2 remained consistent across all subgroups. However, slight variations were observed in the trajectories of proteins within clusters 3 and 4, particularly among male participants and those over 60 years of age (Extended Data Fig. ).
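The trajectory clustering described above is straightforward to reproduce in outline. The following sketch, which is illustrative rather than the authors' code, applies k-means (k = 4) to a simulated proteins × visits matrix of z-scored mean levels and computes an elbow curve from the within-cluster sum of squares; the matrix shape mirrors the 438 proteins and three visits, but all values are placeholders.

```python
# Minimal sketch of the trajectory clustering: rows are proteins, columns are
# the three follow-up visits, entries are z-scored mean levels (simulated).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_proteins, n_visits = 438, 3
X = rng.normal(size=(n_proteins, n_visits))  # placeholder z-scored means

# Elbow inspection: within-cluster sum of squares (inertia) for k = 1..8
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 9)}

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
for c in range(4):
    members = X[km.labels_ == c]
    print(f"cluster {c}: {len(members)} proteins, "
          f"mean trajectory {members.mean(axis=0).round(2)}")
```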
To elucidate the biological implications of the four distinct protein trajectories, we performed functional enrichment analyses using the g:Profiler toolkit. This analysis revealed a total of 10, 152, 451 and 558 significantly enriched Gene Ontology (GO) terms and pathways (FDR < 0.05) for proteins from clusters 1 to 4, respectively (Fig. and Supplementary Table ). The diverse enriched GO terms and pathways illustrate the biological trends associated with each protein cluster. For example, the prominent enriched GO terms in cluster 1, such as the actin filament bundle, contractile actin filament bundle and actomyosin, could potentially indicate an imbalance in muscle protein synthesis and breakdown during age-related loss of skeletal muscle mass and function. Moreover, one of the top enriched GO terms and pathways for proteins in cluster 4, which exhibited a decrease during follow-up, was the blood microparticle, which was consistent with previous findings.
Identifying ageing-related proteins using longitudinal data
To identify ageing-related proteomic biomarkers, we examined the correlation between serially measured levels of serum proteins and the corresponding chronological ages of the participants over follow-up, using linear mixed models adjusted for sex, measurement batches and instruments. In the GNHS discovery cohort, consisting of 1,939 participants with 4,637 serum samples collected during follow-up (Extended Data Fig. ), we identified 148 serum proteins significantly associated with age (FDR < 0.05). Among these, 77 displayed a negative association and 71 exhibited a positive association (Fig. and Supplementary Table ). Subsequently, 86 of the 148 proteins were also significantly associated with age in the same direction in the GNHS validation cohort, comprising 1,857 participants with 2,928 serum samples collected during follow-up (FDR < 0.05), and the coefficients for these associations were highly correlated (Pearson's r = 0.96) between the GNHS discovery and validation cohorts (Fig. and Supplementary Tables and ). Additionally, we replicated the associations of the 86 identified proteins with age in the external validation cohort comprising 124 participants with 200 serum samples collected over a 4-year follow-up period. We observed similar associations with age for these 86 proteins between the GNHS discovery cohort and the external validation cohort (Pearson's r = 0.80; Extended Data Fig. ). Notably, although 68 out of the 86 proteins exhibited nominally significant associations, only 15 proteins demonstrated significance at FDR < 0.05 in the external validation cohort (Supplementary Tables and ), which might be partly attributed to its relatively small sample size, shorter follow-up period and different characteristics of the participants (over 10 years older). To further investigate whether the 86 ageing-related proteins can predict age, we fitted a generalized linear mixed model with L1-penalized estimation (GLMMLasso) based on the longitudinal serum proteome data. We identified a subset of 83 ageing-related proteins that showed high accuracy in predicting age within the GNHS discovery cohort (Pearson's r = 0.70), GNHS validation cohort (Pearson's r = 0.74) and external validation cohort (Pearson's r = 0.67) (Extended Data Fig. ).
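The per-protein age associations described above follow a standard recipe: one random-intercept mixed model per protein, followed by Benjamini–Hochberg correction across proteins. The sketch below shows the idea with statsmodels on simulated long-format data; the column names and covariates are illustrative and simpler than the study's (for example, the instrument covariate is omitted).

```python
# Sketch of the age-association scan: a random-intercept linear mixed model
# per protein (adjusted for sex and batch), with BH-FDR across proteins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n, visits = 200, 3
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), visits),
    "age": np.tile([55.0, 58.0, 61.0], n),
    "sex": np.repeat(rng.integers(0, 2, n), visits),
    "batch": rng.integers(0, 5, n * visits),
})

results = []
for i in range(5):  # five toy "proteins", some age-dependent
    df["y"] = 0.03 * i * df["age"] + rng.normal(0, 1, len(df))
    m = smf.mixedlm("y ~ age + sex + C(batch)", df, groups=df["pid"]).fit()
    results.append((f"P{i}", m.params["age"], m.pvalues["age"]))

names, betas, pvals = zip(*results)
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for row in zip(names, betas, qvals, reject):
    print("protein %s: beta=%.3f q=%.4f significant=%s" % row)
```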
Given that previous studies suggested the presence of sex differences in the ageing process, we delved into whether the 86 ageing-related proteins were also correlated with sex. We identified 108 proteins that showed significant sex differences in both the GNHS discovery and GNHS validation cohorts (FDR < 0.05) (Fig. and Supplementary Tables and ). Notably, this set included several well-known sex-related proteins such as the pregnancy zone protein, which has consistently displayed sex differences during the ageing process and has been suggested to be associated with Alzheimer's disease. Furthermore, we observed an overlap of 41 proteins that were associated with both age and sex (Fig. and Supplementary Tables and ). Among them, we identified a significant interaction between age and sex for seven proteins (Q values for interaction <0.05 in both the GNHS discovery and validation cohorts), namely prothrombin (THRB), tRNA-splicing endonuclease subunit Sen2 (SEN2), alpha-2-macroglobulin (A2MG), inter-alpha-trypsin inhibitor heavy chain H3 (ITIH3), ras-interacting protein 1 (RAIN), pigment epithelium-derived factor (PEDF) and coagulation factor IX (FA9) (Fig. and Supplementary Table ). These proteins displayed differential associations with age in males and females. Specifically, four proteins (THRB, A2MG, ITIH3 and RAIN) exhibited stronger associations with age in males than in females, whereas the remaining three proteins (PEDF, FA9 and SEN2) showed robust associations with age only in females, with SEN2 even showing opposite associations in males (Fig. ). These findings suggest that the associations of these proteins with age may be sex-dependent.
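The age-by-sex interaction test reported above amounts to adding a product term to the same mixed model and reading off its coefficient. A self-contained toy version (simplified covariates, simulated data, illustrative column names) might look like this:

```python
# Sketch of the age x sex interaction test: the age:sex coefficient captures
# how much the age slope differs between males and females.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, visits = 150, 3
d = pd.DataFrame({
    "pid": np.repeat(np.arange(n), visits),
    "age": np.tile([55.0, 58.0, 61.0], n),
    "sex": np.repeat(rng.integers(0, 2, n), visits),  # 0 = female, 1 = male
})
# Simulate a protein whose age slope is steeper in males (a SEN2-like
# pattern would instead flip sign between sexes)
d["y"] = 0.05 * d["age"] * d["sex"] + rng.normal(0, 1, len(d))

m = smf.mixedlm("y ~ age * sex", d, groups=d["pid"]).fit()
print(f"interaction beta = {m.params['age:sex']:.3f}, "
      f"p = {m.pvalues['age:sex']:.2g}")
```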
Biological implications of ageing-related proteins
To understand how ageing-related proteins may interconnect with one another and what their biological implications are, we identified functional networks of protein groups using Ingenuity Pathway Analysis (IPA), an algorithm that connects proteins based on well-established knowledge. We identified four protein–protein networks consisting of nine or more ageing-related proteins, each centred around one hub: apolipoprotein B (APOB), complement C3 (C3), nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) and immunoglobulin, respectively (Fig. ). These networks represented specific functional pathways including lipid metabolism, organismal injury and abnormalities, neurological disease and cell-to-cell signalling and interaction, potentially revealing different biological aspects of the ageing process. Network 1, for instance, showed that 18 ageing-related proteins were involved in lipid metabolism, a crucial pathway in regulating the ageing process. Similarly, network 3 suggested that the hub NF-κB, a nuclear factor that regulates the immune response to infection, might be a potential pathway underlying ageing-related neurological disease, as reported previously. Additionally, we performed upstream regulator analysis using IPA and identified hepatocyte nuclear factor 1-alpha (HNF1A) and interleukin-6 (IL-6) as the top two upstream regulators for the 86 ageing-related proteins (Extended Data Fig. ). We also conducted functional enrichment analysis for proteins from the networks using the g:Profiler toolkit. The analysis revealed 536, 337, 274 and 193 significantly enriched GO terms and pathways for proteins from each of the four networks, respectively (Fig. and Supplementary Table ), suggesting that the protein networks are probably involved in ageing biology in a multifaceted manner. We used linear mixed models to investigate the longitudinal relationships between ageing-related proteins and 32 clinical traits including anthropometric, metabolic, inflammatory, cognitive and hepatic and renal parameters. Our analysis revealed a distinct overall pattern of ageing-related proteins associated with clinical traits (Fig. , Extended Data Fig. and Supplementary Tables and ). Specifically, the protein–trait associations between the GNHS discovery and validation cohorts exhibited high correlations for anthropometric parameters, lipid and glucose metabolic profiles and several hepatic and renal biomarkers. Overall, we identified a total of 320 significant protein–trait associations (FDR < 0.05) in both the GNHS discovery and validation cohorts. Notably, the associations were predominantly significant for anthropometric and cardiometabolic outcomes. Remarkably, we noted compelling concurrences between the inferred biological implications from the network analyses and the actual protein–trait associations. For instance, 12 out of 18 ageing-related proteins from protein network 1, which represents lipid metabolism, were significantly associated with at least one serum lipid biomarker (Fig. ). Additionally, several proteins from protein network 2, which signifies organismal injury and abnormalities, exhibited noteworthy associations with functional biomarkers of the liver and kidney (Fig. ). These alignments have strengthened our findings regarding the biological relevance of ageing-related proteins and pinpointed their strong correlations with health conditions during ageing.
Associations of ageing-related proteins with diseases
To investigate the potential roles of the above-identified proteomic biomarkers in ageing-related pathologies, we used Cox proportional hazards models to examine the prospective associations between baseline levels of the 86 ageing-related proteins and the incidence of 14 chronic diseases during a 9-year follow-up period in 3,414 participants from the entire GNHS cohort (that is, pooling the GNHS discovery and validation cohorts to increase the number of incident disease cases). We observed a total of 131 nominally significant associations for 67 ageing-related proteins, with more than ten proteins being associated with the incidence of dyslipidemia, hypertension, type 2 diabetes (T2D), fatty liver and hepatitis (Supplementary Table ). We then clustered these associations into eight hierarchical groups that represented protein signatures for the long-term risk of incident chronic diseases (Extended Data Fig. ). To illustrate, the proteins within cluster 1 exhibited a positive association with the risk of developing renal diseases, whereas those within cluster 3 were positively associated with the risk of incident hepatitis. Cluster 2 proteins displayed a strong positive association and cluster 4 proteins demonstrated a strong negative association with the risk of incident cirrhosis. Likewise, proteins in cluster 6 were predominantly negatively associated and proteins in cluster 7 were positively associated with the risk of developing T2D and fatty liver. These findings suggest that our identified ageing-related proteins may have a temporal role in the development of age-related pathologies.
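For the survival analyses just described, the basic unit is a Cox model per protein-disease pair, with the protein standardized so that the hazard ratio is per 1 s.d. of baseline level. The sketch below uses the lifelines package on simulated data as one way to express this; the study's covariate adjustment and data are not reproduced here.

```python
# Sketch of one protein-disease Cox model: hazard ratio per 1 s.d. of the
# baseline protein level, with administrative censoring at 9 years.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "protein_z": rng.normal(size=n),       # baseline level, z-scored
    "age": rng.uniform(50, 70, n),
    "sex": rng.integers(0, 2, n).astype(float),
})
# Simulated follow-up: a higher protein level lowers the event hazard here
scale = 9.0 * np.exp(0.3 * df["protein_z"].to_numpy())
time = rng.exponential(scale)
df["event"] = (time < 9.0).astype(int)     # incident case within follow-up
df["time"] = np.minimum(time, 9.0)         # censor at end of follow-up

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")  # adjusts for age, sex
print(f"HR per 1 s.d.: {np.exp(cph.params_['protein_z']):.2f}")  # < 1 = protective
```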
Furthermore, our analysis revealed that 35 out of the 131 observed protein–disease associations retained significance following correction for multiple testing (FDR < 0.05). Specifically, we identified 13, 11, 5, 1, 3, 1 and 1 proteins that exhibited significant associations with incident T2D, fatty liver, hepatitis, renal disease, dyslipidemia, hypertension and rheumatoid arthritis, respectively (Fig. and Supplementary Table ). Among them, eight proteins, including alpha-1-antitrypsin (A1AT), leucine-rich alpha-2-glycoprotein (A2GL), A2MG, adiponectin (ADIPO), zinc finger protein Gfi-1 (GFI1), ITIH3, RAIN and vitronectin (VTNC), were associated with two or more ageing-related metabolic diseases. For example, a 1 s.d. increase in baseline levels of A1AT and A2GL was associated with a 30% and 29% lower risk of incident T2D and a 17% and 17% lower risk of fatty liver, respectively (Fig. and Supplementary Table ). Moreover, we found that 16 out of the 25 disease-associated proteins were drug-targetable, as indicated by the DrugBank database (Supplementary Table ). Interestingly, ten disease-associated proteins were targeted by zinc and zinc compounds, indicating the potential benefits of zinc supplementation in promoting healthy ageing.
Ageing-related proteins as cardiometabolic health indicators
Next, we used the random forest machine-learning algorithm to examine whether a combination of these proteins could function as a proteomic classifier to discriminate the overall healthy and unhealthy status of participants (defined by the presence of any of the 14 ageing-related diseases). We set the 1,785 participants with serum proteome data at baseline from the GNHS discovery cohort as the training dataset and the 1,629 participants with serum proteome data at baseline from the GNHS validation cohort as the validation dataset. The random forest model incorporating the 86 ageing-related proteins achieved an area under the receiver operating characteristic curve (AUC) of 0.70 in distinguishing between healthy and unhealthy participants, which was comparable to the performance of the model using 408 serum proteins (AUC = 0.72) (Fig. ). By performing tenfold cross-validation, we identified a more concise random forest model consisting of the top 22 most important ageing-related proteins (Fig. and Supplementary Table ). This concise model achieved equivalent accuracy (AUC = 0.70) compared to the model containing the 86 ageing-related proteins (Fig. ) and demonstrated significantly higher accuracy than ten models using random subsets of 22 proteins selected from the total pool of 408 proteins (all P < 0.05; Extended Data Fig. ). Additionally, the majority of the top 22 ageing-related proteins exhibited significant differences between healthy and unhealthy participants, as evident in both the training and validation datasets (Fig. and Supplementary Table ). Given that the 22 proteins were all correlated with age and predominantly associated with sex and BMI, we proceeded to compare their predictive performance with intrinsic factors (age, sex, BMI) in distinguishing between healthy and unhealthy participants. The model using age, sex and BMI achieved an AUC of 0.63, while the model including the 22 proteins achieved an AUC of 0.70 and the full model incorporating a combination of these factors achieved an AUC of 0.72 (Fig. ). These results suggest that the discriminative power of the 22 ageing-related proteins may not be solely attributable to age, sex and BMI.
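The classifier comparison above can be sketched with scikit-learn: fit a random forest on baseline protein levels, score it by AUC on an independent validation set, then refit on the top-ranked features. The snippet below is a simplified stand-in (simulated data, impurity-based importance rather than the paper's tenfold cross-validation), and the PHAS-like score at the end is simply the model's predicted probability, which may differ from the authors' exact construction.

```python
# Sketch of the proteomic classifier and a concise top-22 model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_train, n_valid, n_prot = 1785, 1629, 86
w = rng.normal(size=n_prot) * (rng.random(n_prot) < 0.3)  # sparse signal

def simulate(n):
    X = rng.normal(size=(n, n_prot))
    y = (X @ w + rng.normal(0, 2, n) > 0).astype(int)     # 1 = unhealthy
    return X, y

X_tr, y_tr = simulate(n_train)
X_va, y_va = simulate(n_valid)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
auc_full = roc_auc_score(y_va, rf.predict_proba(X_va)[:, 1])

top22 = np.argsort(rf.feature_importances_)[::-1][:22]    # rank features
rf22 = RandomForestClassifier(n_estimators=500, random_state=0)
rf22.fit(X_tr[:, top22], y_tr)
score = rf22.predict_proba(X_va[:, top22])[:, 1]          # PHAS-like score
print(f"AUC (86 proteins) = {auc_full:.2f}, "
      f"AUC (top 22) = {roc_auc_score(y_va, score):.2f}")
```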
Based on our random forest model using the 22 ageing-related proteins, we developed a PHAS to serve as an indicator of overall health status. We observed that higher PHAS values were longitudinally associated with improved anthropometric parameters, lipid and glucose metabolic biomarkers and hepatic and renal biomarkers (Fig. and Supplementary Table ), with consistent associations (Pearson's r = 0.72) between the GNHS discovery and validation cohorts (Fig. and Supplementary Table ). We proceeded to validate the performance of the 22 proteins in distinguishing between healthy and unhealthy participants at baseline within the external validation cohort. We found a comparable accuracy (AUC = 0.71) (Extended Data Fig. ) to that observed in the GNHS validation cohort (AUC = 0.70) (Fig. ). Additionally, within the external validation cohort, the PHAS exhibited longitudinal associations with lower waist circumference and serum uric acid as well as improved lipid and glucose metabolic measures (FDR < 0.05) (Extended Data Fig. ). To further validate these pivotal findings, we used multiple reaction monitoring (MRM)–MS-based targeted proteomics to quantify the levels of the 22 proteins and replicated these analyses in the external validation cohort. We found a considerable alignment of protein levels (median Spearman's ρ = 0.47) between the DIA-MS-based proteomics assay and the MRM–MS-based targeted proteomics assay (Extended Data Fig. ). Moreover, the 22 proteins measured by MRM–MS-based targeted proteomics achieved similar performance in distinguishing between healthy and unhealthy participants (AUC = 0.69) (Extended Data Fig. ) compared to our primary analysis using the DIA-MS-based proteomics assay (AUC = 0.71) (Extended Data Fig. ). Furthermore, the PHAS values generated by the random forest models based on these two proteomics assays were highly correlated (Pearson's r = 0.68, mean absolute error = 0.08) (Extended Data Fig. ) and exhibited similar longitudinal associations with clinical traits (Pearson's r = 0.87) (Extended Data Fig. ). Specifically, the PHAS based on targeted proteomics was also significantly associated with lower waist circumference and serum uric acid as well as improved lipid profiles (lower triglycerides and low-density lipoprotein cholesterol) and glucose metabolic measures (lower hemoglobin A1c and insulin levels) (FDR < 0.05) (Extended Data Fig. ). The validation by targeted proteomics in the external validation cohort has notably strengthened the reliability of our findings. To delve into the long-term clinical implications of the PHAS, we examined its association with the incidence of the 14 ageing-related diseases among 3,414 participants within the entire GNHS cohort, using multivariable-adjusted Cox proportional hazards models. Our analysis revealed that for a 1 s.d. increase in baseline PHAS, there was a 72% lower risk of developing chronic diseases during follow-up (Fig. and Supplementary Table ). Additionally, we observed 53%, 32%, 53% and 40% lower risks of incident T2D, dyslipidemia, fatty liver and hypertension, respectively (Fig. and Supplementary Table ). These findings suggest that the PHAS might serve as a predictive indicator for incident cardiometabolic diseases during the ageing process.
Determinants of PHAS and 22 ageing-related proteins
To investigate potential determinants of the PHAS and the 22 ageing-related proteins used for constructing it, we evaluated the proportion of variance explained by intrinsic factors (age, sex, BMI), lifestyle factors (smoking, alcohol drinking, physical activity), diet (15 food groups or items), gut microbiota (219 species based on metagenome sequencing) and host genetics (65 independent genetic variants associated with the identified proteins, based on a recent Chinese protein quantitative trait loci study) among 1,325 participants from the GNHS discovery cohort who had all multi-omics data available.
We used permutational multivariate analysis of variance (PERMANOVA) with a backward feature selection procedure to assess the explained variance for the entire set of the 22 ageing-related proteins and used linear models with variables selected through the least absolute shrinkage and selection operator (LASSO) method to determine the explained variance for individual proteins as well as the PHAS (see 'Distance-based and linear model-based variance estimation' in the Methods). We found that host genetics accounted for the largest proportion of variance (7.9%) for the whole set of 22 ageing-related proteins, followed by intrinsic factors (4.0%) and gut microbiota (3.8%). By contrast, lifestyle factors (0.6%) and diet (0.4%) explained smaller proportions of the variance (Fig. and Supplementary Table ). Consistently, host genetics, intrinsic factors and gut microbiota were the primary determinants of variance for most individual proteins. Out of the 22 ageing-related proteins, ten individual proteins were predominantly explained by intrinsic factors, six were explained by host genetics and four were explained by gut microbiota (Fig. and Supplementary Table ). With respect to the PHAS, intrinsic factors explained the largest proportion of the variance (7.0%), followed by gut microbiota (6.3%) and host genetics (4.1%), whereas lifestyle factors (0.1%) and diet (0.5%) had a much smaller contribution to the variance (Fig. and Supplementary Table ). Given that gut microbiota emerged as a key modifiable factor explaining variation in the PHAS, we conducted exploratory analyses to examine the associations of the PHAS with each of the 18 gut microbial species that contributed to the variance explanation. We found that 15 out of the 18 microbial species exhibited significant associations with the PHAS (FDR < 0.05) (Fig. and Supplementary Table ). We then created a gut microbial score based on the 18 gut microbial species and found that the gut microbial score had a strong positive association with the PHAS (β = 0.65, P = 4.30 × 10⁻¹⁹) in the GNHS discovery cohort (Fig. ), which remained stable in the external validation cohort (β = 0.79, P = 4.64 × 10⁻²) (Fig. ). These results indicate that gut microbiota may be an important modifiable factor associated with the PHAS.
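The linear-model-based variance estimation can be approximated as follows: LASSO selects predictors within one factor group, and the R² of an ordinary least-squares fit on the selected variables is read as the variance explained. The sketch below, on simulated data with illustrative dimensions (1,325 participants, 219 microbial species), is a loose paraphrase of the procedure rather than a faithful reimplementation.

```python
# Sketch of variance-explained estimation for one factor group (gut
# microbiota): LASSO-selected species, then R^2 of an OLS refit.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(5)
n, n_species = 1325, 219
X = rng.normal(size=(n, n_species))          # e.g. transformed abundances
beta = np.zeros(n_species)
beta[:18] = rng.normal(0.1, 0.05, 18)        # 18 truly contributing species
phas = X @ beta + rng.normal(0, 1, n)        # toy health score

lasso = LassoCV(cv=10, random_state=0).fit(X, phas)
selected = np.flatnonzero(lasso.coef_)
ols = LinearRegression().fit(X[:, selected], phas)
r2 = ols.score(X[:, selected], phas)
print(f"{selected.size} species selected; variance explained = {r2:.1%}")
```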
In this longitudinal study, we identified serum proteomic biomarkers associated with ageing in a cohort of 3,796 middle-aged and elderly adults over a 9-year follow-up period. By incorporating deep phenotyping data, we comprehensively investigated the biological implications of the identified ageing-related proteins and their associations with various clinical traits and chronic diseases. Our findings suggest that ageing-related proteins are closely associated with health status and disease risk during ageing, especially for cardiometabolic health. Based on these ageing-related proteins, we created a PHAS that was associated with the long-term risk of several cardiometabolic diseases. We used an MS-based proteomics approach to measure the serum proteome. Although aptamer-based proteomics (SomaScan) and antibody-based proteomics (Olink) have the advantage of high sensitivity and the ability to detect thousands of proteins, MS-based proteomics offers an unbiased and hypothesis-free way of analysing the serum proteome. In comparison to a previous short-term longitudinal study involving 106 participants using MS-based proteomics, we identified an overlap of 20 ageing-related proteins, with 12 exhibiting consistent directions and 8 showing opposite directions. Compared to a recent large-scale study that investigated the cross-sectional association of age with plasma proteins measured by aptamer-based proteomics, we found an overlap of 42 ageing-related proteins, with 35 exhibiting consistent directions and 7 showing opposite directions (Supplementary Table ). Overall, our study identified 38 ageing-related proteins that were novel compared to the two previous studies. Furthermore, our large-scale longitudinal study, conducted over an extended follow-up period, holds an advantage in addressing the potential heterogeneities in individual patterns of proteomic changes during ageing, which strengthens the validity of these novel proteins. However, additional longitudinal studies including diverse populations with different ethnic and environmental backgrounds are needed to examine the generalizability of our findings. As ageing is commonly associated with declining health, probing into the biological and clinical implications of ageing-related biomarkers is of high importance. We have shown that the identified 86 ageing-related proteins are interconnected in functional networks including lipid metabolism, organismal injury and abnormalities, neurological disease and cell-to-cell signalling and interaction, which are closely correlated with the ageing process.
In line with previous studies, further functional enrichment analysis for proteins from these networks identified over 1,000 GO terms and functional pathways, suggesting that ageing-related proteins may contribute to the ageing process in a more complex manner. Nevertheless, these functional networks as well as the diverse biological pathways have deepened our understanding of the roles of ageing-related proteins in ageing biology. Our findings on the prospective associations of ageing-related proteins with the long-term risk of cardiometabolic diseases appear to be biologically plausible. For instance, the inverse associations of alpha-1-antitrypsin and zinc-alpha-2-glycoprotein with incident T2D and fatty liver could be explained by their multifunctional roles in regulating metabolism. Alpha-1-antitrypsin, an alpha globulin glycoprotein, has been suggested to prevent overt hyperglycemia, increase insulin secretion and protect pancreatic β cells from apoptosis in diabetes. Similarly, zinc-alpha-2-glycoprotein has been linked to improved glycemic control and insulin sensitivity. Although our study did not establish causal relationships between ageing-related proteins and diseases, the identified prospective protein–disease associations may unveil potential therapeutic targets for cardiometabolic diseases that need further investigation, especially given that nearly two-thirds of the identified disease-associated ageing proteins are drug-targetable. For example, ten disease-associated ageing proteins can be targeted by zinc and zinc compounds, which could be particularly promising for intervention, as zinc is essential for immune responses and protein synthesis and is frequently deficient in the elderly population. Using a random forest machine-learning model, we developed the PHAS to distinguish between healthy and unhealthy participants. As anticipated, the PHAS demonstrated associations with improved clinical phenotypes and with a lower incidence of overall and specific ageing-related cardiometabolic diseases. Given that the PHAS is constructed using a concise combination of 22 serum ageing-related proteins, it could become a readily accessible tool for monitoring cardiometabolic health in the future. Furthermore, our analyses of lifestyle and multi-omics determinants of the PHAS unveil several potential therapeutic targets for future investigation. We found that gut microbiota may be one of the most important modifiable factors influencing the PHAS. A microbial score derived from the variance-contributing microbial species showed a robust positive association with the PHAS, aligning with previous evidence suggesting that gut microbiome patterns can reflect healthy ageing and predict survival in older age. However, our findings on the determinants of the PHAS should be interpreted with caution. For instance, the relatively low variance of the PHAS explained by lifestyle factors may reflect the low prevalence of smoking (6.87%) and alcohol drinking (7.02%) among participants. Importantly, this observation does not inherently conflict with the well-established detrimental health effects of smoking and alcohol drinking. We acknowledge several limitations in our study. Firstly, our MS-based proteomics approach identified a total of 438 serum proteins after quality control, which is fewer than targeted proteomics methods such as the SomaScan assay. Nonetheless, this approach still enables us to explore and identify novel ageing-related proteins in a hypothesis-free way.
Secondly, although we demonstrate the temporal relationship of the serum ageing-related proteins with various clinical outcomes, causality could not be established at this stage. Thirdly, defining 'healthy' and 'unhealthy' participants based on the presence of 14 ageing-related diseases may be insufficient and could potentially introduce bias caused by misclassification. Fourthly, well-studied proteins tend to have richer annotations than less-known proteins, which could introduce bias and affect the comprehensiveness of pathway analyses and the interpretation of results. Lastly, it is important to note that our longitudinal analyses were limited to a cohort of middle-aged and elderly Chinese participants, and the external validation of our primary findings was based on a small cohort of the elderly population with a short follow-up period. Therefore, it is imperative to carry out additional large-scale longitudinal studies to validate and generalize our findings. In conclusion, this longitudinal study expands our knowledge of the serum proteomic landscape in the context of ageing and its implications for human health. Our study has identified serum proteomic biomarkers associated with ageing and provided valuable insights into the underlying mechanisms of human ageing from a proteomics perspective. Importantly, our findings indicate that these discovered proteomic biomarkers have the potential to serve as valuable tools for monitoring and predicting ageing-related cardiometabolic disease. These ageing-related proteomic biomarkers hold great clinical relevance, offering promising intervention and therapeutic targets for addressing ageing-related morbidities.
Methods
Study design and participants
The present study complies with all relevant ethical regulations and was approved by the Ethics Committee of the School of Public Health at Sun Yat-sen University and the Ethics Committee of Westlake University. All participants provided written informed consent. Our study used data from the GNHS, an ongoing community-based prospective cohort study involving 4,048 middle-aged and older Chinese adults living in Guangzhou City in southern China (ClinicalTrials.gov identifier: NCT03179657). Participants were recruited between 2008 and 2013 and were followed up approximately every 3 years. Socio-demographic and lifestyle characteristics, dietary factors, medical history, anthropometric data and clinical traits were collected through face-to-face interviews and health examinations during follow-up. We collected a total of 7,890 serum samples from 3,840 participants during the 9-year follow-up period. After excluding participants who did not provide detailed demographic information (n = 44) and performing serum proteome data cleaning and filtration, we included 3,796 participants with 7,565 serum samples for analysis. We divided the 3,796 participants into two subcohorts: subcohort 1, which included 1,939 participants from a multi-omics substudy within the GNHS (with genomic and faecal metagenomic data), was set as the GNHS discovery cohort; and subcohort 2, which comprised the remaining 1,857 participants, was set as the GNHS validation cohort. Figure illustrates the distributions of the 7,565 serum samples with proteomic profiles across participants during follow-up in the GNHS discovery and validation cohorts.
The median baseline age in the GNHS discovery and validation cohorts was 57.6 years (first quartile, 53.9 years; third quartile, 61.8 years) and 57.1 years (first quartile, 53.6 years; third quartile, 62.1 years), respectively (Supplementary Table ). We included 124 participants from an independent external cohort. These participants, with a median age of 70 years (first quartile, 64 years; third quartile, 74 years), were recruited in 2009, and 76 of them were further followed up approximately 4 years later. Among these participants, we collected 200 serum samples from the two visits for proteomics measurement. We set this cohort as an external validation cohort to verify the ageing-related proteins.
Serum proteome profiling
Serum proteins were identified and quantified by MS-based proteomics. In brief, peptide samples were prepared from the serum samples and injected into an Eksigent NanoLC 400 System coupled to a TripleTOF 5600 system (SCIEX) for the SWATH-MS analysis. We measured serum samples of the GNHS discovery and validation cohorts in 178 and 132 batches, respectively, each containing 29 serum samples, two biological replicates and one pooled serum sample for quality control. The 200 serum samples from the external validation cohort were randomly assigned to seven out of the 178 measurement batches for the GNHS discovery cohort. Serum samples were acquired two to three times using the 20-min DIA-MS method as previously described. The MS files were analysed using DIA-NN software (v.1.8) with a spectral library containing 5,102 peptides and 819 unique proteins from the Swiss-Prot database of Homo sapiens. After data cleaning and filtration for the samples used in this study, we obtained a proteomic matrix containing 438 proteins from 4,637 serum samples in the GNHS discovery cohort, a matrix containing 413 proteins from 2,928 serum samples in the GNHS validation cohort and a matrix containing 432 proteins from 200 serum samples in the external validation cohort. Details of the data cleaning process have been described previously. Our proteomic data showed high consistency and reproducibility, as the median Pearson correlation coefficients between pooled serum samples, biological replicates and technical replicates were all ≥0.93 (ref. ).
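The reproducibility figure quoted above is a median of pairwise Pearson correlations across repeated quality-control measurements. A toy version of that check (simulated log-intensity profiles in place of the real pooled-sample injections) is:

```python
# Sketch of the QC reproducibility check: median pairwise Pearson correlation
# between repeated measurements of a pooled serum sample.
import numpy as np

rng = np.random.default_rng(6)
n_qc, n_prot = 30, 438
truth = rng.normal(20, 2, n_prot)                     # pooled-sample profile
qc = truth + rng.normal(0, 0.5, size=(n_qc, n_prot))  # repeated injections

corr = np.corrcoef(qc)                      # n_qc x n_qc correlation matrix
pairs = corr[np.triu_indices(n_qc, k=1)]    # unique sample pairs only
print(f"median Pearson r between QC measurements: {np.median(pairs):.2f}")
```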
The cycle time was configured to 10 ms, comprising 5 ms of dwell time and 5 ms of pause time. We log10-transformed the relative abundance of serum proteins and normalized the transformed values of each protein within each cohort. Note that this normalization of protein levels was essential when clustering serum proteome trajectories during follow-up.

Clustering serum proteome trajectories during follow-up

To capture the global proteome trajectories during ageing, we clustered changes in serum protein levels across three time points over the 9-year follow-up period. The analysis included 1,018 participants from the GNHS discovery cohort who had serum proteome data available across all three time points. We calculated the mean z-score for each of the 438 proteins at each time point and then clustered them by k-means clustering. The optimal number of clusters was determined by the elbow method using the sum of squared errors. Trajectories of proteins in each cluster were visualized by line plots. We also captured these trajectories for participants stratified by sex and baseline age (>60 years or ≤60 years) to explore potential heterogeneity.

Linear mixed models

To handle the longitudinal data during follow-up, we fitted linear mixed models to investigate the linear associations between variables:

$$\mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\boldsymbol{\mu} + \boldsymbol{\varepsilon}$$

where Y is a vector of observations, β is a vector of fixed effects, μ is a vector of random effects and ε is a vector of random errors. X and Z are model matrices of independent variables related to β and μ, respectively. In this study, we fitted random intercepts for participant ID. To identify protein trajectories that showed significant trends during follow-up, we fitted the following linear mixed model:

$$\text{Protein level} \sim \alpha + \beta_1\,\text{follow-up} + \beta_2\,\text{sex} + \beta_3\,\text{batch} + \beta_4\,\text{instrument} + \varepsilon + (1 \mid \text{participant ID})$$

where the ordinal variable of time points during follow-up was set as the independent variable, proteomic measurement batches and instruments were set as covariates and participant ID was modelled as a random intercept. α represents the intercept of the model. We did not include age as a covariate because of its collinearity with follow-up. To investigate associations of age and sex with protein levels during follow-up, we fitted the following linear mixed model:

$$\text{Protein level} \sim \alpha + \beta_1\,\text{age} + \beta_2\,\text{sex} + \beta_3\,\text{batch} + \beta_4\,\text{instrument} + \varepsilon + (1 \mid \text{participant ID})$$

We applied this model to the GNHS discovery cohort and validation cohort, respectively, to identify proteins associated with age and sex. The interaction between age and sex on protein levels was examined by incorporating an interaction term (age × sex) into the linear mixed model. We also fitted the above linear mixed model in the external validation cohort for proteins that showed significant associations in both the GNHS discovery and validation cohorts.
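No analysis code accompanies this section, so the following Python sketch is purely illustrative of the two steps just described: k-means clustering of mean z-score trajectories with the elbow method, and a random-intercept mixed model for a single protein. The long-format table, its column names and the example protein are hypothetical placeholders, not the published pipeline.

```python
import pandas as pd
from sklearn.cluster import KMeans
import statsmodels.formula.api as smf

# df: hypothetical long-format table, one row per (participant, time point, protein),
# with columns: participant_id, time_point (0/1/2), protein, z_value, sex,
# batch, instrument.

# --- Trajectory clustering: proteins x time points matrix of mean z-scores ---
traj = df.pivot_table(index="protein", columns="time_point",
                      values="z_value", aggfunc="mean")

sse = {}
for k in range(2, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(traj)
    sse[k] = km.inertia_  # sum of squared errors; pick the "elbow" visually

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(traj)

# --- Random-intercept linear mixed model for one (hypothetical) protein ---
one = df[df["protein"] == "APOE"]
fit = smf.mixedlm(
    "z_value ~ time_point + sex + batch + instrument",  # fixed effects
    data=one,
    groups=one["participant_id"],  # plays the role of (1 | participant ID)
).fit()
print(fit.summary())
```

One fit per protein, looped over all 438 proteins with the resulting P values pooled for FDR adjustment, would mirror the workflow described above.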
To determine the relationships of the identified ageing-related proteins and the PHAS with 32 clinical traits, we used the following linear mixed model:

$$\text{Protein level} \sim \alpha + \beta_1\,\text{clinical trait} + \beta_2\,\text{age} + \beta_3\,\text{sex} + \beta_4\,\text{batch} + \beta_5\,\text{instrument} + \varepsilon + (1 \mid \text{participant ID})$$

Here, the 32 clinical traits of interest were anthropometric parameters, including BMI, waist circumference, systolic blood pressure and diastolic blood pressure; serum lipid profiles, including high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides and total cholesterol; biomarkers of glucose metabolism, including fasting blood glucose, insulin and haemoglobin A1c; inflammatory cytokines, including IL-1, IL-6 and tumour necrosis factor; hepatic biomarkers, including serum alanine transaminase, aspartate aminotransferase and superoxide dismutase; renal biomarkers, including serum alkaline phosphatase, uric acid and urine creatinine; and the total Mini-Mental State Examination (MMSE) score and MMSE scores of immediate orientation, spatial orientation, temporal memory, attention, delayed recall, naming, verbal repetition, verbal comprehension, reading, writing and constructional praxis. We standardized the values of these clinical traits to facilitate the comparability of coefficients for the associations between these traits and proteins. We conducted these analyses only with the GNHS discovery and GNHS validation cohorts, but not the external validation cohort, owing to data availability. In addition, MMSE measures were only available at the third follow-up for the GNHS discovery and validation cohorts. For all these models, we calculated FDR-adjusted P values (Q values) using the Benjamini and Hochberg approach to control for multiple testing.

Generalized linear mixed models by L1-penalized estimation

To examine whether the identified ageing-related proteins could predict chronological age, we used the GLMMLasso model based on the longitudinal data. We trained the model on 1,018 participants from the GNHS discovery cohort who had three serum proteome measurements taken during follow-up and tested its performance on the remaining GNHS discovery cohort (1,583 observations), the GNHS validation cohort (2,928 observations) and the external validation cohort (200 observations). We included sex and the 86 ageing-related proteins as initial input variables and determined the optimal GLMMLasso model using the Akaike information criterion. A subset of 83 ageing-related proteins was included in the final GLMMLasso model. The performance of the GLMMLasso model was evaluated using the Pearson correlation between actual and predicted age.
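Python has no direct GLMMLasso equivalent, so the sketch below swaps in a plain cross-validated LASSO, ignoring the random-intercept structure of the repeated measures, purely to illustrate the train/test workflow and the evaluation metric; it also shows the Benjamini-Hochberg adjustment used throughout. All variable names are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LassoCV
from statsmodels.stats.multitest import multipletests

# X_train: sex + 86 ageing-related protein levels for the training participants;
# y_train: chronological age. LassoCV is a stand-in for GLMMLasso and does
# NOT model the per-participant random intercepts.
lasso = LassoCV(cv=10, random_state=0).fit(X_train, y_train)
kept = np.flatnonzero(lasso.coef_)  # proteins retained by the L1 penalty

pred = lasso.predict(X_test)
r, _ = pearsonr(y_test, pred)       # Pearson r between actual and predicted age
print(f"{kept.size} features kept; Pearson r = {r:.2f}")

# Benjamini-Hochberg FDR control over a vector of model P values:
reject, q_values, _, _ = multipletests(p_values, method="fdr_bh")
```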
Random forest model for the PHAS

We used a random forest model (randomForest package in R) to identify proteomic signatures that can differentiate healthy from unhealthy participants at baseline. We defined participants as unhealthy if they had any of 14 ageing-related diseases: dyslipidemia, T2D, hypertension, stroke, coronary heart disease, fatty liver, cirrhosis, hepatitis, renal disease, cancer, gout, rheumatoid arthritis, cataracts or Parkinson's disease. To ensure the independence of samples in the random forest model, we used the 1,785 baseline participants from the GNHS discovery cohort as the training dataset and the 1,629 baseline participants from the GNHS validation cohort as the validation dataset to test the performance of the model. We initially trained two random forest models: one included all 408 serum proteins common to both the training and validation datasets as input features, and the other included the 86 ageing-related proteins as input features. Based on the model that included the 86 ageing-related proteins, we derived a more concise model that included the top 22 ageing-related proteins ranked by mean decrease in accuracy, using tenfold cross-validation. To validate the superiority of the model using the top 22 ageing-related proteins, we compared it to ten simulated models, each using a random subset of 22 proteins selected from the total pool of 408 proteins. As the 22 ageing-related proteins were associated with age and also largely associated with sex and BMI, we investigated whether their predictive value was attributable to age, sex and BMI by training two additional random forest models: one including only age, sex and BMI, and the other including age, sex, BMI and the 22 ageing-related serum proteins. The performance of all random forest models was assessed by calculating the AUC. Differences in model performance were examined by the DeLong test.

We used the random forest model based on the top 22 ageing-related proteins to generate the PHAS, which reflects the probability of a participant being classified as 'healthy' according to the random forest model. This probability (PHAS) was estimated using the tree voting aggregation approach, which calculates the proportion of trees voting for the 'healthy' class:

$$\text{Probability}(\text{'healthy'} \mid X) = \sum_{t=1}^{T} h_t(X, \text{'healthy'})/T$$

where X represents the matrix of the 22 ageing-related proteins for a particular participant, T represents the total number of trees in the random forest model (set to 1,000 in our study) and the function h_t(X, 'healthy') denotes the prediction of whether a participant was classified as 'healthy' in tree t (t = 1, 2, 3, …, T).
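As an illustration of the tree-voting construction (not the authors' R code), a scikit-learn sketch follows. On a forest of 1,000 trees, `predict_proba` approximates the vote fraction in the formula above; all data names are hypothetical placeholders.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score

# X_train/X_valid: matrices of the 22 selected proteins (hypothetical columns);
# y_train/y_valid: 1 = 'healthy', 0 = 'unhealthy' at baseline.
rf = RandomForestClassifier(n_estimators=1000, random_state=0)
rf.fit(X_train, y_train)

# PHAS: predicted probability of the 'healthy' class, approximating the
# fraction of the 1,000 trees that vote 'healthy' for each participant.
healthy_col = list(rf.classes_).index(1)
phas = rf.predict_proba(X_valid)[:, healthy_col]

print("AUC:", roc_auc_score(y_valid, phas))

# Permutation importance is a rough analogue of R's 'mean decrease in
# accuracy' used to rank and shortlist the top 22 proteins.
imp = permutation_importance(rf, X_valid, y_valid, n_repeats=10, random_state=0)
print(imp.importances_mean)
```

The DeLong test for comparing AUCs is not in scikit-learn and would need a separate implementation or package.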
Functional enrichment analysis for the serum proteins

To explore the biological significance of the identified groups of proteins, we conducted functional enrichment analysis using the g:Profiler toolkit. We mapped all the proteins to gene Entrez IDs as input for functional enrichment analysis. For proteins mapped to multiple Entrez IDs, we used only the first Entrez ID to avoid false positive enrichment. We tested the over-representation of gene groups of interest against the background of H. sapiens (human) genes. To correct for multiple testing, we calculated Q values using the Benjamini and Hochberg approach independently for the KEGG, Reactome and WikiPathways databases as well as the three subclasses of GO: GO molecular function, GO cellular component and GO biological process.

IPA for protein networks

To gain insight into the biological implications and protein–protein networks of the identified ageing-related proteins, we used QIAGEN IPA (v.122103623), an application that facilitates analysis and interpretation of omics data based on the Ingenuity Knowledge Base. We input the gene symbols of the 86 ageing-related proteins into IPA. For the protein haemoglobin subunit alpha (HBA), which mapped to multiple genes (HBA1, HBA2) (Supplementary Table ), we used the first gene symbol (HBA1). We then conducted the IPA network analysis to identify interactions and networks of the ageing-related proteins. Protein networks are algorithmically generated and include both direct and indirect confirmed relationships between genes and gene products. Within each network, molecules having the most interactions with others were identified as hubs, and edges were used to represent functional activation or inhibition as well as the regulation of biological processes between molecules. Each network was limited to a maximum of 35 molecules to keep it concise and discrete from the others. To further determine the biological relevance of proteins in each network, we performed functional enrichment analysis for ageing-related proteins in each network using the g:Profiler toolkit. For exploratory purposes, we also identified the upstream regulators of the ageing-related proteins using IPA core analysis.

Cox proportional hazards models

To explore the potential role of the identified ageing-related proteins in healthy ageing, we examined the prospective associations between baseline levels of ageing-related proteins and the incidence of 14 chronic diseases during follow-up using Cox proportional hazards models in Stata (v.15.0). The diseases of interest included dyslipidemia, T2D, hypertension, stroke, coronary heart disease, fatty liver, cirrhosis, hepatitis, renal disease, cancer, gout, rheumatoid arthritis, cataracts and Parkinson's disease. To ensure statistical power, particularly for diseases with limited cases during the follow-up period, we analysed data from the entire GNHS study (3,414 participants available at baseline) instead of separating the GNHS discovery and GNHS validation cohorts. To account for potential heterogeneity, we included subcohort information as a covariate in the Cox proportional hazards models. We excluded participants with the disease of interest at baseline, and adjustments were made for age, sex, BMI, subcohort and the presence of other diseases. To correct for multiple testing, we calculated Q values using the Benjamini and Hochberg approach. We then explored the drug-target information for proteins significantly associated with the incidence of any of the 14 chronic diseases (FDR < 0.05) by consulting the DrugBank database. To investigate the long-term health implications of the PHAS, we examined the prospective associations between baseline PHAS and the incidence of chronic diseases during follow-up using the following Cox proportional hazards models: model 1 was adjusted for age, sex, BMI and subcohort; model 2 was adjusted for the covariates in model 1 plus the presence of the other 13 diseases at baseline. It is important to note that for the overall incidence of chronic diseases, model 2 was the same as model 1, given that participants with any of the 14 diseases at baseline were excluded.
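The original survival analyses were run in Stata v.15.0; the following sketch uses the Python lifelines package only to illustrate the adjustment structure for one hypothetical protein-disease pair, with invented column names.

```python
from lifelines import CoxPHFitter
from statsmodels.stats.multitest import multipletests

# df: one row per participant. For each disease of interest, participants
# who already had that disease at baseline are excluded first.
d = df[df["t2d_baseline"] == 0]

cph = CoxPHFitter()
cph.fit(
    d[["protein_x", "age", "sex", "bmi", "subcohort", "other_disease_count",
       "followup_years", "t2d_incident"]],
    duration_col="followup_years",   # time to event or censoring
    event_col="t2d_incident",        # 1 = incident T2D during follow-up
)

hr = cph.hazard_ratios_["protein_x"]          # hazard ratio per unit protein
p = cph.summary.loc["protein_x", "p"]

# Repeat over all protein-disease pairs, then control the FDR:
# reject, q_values, _, _ = multipletests(all_p_values, method="fdr_bh")
```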
Distance-based and linear model-based variance estimation

To explore the potential determinants of the PHAS and the 22 ageing-related proteins included in the random forest model, we estimated the proportion of their variance explained by intrinsic factors (age, sex, BMI), lifestyle (smoking, drinking and physical activity), dietary factors, gut microbiota and host genetics. Physical activity was measured in metabolic equivalents of task. Dietary intake of 15 food groups, including whole grain, refined grain, vegetables, fruits, legumes, nuts, red meat, poultry, fish, dairy, egg, tea, coffee, juices and sweetened beverages, was assessed by a validated food frequency questionnaire. For the gut microbiota, we used the 219 of 1,008 microbial species that were present in at least 10% of the samples. For host genetics, we used 65 genetic variants that were significantly associated with any of the 18 proteins at P < 5 × 10⁻⁸ in our previous Chinese proteome genome-wide association study. We performed the variance estimation analysis on 1,325 participants from the GNHS discovery cohort for whom multi-omics data were available.

We used a distance-based PERMANOVA procedure to estimate the explained variance for the entire set of 22 serum proteins. To account for potential overfitting of included factors, we first identified individual factors that were significantly associated with the β-diversity of the 22 serum proteins by PERMANOVA. Only significant individual factors were included in the overall PERMANOVA model. We then performed backward selection; that is, we eliminated individual factors that were not significant in the overall PERMANOVA model and re-fitted the model until all included factors were significant. We applied linear models to estimate the explained variance of the PHAS and of each of the 22 serum proteins, using the adjusted R² to represent the explained variance. To address potential overfitting, we selected contributing factors using the LASSO model at 'Lambda.min' with tenfold cross-validation, which provides a conservative estimate of the explained variance.

For exploratory purposes, we examined the pairwise associations of the PHAS with the 18 microbial species that were selected by LASSO to explain the variance of the PHAS. Linear models adjusted for age, sex and BMI were used for this analysis. To account for multiple testing, we calculated FDR-adjusted P values (Q values) using the Benjamini and Hochberg approach. Furthermore, we calculated a microbial score by summing the relative abundances weighted by the coefficients representing the association between each of the 18 microbial species and the PHAS. We then investigated the association between the microbial score and the PHAS using linear models adjusted for age, sex and BMI, and replicated this analysis in participants from the external validation cohort for whom multi-omics data were available.
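The distance-based PERMANOVA step has a natural R analogue (for example, vegan's adonis-style models); the sketch below illustrates only the linear-model half of the procedure in Python: LASSO selection of contributing factors at the cross-validated penalty (the counterpart of 'Lambda.min'), followed by an ordinary least squares fit whose adjusted R² is taken as the explained variance. Variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LassoCV

# Z: ndarray of candidate determinants (age, sex, BMI, lifestyle, diet,
#    microbial species, genetic variants); y: PHAS or one protein's level.
sel = LassoCV(cv=10, random_state=0).fit(Z, y)   # ~ LASSO at 'Lambda.min'
keep = np.flatnonzero(sel.coef_)                 # factors retained by the penalty

# Refit the selected factors with OLS; adjusted R^2 = explained variance.
ols = sm.OLS(y, sm.add_constant(Z[:, keep])).fit()
print("Explained variance (adjusted R^2):", ols.rsquared_adj)
```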
Reporting summary

Further information on research design is available in the Reporting Summary linked to this article.
First use of Simulation in Therapeutic Patient Education (S-TPE) in adults with diabetes: a pilot study
Therapeutic patient education (TPE) administered in a hospital or community context helps chronically ill patients to develop self-care and daily life skills within the constraints imposed by the disease. TPE has met the needs of patients with diabetes, asthma and heart failure. Patient benefits include greater compliance, fewer complications, higher quality of life and an increase in perceived health status. TPE reduces hospital stays and costs, but there are not enough trained educators, especially in remote areas. Most TPE programmes only offer instruction but neglect skills necessary for everyday life. Some self-care skills may be difficult to acquire during TPE, including managing uncontrolled and severe hypoglycaemia. The current evaluation of TPE's contribution to healthcare in patients with diabetes may be too narrow and inadequately captures its effects. In a systematic review of TPE for type 1 diabetes, Fonte et al concluded that studies are too focused on clinical, biological and economic outcomes and often fail to measure psychosocial or coping skills, for which patients must acquire social and practical skills to cope with chronic illness. Unfortunately, no validated teaching method both instructs patients in the necessary skill set and ensures that they can use those skills in daily life. If TPE training were combined with simulation, patients might find it easier to develop real-world coping skills. Simulation provides a structured learning environment in which patients can learn to handle real-world situations and develop their skills and abilities without imposing ethical, economic or technical risks. Simulation improves self-efficacy in parents caring for children with diabetes and children leaving neonatal care. Simulation develops skills in professionals but is not yet integrated into TPE. Whether simulation would extend the benefits of TPE is a matter of debate. Coleman advocated its use because it was successful in training programmes for health professionals, but Lefèvre et al thought Simulation in Therapeutic Patient Education (S-TPE) may be too complicated for patients and could present difficulties for multimorbid, low-literacy or fragile patients. We thus designed this pilot study to determine whether S-TPE was feasible and acceptable to patients and educators and to identify facilitators of and barriers to its incorporation into routine TPE practice. We also aimed to ensure that the simulation, and its methods and means, were accessible to the carers carrying out TPE. This research is an essential step towards a subsequent multi-centre trial of the effects of S-TPE on patients.

Study population

The study population included adults with type 1 or 2 diabetes who needed insulin, and the diabetes educators in charge of TPE at their institution. The inclusion criteria were as follows: to be of legal age, to have given unopposed consent and to have insulin-dependent diabetes with participation in a full TPE programme (three sessions) for the implementation of a FreeStyle Libre. The exclusion criteria were as follows: to be subject to a legal protection measure (curatorship, guardianship) or a legal safeguard measure, or to be of legal age but incapable or unable to express consent. Patients, drawn at random from the list of eligible patients and then contacted by telephone, were enrolled between March and June 2019 at Dijon Bourgogne University Hospital.
They received the protocol in the mail, and then the educator explained the study over the phone. All participants provided written informed consent before starting the trial. All educators trained in diabetes therapeutic patient education were eligible for the trial and provided informed consent.

Outcomes

Our two primary outcomes were (1) the patients' and educators' perception of the usefulness of S-TPE and (2) the patient satisfaction level at the conclusion of the simulation sequence. Our secondary outcomes were (1) the change in patients' S-TPE self-efficacy score (pre to post), (2) patients' anxiety scores and (3) the organisational, human, material and temporal facilitating and limiting factors for patients and educators. To obtain these outcomes, we administered a series of five questionnaires to patients and two to educators.

Ahead of the S-TPE session, the following questionnaires were given to patients only, to measure their baseline scores. Self-efficacy was measured by the self-administered Schwarzer General Self-Efficacy Scale (GSES), which we adapted to patients who wear a continuous interstitial glucose meter ( ). Anxiety was measured by the validated French translation of the State and Trait Anxiety Inventory (STAI). The STAI contains two questionnaires, one measuring the respondent's usual emotional state (trait anxiety questionnaire, STAI-Y1) and one measuring their situational anxiety (state anxiety questionnaire, STAI-Y2). Each questionnaire includes 20 items rated on a 4-point Likert scale ('almost never' to 'almost always') ( ). The level of a patient's nervousness and anxiety during the S-TPE session was determined by their score on the STAI-Y2 questionnaire. Score thresholds are detailed in .

At the end of the S-TPE session, the following questionnaires were administered to patients and educators, or to patients only. The GSES and STAI-Y2 were re-administered to patients only, to obtain post-intervention scores for self-efficacy and situational anxiety. A self-administered questionnaire, scored on a 5-point Likert scale from 'strongly disagree' to 'strongly agree', was given to patients and educators to measure their perception of S-TPE's usefulness; the patients' questionnaire contains four items ( ) and the educators' eight items ( ). Patients completed a self-administered satisfaction questionnaire ( ) that contained 12 items, scored on a 5-point Likert scale from 'strongly disagree' to 'strongly agree'. Based on level I ('Evaluation—Reaction') of Kirkpatrick's global model for evaluating training courses, and on criteria for measuring the quality of therapeutic education, this questionnaire incorporated the following elements: objectives, expectations, progression, questioning, method, place, duration, quality of the exchanges with the participants and professionals, and recommendation to another person. A final self-administered questionnaire, given to patients ( ) and educators ( ), contained three open-ended questions on the organisational, human, material and temporal facilitating and limiting factors of S-TPE, and on areas for improvement.

Supplementary data: 10.1136/bmjopen-2021-049454.supp1

To improve and deepen the transcripts of responses to the three open-ended questions, an investigator (CP) conducted a semi-directed follow-up phone interview 15–30 days after the S-TPE session.
Skills covered by the S-TPE session

We based our trial on S-TPE recommendations made by a group of 25 TPE and simulation experts and expert diabetes patients who came to consensus at a conference in December 2017. They recommended 10 objectives and specified learner characteristics, conditions of use and ethical conditions ( ).

Box 1: Skills for which simulation brings added value to therapeutic patient education (10 statements)

D1: Simulation is recommended for learning to cope with unusual/infrequent situations.
D2: Simulation is recommended for developing communication skills.
D3: Simulation is recommended for promoting the integration of new technologies in disease self-management.
D4: Simulation is recommended for promoting partnerships between the care team and the patient for his/her own health or as an expert patient.
D5: Simulation is recommended for learning to cope with stress.
D6: Simulation is recommended for reinforcing the feeling of self-efficacy.
D7: Simulation is recommended for learning how to adjust treatment.
D8: Simulation is recommended for learning how to manage a crisis or emergency.
D9: Simulation is recommended to learn to involve the social network in care.
D10: Simulation is recommended for increasing the motivation to take care of oneself.

We set up the pilot to test two of these objectives: 'use simulation to promote integration of new technologies into self-management of diseases' and 'use simulation to show patients how to manage a crisis or emergency'. The simulation was developed by educators, including two nurses and a doctor in charge of the TPE, the nurse in charge of the diabetology department, a TPE expert and the three people responsible for simulation training at our institution, two of whom were trained in both TPE and simulation. The educators decided to incorporate three more objectives, which can be summarised as 'taking the proper steps when faced with hypoglycaemia': (1) identifying possible signs of hypoglycaemia to initiate appropriate management; (2) interpreting the screen data of the continuous interstitial glucose meter and (3) knowing how to act in cases of hypoglycaemia. The educators ran through the simulation three times to accustom themselves to pre-briefing and debriefing patients who might have different reactions.

Description of S-TPE

Standard TPE comprises four sessions in which two or three trained physicians or nurses teach up to 10 patients. The educators conducted a single session with each group of patients; one patient simulated and the others observed. Our S-TPE sessions were led by a trained health professional who had been practicing TPE for at least 3 years.
At the beginning of the S-TPE session, a research technician administered the three pre-intervention questionnaires to patients. We held three S-TPE sessions, of seven, nine and eight patients; each patient attended one session. In each session, one patient was asked to volunteer to be the 'simulating' patient; they were filmed and broadcast to the other patients, who observed in a nearby room. Simulating patients were pre-briefed on the scenario and played the scene, in a standardised room, as if they were at home. They simulated hypoglycaemia and acted after an interstitial glucose reading. Post-simulation, all patients debriefed together. We used one simulating patient per group because participants and observers benefit equally.

The S-TPE session had three phases. A nurse led the briefing phase, which familiarised patients with the material, context, confidentiality, ethics rules, instructions and expectations for the simulation. Educators instructed patients to, above all, be kind to one another and to suspend judgement, and they guaranteed this behaviour. The simulating patient was also briefed on the scenario and paraphrased it to ensure they understood; they were told that educators would intervene, in the form of a visit during the simulation, if they deviated from the scenario. In the scenario phase, the simulating patient's performance was guided by the trainer. All educators and patients participated in the steps of the debriefing phase: description, analysis, synthesis ( ). Then, patients and educators filled out the post-intervention questionnaires (see Outcomes). The intervention followed the methodology recommended by the experts in their consensus conference. TPE sessions always remained focused on the objectives of the session, that is, here, the management of hypoglycaemia with an interstitial glucose meter, as well as the recognition of the signs of hypoglycaemia, glycaemic corrective actions and the particularities of continuous interstitial glucose reading. Everything was planned in the guide written for the session so that all three groups received the same education. The overall process of this research is shown in the study flow diagram.

Supplementary data: 10.1136/bmjopen-2021-049454.supp2

Statistical analysis

Our primary analysis was based on data provided by patients who answered the study questionnaires. We used descriptive statistics to characterise the patients' socio-demographic characteristics and expressed qualitative variables as numbers and percentages. Quantitative variables were reported as means and their SD, with minimum (min) and maximum (max) values for scores. We used a χ² test or Fisher exact test to compare qualitative variables pre-intervention and post-intervention. We compared means with a Student t-test for matched pairs or a Wilcoxon signed-rank test, after we determined the distribution (see the illustrative sketch at the end of this section). For all tests, we considered p<0.05 significant. SAS V.9.4 was used for all analyses.

In our qualitative analysis, we organised and interpreted the narrative data, both written and transcribed (see Study population), to identify themes and create reference categories. One person (CP) condensed the data and coded it to assign keywords. CP extracted in vivo quotes, characterised them with keywords, sorted them into categories and then derived themes from the categories. We then described the different dimensions and identified barriers to and facilitators of S-TPE to determine which factors would need to be modified or maintained for a large-scale efficacy trial. Our analysis adhered to the Standards for Reporting Qualitative Research (SRQR). The primary objective of this non-randomised study was to estimate the feasibility and acceptability of S-TPE. The sample of 24 patients was based on the estimate of Hennink et al that 16–24 qualitative interviews generally achieve saturation; no formal sample size calculation was performed.

Patient and public involvement

Patients and/or the public were involved in the consensus conference that paved the way for this work, as well as in the construction of the simulation and the design of the study. Information on the publication of this study will be provided to the patients on the website ( http://www.chu-dijon.fr ) and the social networks of our hospital.
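As referenced under Statistical analysis, the original analyses were run in SAS V.9.4; the following Python/scipy sketch only illustrates the stated decision logic for the pre/post score comparisons (paired t-test if the differences look normal, Wilcoxon signed-rank otherwise). The arrays and the normality check are illustrative assumptions, not the authors' SAS code.

```python
import numpy as np
from scipy import stats

# pre, post: paired NumPy arrays of patients' GSES (self-efficacy) or
# STAI-Y2 (situational anxiety) scores before and after the S-TPE session.
diff = post - pre
_, p_norm = stats.shapiro(diff)  # assess the distribution of the differences

if p_norm > 0.05:
    stat, p = stats.ttest_rel(pre, post)   # Student t-test for matched pairs
else:
    stat, p = stats.wilcoxon(pre, post)    # Wilcoxon signed-rank test

print(f"p = {p:.3f} (significant if p < 0.05)")
```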
The study population included adults with type 1 or 2 diabetes who needed insulin and diabetes educators in charge of TPE at their institution. The criteria for inclusion were as follows: to be of legal age, to have given their unopposed consent and to be insulin-dependent diabetic who had participated in a full TPE programme (three sessions) for the implementation of a Free Style Libre. The exclusion criteria were as follows: to be subject to a legal protection measure (curatorship, guardianship) or the subject of a legal safeguard measure or to be of legal age and incapable or unable to express consent. Patients, drawn at random from the list of eligible patients and then contacted by telephone, were enrolled between March and June 2019 at Dijon Bourgogne University Hospital. They received the protocol in the mail, and then the educator explained the study over the phone. All participants provided written informed consent before starting the trial. All educators trained in diabetes patient therapeutic education were eligible for the trial and provided informed consent. Our two primary outcomes were (1) the patients’ and educators’ perception of the usefulness of S-TPE and (2) patient satisfaction level at the conclusion of the simulation sequence. Our secondary outcomes were (1) change in patients’ S-TPE self-efficacy score (pre to post), (2) patients’ anxiety scores and (3) organisational, human, material and temporal facilitating and limiting factors for patients and educators. To obtain these outcomes, we administered a series of five questionnaires to patients and two to educators. Ahead of the S-TPE session, the following questionnaires were given to patients only, to measure their baseline score: Self-efficacy was measured by the self-administered Schwarzer General Self-Efficacy Scale (GSES), which we adapted to patients who wear a continuous interstitial glucose metre ( ). Anxiety was measured by the validated French translation of the State and Trait Anxiety Inventory (STAI). The STAI contains two questionnaires, one measuring the respondent’s usual emotional state (trait anxiety questionnaire—STAI-Y1) and one measuring their situational anxiety (state anxiety questionnaire—STAI-Y2). Each questionnaire includes 20 items to be rated on a 4-point Likert scale (‘almost never’ to ‘almost always’) ( ). The level of a patient’s nervousness and anxiety during the S-TPE session was determined by their score on the STAI-Y2 questionnaire. Score thresholds are detailed in . At the end of the S-TPE session, the following questionnaires were administered to patients and educators, or patients only: The GSES and STAI-Y2 were re-administered to patients only, to obtain post-intervention scores for self-efficacy and situational anxiety. A self-administered questionnaire, scored on a 5-point Likert scale from ‘strongly disagree’ to ‘strongly agree’, was given to patients and educators, to measure their perception of S-TPE’s usefulness. The patients’ questionnaire contains four items ( ) and the educators’ eight items ( ). Patients completed a self-administered satisfaction questionnaire ( ) that contained 12 items, scored on a 5-point Likert scale from ‘strongly disagree’ to ‘strongly agree’. 
Based on level I (‘Evaluation—Reaction’) of Kirkpatrick’s global model for evaluating training courses, and on criteria for measuring the quality of therapeutic education, this questionnaire incorporated the following elements: objectives, expectations, progression, questioning, method, place, duration, quality of the exchanges with the participants and professionals, and recommendation to another person. A final self-administered questionnaire administered to patients ( ) and educators ( ) containing three open-ended questions on the organisational, human, material, temporal, facilitating and limiting factors of S-TPE and areas for improvement. 10.1136/bmjopen-2021-049454.supp1 Supplementary data To improve and deepen the transcripts of responses to the three open-ended questions, an investigator (CP) conducted a semi-directed follow-up phone interview 15–30 days after the S-TPE session. We based our trial on S-TPE recommendations made by a group of 25 TPE and simulation experts and expert diabetes patients who came to consensus at a conference in December 2017. They recommended 10 objectives and specified learner characteristics, conditions of use and ethical conditions ( ). Box 1 Skills for which simulation brings added value to therapeutic patient education (10 statements) D1: Simulation is recommended for learning to cope with unusual/infrequent situations. D2: Simulation is recommended for developing communication skills. D3: Simulation is recommended for promoting the integration of new technologies in disease self-management. D4: Simulation is recommended for promoting partnerships between the care team and the patient for his/her own health or as an expert patient. D5: Simulation is recommended for learning to cope with stress. D6: Simulation is recommended for reinforcing the feeling of self-efficacy. D7: Simulation is recommended for learning how to adjust treatment. D8: Simulation is recommended for learning how to manage a crisis or emergency. D9: Simulation is recommended to learn to involve the social network in care. D10: Simulation is recommended for increasing the motivation to take care of oneself. We set up the pilot to test these objectives: ‘use simulation to promote integration of new technologies into self-management of diseases’ and ‘use simulation to show patients how to manage a crisis or emergency’. The simulation was developed by educators, including two nurses and a doctor in charge of the TPE, the nurse in charge of the diabetology department, a TPE expert and the three people responsible for simulation training at our institution, two of whom were trained in both TPE and simulation. The educators decided to incorporate three more objectives that can be described as ‘taking the proper steps when faced with hypoglycaemia’: (1) ‘identifying possible signs of hypoglycaemia to initiate appropriate management’; (2) ‘interpreting the screen data of the continuous interstitial glucose metre’ and (3) ‘how to act in cases of hypoglycaemia’. The educators ran through the simulation three times to accustom themselves to pre-briefing and debriefing patients who might have different reactions. Standard TPE comprises four sessions where two or three trained physicians or nurses teach up to 10 patients. The educators conducted a single session with each group of patients. One patient simulated and the others observed. Our S-TPE were led by a trained health professional who had been practicing TPE for at least 3 years. 
At the beginning of the S-TPE session, a research technician administered the three pre-intervention questionnaires to the patients. We held three S-TPE sessions, of seven, nine and eight patients; each patient attended one session. In each session, one patient was asked to volunteer as the 'simulating' patient; they were filmed and the video was broadcast to the other patients, who observed from a nearby room. The simulating patient was pre-briefed on the scenario and played the scene, in a standardised room, as if at home: they simulated hypoglycaemia and acted on a continuous interstitial glucose reading. Post-simulation, all patients debriefed together. We used one simulating patient per group because participants and observers benefit equally.

The S-TPE session had three phases. A nurse led the briefing phase, which familiarised patients with the material, context, confidentiality, ethics rules, instructions and expectations for the simulation. Educators instructed patients, above all, to be kind to one another and to suspend judgement, and they ensured this behaviour. The simulating patient was also briefed on the scenario and paraphrased it to confirm understanding; the patient was told that educators would intervene, in the form of a visit during the simulation, if they deviated from the scenario. In the scenario phase, the simulating patient's performance was guided by the trainer. All educators and patients participated in the steps of the debriefing phase: description, analysis and synthesis. Then, patients and educators filled out the post-intervention questionnaires (see Outcomes above). The intervention followed the methodology recommended by the experts in their consensus conference. TPE sessions always remained focused on the objectives of the session, here the management of hypoglycaemia with a continuous interstitial glucose metre, as well as recognition of the signs of hypoglycaemia, glycaemic corrective actions and the particularities of continuous interstitial glucose readings. Everything was planned in the guide written for the session, so all three groups received the same education.

Our primary analysis was based on the data provided by patients who answered the study questionnaires. We used descriptive statistics to characterise the patients' socio-demographic characteristics: qualitative variables were expressed as numbers and percentages, and quantitative variables were reported as means and SDs, with minimum and maximum values for scores. We used a χ2 test or Fisher exact test to compare qualitative variables pre-intervention and post-intervention, and compared means with a Student t-test for matched pairs or a Wilcoxon signed-rank test, depending on the distribution. For all tests, we considered p<0.05 significant. SAS V.9.4 was used for all analyses.

In the qualitative analysis, we organised and interpreted the narrative data, both written and transcribed, to identify themes and create reference categories. One investigator (CP) condensed the data and coded it with keywords: CP extracted in vivo quotes, characterised them with keywords, sorted them into categories and then derived themes from the categories. We then described the different dimensions and identified the barriers and facilitators of S-TPE, to determine which factors would need to be modified or maintained for a large-scale efficacy trial.
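As an illustration of the paired pre/post comparison described above, the following is a minimal sketch in Python with scipy; the score arrays are hypothetical, and the Shapiro-Wilk check shown is one common way to decide between the two paired tests (the study does not state which normality check was used).

    import numpy as np
    from scipy import stats

    pre = np.array([35.1, 36.0, 34.2, 33.8, 35.5])   # hypothetical pre-S-TPE scores
    post = np.array([32.7, 33.1, 32.0, 33.0, 31.9])  # hypothetical post-S-TPE scores

    _, p_norm = stats.shapiro(post - pre)            # check the paired differences
    if p_norm > 0.05:
        stat, p = stats.ttest_rel(pre, post)         # Student t-test for matched pairs
    else:
        stat, p = stats.wilcoxon(pre, post)          # Wilcoxon signed-rank test
    print(stat, p)                                   # significant if p < 0.05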
Our analysis adhered to the Standards for Reporting Qualitative Research (SRQR). The primary objective of this non-randomised study was to estimate the feasibility and acceptability of S-TPE. The sample of 24 patients was based on the estimate of Hennink et al that 16-24 qualitative interviews generally achieve saturation; no formal sample size calculation was performed. Patients and/or the public were involved in the consensus conference that paved the way for this work, as well as in the construction of the simulation and the design of the study. Information on the publication of this study will be provided to patients on the website ( http://www.chu-dijon.fr ) and the social networks of our hospital.

Patients' description

In total, 24 patients were included in the study. Wearing an interstitial glucose reading device was an inclusion criterion, and one patient, included in error because he was not wearing such a device, was quickly excluded. The 23 remaining participants were 63±15 years old, with a 29.5±15-year history of diabetes. Among them, 18 (78%) had comorbid conditions: 4 (17%) had thyroid disorders; 13 (57%) had cardiopulmonary disorders; 13 (57%) had miscellaneous disorders; 10 (43%) had diabetes-related disorders and 5 (21%) were deemed more than 70% disabled under the French social security rating.

Results of the main analysis in patients and educators

Patients found S-TPE to be very useful (mean score 20.6±3.5 out of a maximum of 25) and expressed high satisfaction at the end of the session (mean 51.9±4.9 out of a maximum of 60). Perceived usefulness of S-TPE was also high among educators, whose scores increased after each session (30.6±5.8, 36.0±1.4 and 37.5±3.5 for sessions 1, 2 and 3, respectively, out of a maximum of 40).

Results of the secondary analysis in patients and educators

Our analysis of the post-S-TPE phone interviews with patients identified the following characteristics of S-TPE; the results are summarised for each group and by theme, with supporting quotes. All patients (23/23) expressed their overall satisfaction, to the point of stating that 'It allowed me to modify my practice'. When asked about technical improvements, two patients suggested that the sound should be improved; one of them also stated that he/she did not want the session to last more than 2 hours, and the other reported difficulty expressing himself/herself in front of the group. When asked about potential improvements, 19 patients (82.6%) did not express a need to change the technique, one patient suggested holding more varied sessions and one expressed a wish to review the objectives. The benefits of S-TPE expressed by the patients were as follows: relationship skills (7/23, 30.4%), 'The exchanges, the relationships with the other participants are richer'; pedagogical qualities (15/23, 65.2%), 'It's more concrete, it allows you to approach problems in different ways' and effects on daily life (8/23, 34.8%), 'I've changed, something clicked,' 'I know now', 'This method removes certain beliefs'.
Our analysis of the post-S-TPE phone interviews with educators identified these points: 'development of coping skills (not feeling alone, gaining self-confidence, managing stress, talking about one's illness)'; 'a complete overview of the issue (hypoglycaemia in this case), used in all its dimensions' and 'concrete, speaking, explicit, it's living for them, they recognised themselves in the situation'. One participant called it 'very positive': 'I would've surrendered at first, but now I'd do it again. It's an immense satisfaction, a lot of fun. I've learned a lot. It was very rewarding to use a new method, to share this'. All (3/3) educators were very satisfied with the method overall; even though they found it stressful and demanding in terms of skill development, they were willing to try it again. S-TPE improved relational skills for all educators (3/3), and two of them stated that the method can be used with patients who have physical disabilities or are not fluent in French. Two of the three educators also reported that the sound quality was suboptimal. All the educators noted the pedagogical interest of the method and the good quality of the relationship with the group: exchanges were improved and enriched thanks to the S-TPE method, and they reported that 'a complicity has appeared, there is a better mutual acquaintance'. Groups should contain a maximum of eight people, because with family carers they can quickly become difficult to manage. All educators noted that S-TPE is relevant for patients: 'Relevant teaching method: it's a fun method that appeals to all the senses: visual, auditory, kinaesthetic. This method allows each person to express themselves, their daily life, they were able to communicate, exchange'. One educator noted that the final synthesis could be improved and that it is necessary to ensure that the objectives are those of the patients. Two educators considered 2 hours the optimal session duration, to be neither reduced nor exceeded. Two training sessions were initially scheduled, but this was extended to three at the educators' request; in the post-S-TPE phone interviews, educators said that they would need five to eight sessions to become fully comfortable with the method.

S-TPE did not change patients' self-efficacy score (35.6±3 points pre-S-TPE vs 35.3±3 points post-S-TPE, p=0.29). The STAI-Y1 score (trait anxiety) showed that 13.0% (3/23) of patients (all men) had an anxious personality. The STAI-Y2 (state anxiety) showed that anxiety scores dropped significantly after S-TPE in women (35.1±4.5 pre-S-TPE vs 32.7±5.5 post-S-TPE, p=0.04), but not in men (34.2±7.8 pre-S-TPE vs 32.1±5.2 post-S-TPE, p=0.17).
We identified no barriers to implementing a trial to assess the value of an S-TPE programme for adults with diabetes. Our study suggests that S-TPE may decrease patient anxiety, though this finding was statistically significant only for women. On average, patients ranked S-TPE as very useful (20.6/25), with only one patient scoring its usefulness as low (12.5/20). Patient satisfaction at the end of the S-TPE session was high (51.9±4.9/60). Patients unanimously approved of the approach and said it created a favourable climate for learning and gave them opportunities to talk about problems in their daily life. They appreciated the structured approach, which allowed everyone to express themselves, and felt that the simulation helped them understand the effects of their disease-related behaviours without putting themselves at risk. The overall positive reception of S-TPE is encouraging. The only patient who ranked the usefulness of S-TPE as low appeared to prioritise hyperglycaemia rather than hypoglycaemia management. S-TPE appears to meet the needs of patients with different backgrounds: two patients in our study were not native French speakers, one had a hearing impairment and another a walking disability, and all found S-TPE helpful. Interviews with educators revealed their fear and stress in the first session and their desire to perform well. They said that their stress lessened over the sessions and that they now focus on managing group dynamics, for example, paying attention to each patient and engaging patients rather than just transmitting information. Educators asked many questions and said they would need five to eight S-TPE sessions before feeling comfortable implementing the method. Our pilot study was limited by its small sample size (23 patients), which restricts our ability to generalise the findings to the target population of adult patients with diabetes. Like another small pilot study, we could not show that S-TPE improved self-efficacy in patients. In contrast to the heterogeneous approaches French health authorities take towards TPE in type 1 and type 2 diabetes, we took a standardised approach to evaluating S-TPE, building on a consensus conference that we recently conducted and published. Those recommendations included involving 'expert patients' (patients recognised for their advanced understanding of the condition) in constructing the scenarios we used.
Our pilot study demonstrated the acceptability and feasibility of S-TPE for adult patients with diabetes and provided preliminary data that we will use to design and conduct a large randomised controlled trial evaluating the efficacy of S-TPE in diabetes. Educators said that they 'increased skill and confidence', that 'this tool could be used during the hospital stay with specific objectives' and that 'the pluridisciplinary teamwork in TPE was richer', but studies that include more educators are needed to determine whether our positive results are consistent and generalisable. These studies should also determine the optimal duration and number of training sessions for educators. If S-TPE works for patients with diabetes, it should be possible to extend the programme to patients with other chronic conditions. Expert patients should be systematically involved at an early stage when designing interventions to improve TPE programmes, and S-TPE specifically. This pilot study opens a path to testing the intervention in a larger, more representative population of patients and educators. If the results of our future efficacy trial of S-TPE in patients with diabetes are positive, this method may improve the management of diabetes by patients and educators, by unlocking self-skills that were previously not accessible, and transform TPE into a patient-centred approach. It would also open the possibility of transposing this method to other chronic diseases.
A recursive framework for predicting the time-course of drug sensitivity
8a4fa194-8426-4f61-9f07-123764bfde0e
7573611
Pharmacology[mh]
Prediction of drug response based on patients' clinical and molecular features is a major challenge in personalized medicine. A computational model that makes accurate predictions can be used to identify the best course of treatment for a patient, or to identify therapeutic targets that can overcome drug resistance. Considerable efforts have been made to identify molecular biomarkers of drug sensitivity and to develop computational models that predict drug response from these biomarkers. Gene expression data is one of the most commonly used molecular data types in these studies, owing to its high predictive ability, and numerous methods have been proposed for drug response prediction based on gene expression data. However, many existing methods only use basal gene expression data (i.e., gene expression values measured before administration of the drug) and hence can only capture the influence of the steady state of the cells on their response to a drug. For example, Costello et al. analyzed 44 drug response prediction methods that employed gene expression profiles of breast cancer cell lines taken before treatment to predict dose-response values, e.g., GI50, the concentration for 50% of maximal inhibition of cell proliferation, from a single time point. In practice, however, for many diseases (e.g., cancers) the response to a drug changes over time, due, for example, to the development of drug resistance or changes in the progress of the disease. To capture such changes at the molecular level, a collection of temporal gene expression profiles over a series of time points during the course of a biological process provides more insight than a single (or two) time point(s). Therefore, developing algorithms that can predict drug response over time using time-course gene expression data is of great interest.

With the advancement of gene sequencing technologies, collecting gene expression levels (GEXs) over multiple time points together with matched drug response values is now feasible. In parallel with these technological developments, there has been growing interest in applying machine learning methods to time-course gene expression data. For example, time-course gene expression data can be used not only to identify longitudinal phenotypic markers, but also to assess the association between time-course molecular data and cytokine production in an HIV trial, and to predict drug response during a treatment. One study proposed an integrated Bayesian inference system to select genes for drug response classification from time-course gene expression data; however, the method only uses data from the first time point and hence does not benefit from the additional temporal information. Lin et al. presented a hidden Markov model (HMM)-based classifier, in which the HMM had fewer states than time points in order to align different patient response rates. This discriminative HMM classifier enabled distinguishing between good and bad responders. Nevertheless, choosing the number of states for this HMM is a major practical issue. In addition, the method cannot handle missing data, and it requires full knowledge of the GEXs at all time points a priori. This implies that the HMM may not be able to predict drug response at multiple stages at future time points, since the corresponding GEXs are not measurable.
Time-course gene expression data contains the GEXs of different patients over a series of time points, and can thus be indexed as patient-gene-time and represented as a three-dimensional tensor. Motivated by this, several tensor decomposition based algorithms have been proposed. For example, Taguchi employed tensor decomposition to identify drug target genes using time-course gene expression profiles of human cell lines. Li and Ngom proposed a higher-order non-negative matrix factorization (HONMF) tool that classifies good and poor responders from a latent subspace, corresponding to patients, learned by HONMF. One limitation of this work is that the latent subspace may not have discriminative ability in classifying patients, since it is learned without accounting for the class-label information. Moreover, the method simply discards samples with missing values, causing unnecessary information loss. Recently, Fukushima et al. developed an algorithm for joint gene selection and drug response prediction for time-course data. The method uses Elastic Net (EN) to select a set of genes that discriminate patients' drug responses throughout the treatment; the selected genes are then passed to a logistic regression (LR) classifier for drug response prediction. In real applications, however, due to noise and missing values in the data, finding genes that are discriminative for all patients may be difficult. In fact, several studies have shown that it is more viable to find genes that discriminate consistently in a subset of samples along the time series. Therefore, relying on discriminative gene selection alone, without modifying the classification algorithm, may not achieve satisfactory performance.

In this paper, we take a different approach to time-course drug response prediction. We hypothesize that a patient's drug response at a given time point can be inferred from the response at a previous time point. This means that not only the GEXs but also past response results can be integrated to identify the drug response at a subsequent time point. We develop a REcursive Prediction (REP) algorithm that predicts the drug response of samples using their time-course gene expression data and their drug responses at previous time points. REP has a built-in recursive structure that exploits the intrinsic time-course nature of the data by integrating past drug responses into subsequent predictions. In other words, in REP, not only the GEXs but also the past drug responses are treated as features for drug response prediction. Furthermore, by exploiting the intrinsic tensor structure of time-course gene expression data and leveraging the identifiability of low-rank tensors, REP can alleviate noise corruption in GEX measurements, complete missing GEXs and even predict GEXs at subsequent time points. These features enable REP to evaluate drug response at any stage of a given treatment from GEXs measured at the beginning of the treatment. Experiments on real data are included to demonstrate the effectiveness of the REP algorithm.

The overview figure sketches the idea behind the proposed REP algorithm: panels a–c show the pre-processing, model training and prediction stages of our method, respectively, and panel a also shows the tensor structure of time-course gene expression data. In the following, we explain each stage in more detail.

Pre-processing

One major issue in using gene expression data for drug response prediction is the existence of missing values.
To overcome this problem, we first impute the missing values during pre-processing. Various methods have been suggested for handling missing values, such as median imputation and nearest-neighbour imputation. Instead, we employ a low-rank tensor model to fit the time-course gene expression dataset, so that the missing values can be completed. Our supporting hypothesis is that genes never function in an isolated way; oftentimes, groups of genes interact to maintain a complex biological process, which results in correlation in the GEX data. Our low-rank tensor model posits three factors that uniquely determine the GEX values, corresponding to patient, gene and time, respectively. As we will see later, the model allows us to estimate the variation of GEX over time from a set of initial GEX measurements; these estimated values are then used to predict the time-course of drug response.

Towards this goal, we first assume that each GEX can be represented as a summation of F triple products of latent patient, gene and time factors. (For high-enough but finite F, any patient-gene-time dataset can be expressed this way; see the cited tutorial for an overview of tensor rank decomposition.) Let g_{ijk} denote the jth GEX of patient i recorded at time k. Based on our assumption, we have

  g_{ijk} = Σ_{f=1}^{F} a_{if} b_{jf} c_{kf},    (1)

where a_{if}, b_{jf} and c_{kf} are the latent factors of patient, gene and time, respectively. Suppose that J genes are measured over K time points. By varying the indices j and k in (1), the expression of all genes of patient i at all time points can be written as

  G_i = B D_i(A) C^T ∈ R^{J×K},    (2)

where A ∈ R^{I×F}, B ∈ R^{J×F} and C ∈ R^{K×F}; a_{if}, b_{jf} and c_{kf} denote the (i,f)-entry of A, the (j,f)-entry of B and the (k,f)-entry of C, respectively; and D_i(A) is the diagonal matrix holding the ith row of A on its main diagonal, a latent representation of the ith patient.

Assume that there are I patients in the training set. After collecting {G_1, ..., G_I}, we stack them in parallel along the patient axis, which yields a GEX tensor of the form

  G̲ := 〚A, B, C〛 = Σ_{f=1}^{F} a_f ∘ b_f ∘ c_f ∈ R^{I×J×K},    (3)

where ∘ is the outer product and a_f is the fth column of A, and likewise for b_f and c_f. Here, we take G̲ to be the noiseless GEX data and X̲ the corresponding noisy data with missing values.
The relationship between G̲ and X̲ is

  P_Ω(X̲) = P_Ω(G̲) + P_Ω(N̲),    (4)

where N̲ is the noise in the data, Ω is the index set of the observed GEXs in X̲, and P_Ω is the operator that keeps the entries in Ω and zeros out the others. The model in (3) indicates that the gene and time factors (i.e., B and C) are identical across patients, and that the variability among patients is captured by A. In other words, given B and C, each row of the patient factor matrix A uniquely determines the GEXs of the corresponding patient. As we will see later, our model is able to predict unseen GEXs, which also makes it possible to prescreen the drug response at different stages of a treatment.

Assuming non-negative GEXs (GEX values can be negative due to preprocessing steps such as z-score normalization; to facilitate our method, we undo such preprocessing or use the raw dataset), we can use non-negative tensor factorization to complete the missing GEX values:

  min_{A,B,C,G̲}  ‖G̲ − 〚A, B, C〛‖_F² + μ(‖A‖_F² + ‖B‖_F² + ‖C‖_F²)    (5)
  s.t.  P_Ω(G̲) = P_Ω(X̲),  A ≥ 0, B ≥ 0, C ≥ 0,

where many sophisticated algorithms, e.g., block coordinate descent, are applicable to the optimization of (5). Intuitively, (5) seeks the lowest-rank solution G̲ that best matches the observations X̲; the regularization terms further encourage low rank and prevent over-fitting. Once (5) is solved, we complete the GEX data through

  Z̲ = P_Ω(X̲) + P_{Ω^c}(G̲),    (6)

where Ω^c contains the indices of the missing values in X̲.
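As a concrete illustration of this completion step, the following is a minimal numpy sketch, not the paper's exact solver: it fits the model (1)-(3) to the observed entries by alternating least squares with projection onto the non-negative orthant, imputing the missing entries with the current model at each pass, and it omits the μ-regularization of (5) for brevity. All function and array names are hypothetical.

    import numpy as np

    def khatri_rao(P, Q):
        # column-wise Kronecker product of P (m x F) and Q (n x F) -> (m*n x F)
        F = P.shape[1]
        return (P[:, None, :] * Q[None, :, :]).reshape(-1, F)

    def masked_ntf(X, mask, F=5, n_iter=200, eps=1e-9, seed=0):
        """Rank-F non-negative CP fit to the observed entries of X (I x J x K).
        mask is True where X is observed; returns the factors and the
        completed tensor Z of Eq. (6)."""
        I, J, K = X.shape
        rng = np.random.default_rng(seed)
        A, B, C = rng.random((I, F)), rng.random((J, F)), rng.random((K, F))
        for _ in range(n_iter):
            G = np.einsum('if,jf,kf->ijk', A, B, C)   # current model, Eq. (3)
            T = np.where(mask, X, G)                  # impute missing entries
            # alternating least-squares updates, clipped to stay non-negative
            A = np.maximum(np.linalg.lstsq(khatri_rao(C, B),
                    T.transpose(2, 1, 0).reshape(K * J, I), rcond=None)[0].T, eps)
            B = np.maximum(np.linalg.lstsq(khatri_rao(C, A),
                    T.transpose(2, 0, 1).reshape(K * I, J), rcond=None)[0].T, eps)
            C = np.maximum(np.linalg.lstsq(khatri_rao(B, A),
                    T.transpose(1, 0, 2).reshape(J * I, K), rcond=None)[0].T, eps)
        G = np.einsum('if,jf,kf->ijk', A, B, C)
        return A, B, C, np.where(mask, X, G)          # Eq. (6)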
Training

The effects of drugs are usually cumulative over time, i.e., doses taken in the past affect the current response. This implies that drug responses at past time points may help predict the current response. Based on this hypothesis, we propose a recursive prediction algorithm, henceforth referred to as REP, which integrates past drug response records with gene expression values for subsequent drug response prediction. Panel c of the overview figure shows REP's pipeline: the drug responses {y(0), ..., y(t−1)} from previous time stages are integrated with the gene expression information z_t to predict the current response y(t). We accumulate the historical responses by concatenating them into a new vector,

  ỹ(t) = [y(t−1), y(t−2), ..., y(0), 0, ..., 0]^T ∈ R^{K−1},    (7)

which is then fed back as an input feature for subsequent drug response prediction. Therefore, at time t, the output of the predictor depends not only on the GEX at that time point but also on the previously observed drug responses. We always insert the drug response from the most recent time point into the first element of ỹ(t), so that the model can capture a sense of time and learn from recent and emerging trends in drug response.

For the ith patient at time t, we concatenate ỹ_{i,t}, the historical responses of patient i at time t, with the corresponding GEX vector z_{i,t}, and pass both to a predictor f(·):

  y_{i,t} = f(z_{i,t}, ỹ_{i,t}),    (8)

where f(·) can be trained by minimizing the cost function

  L(θ) = (1/IK) Σ_{i=1}^{I} Σ_{t=1}^{K} ℓ(f(z_{i,t}, ỹ_{i,t}), y_{i,t}) + λ r(θ),    (9)

in which θ contains the parameters of the predictor, ℓ(·) is the loss function of a classifier such as the hinge or cross-entropy loss, r(·) is a regularizer that imposes a certain structure on θ, and λ ≥ 0 is a regularization parameter. Popular regularizers include r(θ) = ‖θ‖₂², r(θ) = ‖θ‖₀, r(θ) = ‖θ‖₁ and r(θ) = 1₊(θ), the indicator function of the non-negative orthant.

Our main idea is to feed back the historical drug responses and combine them with GEX values to predict future drug response. This is the major difference between our method and state-of-the-art algorithms, which ignore the previous drug response outputs. Accordingly, the training set for our method is created in a slightly different way. Recall that at each time point we stack the historical drug responses into a vector. For any patient in the training set, we can further concatenate all such prior response vectors over the K time points, which yields the feedback matrix of patient i:

  Ỹ_i = [ỹ_{i,1}, ỹ_{i,2}, ..., ỹ_{i,K}]^T ∈ R^{K×(K−1)}.    (10)

Furthermore, we can create a tensor Ỹ ∈ R^{I×K×(K−1)} by concatenating {Ỹ_1, ..., Ỹ_I}, where Ỹ(i,:,:) = Ỹ_i. Finally, the features in the training set are formed by concatenating Z̲ and Ỹ along the gene axis, as shown in panel b of the overview figure, and the training labels are

  Y = [ y_{i,t} ] ∈ R^{I×K},  i = 1, ..., I,  t = 1, ..., K.    (11)

It is also worth mentioning that our method can predict either binary or non-binary (e.g., continuous) drug responses. When the drug response is binary, the predictor f(z_{i,t}, ỹ_{i,t}) will typically be a classifier; when the drug response is continuous, it will be a regression algorithm.
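The constructions (7) and (10)-(11) translate directly into code. The sketch below (hypothetical names, continuing the numpy sketch above) builds the feedback vector ỹ_{i,t} and assembles one training sample per (patient, time) pair with features [z_{i,t}, ỹ_{i,t}]:

    import numpy as np

    def feedback_vector(past_responses, K):
        """Eq. (7): most recent response first, zero-padded to length K-1."""
        y_tilde = np.zeros(K - 1)
        hist = np.asarray(past_responses, dtype=float)[::-1]  # y(t-1), ..., y(0)
        y_tilde[:hist.size] = hist
        return y_tilde

    def build_training_set(Z, Y):
        """Z: completed GEX tensor (I x J x K); Y: observed responses (I x K).
        Returns stacked features [z_{i,t}, y~_{i,t}] and labels y_{i,t}."""
        I, J, K = Z.shape
        feats, labels = [], []
        for i in range(I):
            for t in range(K):
                y_tilde = feedback_vector(Y[i, :t], K)
                feats.append(np.concatenate([Z[i, :, t], y_tilde]))
                labels.append(Y[i, t])
        return np.vstack(feats), np.asarray(labels)

For example, feedback_vector([1, -1], K=5) returns [-1, 1, 0, 0]: the response at the most recent time point comes first, as required above.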
It is important to note that our approach is really a framework, applicable regardless of the choice of the final classification or regression algorithm. Nevertheless, for the purpose of illustrating the merits of the proposed framework, we focus on support vector machines (SVM and SVR, for classification and regression, respectively), which have shown promising performance in this type of task. We set θ = [u^T, v^T]^T and let ℓ(·) be the hinge loss, resulting in

  min_{u,v,b}  (1/IK) Σ_{i=1}^{I} Σ_{t=1}^{K} max(0, 1 − y_{i,t}(u^T z_{i,t} + ρ v^T ỹ_{i,t} + b)) + (λ/2)(‖u‖₂² + ‖v‖₂²)    (12)
  s.t.  [u^T, v^T]^T ∈ C,

where b is the intercept, C denotes a convex set (e.g., an ℓ₁-ball for feature selection) and ρ represents the importance of the response at a previous time point for the subsequent one. In our formulation, the drug response feedback plays an important role and can be viewed as a "must-have" feature. In SVMs, we penalize the two-norm of the linear weights equally, the implicit assumption being that the features have similar powers. In our context, however, the GEX values are much larger than the drug response labels, which are either 1 or −1. As a result, the GEX values are likely to play a disproportionately large role in the prediction, simply because the labels cannot be scaled up to any meaningful level without inflating the regularization term. To compensate for this imbalance, we introduce the fixed weight ρ in the cost function; in practice, we recommend choosing a relatively large ρ.
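One simple way to realize (12) with off-the-shelf tools, sketched here under stated assumptions rather than as the paper's implementation, is to multiply the feedback columns by ρ and fit a standard linear SVM: a weight v applied to the scaled feature ρỹ contributes ρ v^T ỹ to the decision while the penalty on v is unchanged, so the scaling reproduces the ρ-weighted term in (12), up to the usual C = 1/λ reparameterization and scikit-learn's handling of the intercept.

    import numpy as np
    from sklearn.svm import LinearSVC

    def fit_rep_svm(features, labels, n_feedback, rho=10.0, lam=1.0):
        """features: output of build_training_set, whose last n_feedback
        columns are the feedback block; labels: +/-1 drug responses."""
        X = features.copy()
        X[:, -n_feedback:] *= rho                 # weight the feedback block
        clf = LinearSVC(C=1.0 / lam, loss='hinge', dual=True)
        clf.fit(X, labels)
        return clf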
Drug response prediction

Our method can predict the drug response of a new patient at any time point. Specifically, given the GEXs of a new patient at time t, denoted x(t), we first check for missing values. If there are any, we employ the factors B and C to complete x(t). Let Ω̄ and Ω̄^c denote the sets of indices of the observed and missing elements of x(t), respectively. According to the model in (2), x(t) is uniquely determined by B, C and an unknown vector a, the latent representation of this new patient. Thus, for the expression level of the jth gene at time t, we have

  x_j(t) = (C(t,:) ⊙ B(j,:)) a + n_j,  ∀ j ∈ Ω̄,    (13)

where n_j is additive noise, assumed Gaussian, ⊙ is the Khatri-Rao (column-wise Kronecker) product, and B(j,:) and C(t,:) denote the jth row of B and the tth row of C, respectively. Since B and C are known, the problem of estimating a can be formulated as

  â = argmin_{a ≥ 0} Σ_{j∈Ω̄} ( x_j(t) − (C(t,:) ⊙ B(j,:)) a )²,    (14)

which is a non-negative least squares (NLS) problem and can be solved optimally. We note that to obtain a unique estimate â, the number of available gene expression entries in x(t) should be ≥ F. The GEX vector of the patient is then estimated as

  g(t) = (C(t,:) ⊙ B) â,    (15)

which leads to the completed GEX vector

  z(t) = P_Ω̄(x(t)) + P_{Ω̄^c}(g(t)).    (16)

The vector z(t), together with the accumulated historical drug responses ỹ(t), forms the input to our predictor f(·), and we estimate the drug response of this patient at time t via

  ŷ(t) = f(z(t), ỹ(t)).    (17)

It is worth mentioning that in some cases there may be missing labels in the testing set, so that ỹ(t) cannot be constructed. To handle this scenario, we use the predicted labels in place of the missing ones: we start from t = 0 and predict y(0), which is used to construct ỹ(1); we then use ỹ(1) to predict the response at t = 1, and so on.
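Steps (14)-(17) map onto a few library calls; scipy.optimize.nnls is one standard solver for the NLS problem (14), consistent with the remark that (14) can be solved optimally. The sketch below (hypothetical names, reusing feedback_vector and the ρ-scaling from the training sketches) completes a new patient's GEX vector and queries the trained classifier:

    import numpy as np
    from scipy.optimize import nnls

    def predict_response(x_t, obs_idx, B, C, t, past_responses, K, clf, rho=10.0):
        """x_t: length-J GEX vector at time index t (missing entries arbitrary);
        obs_idx: indices of the observed genes; clf: trained classifier."""
        M = C[t] * B[obs_idx]                    # rows C(t,:)*B(j,:), Eq. (13)
        a_hat, _ = nnls(M, x_t[obs_idx])         # Eq. (14); needs len(obs_idx) >= F
        g_t = B @ (C[t] * a_hat)                 # Eq. (15)
        z_t = x_t.copy()
        miss = np.setdiff1d(np.arange(x_t.size), obs_idx)
        z_t[miss] = g_t[miss]                    # Eq. (16)
        y_tilde = feedback_vector(past_responses, K)
        feat = np.concatenate([z_t, rho * y_tilde])
        return clf.predict(feat[None, :])[0]     # Eq. (17)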
Predicting unseen GEXs

So far, we have explained how to predict the drug response of a patient at a given time point. In practice, however, we are often more interested in the drug response at several future time points, assessed from the beginning of a treatment. This would require knowing the GEXs at all time points up to the one of interest a priori, which is impossible in practice. In this subsection, we provide an efficient solution that predicts the unseen GEXs. Recall that in our model, the GEXs of a patient are determined by three factors: the latent representation of the patient, a; the gene factor matrix, B; and the time evolution factor, C. The vector a differs across patients and must be estimated for the new patient, whereas B and C are common gene and time evolution bases, determined from the historical (training) data, that reflect different types of patients. The problem therefore boils down to estimating a from the initial GEXs of the new patient: we simply substitute t = 1 in (14) to find â. The GEXs for the remaining time points are then estimated as

  x̂(t) = (C(t,:) ⊙ B) â,  ∀ t = 2, ..., K.    (18)

With the unseen GEXs estimated for t ≥ 2, we can predict drug response values for the whole duration of the treatment. We start from x̂(1) and estimate the drug response at t = 1 as

  ŷ(1) = f(x̂(1), ỹ(1)),    (19)

where ỹ(1) = 0. Once ŷ(1) is available, we set ỹ(2) = ŷ(1). With the GEX estimate x̂(2) from (18), we can predict ŷ(2) = f(x̂(2), ỹ(2)), and so forth for the other time points.

Remark 1: Here we substitute predicted drug responses for the unseen drug responses. Clearly, when actual drug responses at past time points are available, they should be used. We only make the substitution for a preliminary assessment of how well a patient is likely to respond over time, before the beginning of treatment, which is naturally a more ambitious goal.
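Putting the pieces together, the pre-treatment rollout described above can be sketched as follows, again with the hypothetical helpers from the earlier sketches: â is estimated once from the time-1 measurements, the GEXs of later time points are extrapolated via (18), and each prediction is fed back in place of the not-yet-observed response, as in Remark 1.

    import numpy as np
    from scipy.optimize import nnls

    def forecast_course(x1, obs_idx, B, C, K, clf, rho=10.0):
        """Predict y^(1..K) from GEXs measured only at t = 1 (Eqs. 18-19)."""
        M = C[0] * B[obs_idx]                  # time index 0 corresponds to t = 1
        a_hat, _ = nnls(M, x1[obs_idx])        # Eq. (14) evaluated at t = 1
        history, predictions = [], []
        for t in range(K):
            x_hat = B @ (C[t] * a_hat)         # Eq. (18); t = 1 gives x^(1)
            y_tilde = feedback_vector(history, K)
            feat = np.concatenate([x_hat, rho * y_tilde])
            y_hat = clf.predict(feat[None, :])[0]
            predictions.append(y_hat)
            history.append(y_hat)              # predicted response fed back
        return predictions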
Based on our assumption, we have 1 [12pt]{minimal} $$ g_{ijk} = _{f=1}^{F} a_{if}b_{jf}c_{kf} $$ g ijk = ∑ f = 1 F a if b jf c kf where [12pt]{minimal} $$a_{if}$$ a if , [12pt]{minimal} $$b_{jf}$$ b jf and [12pt]{minimal} $$c_{kf}$$ c kf are the latent factors of patient, gene and time, respectively. Suppose that there are J genes measured over K time points. By varying the indices j and k in , the expression of the genes in all the time-points in patient i can be represented as 2 [12pt]{minimal} $$ {G}_{i} = {B} {D}_i( {A}) {C}^T {R}^{J K} $$ G i = B D i ( A ) C T ∈ R J × K where [12pt]{minimal} $$ {A} {R}^{I F}$$ A ∈ R I × F , [12pt]{minimal} $$ {B} {R}^{J F}$$ B ∈ R J × F , [12pt]{minimal} $$ {C} {R}^{K F}$$ C ∈ R K × F . In this equation, [12pt]{minimal} $$ {D}_i( {A})$$ D i ( A ) represents a diagonal matrix holding the i th row of [12pt]{minimal} $$ {A}$$ A as the main diagonal, which is a latent representation of the i th patient. We use [12pt]{minimal} $$a_{if}$$ a if to represent the ( i ; f )-entry of [12pt]{minimal} $$ {A}$$ A , [12pt]{minimal} $$b_{jf}$$ b jf to represent the ( j ; f )-entry of [12pt]{minimal} $$ {B}$$ B and [12pt]{minimal} $$c_{kf}$$ c kf to represent the ( k ; f )-entry of [12pt]{minimal} $$ {C}$$ C . Assume that there are I patients in the training set. After collecting [12pt]{minimal} $$\{ {G}_1, , {G}_I\}$$ { G 1 , … , G I } , we stack them in parallel along the patient-axis, which results in a GEX tensor that takes the form of 3 [12pt]{minimal} $$ {}}:= {A}, {B}, {C} = _{f=1}^F {a}_f {b}_f {c}_f {R}^{I J K} $$ G _ : = 〚 A , B , C 〛 = ∑ f = 1 F a f ∘ b f ∘ c f ∈ R I × J × K where [12pt]{minimal} $$ $$ ∘ is the outer product and [12pt]{minimal} $$ {a}_f$$ a f is the f th column of [12pt]{minimal} $$ {A}$$ A , and likewise for [12pt]{minimal} $$ {b}_f$$ b f and [12pt]{minimal} $$ {c}_f$$ c f . Here, we assume that [12pt]{minimal} $${}}$$ G _ is the noiseless GEX data and [12pt]{minimal} $${}}$$ X _ is the corresponding noisy data with missing values. The relationship between [12pt]{minimal} $${}}$$ G _ and [12pt]{minimal} $${}}$$ X _ is described as 4 [12pt]{minimal} $$ {P}_ ({}}) = {P}_ ({}}) + {P}_ ({}}) $$ P Ω ( X _ ) = P Ω ( G _ ) + P Ω ( N _ ) where [12pt]{minimal} $${}}$$ N _ is the noise in the data, [12pt]{minimal} $$ $$ Ω is the index set of the observed GEXs in [12pt]{minimal} $${}}$$ X _ , and [12pt]{minimal} $$ {P}_{ }$$ P Ω is the operator that keeps the entries in [12pt]{minimal} $$ $$ Ω and zeros out the others. The model in indicates that the gene and time factors (i.e., [12pt]{minimal} $$ {B}$$ B and [12pt]{minimal} $$ {C}$$ C ) are identical for different patients, and the variability among patients is captured by [12pt]{minimal} $$ {A}$$ A . In other words, given [12pt]{minimal} $$ {B}$$ B and [12pt]{minimal} $$ {C}$$ C , each row of the patient factor matrix [12pt]{minimal} $$ {A}$$ A uniquely determines the GEXs of the corresponding patient. As we will see later, our model is able to predict unseen GEXs, which also enables to prescreen the drug response for different stages of a treatment. Assuming non-negative GEXs, (Due to some preprocessing steps such as z-score normalization, the GEX values can be negative. To facilitate our method, we undo these preprocessing steps or use the raw dataset.) 
we can use non-negative tensor factorization to compute missing GEX values: 5 [12pt]{minimal} $$ _{ {A}, {B}, {C},{}}}&\| {}}- {A}, {B}, {C} \| _F^2 + ( {A} _F^2 + {B} _F^2 + {C} _F^2) \\ {s.~t.}\;&\; {P}_{ }({}}) = {P}_{ }({}}), {A} 0, {B} 0, {C} 0 $$ min A , B , C , G _ G _ - 〚 A , B , C 〛 F 2 + μ ‖ A ‖ F 2 + ‖ B ‖ F 2 + ‖ C ‖ F 2 s . t . P Ω ( G _ ) = P Ω ( X _ ) , A ≥ 0 , B ≥ 0 , C ≥ 0 where many sophisticated algorithms are applicable to optimize , e.g., block coordinate descent , . Intuitively, seeks to identify the lowest rank solution [12pt]{minimal} $$ {G}$$ G that best matches the observations [12pt]{minimal} $${}}$$ X _ . The regularization terms are added to further encourage low rank and prevent over-fitting. When is solved, we complete the GEX data through 6 [12pt]{minimal} $$ {}}= {P}_{ }({}}) + {P}_{ ^c}({}}) $$ Z _ = P Ω ( X _ ) + P Ω c ( G _ ) where [12pt]{minimal} $$ ^c$$ Ω c contains the indices of missing values in [12pt]{minimal} $${}}$$ X _ . The effects of drugs are usually cumulative over time , i.e., drug doses taken in the past will affect the current response. This implies that the drug response in the past time-points may help predict the current response. Based on this hypothesis, we propose a recursive prediction algorithm, henceforth referred to as REP for simplicity, which enables to integrate past drug response records with gene expression values for subsequent drug response predictions. Figure c shows an overview of REP’s pipeline, where drug responses [12pt]{minimal} $$\{y(0), ,y(t-1)\}$$ { y ( 0 ) , … , y ( t - 1 ) } in the previous time stages are integrated with the gene expression information [12pt]{minimal} $$ {z}_{t}$$ z t for predicting the current response y ( t ). Here, we accumulate the historical responses by concatenating them into a new vector as 7 [12pt]{minimal} $$ {}}(t) = [y(t-1), y(t-2), , y(0), 0, , 0]^T {R}^{K-1} $$ y ~ ( t ) = [ y ( t - 1 ) , y ( t - 2 ) , … , y ( 0 ) , 0 , … , 0 ] T ∈ R K - 1 which is then fed back as an input feature for subsequent drug response prediction. Therefore, at time t , the output of the predictor depends not only on the GEX at that time point, but also the previously observed drug responses. We always insert the drug response from the most recent time point into the first element of [12pt]{minimal} $${}}(t)$$ y ~ ( t ) , so that the model can capture a sense of time, and learn from recent/emerging trends in drug response. For the i th patient at time t , we concatenate [12pt]{minimal} $${}}_{i,t}$$ y ~ i , t and the corresponding GEX vector [12pt]{minimal} $$ {z}_{i,t}$$ z i , t together, where [12pt]{minimal} $${}}_{i,t}$$ y ~ i , t denotes the historical responses of patient i at time t . 
We then pass [12pt]{minimal} $$ {z}_{i,t}$$ z i , t and [12pt]{minimal} $$ {y}_{i,t}$$ y i , t to a predictor [12pt]{minimal} $$f( )$$ f ( · ) to predict drug response, i.e., 8 [12pt]{minimal} $$ y_{i,t} = f( {z}_{i,t}, {}}_{i,t}) $$ y i , t = f ( z i , t , y ~ i , t ) where [12pt]{minimal} $$f( )$$ f ( · ) can be trained by minimizing the following cost function 9 [12pt]{minimal} $$ L() = _{i=1}^{I} _{t=1}^K (f( {z}_{i,t}, {}}_{i,t}), y_{i,t}) + r() $$ L ( θ ) = 1 IK ∑ i = 1 I ∑ t = 1 K ℓ ( f ( z i , t , y ~ i , t ) , y i , t ) + λ r ( θ ) where [12pt]{minimal} $$$$ θ contains the parameters of the predictor, [12pt]{minimal} $$ ( )$$ ℓ ( · ) is the loss function of a classifier such as hinge loss and cross-entropy loss, [12pt]{minimal} $$r( )$$ r ( · ) is a regularizer that imposes a certain structure on [12pt]{minimal} $$$$ θ , and [12pt]{minimal} $$ 0$$ λ ≥ 0 is a regularization parameter. In the literature, popular regularizers include [12pt]{minimal} $$r() = _2^2$$ r ( θ ) = ‖ θ ‖ 2 2 , [12pt]{minimal} $$r() = _0$$ r ( θ ) = ‖ θ ‖ 0 , [12pt]{minimal} $$r() = _1$$ r ( θ ) = ‖ θ ‖ 1 and [12pt]{minimal} $$r()=1_+()$$ r ( θ ) = 1 + ( θ ) , i.e., the indicator function of the non-negative orthant. Our main idea is to feed back the historical drug responses and then combine them with GEX values to predict the drug response in the future. This is the major difference between our method and the state-of-the-art algorithms: prior art ignored the previous drug response outputs. Therefore, the training set for our method is created in a slightly different way. Recall that at each time point, we stack the historic drug responses into a vector. For any patient in the training set, we can further concatenate all such prior response vectors for K time points together, which yields a feedback matrix for patient i as 10 [12pt]{minimal} $$ {}}_i = {}}_{1,1}&{}}_{1,2}&&{}}_{1,K} ^T {R}^{K (K-1)}. $$ Y ~ i = y ~ 1 , 1 y ~ 1 , 2 ⋯ y ~ 1 , K T ∈ R K × ( K - 1 ) . Furthermore, we can create a tensor [12pt]{minimal} $$}}} {R}^{I K (K-1)}$$ Y ~ _ ∈ R I × K × ( K - 1 ) by concatenating all [12pt]{minimal} $$\{{}}_1, , {}}_I\}$$ { Y ~ 1 , … , Y ~ I } together, where [12pt]{minimal} $$}}}(i, :, :) = {}}_i$$ Y ~ _ ( i , : , : ) = Y ~ i . Finally, the features in the training set are formed by concatenating [12pt]{minimal} $${}}$$ Z _ and [12pt]{minimal} $$}}}$$ Y ~ _ along the gene-axis as shown in the left-bottom corner of Fig. b, and the training labels are 11 [12pt]{minimal} $$ {Y}= y_{1,1} &{} y_{1,2} &{} &{} y_{1,K} \\ y_{2,1} &{} y_{2,2} &{} &{} y_{2,K} \\ &{} &{} &{} \\ y_{I,1} &{} y_{I,2} &{} &{} y_{I,K} . $$ Y = y 1 , 1 y 1 , 2 ⋯ y 1 , K y 2 , 1 y 2 , 2 ⋯ y 2 , K ⋮ y I , 1 y I , 2 ⋯ y I , K . It is also important to mention that our method can predict either binary or non-binary drug responses, e.g., continuous values. When the drug response is binary, the predictor [12pt]{minimal} $$f( {z}_{i,t},{}}_{i,t})$$ f ( z i , t , y ~ i , t ) will typically be a classifier. When the drug response is continuous, [12pt]{minimal} $$f( {z}_{i,t},{}}_{i,t})$$ f ( z i , t , y ~ i , t ) will be a regression algorithm. It is important to note that our approach is really a framework that is applicable no matter what is the choice of the final classification or regression algorithm. 
Nevertheless, for the purposes of exemplifying and illustrating the merits of our proposed framework, we are particularly interested in support vector machines (SVM and SVR, for classification and regression, respectively), which have shown promising performance in this type of task (cite a few past papers using SVM for this task here). We set [12pt]{minimal} $$=[ {u}^T, {v}^T]^T$$ θ = [ u T , v T ] T and [12pt]{minimal} $$ ( )$$ ℓ ( · ) to be the hinge loss, resulting in 12 [12pt]{minimal} $$ _{ {u},v,b}& _{i=1}^{I} _{t=1}^K ( 0, 1 - y_{i,t}( {u}^T {z}_{i,t} + {v}^T {}}_{i,t} + b)) + ( {u} _2^2+ {v} ^2) \\ {s.~t.}&[ {u}^T, {v}^T]^T {C} $$ min u , v , b 1 IK ∑ i = 1 I ∑ t = 1 K max 0 , 1 - y i , t ( u T z i , t + ρ v T y ~ i , t + b ) + λ 2 ‖ u ‖ 2 2 + ‖ v ‖ 2 s . t . [ u T , v T ] T ∈ C where b is the intercept, [12pt]{minimal} $$ {C}$$ C denotes a convex set such as [12pt]{minimal} $$ _1$$ ℓ 1 -ball for feature selection and [12pt]{minimal} $$ $$ ρ represents the importance of the response at a previous time point on the subsequent one. In our formulation, the drug response feedback plays an important role and it can be viewed as a “must-have” feature. In SVMs, we penalize the two-norm of the linear weights equally—the implicit assumption being that features have similar powers. In our context, however, the GEX values are much larger than the drug response labels which are either 1 or [12pt]{minimal} $$-1$$ - 1 . As a result, the GEX values are likely to end up playing a more significant role in the prediction—simply because we cannot scale the labels up to any meaningful level, due to the regularization term. To compensate for this imbalance, in the above formulation, we introduce fixed weight [12pt]{minimal} $$ $$ ρ in the cost function. In practice, we recommend to choose a relatively large [12pt]{minimal} $$ $$ ρ . Our method can predict the drug response values for a new patient at any time point. Specifically, given the GEXs of a new patient at time t , i.e., [12pt]{minimal} $$ {x}(t)$$ x ( t ) , we first check if there are missing values. If so, we employ the factors [12pt]{minimal} $$ {B}$$ B and [12pt]{minimal} $$ {C}$$ C to complete [12pt]{minimal} $$ {x}(t)$$ x ( t ) . Let us denote [12pt]{minimal} $${{}}$$ Ω ¯ and [12pt]{minimal} $${{}}^c$$ Ω ¯ c as the sets of indices of the observed and missing elements in [12pt]{minimal} $$ {x}(t)$$ x ( t ) . According to our model in , [12pt]{minimal} $$ {x}(t)$$ x ( t ) can be uniquely determined by [12pt]{minimal} $$ {B}$$ B , [12pt]{minimal} $$ {C}$$ C and an unknown vector [12pt]{minimal} $$ {a}$$ a —a latent representation of this new patient. Thus, for the expression level of the j th gene at time t , we have 13 [12pt]{minimal} $$ x_j(t)&= b_{j1}&&b_{jF} a_{1} &{} &{} \\ &{} &{} \\ &{} &{}a_{F} c_{t1} \\ \\ c_{tF} + {n} \\&= ( {C}(t,:) {B}(j,:)) {a}+ n_j,~ j {{}} $$ x j ( t ) = b j 1 ⋯ b jF a 1 ⋱ a F c t 1 ⋮ c tF + n = C ( t , : ) ⊙ B ( j , : ) a + n j , ∀ j ∈ Ω ¯ where [12pt]{minimal} $$n_j$$ n j is the additive noise which is assumed as Gaussian distributed, [12pt]{minimal} $$ $$ ⊙ is the Khatri-Rao (column-wise Kronecker) product, and [12pt]{minimal} $$ {B}(j,:)$$ B ( j , : ) and [12pt]{minimal} $$ {C}(t,:)$$ C ( t , : ) denote the t th row of [12pt]{minimal} $$ {B}$$ B and [12pt]{minimal} $$ {C}$$ C , respectively. 
Our method can predict the drug response values for a new patient at any time point. Specifically, given the GEXs of a new patient at time $t$, i.e., $\mathbf{x}(t)$, we first check whether there are missing values. If so, we employ the factors $\mathbf{B}$ and $\mathbf{C}$ to complete $\mathbf{x}(t)$. Let $\bar{\Omega}$ and $\bar{\Omega}^c$ denote the sets of indices of the observed and missing elements in $\mathbf{x}(t)$, respectively. According to our model, $\mathbf{x}(t)$ can be uniquely determined by $\mathbf{B}$, $\mathbf{C}$, and an unknown vector $\mathbf{a}$, a latent representation of this new patient. Thus, for the expression level of the $j$th gene at time $t$, we have

$$x_j(t) = \begin{bmatrix} b_{j1} & \cdots & b_{jF} \end{bmatrix} \begin{bmatrix} a_1 & & \\ & \ddots & \\ & & a_F \end{bmatrix} \begin{bmatrix} c_{t1} \\ \vdots \\ c_{tF} \end{bmatrix} + n_j = \big(\mathbf{C}(t,:) \odot \mathbf{B}(j,:)\big)\, \mathbf{a} + n_j, \quad \forall j \in \bar{\Omega} \quad (13)$$

where $n_j$ is additive noise assumed to be Gaussian distributed, $\odot$ is the Khatri-Rao (column-wise Kronecker) product, and $\mathbf{B}(j,:)$ and $\mathbf{C}(t,:)$ denote the $j$th row of $\mathbf{B}$ and the $t$th row of $\mathbf{C}$, respectively. Since $\mathbf{B}$ and $\mathbf{C}$ are known, the problem of estimating $\mathbf{a}$ can be formulated as

$$\hat{\mathbf{a}} = \arg\min_{\mathbf{a} \geq 0} \sum_{j \in \bar{\Omega}} \big(x_j(t) - (\mathbf{C}(t,:) \odot \mathbf{B}(j,:))\, \mathbf{a}\big)^2 \quad (14)$$

which is a non-negative least squares (NLS) problem and can be optimally solved. We note that to obtain a unique estimate $\hat{\mathbf{a}}$, the number of available gene expression entries in $\mathbf{x}(t)$ should be $\geq F$. The GEX vector of the patient is then estimated as

$$\mathbf{g}(t) = (\mathbf{C}(t,:) \odot \mathbf{B})\, \hat{\mathbf{a}} \quad (15)$$

which leads to a completed GEX vector

$$\mathbf{z}(t) = \mathcal{P}_{\bar{\Omega}}(\mathbf{x}(t)) + \mathcal{P}_{\bar{\Omega}^c}(\mathbf{g}(t)). \quad (16)$$

The vector $\mathbf{z}(t)$, together with the accumulated historical drug responses $\tilde{\mathbf{y}}(t)$, forms the input to our predictor $f(\cdot)$. We estimate the drug response of this patient at time $t$ via

$$\hat{y}(t) = f\big(\mathbf{z}(t), \tilde{\mathbf{y}}(t)\big). \quad (17)$$

It is crucial to mention that in some cases there might be missing labels in the testing set, so that $\tilde{\mathbf{y}}(t)$ cannot be constructed. To handle this scenario, we can use the predicted labels in place of the missing ones to construct $\tilde{\mathbf{y}}(t)$. More specifically, we start from $t=0$ and predict $y(0)$, which is used to construct $\tilde{\mathbf{y}}(1)$. We then use $\tilde{\mathbf{y}}(1)$ to predict the response at $t=1$, and so on.
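A compact sketch of the completion step (13)-(16), using SciPy's non-negative least squares solver. The row-wise product B(j,:) * C(t,:) realizes the Khatri-Rao rows, and encoding missing entries as NaN is our own convention.

```python
import numpy as np
from scipy.optimize import nnls

def complete_gex(x_t, B, C, t):
    """Complete a partially observed GEX vector x(t); NaN marks missing.

    B : (G, F) gene factor, C : (K, F) temporal factor learned from the
    training tensor. Requires at least F observed entries for a unique
    estimate of the latent vector a.
    """
    obs = ~np.isnan(x_t)
    A = B[obs] * C[t]                  # rows (C(t,:) ⊙ B(j,:)) for observed j
    a_hat, _ = nnls(A, x_t[obs])       # NLS problem (14)
    g = (B * C[t]) @ a_hat             # reconstruction g(t), as in (15)
    return np.where(obs, x_t, g), a_hat   # completed z(t), as in (16)
```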
Predicting unseen GEXs

Previously, we explained how to predict the drug response of a patient at a given time point. In practice, however, we are more interested in knowing the drug response a few time points into the future from the beginning of a treatment. This would require knowing the GEXs of all time points up to the one of interest a priori, which is impossible in practice. In this subsection, we provide an efficient solution that allows us to predict the unseen GEXs. Recall that in our model, the GEX of a patient is determined by three factors: the latent representation of the patient, $\mathbf{a}$; the gene factor matrix, $\mathbf{B}$; and the time evolution factor, $\mathbf{C}$. Here $\mathbf{a}$ differs across patients and needs to be estimated for the new patient, whereas $\mathbf{B}$ and $\mathbf{C}$ are common gene and time evolution bases reflecting different types of patients, as determined from historical patient data (the training data). Therefore, the problem boils down to estimating $\mathbf{a}$ from the initial GEXs of the new patient. We can simply substitute $t=1$ in (14) to find $\hat{\mathbf{a}}$. Finally, the GEXs for the remaining time points are estimated as

$$\hat{\mathbf{x}}(t) = (\mathbf{C}(t,:) \odot \mathbf{B})\, \hat{\mathbf{a}}, \quad \forall t = 2, \ldots, K. \quad (18)$$

Now we have estimated the unseen GEXs for $t \geq 2$, which allows us to predict drug response values for the whole duration of the treatment. We start from $\hat{\mathbf{x}}(1)$ and estimate the drug response at $t=1$ as

$$\hat{y}(1) = f\big(\hat{\mathbf{x}}(1), \tilde{\mathbf{y}}(1)\big) \quad (19)$$

where $\tilde{\mathbf{y}}(1) = 0$. Once $\hat{y}(1)$ is available, we set $\tilde{\mathbf{y}}(2) = \hat{y}(1)$. With the GEX estimate $\hat{\mathbf{x}}(2)$ from (18), we can predict $\hat{y}(2) = f(\hat{\mathbf{x}}(2), \tilde{\mathbf{y}}(2))$, and so forth for the other time points.

Remark 1. Note that here we substitute predicted drug responses for the unseen drug responses. Clearly, when actual drug responses for past time ticks are available, they should be used. We only do the substitution here for a preliminary assessment of how well a patient is likely to respond over time, before the beginning of treatment, which is naturally a more ambitious goal.
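The whole roll-out can be written as a short loop: estimate $\hat{\mathbf{x}}(t)$ from the factors, predict, and feed the prediction back as history. This is a hedged sketch; `predict_fn` stands for any fitted $f(\mathbf{z}, \tilde{\mathbf{y}})$ and is our own hypothetical interface.

```python
import numpy as np

def rollout(a_hat, B, C, predict_fn, K):
    """Predict responses for all K time points before treatment unfolds.

    a_hat : latent vector of the new patient (estimated at t = 1);
    B, C  : gene and temporal factors; predict_fn(x, y_hist) -> label.
    Implements the recursion around (18)-(19): predicted responses are
    fed back in place of the unseen actual ones.
    """
    y_hist = np.zeros(K - 1)               # y_tilde(1) = 0
    preds = []
    for t in range(K):
        x_hat = (B * C[t]) @ a_hat         # unseen GEX estimate, as in (18)
        y_hat = predict_fn(x_hat, y_hist)
        preds.append(y_hat)
        if t < K - 1:
            y_hist[t] = y_hat              # substitute prediction for feedback
    return np.asarray(preds)
```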
In this section, we provide numerical experiments to showcase the effectiveness of REP for drug response prediction from time-course gene expression data. We examine two tasks: classification of binary drug responses and regression of continuous drug responses.

Dataset

We consider two datasets to evaluate the performance of our method. The first is the interferon (IFN)-β time-course dataset, available in the supplementary material of . The dataset was collected from 53 Multiple Sclerosis (MS) patients who received IFN-β treatment for 2 years. The gene expression data (microarray) was obtained from peripheral blood mononuclear cells of the patients and contains the expression levels of 76 pre-selected genes over seven stages (i.e., time points) of the treatment, where the time lag between two adjacent time points was 3 months in the first year and 6 months in the second year. The responses to the therapy were measured at each time point using the expanded disability status scale (EDSS), a method of quantifying disability in multiple sclerosis and monitoring changes in the level of disability over time . EDSS values in this dataset are between 0 and 7, with higher values reflecting more severe disability. Except for the EDSS at the initial time point, all values were measured after the IFN-β injection at each time point; we therefore focus on the prediction of EDSS after $t=1$. In addition to EDSS, whether a patient had a good or poor response to each treatment was also recorded for each patient; this is the indicator we seek to predict in our classification experiments. On average across the patient population considered, 58.5% of the recorded responses to individual treatments were good and the remaining 41.5% were poor. There are also missing values in this dataset, mostly caused by the absence of patients at some stages. Only 27 patients had records for all stages; the other 26 patients missed at least one stage, so the entire GEX profile as well as the drug response for that stage is missing. In the following experiments, unless specified otherwise, we use the 27 full records to evaluate the algorithms, where the final GEX data is of size $27 \times 7 \times 76$ and the response data is of size $27 \times 7$.

The second dataset is from Gene Expression Omnibus (GEO) record GSE24427, also on MS. It contains 16 female and 9 male patients who received IFN-β therapy for 24 months.
During the treatment, the RNA expression values were measured five times: at baseline (before the first IFN-β injection), at 2 days (before the second injection), and at 1, 12, and 24 months (before the month-1, month-12, and month-24 injections, respectively). The EDSS values were measured four times: at baseline and after 1 year, 2 years, and 5 years from the initial injection. We use (1) the RNA expressions measured before month 12 to predict the EDSS measured after 1 year of treatment, and (2) the expressions measured before month 24 to predict the EDSS measured after 2 years of treatment. There are 47,522 gene probes in this dataset. We employed the Python package mygene ( https://mygene.info/ ) to map the probes to gene names, which yielded 19,565 gene names. Unlike the first dataset, the second dataset does not include binary drug responses. Therefore, we focus on the prediction of binary drug response on the first dataset (whether a patient will have a good or poor response), while the prediction of EDSS on both datasets is treated as a regression task, because EDSS is an ordinal variable (predicting a 6 as a 7 is better than predicting a 6 as a 3; thus mean absolute error and root mean squared error make sense as performance metrics).

Methods for comparison

We examine predictive ability on binary drug response and on ordinal EDSS response. For the binary case, we apply a number of classifiers, including two linear models (EN-LR and SVM ), one nonlinear model (K-nearest neighbors (KNN) ), and a probabilistic graphical model (discriminative loop hidden Markov model (dl-HMM) ), to the real-world time-course data. We did not include SVM with nonlinear kernels (e.g., Gaussian), since its performance was inferior to the linear kernel. Note that EN-LR and dl-HMM were specifically designed for predicting drug response from time-course gene expression data, while SVM and KNN are widely used general-purpose classifiers. For ordinal prediction, we implement Elastic Net, Support Vector Regression (SVR) with a radial basis function (RBF) kernel, Random Forest, and KNN on the two datasets. All methods are implemented via the Python sklearn package. We use the default settings for the Elastic Net and SVR algorithms. For Random Forest, we set the number of trees in the forest to 20; for KNN, we set the number of neighbors to 10. For each dataset, we create two versions of the training and testing sets: one with the drug response feedback described in Fig. and one without. We use REP-ElasticNet, REP-SVR, REP-RandomForest, and REP-KNN to denote the respective algorithms with drug response feedback.

Evaluation metric

For classification, we use prediction accuracy (ACC) and the area under the receiver operating characteristic (ROC) curve (AUC) to evaluate the performance of REP, where ACC is defined as

$$\mathrm{ACC} = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{FP} + \mathrm{FN} + \mathrm{TN}}.$$

Here TP, FP, FN, and TN stand for the numbers of true positives, false positives, false negatives, and true negatives, respectively. The calculation of AUC is based on the ROC curve, which plots the true positive rate versus the false positive rate; each (TP, FP) pair is obtained by comparing the score of a classifier with a varying threshold.
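Both classification metrics are available off the shelf; a toy example with scikit-learn (labels in {-1, +1}, made-up scores purely for illustration):

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([1, -1, 1, 1, -1, 1, -1, 1])            # ground-truth labels
scores = np.array([0.9, -0.2, 0.4, 0.1, 0.3, 0.8, -0.5, 0.6])
y_pred = np.sign(scores)                                   # threshold at 0

print("ACC:", accuracy_score(y_true, y_pred))   # (TP + TN) / total
print("AUC:", roc_auc_score(y_true, scores))    # area under the ROC curve
```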
For regression, we use the mean squared error (MSE) and mean absolute error (MAE), defined as

$$\mathrm{MSE} = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} \|\mathbf{y}_m - \hat{\mathbf{y}}_m\|_2^2, \qquad \mathrm{MAE} = \frac{1}{MNT}\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{t=1}^{T} |y_m(t) - \hat{y}_m(t)|$$

where $M$ is the number of samples in the testing set, $N$ is the number of Monte-Carlo tests, $\mathbf{y}_m$ denotes the ground-truth drug response of the $m$th testing sample, and $\hat{\mathbf{y}}_m$ is its estimate, with $y_m(t)$ and $\hat{y}_m(t)$ being their values at time $t$.

We report performance for all methods using the same training, validation, and testing sets. Specifically, we employ leave-one-out (LOO) cross-validation for testing, where at each fold we split the 27 patients into a training set of 26 patients and a testing set of one patient. We then hold out the testing set and randomly split the training set into two parts, where the first part has 25 patients and the second part has one patient, i.e., the validation set. We train models on the first part and tune hyper-parameters on the second. For each algorithm, we select the hyper-parameters that yield the highest prediction accuracy on the validation set, and finally apply the selected model to the testing set. For a fair comparison, in all experiments we apply the same missing-value imputation method to all algorithms. For REP-SVM, the hyper-parameters are $\lambda$ and $\rho$ in (12), selected from $\lambda \in \{0.1, 0.5\}$ and $\rho \in \{50, 100\}$. The standard SVM solves the following problem:

$$\min_{\mathbf{u}, b}\; \frac{1}{IK}\sum_{i=1}^{I}\sum_{t=1}^{K} \max\big(0,\, 1 - y_{i,t}(\mathbf{u}^T \mathbf{z}_{i,t} + b)\big) + \frac{\lambda}{2}\|\mathbf{u}\|_2^2 \quad (20)$$

where $\lambda$ is tuned over $\{0.01, 0.1, 1, 10\}$. For EN-LR, we set $\alpha = 0.5$, a hyper-parameter balancing the ridge and LASSO regularizations; for KNN, the number of neighbors is selected from $\{3, 5, 8, 10\}$. After that, we apply the trained classifier to the testing data to calculate ACC. We implemented REP-SVM, EN-LR, SVM, and KNN in Python 3.7. Since the authors of dl-HMM have published their MATLAB code ( http://www.cs.cmu.edu/~thlin/tram/ ), we used their MATLAB implementation for our comparison. The hyper-parameter for dl-HMM is the number of hidden states, chosen from $\{2, 3, 4\}$.
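Before turning to the results, here is how the regression metrics defined above translate into NumPy; the (runs x samples x time) array layout is our own choice for illustration:

```python
import numpy as np

def mse_mae(Y_true, Y_pred):
    """Paper-style MSE and MAE.

    Y_true, Y_pred : (N, M, T) arrays: N Monte-Carlo runs, M test
    samples, T time points. MSE averages squared trajectory errors over
    M*N; MAE additionally averages over the T time points.
    """
    N, M, T = Y_true.shape
    err = Y_true - Y_pred
    mse = np.sum(err ** 2) / (M * N)
    mae = np.sum(np.abs(err)) / (M * N * T)
    return mse, mae
```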
Results

Parameter selection for tensor completion

We first study how the hyper-parameters $F$ and $\mu$ affect the prediction performance. The percentage of missing values in the GEXs is fixed at 5%. We vary $F$ from 2 to 5 and $\mu$ from 0.01 to 100, and report the ACC of REP-SVM on the classification task, i.e., predicting good or poor responders. The ACC is calculated using LOO cross-validation, where in each fold we select one patient's record as the testing set, containing a $7 \times 76$ GEX matrix and a response vector of length 7, while the remaining 26 records are assigned to the training set. As seen in Fig. , $F \leq 3$ produces better results than $F \geq 4$ in general, especially when $\mu \geq 1$.

Performance evaluation on binary drug response

We now compare the performance of the five algorithms in terms of prediction accuracy and AUC. In the raw data, only 0.23% of the values in $\underline{\mathbf{X}}$ are missing. For all methods, the missing GEXs were completed using the non-negative tensor completion of Section II-A, with $F=3$ and $\mu=1$. As seen in Table (i.e., the rows with % miss equal to 0.23), REP-SVM achieves higher prediction accuracy than the other methods, followed by the SVM and EN-LR algorithms; KNN and dl-HMM have relatively low accuracy. We note that REP-SVM has a formulation similar to the SVM, yet it is 2.5% more accurate, which implies that the recursive structure in REP-SVM helps improve prediction accuracy.

We sought to determine the effect of missing values on the performance of these methods. For this purpose, we randomly sampled the GEX data and hid the selected entries. As the percentage of missing values increases, all methods suffer performance loss, but REP-SVM's ACC and AUC remain the highest in all cases (see Table ). We highlight that when the percentage of missing values is 20%, REP-SVM still has ACC close to 0.872 and AUC greater than 0.941. EN-LR outperforms the classical SVM in many cases. As the percentage of missing values increases, the performance of EN-LR and SVM drops significantly, while that of REP-SVM remains high. For example, when the percentage of missing values increases to 15%, the ACC of EN-LR drops to 0.844 and that of SVM to 0.837, whereas ours is 0.887, indicating that REP-SVM is more robust to missing values. We mention that in this experiment the ratio of positive to negative classes is 19/8, so the positive class makes up about 70.4% of the samples. We found that most of the labels predicted by KNN were positive, meaning it cannot distinguish the negative class; the scores produced by KNN were not good enough to separate the two classes. This is why KNN yields seemingly reasonable accuracy but low AUC.

Next, we evaluate performance on all patients in Dataset1. As mentioned before, 26 patients do not have records for all seven time points, so they cannot be used in the training step of REP-SVM. We therefore use them for testing and train on the 27 patients with full temporal records, with $F=3$ and $\mu=1$. Note that in the testing data, approximately 18.1% of the GEX values and 22% of the drug response labels are missing. The missing GEXs are completed through our non-negative tensor completion, and we calculate ACC and AUC based on the known drug response labels. The results are shown in Table . REP-SVM outperforms the other competitors in both accuracy and AUC, achieving about 0.790 in ACC and 0.884 in AUC. EN-LR has the second-best performance, followed by SVM in terms of accuracy, though EN-LR has a higher AUC than SVM. In this case, the distributions of missing values in the training and testing sets are very different: the percentage of missing values in the training set is about 0.23%, but that in the testing set is about 18.1%.
Recall that in Table , where the missing values were randomly assigned through a uniform distribution, REP-SVM, EN-LR, and SVM all had higher ACC and AUC than the results in Table , even though the percentage of missing values reached 20%. This indicates that the distribution of missing values in the training and testing sets may affect the performance of drug response predictors.

We have shown that, under the same completion algorithm, REP-SVM outperforms the competing methods; one may wonder whether the same conclusion holds for other types of completion. To answer this question, we further compare REP-SVM, EN-LR, and SVM with mean, median, and KNN imputation. The results are shown in Table . All predictors achieve their highest ACC and AUC with tensor completion rather than the standard imputation methods. We also note that REP-SVM continues to have the best performance even when used with less sophisticated imputation methods.

Figure shows the top 20 genes selected by REP-SVM. We ran REP-SVM ten times on the 27 patients with full temporal records, averaged the weights corresponding to the genes, and then ranked the averaged weights to generate the gene ranking. The genes IRF3, IRF4, IRF6, and IRF8 belong to the interferon regulatory transcription factor (IRF) family, which is critical for the induction of type I (IFN-α/β) and type III (IFN-λ) IFN expression .
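The gene ranking described above amounts to averaging the learned linear weights over repeated runs; a small sketch of one way to do it (ranking by weight magnitude is our assumption):

```python
import numpy as np

def rank_genes(weight_runs, gene_names, top=20):
    """Rank genes by averaged linear-SVM weights of the GEX block.

    weight_runs : (R, G) array with the GEX part of u from R runs.
    Returns the `top` (gene, mean |weight|) pairs, largest first.
    """
    mean_w = np.abs(weight_runs).mean(axis=0)
    order = np.argsort(mean_w)[::-1][:top]
    return [(gene_names[j], float(mean_w[j])) for j in order]
```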
Performance evaluation on ordinal drug response

Next, we evaluate the performance of the proposed method on the prediction of EDSS. Since the number of genes is much larger than the number of patients, it is challenging to perform regression directly on such an underdetermined dataset. To handle this issue, we select the genes that also appear in the first dataset, resulting in 42 overlapping genes (see Supplementary Table ); in the following experiments, we only use these 42 genes. Note that all methods use the same genes for regression, so reducing the number of genes does not affect fairness. The results are shown in Table , where the percentage of missing GEX values is 0.23% in Dataset1 and 0 in Dataset2. In both datasets, all methods with drug response feedback outperform their respective versions without feedback in terms of MAE and MSE. Table shows the overall comparison averaged over all time points. One may also be interested in the performance of REP at each time point; Supplementary Tables – summarize the performance of the predictors at each time point, where REP-ElasticNet, REP-KNN, REP-RandomForest, and REP-SVR achieve better performance than Elastic Net, KNN, Random Forest, and SVR, respectively, in all cases.

We also demonstrate that the REP framework produces more accurate results for patients with constant as well as time-varying drug responses. In Dataset1, only one patient has constant drug responses across time points, while the other 26 patients have time-varying drug responses, i.e., at least one switch in the responses between two time points. In Dataset2, 14 patients have constant and 11 patients have time-varying drug responses. For each dataset, we calculated the MSE and MAE for two cases: patients with constant drug responses and patients with time-varying drug responses. The results are summarized in Supplementary Table , where REP-ElasticNet, REP-KNN, REP-RandomForest, and REP-SVR performed better in all cases. In the constant-response case in Dataset1, all methods had large MAE and MSE; the reason is that all training samples had time-varying drug responses, so the algorithms were not trained well for the constant-response case. In the second dataset, where the training set contains more constant-response cases, the REP-based algorithms worked much better. This again indicates that standard predictors within the REP framework can predict drug responses more accurately [see Supplementary Figs. and , where the actual and predicted drug responses of different patients are plotted].

The above experiments are all based on the actual GEX values, with only a small portion of them imputed. In the last experiment, we assume that in the testing set only the GEX values at the initial time point have been observed. The training set and the way of imputing its missing values are kept the same as in the previous experiment in Table . Our task is to predict, at the initial time point, the future drug responses for all remaining time points; in this case, the GEX values for the future time points are entirely missing. To handle this, we first apply the tensor completion method to the training set to learn the latent factor $\mathbf{B}$ for the GEXs and the latent factor $\mathbf{C}$ corresponding to the $T$ time points. Then, given the GEX vector of the $i$th patient at the initial time point, we can estimate his/her latent representation $\mathbf{a}_i$ by solving a non-negative least squares problem, i.e., $\min_{\mathbf{a}_i \geq 0} \|\mathbf{g}_0 - \mathbf{B}\,\mathrm{diag}(\mathbf{c}_0)\,\mathbf{a}_i\|_2^2$, where $\mathbf{g}_0$ is the vector of observed GEX values at $t=0$ and $\mathbf{c}_0$ is the first row of the estimated temporal factor matrix $\mathbf{C}$, corresponding to $t=0$. This problem is convex and can be optimally solved. We then substitute $\mathbf{a}_i$ to predict the entire GEX matrix for the $i$th patient and use its second through last columns for prediction, where the $t$th column of $\mathbf{G}_i$ represents the predicted GEX values at time $t$. Furthermore, we have no information about future drug responses except the one at $t=0$, and thus cannot feed back actual drug responses at $t \geq 1$ for subsequent prediction. As mentioned previously, in such cases we can feed back the predicted drug responses when a previous actual response is unavailable. Therefore, in the testing step, we start from $t=1$, using the GEX values and the EDSS value at $t=0$ to predict $t=1$; we then use this estimated drug response and the estimated GEX at $t=1$ to predict $t=2$, and so on. Table shows the results, where the column 'Using estimated GEX' stands for using estimated GEXs and feeding back estimated drug responses for subsequent prediction, while 'Using actual GEX' stands for using actual GEXs and feeding back actual drug responses.
It can be seen in Table that predictors using the predicted GEX with feedback of predicted drug responses perform slightly worse than those using the actual GEX with feedback of actual drug responses, but the MAE and MSE in the two scenarios are very close, especially for the REP-RandomForest algorithm.
Then we use this estimated drug response and the estimated GEX at $t=1$ to predict the response at $t=2$, and so on. Table shows the results, where the column 'Using estimated GEX' stands for using estimated GEX values and feeding back estimated drug responses for subsequent predictions, while 'Using actual GEX' stands for using actual GEX values and feeding back actual drug responses. It can be seen in Table that predictors using the predicted GEX with feedback of predicted drug responses perform slightly worse than those using the actual GEX with feedback of actual drug responses. However, the MAE and MSE in the two scenarios are very close, especially for the REP-RandomForest algorithm. We studied the problem of drug response prediction for time-course gene expression data and presented a computational framework (REP) that: (1) has a recursive structure that integrates past drug response records for subsequent predictions, (2) offers higher prediction accuracy than several classical algorithms such as SVM and LR, (3) exploits the tensor structure of the data for missing GEX completion and unseen GEX prediction, and (4) can predict drug responses at different stages of a treatment from initial GEX measurements. The performance improvement achieved on real data suggests that our method serves as a better predictor of drug response from time-course data. Supplementary Information.
Evaluation of the efficacy, safety, and stability of posterior chamber phakic intraocular lenses for correcting intractable myopic anisometropic amblyopia in a pediatric cohort
04830fe0-fc26-49fc-8090-de745dfc552b
8397845
Pediatrics[mh]
Amblyopia development in pediatric patients is one of the most challenging situations that an ophthalmologist can face. Its prevention and correction require proper cooperation of the child and his/her guardians, which is difficult to achieve in many instances . Anisometropic amblyopia is a common form of amblyopia that leads to aniseikonia and unilateral image blur, with consequent suppression of the blurred image by the brain . The conventional correction of anisometropia using spectacles remains the gold standard adopted by many pediatric ophthalmologists. Children can, in many instances, tolerate glasses despite large refractive differences between the two eyes. This is mainly encountered with axial rather than refractive myopia, assuming Knapp's law of visual optics. However, the literature has demonstrated that the retinal stretching induced in significantly elongated globes can be a primary cause of reduced spatial resolution in the peripheral field . This renders glasses an inconvenient corrective modality for a major portion of myopes with high errors or with refractive rather than axial myopia. Besides, if anisometropic amblyopia develops, occlusion or penalization of the fellow eye can be challenging and difficult to implement . Contact lenses (CLs) are another available option for correcting anisometropia. Nonetheless, intolerance to their use and poor compliance, especially in younger age groups, can lead to treatment failure . When spectacles and CLs fail to guarantee the desired visual acuity in the pediatric age group, other treatment modalities should be considered to prevent amblyopia. Corneal excimer laser ablative procedures are an available alternative. Yet, the risks of flap-related complications, postoperative corneal haze, and the possible development of corneal ectasia are higher among pediatric patients . Another refractive correction modality for amblyopia with higher refractive errors is refractive lens exchange. Such procedures, however, carry major disadvantages, the most prominent of which are the greater risk of retinal detachment and the permanent loss of accommodative power . The use of phakic intraocular lenses (pIOLs) has been proposed as an effective modality for correcting intractable anisometropic amblyopia in children . The major advantages of pIOLs include their predictability, high optical quality, preservation of the child's accommodative power, and avoidance of the hazards of corneal ablative procedures . Though previous studies reported that both iris-fixated pIOLs and posterior chamber pIOLs (PC-pIOLs) have equally satisfactory postoperative visual outcomes, implantation of iris-fixated pIOLs carries a higher risk of endothelial cell loss and intraocular inflammation in adulthood. PC-pIOLs, on the other hand, have a significantly lower risk of such complications . Though the implantation of a PC-pIOL in a child seems more convenient than an iris-fixated pIOL, it can induce other complications that usually arise from preoperative miscalculations or mispositioning of the IOL, mainly anterior subcapsular cataract formation and a shallow anterior chamber . Though reports of such complications in the pediatric population are few, this may be attributed to the paucity of studies on this age range, especially those with long follow-up intervals .
The aim of the present study was to evaluate the refractive efficacy, safety, and stability of PC-pIOLs (Visian Intraocular Collamer Lenses "ICLs") in a pediatric cohort with myopic anisometropic amblyopia. The primary outcome was to assess the visual performance of the enrolled pediatric patients 1 month following the surgical intervention and at their last follow-up visit, while the secondary outcomes were to detect the long-term stability and possible long-term complications by performing slit lamp examination, intraocular pressure (IOP) measurement, and Pentacam examination. This is a prospective, consecutive, non-controlled, interventional case series study performed on a pediatric group of patients (aged 3 to 18 years) who sought medical advice at Watany Eye Hospital, Cairo, Egypt. All the recruited patients underwent the surgical procedure between January 2016 and July 2020. The study adhered to the tenets of the Declaration of Helsinki and was conducted in compliance with the ethical standards set by the Institutional Review Board of the Watany Research and Development Center (registration code REF-2016-002). The guardians of the participating children and teenagers signed preoperative informed consents and were counselled about the nature of the surgical technique and the possible postoperative outcomes. The exclusion criteria included pediatric patients with previous ocular trauma or surgeries, corneal pathologies (mainly corneal dystrophies or ectatic conditions), angle anomalies, congenital glaucoma, any lenticular abnormalities [including abnormal lenticular shapes (mainly spherophakia and microspherophakia), abnormal lens positions (ectopia lentis), and lenses with cataractous changes], an Anterior Chamber Depth (ACD) of less than 2.8 mm, and any posterior segment abnormalities. In addition, cases with high cylindrical errors (either exceeding 3 D in the operated eye or with a difference in the cylindrical component between the two eyes of more than 2 D) were excluded from the selected candidates. Hirschberg and cover tests were performed during the patients' clinical examination to evaluate for strabismus; any participant with co-existing manifest strabismus was excluded from the study and referred to a strabismus consultant for detailed evaluation and reassessment of the proper management thereafter. The study enrolled children and young teenagers with myopic anisometropic amblyopia (errors ranging from −6 to −18 diopters "D") and unsuccessful conventional amblyopia therapy (using spectacles, contact lenses, and/or occlusion therapy). In addition to analyzing the results for the whole pediatric cohort, two subgroupings of the enrolled participants were performed based on age and on the refractive condition of the other eye. For the age subgrouping, the participants were subdivided into three groups: group 1 (aged 3 to 6 years), group 2 (aged 7 to 12 years), and group 3 (aged 13 to 18 years). For the subgrouping based on the refractive condition of the other eye, group 1 included pediatric patients with low myopia (more than 1 D and less than 6 D), while group 2 comprised patients with myopia of less than 1 D or emmetropia. Both groups were compared regarding the visual performance (Unaided Distance Visual Acuity "UDVA", Corrected Distance Visual Acuity "CDVA", and Spherical Equivalent "SE") of the eye that received the ICL implantation.
For all candidates of the case series, a baseline ophthalmological examination was performed before the surgical intervention. This included automated refraction for measuring the SE (performed under complete cycloplegia with cyclopentolate 1%) and assessment of UDVA and CDVA. Subjective refraction was attempted with clinical judgement, relying mainly on the refraction obtained from the automated refractometer. A Snellen acuity chart was conventionally used, but for very young children for whom Snellen acuity testing was inconvenient, the Sheridan-Gardiner test was used. No crowding was used in the subjective visual assessment. The visual acuity was then converted to LogMAR for the statistical analysis. Furthermore, slit lamp examination, IOP measurement using an air-puff tonometer, and fundus examination by indirect ophthalmoscopy were done for all participants. Hirschberg and cover tests were performed to confirm orthophoria, and patients with existing strabismus were excluded. Prior to the selection of suitable candidates, good-quality scans of the Pentacam HR, branded as Allegro Oculyzer II (WaveLight, Erlangen, Germany, software version 1.20r20), were captured for all the patients to rule out corneal ectatic conditions and to measure the ICL vault. Essential biometric measurements were performed, including the ACD, central corneal thickness, keratometric values, and the white-to-white (WTW) diameter. To ensure proper sizing of the ICL, to calculate its suitable power, and to validate the WTW and ACD measurements, the values obtained from the Pentacam HR were also confirmed by capturing good-quality scans with the IOL Master 500 (Carl Zeiss Meditec, Germany) for all the enrolled participants. The ACD, measured from the anterior lens surface to the corneal endothelium, exceeded 2.8 mm in all patients. Statistical comparisons were made between the preoperative parameters and the corresponding ones at the two postoperative visits. Furthermore, Pearson correlations were plotted to determine the possible relations between the visual improvements (in UDVA, CDVA, and SE) and the major variables assumed to possibly affect them (namely, patient age and the difference in refraction between the two eyes). These possible relations were further validated by including the most significant contributors in regression models (univariate and multivariate analyses) to estimate their effects on the outcomes. Surgical technique The surgical technique was performed for all the cases by the same experienced surgeon (F.F.M). All the surgeries were carried out under general anesthesia due to the young age group. All the pediatric participants received a Visian ICL (Model V4c, STAAR Surgical, Monrovia, California, USA), which was introduced and positioned into its proper location as per the conventional method of its implantation . On entering the patients' refractive powers into the company's online calculation software, emmetropia was targeted for the recruited participants. Postoperative management Postoperatively, eyedrops containing a steroid/antibiotic combination were prescribed 4 times daily and tapered weekly for 1 month. The suture was removed 2 weeks after surgery.
Automated refraction and subjective visual assessment (using a Snellen acuity chart or the Sheridan-Gardiner test in younger ages), slit lamp examination, IOP measurement using air-puff tonometry, and Pentacam HR imaging were performed for all the participating children along the follow-up visits, where the first visit was scheduled 1 month after the surgical intervention, and the data from the last follow-up visit for each participant were enrolled in the study. All the aforementioned examinations were done for all the participants at both postoperative visits, except for the Pentacam evaluation, which was only performed at the last follow-up visit. The Pentacam images were evaluated to document the ICL stability, the value of the anterior ICL vault, and any detectable postoperative complications. Occlusion of the fellow eye during the day was prescribed for at least 3 h, combined with 1 h of near visual activities , along the first month following surgery. The occlusion therapy was prescribed thereafter if necessary. Adherence to the prescribed occlusion was assessed during the patients' follow-ups by reporting the detailed way of performing it. Statistical analysis Data analysis was performed using IBM SPSS Statistics for Windows (Version 25.0, Armonk, NY: IBM Corp.). The one-sample Kolmogorov-Smirnov test was used to test for normality. Quantitative data were presented as means, standard deviations (SD), and ranges. Sex differences were evaluated by the chi-squared test. Comparisons between more than two paired groups with quantitative data and non-parametric distributions were made using the Kruskal-Wallis test followed by post hoc analysis using the Wilcoxon rank test. Pearson correlation coefficients were used, followed by univariate and multivariate linear regression using the enter method, to assess the correlation between different variables and the improvement of visual parameters. The confidence interval was set to 95% and the accepted margin of error was set to 5%, so the p-value was considered significant at the level of < 0.05. Both the efficacy and the safety indices for the ICL implantation were calculated for the recruited cohort, where the cut-off level of the efficacy index was set to 0.80 and that of the safety index was set to 0.85.
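For readers unfamiliar with these indices, the short sketch below computes them as they are conventionally defined in refractive surgery (efficacy index = mean postoperative UDVA divided by mean preoperative CDVA; safety index = mean postoperative CDVA divided by mean preoperative CDVA, with LogMAR acuities first converted to decimal form). The paper does not spell out its exact formula, so this conventional definition is an assumption, and the variable names are illustrative.

import numpy as np

def logmar_to_decimal(logmar):
    # Decimal visual acuity is the antilog of -LogMAR (0.0 LogMAR = 1.0 decimal).
    return 10.0 ** (-np.asarray(logmar, dtype=float))

def efficacy_safety_indices(preop_cdva, postop_udva, postop_cdva):
    # All inputs are arrays of LogMAR acuities, one value per eye.
    mean_preop_cdva = logmar_to_decimal(preop_cdva).mean()
    efficacy = logmar_to_decimal(postop_udva).mean() / mean_preop_cdva
    safety = logmar_to_decimal(postop_cdva).mean() / mean_preop_cdva
    return efficacy, safety  # values above 1.0 indicate a gain over baseline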
The present study was conducted on 42 eyes of 42 children with unilateral high myopia or myopic anisometropic amblyopia, where the ICL was implanted in the more ametropic eye. The age range of the recruited pediatric cohort was 3 to 18 years, with a mean ± SD of 10.74 ± 4.16 years. The female-to-male ratio was 40.5% to 59.5%. Twenty-two eyes were right eyes, while 20 were left eyes. The mean preoperative SE was −12.85 ± 2.74 D (range, −19.00 to −7.13 D), the mean preoperative cylindrical error was −2.17 ± 1.05 D, and the mean Visian ICL power was −12.77 ± 2.39 D (range, −18.00 to −9.00 D). The follow-up visits had a mean ± SD of 14.67 ± 16.56 months (range, 1 to 54 months). Since many patients were non-compliant with the regular follow-up intervals that were pre-set before the study (either due to the COVID-19 circumstances or due to living in remote governorates), only the data from the first follow-up visit (1 month after surgery) and from the last fulfilled follow-up (considered as the second visit) were enrolled in the study. Table shows the mean values of the patients' visual acuity and refraction at the preoperative visit, the first postoperative visit, and the last follow-up visit, and the P values of significance between them. The results declared statistically significant differences between the values of the preoperative and the first postoperative visit, with a significant improvement in each of the postoperative UDVA (P value < 0.01), CDVA (P value < 0.01, with a mean improvement of 0.2 ± 0.50 LogMAR), and SE (P value < 0.01, with a mean improvement of −11.83 ± 4.78 D).
Moreover, the results clearly showed refractive stability among the participating patients, as there was a slight (statistically insignificant) improvement in all the mean values of the patients' visual acuity and refraction between the first postoperative and the last follow-up visit, except for a single statistically significant improvement in the UDVA (P value = 0.012). The values of both the efficacy and the safety indices for the enrolled patients were remarkably high, at 1.18 ± 0.3 and 1.09 ± 0.24, respectively. The slit lamp examination at the first postoperative and the last follow-up visit showed clear corneas, quiet anterior chambers (AC) with no detected inflammatory reactions or pigmentary deposits, and a centralized ICL. The Pentacam images captured at the last follow-up visit confirmed the stability of the ICL in place, with no detected obstruction of the AC angle and a sufficient space between the ICL and the crystalline lens. The anterior ICL vaulting had a mean ± SD of 490 ± 40.23 µm. Regarding the IOP measurements and the fundus examination of the participants, the findings were unremarkable before and after surgery and during the follow-up visits. A significant portion of the children's guardians (80%) reported poor compliance with the prescribed occlusion therapy, despite strict instructions to abide by it. Yet, the parents of all the children reported enhanced physical activities and improved social intermingling for all the participating patients within a short time of the surgical intervention. Regarding the age subgrouping, our examined cohort included 7 patients in group 1, 18 patients in group 2, and 17 patients in group 3. No significant differences were detected among the three age subgroups regarding the visual or refractive changes before and after the surgical procedure. For the subgrouping based on the refractive status of the fellow eye, group 1 patients had a spherical equivalent that ranged between −1.25 and −4.75 D, and the results declared no statistically significant differences between the patients of group 1 (17 eyes) and group 2 (25 eyes) regarding the visual performance of the eye that underwent ICL implantation. The Pearson correlations showed a single significant relation, where the difference in refraction between the two eyes was negatively correlated with the improvement of SE (r = −0.83, P value < 0.001). This relation was further supported by the linear regressions, with the same significant relation detected in both the univariate (beta coefficient = −0.73, P value < 0.001) and the multivariate (beta coefficient = −0.83, P value < 0.001) analyses. In contrast, the age factor was not correlated with any of the included parameters. This prospective case series study showed the efficacy (efficacy index of 1.18 ± 0.3), safety (safety index of 1.09 ± 0.24), and stability of Visian ICLs for correcting myopic anisometropic amblyopia in a pediatric cohort with unilateral high myopia and non-compliance with the conventional treatment modalities. To date, the present study has comprised the largest number of pediatric patients receiving an ICL for correcting anisometropic amblyopia, and it is also the first study to document a follow-up interval reaching up to 54 months.
Thus, the present report validates the use of Visian ICLs in young children and teenagers without concerns about their long-term refractive stability or the development of long-term complications. Besides, the absence of significant differences in visual performance among the three age subgroups indicates promising results for the whole included pediatric age range (3 to 18 years). Our studied population included cases of unilateral high myopia. This population has been shown to be more prone to developing anisometropic amblyopia, even with trials of conventional treatments using spectacles, contact lenses, and occlusion therapy . For all candidates included in the present study, the cylindrical component did not exceed 3 D in the operated eye, and the difference in the cylindrical component between the two eyes was no more than 2 D. We excluded patients with higher cylindrical errors that would require toric ICLs for correcting high astigmatism, assuming that the corneal toricity would change over time and that implanting a toric ICL at this young age would therefore possibly require a secondary exchange within a few years. Even though our enrolled patients had relatively low cylindrical values, those left postoperatively with a visually significant cylinder were corrected with spectacles, especially since this study aimed at correcting the anisometropic amblyopia rather than attaining spectacle independence for the candidates. Implantation of PC-pIOLs in children for preventing and treating anisometropic amblyopia can be considered a preferable technique by many surgeons and also, after proper counselling, by many parents. This can be attributed to the efficacy and safety of the procedure in restoring visual performance, the lack of the noxious postoperative precautions encountered with corneal refractive surgeries (especially at younger ages), the significantly lower risk of endothelial cell loss than with AC-IOLs (especially given the inevitable eye rubbing in children), the preservation of the corneal architecture (allowing future successful corneal refractive surgeries if needed), and the reversible nature of the technique (if needed) . The enrolled pediatric cohort in the present study did not experience postoperative complications. Lack of surgical experience and an improper vault size are the two main reported risk factors for a higher incidence of pediatric secondary cataract (in cases with low vaults) or pupillary block glaucoma (with high vault values) following the surgical intervention . Yet, it is noteworthy that the complications related to an improper ICL vault were more frequently encountered with the older ICL models. The newer ICL models (the V4c used in this study as well as the newer V5) include a central port, which greatly minimizes the risk of either cataract development or pupillary block . The absence of the two aforementioned risk factors in our study may explain the absence of postoperative complications in our recruited patients. In our studied pediatric population, the mean vault value was within the normal range and toward its higher end. A relatively higher value for the pediatric ICL vault has been advocated, considering the expected progressive reduction of the central vault over time with the slow (yet steady) axial growth of the crystalline lens over the years. That is why higher vault values (within the normal range) can be preferable for younger age groups .
Worthy of mention is that compliance with the occlusion therapy was poor for most of the pediatric cohort, which has also been reported in previous studies . This can be attributed to many factors, mainly skin irritation, poor cosmetic appearance, lengthy treatment periods, and the stress suffered by the child and his/her parents. These factors make occlusion therapy difficult to achieve and more likely to be abandoned or applied considerably less than required. This validates the use of ICLs at an early phase if conventional therapy is ineffective, so as to avoid the occurrence of anisometropic amblyopia. The parents of the pediatric cohort reported improved physical and social activities within a short period after the ICL implantation. These short-term enhancements cannot be attributed to simple maturation of the children, which requires longer time intervals, so we can attribute these improvements to the better visual performance following the ICL implantation. Previous studies reported the outcomes of implanting iris-fixated pIOLs for correcting pediatric anisometropic amblyopia. Though the visual outcomes were satisfactory, some complications were documented, including progressive endothelial cell loss with eye rubbing (which is mostly uncontrollable at younger ages) and iris chafing. Furthermore, the relatively short follow-up intervals render the results of these studies unreliable for the true evaluation of the possible consequent complications . To the authors' knowledge, few case series studies have been conducted on implanting PC-pIOLs in myopic anisometropic children. All these studies recruited fewer children than the present study, and the follow-up ranges were shorter. Table displays the clinically relevant results of these studies, which are collectively in accordance with our study results in validating the stability and the absence of significant complications after implanting PC-pIOLs . In our study, the age subgrouping detected comparable visual results, and the linear regression analysis did not show a significant relation between the visual improvement and the age factor, denoting that ICL implantation in such a cohort is a preferable technique across the whole included age range. Although the age subgrouping of our enrolled patients yielded no significant differences among the three subgroups, future studies conducted on larger cohorts are needed to validate these results, especially as the number of patients in the smallest age subgroup (aged 3 to 6 years) was smaller than in the other two subgroups. We also performed a subgrouping of the enrolled cohort based on the refractive condition of the fellow eye. This aimed to determine whether the eyes with low myopia in the fellow eye had a more favorable visual prognosis along the follow-up visits than those with emmetropia in the fellow eye, assuming that the amblyopic eye would be favored from the refractive aspect after the ICL implantation. Although our study did not show significant differences between the two subgroups, we believe that these results should be refuted or reinforced by future studies performed on larger cohorts with a more balanced number of patients in both groups (as the eyes in group 1, with low myopia in the fellow eye, represented only 40.5% of the enrolled patients).
In our studied cohort, specular microscopy was not performed, as we did not expect significant compromise of the corneal endothelium by the implanted ICLs (owing to their posterior location behind the iris). Previous reports have documented that PC-pIOLs are much safer for the corneal endothelium than AC-IOLs . Moreover, a recent study by Fernández-Vega-Cueto and co-workers confirmed the lack of long-term traumatizing effects of the modern ICL designs on the corneal endothelium, showing a very minimal endothelial cell loss of 2.6% at the last follow-up visit over a follow-up interval of 7 years after V4c ICL implantation. Future studies with longer follow-up periods can more robustly establish the refractive stability and safety of the Visian ICLs. Besides, highlighting the impact of the improved visual performance on binocular vision is recommended in upcoming studies. In conclusion, the present study demonstrated the long-term visual and refractive efficacy, safety, and stability of the Visian ICL for correcting myopic anisometropic amblyopia in a pediatric cohort with a mean preoperative SE of −12.85 ± 2.74 D and a range of −19.00 to −7.13 D. Based on our study results, the implantation of Visian ICLs in cases of unilateral high myopia with intractable anisometropic amblyopia results in long-term visual and refractive stability and can also be gauged as a low-risk procedure, as evidenced by the long-term absence of reported complications. Furthermore, the reported non-compliance with occlusion therapy in many of our studied patients supports the early implantation of Visian ICLs when conventional conservative correction and occlusion therapy fail, so as to guard against anisometropic amblyopia.
Evaluation of medical malpractice claims in thoracic surgery
9ae7da62-23e3-463e-8a9a-020bf20091d0
10315966
Forensic Medicine[mh]
Medical malpractice occurs in cases where damage develops in a patient as a result of a doctor's deviation from the standard of practice or care. The surgical specialties are at a higher risk for medical malpractice claims than any other areas of specialization. Today, young doctors avoid surgical branches in their professional careers due to the high risk of malpractice together with long and exhausting working hours. The time spent on defense in lawsuits and the emotional burden that these lawsuits can place on surgeons may also affect their medical practice; some physicians have abandoned surgery completely after cases resulting in compensation. To reduce and prevent medical malpractice claims, which may cause serious consequences for physicians, cases with medical malpractice claims should be closely analyzed. Unfortunately, few studies published in Turkey have raised awareness about thoracic surgeons and medical malpractice. The purpose of this study was to evaluate thoracic surgery cases that resulted in death and in which medical malpractice claims were filed, to increase the awareness of thoracic surgeons about cases with alleged medical malpractice. Sampling Medical malpractice claims that were filed in thoracic surgery cases that resulted in death were retrospectively analyzed from the report archives of the First Board of Specialization of the Council of Forensic Medicine between January 01, 2010, and December 31, 2015. Diagnostic Methods The First Forensic Medicine Specialization Board of the Forensic Medicine Institute acts as an expert on cases with medical malpractice claims that resulted in death and were referred by judicial authorities. The board consists of a chairman and ten members (two forensic medicine specialists, one pathologist, one internist, one cardiologist, one general surgeon, one neurosurgeon, one anesthetist, one gynecologist, and one pediatrician). In addition, members from different medical specialties (such as thoracic surgery) may be appointed to the board. After a case reaches the board, it is examined by the rapporteur. If there are any deficiencies in the file, a letter is written to the judicial authority requesting the necessary information. If the file is complete, the rapporteur evaluates statements from the victims, accused doctors, and witnesses; all medical documents, surgery notes, epicrisis reports, observation documents, and radiological examination documents and images; autopsy reports; and photographs. The prepared report is then presented to the chairman and members of the board, and a final report is prepared and sent to the court detailing whether the physician has been determined to be at fault. Data Collection and Implementation While the data were being recorded, the following parameters were scrutinized: the gender and age of the cases, the healthcare organization visited, the reason for the visit to the hospital, the academic title of the physician, the clinical diagnosis, the medical and/or surgical treatments performed, any emergency or elective interventions, whether the death was traumatic or natural, the presence and type of any complications, and the phase in which the confirmed malpractice occurred. The present study was a retrospective study that included no identifying data or human/animal subjects, so informed consent was not required.
All study procedures were performed after obtaining the scientific and ethical approval of the Ministry of Justice Council of Forensic Medicine (dated February 23, 2016, No. 21589509/77) and in accordance with the 1964 Declaration of Helsinki and its later amendments. Statistical Analysis The Statistical Package for the Social Sciences 21.0 (Armonk, NY) was used for the data analysis in this study. Descriptive statistics were presented as frequencies, percentages, means, standard deviations, and minimum and maximum values. Fisher's exact test was used for the comparison of qualitative data, along with descriptive statistical methods. The significance level was accepted as p<0.05.
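As an illustration of the main inferential test, the snippet below applies Fisher's exact test to the emergency-versus-elective comparison reported in the Results (3 of 17 emergency cases versus 0 of 14 elective cases with confirmed malpractice). This is a hedged reconstruction in Python rather than the authors' SPSS output.

from scipy.stats import fisher_exact

# 2x2 table: rows = emergency vs. elective surgery;
# columns = malpractice confirmed vs. not confirmed
table = [[3, 14],   # emergency: 3 confirmed, 14 not confirmed
         [0, 14]]   # elective:  0 confirmed, 14 not confirmed
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(p_value)  # well above 0.05, consistent with the reported non-significance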
This study included 81 cases: fifty-nine of the cases were male (72.8%) and 22 were female (27.2%). The mean age was 51.13±18.97 years. The most common age range was over 60 years (n=35, 43.2%), followed by 40–59 years (n=28, 34.6%), 18–39 years (n=13, 16%), and 0–17 years (n=5, 6.2%). Medical malpractice was confirmed in 11 (13.6%) of the cases. Eighty-nine doctors (three residents, 77 medical specialists, two assistant professors, two associate professors, and five professors) were charged with malpractice allegations. When the distribution of the hospitals where these incidents took place was examined, it was determined that treatment most frequently occurred at state hospitals (n=49, 60.5%), followed by education and research hospitals (n=15, 18.5%), university hospitals (n=10, 12.3%), and private hospitals (n=7, 8.6%). In the 11 cases where medical malpractice was confirmed by the board, the most common cause of error was diagnostic error (n=7, 63.6%). The most common type of diagnostic error was failure to diagnose on time (n=4, 36.4%) ( ). Forensic examinations indicated that 54 (66.7%) of the cases were traumatic deaths, while 27 (33.3%) were deaths from natural causes. When the disease diagnoses of the cases were examined, the most frequent diagnosis was "injuries due to trauma" (n=54, 66.7%), followed by lung cancer (n=9, 11.1%) ( ). The first intervention performed by thoracic surgeons most frequently occurred in the emergency department (n=59, 72.8%). Of all 81 cases evaluated in this study, 31 (38.3%) underwent surgical treatment, while 50 received medical treatment. Surgery was performed under emergency conditions in 17 (54.8%) of the 31 cases who underwent surgery and under elective conditions in the remaining 14 (45.2%) patients. No statistically significant difference in medical malpractice rates was found between surgical interventions performed on an emergent basis and elective interventions (p>0.05) ( ). When the physician's role in patient care was examined, it was found that 80.2% (n=65) of the accused doctors intervened as consultants, and 19.8% (n=16) were the primary attending physicians. There was no significant difference between consultant physicians and attending physicians in terms of medical malpractice rates (p>0.05) ( ). Complications developed in 48 (59.3%) of the cases during their treatment course. The most common complication was pneumonia (n=7, 14.6%) ( ). No statistically significant difference was found between the development of complications and medical malpractice rates (p>0.05) ( ). In Turkish studies, the overwhelming majority of cases associated with alleged medical malpractice lawsuits are filed by male patients. In the present study, most cases (72.8%) were also male. In Turkey, the incidents that cause medical malpractice claims frequently occur at state hospitals.[ , , ] In the present study, 60.5% of the cases were treated at a state hospital. Because the physicians who care for emergent patients often do not have sufficient information about the patient when treatment begins, they are very likely to encounter medical malpractice claims due to the need to make quick decisions in an acute situation, the limited time allocated to patients and their relatives, and the discontinuous patient–doctor relationship. In the present study, thoracic surgeons often performed the first intervention in the emergency department (n=59, 72.8%).
Surgical intervention has been carried out in the vast majority of medical malpractice claims filed against doctors from surgical specialties.[ , , ] However, patients who undergo nonsurgical medical treatment reportedly have a statistically significantly higher incidence of encountering a medical error compared to patients treated with surgery. Surgical procedures performed under emergent conditions may seem more prone to error, but the literature indicates otherwise. Emergency surgery was carried out in 54.3% of cases that filed medical malpractice claims; these patients underwent surgical interventions in the general surgery department, but no significant relationship was identified between emergent versus elective surgical intervention and medical errors. Only 38.3% (n=31) of the patients in the present study underwent a surgical intervention. No statistically significant difference was found between surgical interventions performed under emergent and elective conditions with regard to medical malpractice (p>0.05) ( ). However, it was quite remarkable that none of the cases who underwent elective surgical intervention involved any malpractice, while 17.6% (n=3) of the cases treated with emergency surgery had confirmed medical malpractice. Doctors may often solicit ideas and suggestions from their colleagues in other specialties about a patient's follow-up or treatment, and they can modify the patient's treatment plan based on these consultations. Although the primary treatment responsibility lies with the attending physician, consulting physicians also have a responsibility to report their opinions about the patient and to provide the most appropriate treatment recommendations to the attending physician in a comprehensive verbal and written form. In the present study, 80.2% (n=65) of the thoracic surgeons accused of malpractice examined the patient as a consulting physician. Trauma cases have a reputation for being at high risk of becoming involved in malpractice claims. One study found that 16.6% of cases with medical malpractice claims in the general surgery specialty were trauma cases. In the general surgery specialty, this rate was 32.3% in cases with medical malpractice claims that resulted in death. In another study that involved 275 neurosurgical medical malpractice claims, 17.5% of the cases were trauma cases. In the present study, 54 (66.7%) of the cases were found to be traumatic forensic deaths, while 27 (33.3%) were ruled natural deaths. While 18.5% of the cases who died as a result of trauma were determined to involve medical malpractice, this rate was only 4.7% in cases where the patient died from natural causes. This demonstrates that the medical malpractice rate in trauma cases is about four times higher than in cases where the patient dies from natural causes; doctors would be prudent to exercise great caution in these cases. "Incorrect application, incorrect technique, failure to recognize the complication, forgetting a foreign body, incorrect management, unnecessary procedure, operation on the wrong body part, lack of/failure in informed consent, failure to perform the procedure, and delay in implementation" are the ten most common causes of medical malpractice that result in compensation in surgical specialties. A study involving 58,158 surgical medical malpractice cases found that 41.8% of the cases received paid compensation. Regenbogen et al.
reported that 52% of the cases with surgical medical malpractice claims involved technical errors, and that the most common reasons for technical errors were injury to internal organs or other anatomical structures as a result of an accident or a lack of judgment and knowledge. In Turkey, the most common reasons for medical malpractice in the branch of general surgery are incomplete evaluation before and after surgery and misdiagnoses. In the study of Üzün et al., the most frequent mistakes made in the branch of general surgery were caused by deficiencies in the treatment process (47.8%). In this study, we found that medical malpractice was reported in 11 (13.6%) of the cases by the Board of Specialization, and the most common reason for reporting medical malpractice was a diagnostic error (n=7, 63.6%) ( ). The Board of Specialization decided that the medical procedure performed was appropriate in 86.4% of the cases. In other words, 86.4% of the physicians accused of medical malpractice were accused unjustly. Since medical malpractice lawsuits continue for many years and have serious negative effects on physicians, it is obvious that new legal arrangements should be prepared for medical malpractice lawsuits. It has been claimed that trauma patients have a low risk of filing a real malpractice lawsuit. Trauma surgeons are significantly more at risk of unwanted patient complaints than surgeons in other specialties, but this risk is likely due to the small number of trauma surgeons and is not associated with the field itself. Trauma and injury patients constitute 22–36% of the cases in Turkey in which general surgeons are found guilty of medical malpractice. In addition, 90.1% of the cases in which the thoracic surgeon was found to have committed medical malpractice were trauma and injury patients. The goal of treatment in trauma patients is to identify the injuries as soon as possible and begin treatment. Delays in diagnosis are associated with high morbidity and mortality rates, which lead to longer hospital stays and high health costs.[ – ] In trauma patients, 19–23.3% of the diagnoses that could not be made on time involve clinically significant injuries, and 56.3% of the factors that cause missed diagnoses in multiple trauma cases are preventable. In our trauma cases where thoracic surgeons were determined to have committed medical malpractice (n=10), the most common reason for a misdiagnosis was the inability to diagnose on time (n=4, 40%). Repeated clinical evaluations during the follow-up process after the first emergent intervention play an important role in the detection of missed diagnoses.[ – ] New complaints observed in the patient, especially during the follow-up period, may be closely related to a possibly missed diagnosis. Failure to diagnose on time is the most common reason that patients sue doctors who treat lung cancer patients (80%), followed by errors in surgery and chemotherapy (7%) and a false-positive diagnosis of lung cancer (7%). In a study that included 583 diagnostic errors made by 310 clinicians, lung cancer (3.9%) was the third most frequently missed diagnosis after pulmonary embolism and drug reaction.
In addition, primary care physicians and radiologists have a higher risk of being sued for malpractice claims related to lung cancer, while this risk is lower for thoracic surgeons who operate on lung cancer patients. In this study, the second most common diagnosis associated with medical malpractice claims was lung cancer (n=9, 11.1%) ( ). While the complication rate of surgeries was 9.1% in 1990 in the United States, this rate had increased to 83.6% by 2012. The most common postoperative pulmonary complications after thoracic surgery are pneumonia and atelectasis. Postoperative pulmonary complications are responsible for 80% of deaths that take place after thoracic surgery. In this study, 59.3% (n=48) of cases developed complications, and the most common complication was pneumonia ( ). No statistically significant difference was found between the development of complications and the medical malpractice rates (p>0.05) ( ). This trend indicates that thoracic surgeons are successful in recognizing and managing complications related to thoracic diseases and treatments. This study had strengths as well as weaknesses. First of all, the decisions provided regarding medical malpractice are only the decisions of an expert institution and do not represent the final decisions of the court. The inability to include the final decisions of the court was an important limitation. Since the Forensic Medicine Institute is not the only authority, the expert report given by the board can be appealed, and the judge is not required to comply with the expert's decision. Another limitation was the lack of information about the compensation amounts that the physicians had to pay as a result of the lawsuits. In addition, since our study included only cases that resulted in death, it cannot be said to adequately represent all cases with malpractice claims. It is vital that future studies include cases from all over the country to provide important clues for thoracic surgeons about cases of alleged medical malpractice. Despite these limitations, this was the first study in Turkey to include cases with medical malpractice claims filed against thoracic surgeons. We found that the First Forensic Medicine Specialization Board confirmed medical malpractice by thoracic surgeons in 13.6% of the examined cases. In other words, 86.4% of the physicians accused of medical malpractice were accused unjustly. The most common reason for medical malpractice was a diagnostic error (n=7, 63.6%). The incident that was the subject of the complaint took place most frequently in a state hospital, and specialist doctors were blamed most often. The most frequent diagnosis was "injuries due to trauma." Most of the accused doctors had been asked by the attending physician to consult on the patient's case. Examining cases with medical malpractice claims will help physicians not only to better understand the characteristics of malpractice claims but also to develop strategies to prevent them.
Personalized contrast agent dosing to prevent contrast induced nephropathy in high risk populations in Guangdong, China
b3404f21-649e-436e-bea5-5d558deaf2d2
11865306
Surgical Procedures, Operative[mh]
Iodinated contrast media are vital in modern medicine, enhancing diagnostic and interventional procedures across various specialties, such as cardiology and neurology. These agents improve image contrast on X-rays, CT scans, and angiography, enabling precise visualization of internal structures. This clarity is crucial for accurately diagnosing and managing diseases, leading to better patient outcomes. Iodinated contrast media enhance diagnostic accuracy and facilitate targeted treatments, underscoring their indispensable value in healthcare , . Contrast-induced nephropathy (CIN) is a potential complication that can arise following the intravascular administration of contrast media. Since the introduction of iodinated contrast agents in the early 1950s, their ability to delineate vascular structures and improve imaging precision has significantly increased diagnostic accuracy , . However, soon after their inception, the nephrotoxic potential of these agents was recognized. Over the past few decades, with the increasing use of contrast agents in various procedures, the incidence and implications of CIN have become significant concerns for clinicians – . CIN incidence rates vary widely, with reported values ranging from as low as 2% to as high as 25% in high-risk populations – . Several risk factors that increase an individual's susceptibility to CIN development have been identified. These risk factors include preexisting chronic kidney disease (CKD), diabetes mellitus (DM), advanced age, volume depletion, concurrent use of nephrotoxic drugs, and high volumes or repeated doses of contrast media – . Current evidence incompletely details the intricate interplay of these factors, leaving current risk stratification models imprecise. This study addresses these knowledge gaps by exploring nuanced risk factor interactions, refining prediction models, and investigating targeted interventions. Through the present findings, we aimed to contribute to the development of more accurate risk assessments and effective clinical strategies that would advance the prevention and management of CIN. Therefore, the purpose of this study was to elucidate the relationship between contrast agent dosage and the risk of CIN development, with a particular focus on various patient subgroups differentiated by medical conditions such as DM, hyperuricemia (HUA), heart failure (HF), CKD, and anemia. Our research revealed notable differences in the correlation between contrast agent dosage and CIN risk across various patient subgroups. Specifically, we examined the increased risk of CIN development in patients stratified by sex, age, body mass index (BMI), DM status, hypertension status, HF status, CKD status, anemia status, and HUA status. Understanding these relationships will be crucial for optimizing contrast agent dosage and minimizing the risk of CIN development in diverse patient populations. Using a threshold effect model, our study further delineated the cutoff points for contrast agent dosage that correspond to an increased risk of CIN development in different patient subgroups. These findings offer valuable insights for clinicians in determining safe contrast agent dosages, particularly in high-risk populations. Study design and participants We conducted a retrospective analysis involving patients who underwent percutaneous coronary intervention (PCI) or computed tomography angiography (CTA) at four hospitals in Guangdong, China, between 2010 and 2018.
These hospitals included Dongguan People's Hospital (Tenth Affiliated Hospital of Southern Medical University), Taishan People's Hospital, Dongguan Xiegang People's Hospital, and Guangdong Provincial People's Hospital. This study included patients aged 18 years and above who received either a hypo-osmolar nonionic monomeric contrast agent or the isotonic nonionic dimeric agent iodixanol (both containing 320 mgI/ml iodine) and who signed an informed consent form. Patients were excluded if they met any of the following criteria: receipt of contrast agents other than those specified; an active tumor status or current treatment with nephrotoxic chemotherapy; no pre- or post-exposure renal function data available within three days of contrast agent administration; undocumented contrast agent doses; or additional potential confounding factors affecting renal function, including previous acute renal injury, renal insufficiency, renal tubular injury, and other renal diseases, such as diabetic nephropathy, hypertensive nephropathy, or tubulointerstitial diseases. Data collection The collected data included demographic details (age, sex, weight, height, blood pressure) and medical history (DM, hypertension, hypotension, HF, CKD, anemia, intra-aortic balloon pump [IABP] use, hydration status). The laboratory parameters included hemoglobin (HGB), renal function (assessed before and within three days after contrast exposure), and uric acid (UA) levels. BMI was calculated via the standard formula (weight in kilograms divided by the square of height in meters). The estimated glomerular filtration rate (eGFR) was derived via the CKD Epidemiology Collaboration (CKD-EPI) equation . Definition of CIN CIN was defined as an absolute increase in serum creatinine (SCr) of ≥ 0.5 mg/dl (44.2 µmol/L) or a relative increase of ≥ 25% from baseline within 48 to 72 h following contrast exposure. Covariates The following covariates were included in the analyses: sex, age, BMI, systolic and diastolic blood pressure, laboratory results (HGB and UA), and disease history (DM, hypertension, hypotension, HF, CKD, anemia, IABP use, hydration status). Statistical analysis Continuous variables are presented herein as means ± standard deviations, whereas categorical variables are expressed as percentages. To compare population characteristics according to the occurrence of CIN, one-way analysis of variance (ANOVA) and chi-square tests were used. The association between contrast agent dose and CIN risk was evaluated via multivariate logistic regression, which yielded odds ratios (ORs) and 95% confidence intervals (CIs) adjusted for relevant covariates. The contrast dose was analyzed as both a continuous variable and a categorical variable on the basis of clinically established cutoff points. A penalized spline method was used for smooth curve fitting to visualize the relationship between the contrast agent dose and the risk of CIN development. Subgroup analyses were conducted to explore potential modifiers affecting the contrast dose‒CIN relationship, focusing on variables such as sex, age (divided into < 65 and ≥ 65 years), BMI (divided into < 24 and ≥ 24 kg/m²), hypertension, DM, HF, CKD, anemia, and high UA levels. Stratified analyses and interaction tests were performed to assess potential effect modifications. We employed a threshold effect model to capture the nonlinear relationship between the explanatory variable (X) and the outcome variable (Y).
Covariates The following covariates were included in the analyses: sex, age, BMI, systolic and diastolic blood pressure, laboratory results (HGB and UA), and disease history (DM, hypertension, hypotension, HF, CKD, anemia, IABP use, hydration status). Statistical analysis Continuous variables are presented herein as means ± standard deviations, whereas categorical variables are expressed as percentages. To compare population characteristics according to the occurrence of CIN, one-way analysis of variance (ANOVA) and chi-square tests were used. The association between contrast agent dose and CIN risk was evaluated via multivariate logistic regression, which yielded odds ratios (ORs) and 95% confidence intervals (CIs) adjusted for relevant covariates. The contrast dose was analyzed as both a continuous variable and a categorical variable on the basis of clinically established cutoff points. A penalized spline method was used for smooth curve fitting to visualize the relationship between the contrast agent dose and the risk of CIN development. Subgroup analyses were conducted to explore potential modifiers affecting the contrast dose‒CIN relationship, focusing on variables such as sex, age (divided into < 65 and ≥ 65 years), BMI (divided into < 24 and ≥ 24 kg/m²), hypertension, DM, HF, CKD, anemia, and high UA levels. Stratified analyses and interaction tests were performed to assess potential effect modifications. We employed a threshold effect model to capture the nonlinear relationship between the explanatory variable (X) and the outcome variable (Y). In biomedical research, many factors and their associated outcomes do not follow a simple linear relationship. Instead, their effect may be either null or positive within a certain range, and once a threshold is exceeded, the magnitude or direction of the effect may change. To detect such threshold effects, we first applied a smoothing curve fitting technique to examine the relationship between X and Y. This method allowed us to visually assess whether a piecewise linear relationship exists. To formally identify and validate the threshold, we used the EmpowerStats software (X&Y Solutions), which includes a dedicated module for threshold effect analysis. This software utilizes maximum likelihood estimation (MLE) to determine the threshold(s) based on the observed piecewise linear relationship. The software offers two options: if prior knowledge of the threshold exists, the user can input a specified value; alternatively, the software can automatically determine the threshold based on the data and the segmented fitting approach. In our analysis, we chose the software’s automatic threshold identification feature. A two-sided p value of less than 0.05 was considered to indicate statistical significance. All analyses were conducted via R software, version 3.4.3 ( www.R-project.org ), and EmpowerStats, version 2.17.8 ( www.empowerstats.com , X&Y Solutions, Inc.).
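The automatic threshold identification described above can be approximated outside EmpowerStats as a profile-likelihood scan: fit a two-segment logistic model at each candidate cutoff and retain the cutoff that maximizes the log-likelihood. The following Python sketch (assuming statsmodels, with hypothetical variable names) illustrates the idea; it is not the EmpowerStats implementation.

```python
import numpy as np
import statsmodels.api as sm

def fit_segmented_logit(dose_ml, cin, threshold):
    """Logistic model with separate dose slopes below and above the threshold."""
    below = np.minimum(dose_ml, threshold)
    above = np.maximum(dose_ml - threshold, 0.0)
    X = sm.add_constant(np.column_stack([below, above]))
    return sm.Logit(cin, X).fit(disp=0)

def scan_threshold(dose_ml, cin, candidates):
    """Return the candidate cutoff with the highest model log-likelihood."""
    return max(candidates, key=lambda t: fit_segmented_logit(dose_ml, cin, t).llf)

# e.g., scan cutoffs from 80 to 300 ml in 5 ml steps on the cohort arrays:
# best_cutoff = scan_threshold(dose_ml, cin_flag, range(80, 305, 5))
```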
Baseline characteristics of the study participants During the study period, data were collected from 16,038 patients. Among these patients, 227 were excluded because of the absence of renal function data before contrast media administration, 3,371 were excluded because of the absence of pre- and post-exposure renal function data within three days before and after the use of contrast agents, and 64 were excluded because the contrast agent dose was not recorded.
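As a quick arithmetic check, the exclusion counts above reproduce the final cohort size and CIN incidence reported next (a trivial sketch using only numbers stated in the text):

```python
screened = 16_038
excluded = 227 + 3_371 + 64      # no baseline renal data, no paired renal data, no dose
final_cohort = screened - excluded
cin_cases = 797                  # CIN cases reported below
print(final_cohort)                              # 12376
print(round(100 * cin_cases / final_cohort, 1))  # 6.4 (%)
```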
This process resulted in a final cohort of 12,376 patients (Fig. ), with an average age of 63.0 years (standard deviation of 12.4 years), 63.8% of whom were male. The incidence of CIN was 6.4%, affecting 797 of the 12,376 patients included in the analysis (Table ). The results of this study revealed that patients who developed CIN were generally characterized by greater contrast use, older age, and male sex, along with a lower BMI and HGB concentration. They also had higher systolic and diastolic blood pressure (SBP and DBP), baseline serum creatinine (SCr), blood urea nitrogen (BUN), and uric acid (UA) levels. Hypertension, DM, HF, the use of an intra-aortic balloon pump (IABP), anemia, hydration, and elevated UA were more common in the CIN group ( p < 0.05). Compared with those who underwent CTA, patients who underwent PCI had a greater probability of CIN development. No significant differences were noted regarding hypotension status (Table ). Associations between contrast dosage and CIN incidence To investigate the impact of contrast agent dosage on the CIN incidence, we performed a structured analysis of the relationship between varying dosages and the incidence of CIN, employing statistical models and adjusting for confounding variables. The correlation between the dosage of contrast agent and the incidence of CIN is illustrated in Fig. , highlighting a discernible increase in the CIN incidence rate in patients with increased contrast agent dosages. A more detailed examination, as summarized in Table , confirmed the significant association between incremental increases in contrast agent dosage (per standard deviation) and the incidence of CIN (adjusted odds ratio (OR) 1.007 (95% CI: 1.006–1.008)). Expanding on these findings, we analyzed outcomes across specific dosage ranges. The multivariate-adjusted ORs for groups receiving 101–110 ml, 111–120 ml, 121–130 ml, and 131–140 ml of contrast agent were 1.09 (0.62, 1.89), 0.90 (0.54, 1.51), 1.65 (0.98, 2.79), and 2.31 (1.24, 4.30), respectively, in comparison to those administered less than 100 ml. Notably, the risk of CIN development increased substantially at higher doses. Compared with doses below 100 ml, doses in the 141–200 ml range were associated with adjusted ORs ranging from 2.29 to 3.66; doses in the 201–300 ml range were associated with an OR of 3.21 (2.46, 4.19); and doses exceeding 300 ml were associated with an OR of 5.78 (4.20, 7.96). This linear trend in the association between contrast agent dosage and CIN incidence was statistically significant (p for trend < 0.001), highlighting the critical need for dosage management in clinical practice to mitigate CIN risk. In the analysis employing a threshold effect model with a cutoff point of 140 ml, doses of contrast agent above this threshold were associated with a significantly increased risk of CIN development, with an adjusted OR of 3.27 (95% CI: 2.75–3.89). This finding indicates a more than threefold increase in the risk of CIN development associated with dosages exceeding 140 ml. Subgroup analyses To further delineate the relationship between contrast agent dosage and the risk of CIN development within specific patient populations, subgroup analyses were performed. These analyses aimed to determine how underlying health conditions might influence the association between contrast agent dosages and CIN incidence.
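The subgroup and interaction analyses reported below follow the Methods; as an illustration only, an interaction test of this kind could be set up as in the following Python sketch (statsmodels formula API assumed; the data frame and column names are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cin_cohort.csv")  # hypothetical analysis data set
df["high_dose"] = (df["contrast_ml"] > 140).astype(int)

# Does hypertension modify the dose-CIN association? The coefficient on
# high_dose:hypertension carries the p-interaction; covariate adjustment
# is abbreviated here for readability.
model = smf.logit("cin ~ high_dose * hypertension + age + sex + bmi",
                  data=df).fit()
print(model.summary())
```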
As illustrated in Fig. , the investigation revealed notably stronger associations in patients with preexisting conditions such as hypertension, DM, and HUA when exposed to higher contrast agent doses (greater than 140 ml) than in those exposed to lower doses (140 ml or less). In hypertensive patients, a significant increase in CIN risk was observed, with an odds ratio (OR) of 4.97 (95% CI: 4.01–6.16), which was markedly greater than that in nonhypertensive individuals, who presented an OR of 1.69 (95% CI: 1.26–2.28); this finding highlighted a significant interaction effect ( p -interaction < 0.001). Similarly, DM patients faced a heightened risk of CIN development (OR = 3.19, 95% CI: 2.58–3.94), albeit slightly lower than that of non-DM patients (OR = 3.36, 95% CI: 2.75–4.09), with a significant interaction effect ( p -interaction = 0.02). Moreover, the increase in risk was more pronounced in patients with HUA (OR = 4.43, 95% CI: 3.23–6.07) than in those without HUA (OR = 2.94, 95% CI: 2.38–3.63), which also indicated a significant interaction ( p -interaction = 0.002). There was no significant interaction effect of factors such as sex, age, BMI, HF status, CKD status, and anemia status on the relationship between a contrast agent dose > 140 ml and CIN risk (all p -interactions > 0.05). In this investigation, we successfully applied a threshold effect model to identify specific contrast agent dosage levels that mark increased risks for various patient subgroups, directly aligning with the study’s objective to understand dosage-related risk variations. Patients with DM, HF, CKD, anemia, or HUA and patients receiving PCI had a greater risk of CIN development at lower dose thresholds (95 ml, 95 ml, 115 ml, 95 ml, 105 ml, and 95 ml, respectively). In contrast, non-DM patients, patients without HF, patients with normal renal function, patients without anemia, patients with normal UA levels, and patients receiving CTA had higher tolerance levels (170 ml, 140 ml, 165 ml, 145 ml, 190 ml, and 160 ml, respectively) (Fig. ).
The primary aim of this retrospective study was to investigate the association between contrast agent dosage and the risk of CIN development, with a particular focus on understanding how patient characteristics and procedural factors may modify this relationship. Our analysis of data from 12,376 patients revealed several key findings. The results indicate that as the dosage of the contrast agent increases, the incidence of CIN also significantly increases. Furthermore, this study established a dosage threshold of 140 ml, above which there was a significant increase in the incidence of CIN. This finding underscores the importance of managing contrast agent dosages in clinical practice to mitigate the risk of CIN and reveals a clear linear trend between dosage and CIN incidence that is statistically significant. Subgroup analyses revealed that patients with hypertension, DM, or HUA face significantly greater risks of CIN when exposed to contrast agent dosages above 140 ml. These subgroups exhibited distinct thresholds: patients with DM, HF, CKD, anemia, or HUA are particularly sensitive to lower dosages of contrast agents, with thresholds set at 95 ml, 95 ml, 115 ml, 95 ml, and 105 ml, respectively, during PCI or CTA. Conversely, individuals without these conditions had higher tolerance thresholds, suggesting a differential susceptibility to CIN on the basis of preexisting health conditions. The findings of this study highlight the complex interplay between contrast agent dosage and the risk of CIN development, particularly in patients with preexisting conditions such as DM and HUA. The observed higher ORs in DM and HUA patients suggest a heightened vulnerability in these subgroups and underscore the need for careful consideration of contrast agent dosages in these populations. Delving further into the particular risks associated with DM, although the OR in DM patients (3.19) was numerically similar to that in non-DM patients (3.36), the significant p-interaction indicates that DM modifies the dose‒CIN relationship and may exacerbate the nephrotoxic effects of contrast agents. This finding is consistent with the literature that recognizes DM as a risk factor for renal impairment because of its contribution to vascular and microvascular complications , . These results call for more stringent monitoring and perhaps a reevaluation of contrast agent dosage thresholds in DM patients undergoing procedures requiring contrast media.
Similarly, the strong correlation between higher contrast agent doses and increased CIN risk in patients with HUA (OR: 4.43), as opposed to those without HUA (OR: 2.94), signals an increased risk in this group. HUA, which is often associated with renal pathology, may act as a synergistic factor exacerbating the nephrotoxicity of contrast agents – . This novel insight suggests that HUA should be considered a significant risk factor in the management of patients requiring contrast-enhanced imaging procedures. This differential tolerance reflects the heightened vulnerability of certain populations to CIN, as corroborated by studies indicating that conditions such as DM, HF, CKD, and anemia can significantly impair renal function, thereby increasing the risk of nephrotoxicity from contrast agents , – . Although PCI procedures typically require larger volumes of contrast agents for effective vascular imaging and catheter manipulation, especially in complex cases, it is noteworthy that patients undergoing PCI develop CIN at lower contrast agent doses than patients undergoing CTA. This increased susceptibility in PCI patients can be attributed to factors such as direct exposure of the renal circulation to contrast media and a higher prevalence of preexisting renal impairment among these patients. In addition, PCI patients may require multiple administrations of contrast agents postsurgery, particularly after complex procedures or for monitoring postoperative complications. Despite the lower individual doses used each time, cumulative exposure to contrast agents over multiple procedures may also increase the long-term risk of CIN development. In summary, compared with CTA patients, PCI patients are more prone to CIN development at lower doses because of the higher contrast agent volumes used during procedures, the presence of underlying medical conditions, and potential multiple exposures to contrast agents. These factors highlight the importance of implementing renal protective strategies in clinical practice for PCI patients to mitigate the occurrence of CIN. The clinical relevance of these findings cannot be overstated. By offering a method to personalize contrast agent dosages on the basis of individual risk profiles, our study paves the way for more effective, personalized strategies to prevent CIN. This approach aligns with the growing emphasis on personalized medicine and the need for individualized risk assessment in the administration of contrast agents , . In essence, the insights garnered from the application of the threshold effect model underscore the importance of tailored medical interventions for enhancing patient safety and outcomes in the context of contrast agent use. While our study offers valuable insights into the relationship between contrast agent dosage and the risk of CIN development, it is important to acknowledge several limitations. First, the retrospective nature of the study design introduces inherent biases and limitations, such as the potential for selection bias and incomplete data capture. Additionally, the reliance on electronic health records for data extraction may introduce inaccuracies or missing information. Despite these limitations, our large sample size and rigorous statistical analysis help mitigate these concerns to some extent. Another limitation is the reliance on observational data, which precludes establishing causality between contrast agent dosage and CIN risk.
While we observed significant associations, further prospective studies, including randomized controlled trials, are warranted to confirm these findings and elucidate the underlying mechanisms involved. Despite these limitations, our study offers critical insights into the necessity of individualized dosing strategies for contrast agent administration, especially among high-risk populations such as patients with DM, HF, CKD, anemia, and HUA. By recognizing these limitations and their implications for interpretation, we emphasize the importance of reevaluating dosage thresholds and refining risk assessment models. This approach allows us to better contextualize our findings and guide future research efforts to develop more effective preventive measures and tailored dosing protocols that can significantly reduce the risk of CIN in these vulnerable groups. Our study emphasizes the urgent need for personalized risk assessment and dose optimization in the administration of contrast agents, with a particular focus on patients with DM, HF, CKD, anemia, or HUA. These findings underscore the necessity of reevaluating current dosage thresholds and developing tailored dosing protocols that can effectively minimize the incidence of CIN in these vulnerable populations. Furthermore, our results support the inclusion of HUA as a critical factor in CIN risk assessment models, a factor that has previously been underrepresented in clinical practice. Moving forward, it is imperative that research continues to refine these dosing protocols and explore comprehensive strategies to effectively mitigate the risk of CIN. Below is the link to the electronic supplementary material. Supplementary Material 1
Phoniatry: otorhinolaryngology expands its limits
80c2027f-834c-4a77-a861-09dc05b9f300
9442694
Otolaryngology[mh]
The author declares no conflicts of interest.
Ideal suturing technique for robot-assisted microsurgical anastomoses
75611abb-98b8-43b1-ae08-a9fcf145447e
11217096
Microsurgery[mh]
Robot-assisted microsurgery in plastic surgery has become increasingly popular due to its potential to improve the accuracy, safety and surgical ergonomics of procedures. Novel robotic systems are equipped with specialized tools and instruments that enable the surgeon to perform difficult tasks with greater precision and accuracy compared to traditional techniques. The key features of such systems are motion scaling and elimination of tremors, allowing for ultimate control over the instruments when handling (sub)-millimeter structures. The only currently available system specifically designed for open microsurgery is the Symani Surgical System (Medical Microinstruments Inc., Wilmington, DE, USA). It offers wristed microsurgical and supermicrosurgical instruments, adding distal motion axes for an improved range of motion compared to conventional microsurgical instruments. Several preclinical studies revealed improved precision and ergonomics upon application of the Symani for the performance of microvascular anastomoses in vitro [ – ] and in vivo . Furthermore, the feasibility and safety of robot-assisted microsurgery were demonstrated in multiple initial clinical trials. Accordingly, successful application of the Symani has been described in the fields of lymphatic surgery [ – ], extremity reconstruction [ – ], autologous breast reconstruction and peripheral nerve surgery . However, in spite of steep learning curves upon introduction of novel robotic systems , most studies consistently revealed increased surgical times of robot-assisted procedures and anastomoses compared to conventional approaches [ , , , ]. In order to fully leverage the benefits of robotic technology and to guarantee the best possible results of microvascular anastomoses, we sought to determine a robotic suturing technique combining time efficiency, precision and accuracy at a high level. Here, we describe two suturing techniques for robot-assisted microsurgical anastomoses and compare their speed, quality and error susceptibility in a preclinical setting, using the Symani Surgical System in combination with the RoboticScope (BHS Technologies, Innsbruck, Austria) for the performance of microvascular anastomoses on artificial silicone vessels. Setup Microsurgical anastomoses were performed at our clinic’s microsurgery laboratory. The Symani was utilized for robot-assisted microsurgical anastomoses. This system provides wristed microsurgical and supermicrosurgical instruments, with motion scaling from 7 to 20 ×, tremor filtration, and increased range of motion through additional distal motion axes. The surgeon sits in a highly ergonomic chair and operates the system using wired controllers that resemble forceps, which can be freely moved and rotated while in an ergonomic position. All movements are transmitted with high precision and in real time to two robotic slave arms. The operating unit can be covered with sterile drapes and positioned flexibly above the desired operating field. The robotic digital microscope RoboticScope is a high-definition camera system that is connected to an augmented reality headset. It projects a high-quality, stereotactic image in front of the surgeon’s eyes, creating a three-dimensional live image. The surgeon’s head movements are converted onto the system through motion tracking using a multi-axis robotic arm. The surgeon can navigate through an augmented menu that appears on top of the operating field image using head gestures.
This allows for the adjustment of zoom and focus, changes in orbital view, navigation through the operating field, and image/video recording completely freehand, without interrupting the surgery. The RoboticScope is equipped with a high-resolution camera, which was used for all video and photo recordings. Participants were seated away from the operation table to perform the anastomoses, thereby being able to maintain an optimal ergonomic position. Study population Six experienced microsurgeons from our institution with more than 5 years of experience in free flap reconstruction participated in this study. Participants previously underwent comprehensive training on operating the robotic systems and performed multiple anastomoses with the investigated approach until reaching a steady state in the preclinical learning curve. Each study attendee completed six robot-assisted microvascular end-to-end anastomoses on 1.0-mm-diameter artificial silicone vessels (WetLab, Japan) with six stitches of 10-0 sutures (Ethilon, Ethicon, USA), three on the frontside and three on the backside after flipping the vessel. Three anastomoses were performed with each suturing technique (see below). Silicone vessels were stabilized using a microvascular approximator on a foam training platform. Suturing techniques The steady-thread suturing technique (steady technique) and the switch-thread suturing technique (switch technique) are illustrated in Fig. and Supplementary Videos 1 and 2 for better understanding. The robotic micro dilator is operated with the non-dominant hand (in this study always the left hand), while the robotic micro needle holder with inbuilt scissors close to the joint of the instrument is operated with the dominant hand (in this study always the right hand). The steady technique describes the suturing technique that is mainly used for conventional microanastomoses at our institution and was adapted to the robotic approach. The long end of the thread is held with the needle holder and double-looped around the dilator. Then, the short end of the thread is grasped with the dilator and pulled through the loop. The long end of the thread is kept with the needle holder for the second and third knot. It is now single-looped around the dilator and the short end is pulled through again. Finally, it is single-looped around the dilator in the opposite direction, the short end is pulled through and the thread is cut with the inbuilt scissors (Fig. a, Sup. Vid. 1). The switch technique describes the suturing technique that was proposed by the manufacturer for achieving square knots through robot-assisted suturing. The first knot is performed in the same manner as with the steady technique. However, the long end of the thread is then passed from the needle holder to the dilator and single-looped around the needle holder, which pulls the short end of the thread through the loop in the opposite direction. For the third knot, the long end is passed to the needle holder again and single-looped around the dilator, which grasps and pulls the short end. Finally, the thread is cut with the inbuilt scissors (Fig. b, Sup. Vid. 2). In short, the main difference between the two techniques is whether the thread is kept with the same instrument for each knot to save steps and time (steady technique), or if it is passed between the instruments after each knot to prevent crossing and collision of the robotic instruments (switch technique).
Data collection and processing During each microvascular anastomosis, the time to complete the anastomosis was recorded. Using video recordings, the total time per anastomosis was divided into the major steps of each anastomosis, which were analyzed separately: needle positioning, piercing, passage through vessel wall, knot tying, cutting of suture and additional time. After finishing each anastomosis, participants filled out a questionnaire evaluating their subjective satisfaction with the anastomosis and the knot technique, as well as their satisfaction with the Symani performance, RoboticScope performance, and combined performance of both systems on a numeric rating scale from 0 to 10 (0 = minimum, 10 = maximum). To assess the quality of microvascular anastomoses, the Anastomosis Lapse Index (ALI) was applied. This involved cutting the anastomoses longitudinally and photographing the inside. Deidentified and blinded photographs were analyzed by a single reviewer to identify the specific types of errors and the total number of errors previously described by Ghanem et al. (anastomosis line disruption, backwall or sidewall catch, oblique stitch causing distortion, bite leading to tissue infoldment, partial thickness stitch, unequal distancing of sutures, visible tear in vessel wall, strangulation of tissue edges, thread in lumen, large edge overlap) . Furthermore, microsurgical skills using the different suturing techniques were analyzed by video recording the procedures and evaluating the deidentified and blinded videos. An experienced microsurgeon used a modified version of the Structured Assessment of Microsurgery Skills (SAMS) by van Mulken et al. to assess all anastomoses. The modified SAMS evaluates dexterity (steadiness, instrument handling, tissue handling), visuo-spatial ability (suture placement, knot technique) and operative flow (steps, motion, speed), as well as the overall performance and indicative skill level on a numeric rating scale from 1 to 5 (5 representing excellent skills). Lastly, technical error messages generated by the Symani that interrupted the workflow during anastomoses were recorded and quantified. Possible error messages included: master moved too quickly, device exceeded motion range, joint of device at pivot stop, master outside console workspace and device at workspace boundary. In addition, the number of threads torn unintentionally with both suturing techniques was counted and the total number of threads used per anastomosis was documented (a new thread was used only if the thread was too short after rupture). Statistical analysis Statistical analysis was performed using GraphPad Prism (GraphPad Software Inc., USA). In all plots and bar charts, dots represent individual values with arithmetic mean and standard deviation. Statistical significance was assessed for surgical time, questionnaire items, ALI scores, SAMS scores and thread count using a two-way ANOVA when comparing multiple groups (corrected for multiple comparisons with Tukey and Sidak tests, 95% confidence interval) and Student’s t -test when comparing the means of two groups (unpaired, two-tailed, 95% confidence interval). P values < 0.05 were considered statistically significant.
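For orientation, the two group-comparison procedures named above map onto standard statistical tooling as in the Python sketch below (scipy and statsmodels stand in for GraphPad Prism; the file and column names, and the choice of surgeon as the second ANOVA factor, are illustrative assumptions):

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("anastomosis_times.csv")  # hypothetical long-format data

# Unpaired, two-tailed Student's t-test on total anastomosis time
steady = df.loc[df["technique"] == "steady", "total_min"]
switch = df.loc[df["technique"] == "switch", "total_min"]
print(stats.ttest_ind(steady, switch))

# Two-way ANOVA (e.g., technique x surgeon) for multi-group comparisons
model = smf.ols("total_min ~ C(technique) * C(surgeon)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```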
Assessment of surgical time and subjective satisfaction The surgical time required to complete each anastomosis was divided into major steps, which were analyzed separately. On average per anastomosis, needle positioning took 1.16 ± 0.39 min with the steady technique and 0.95 ± 0.37 min with the switch technique, piercing took 1.76 ± 0.47 min with the steady technique and 1.92 ± 0.48 min with the switch technique, passage through the vessel wall took 2.40 ± 1.36 min with the steady technique and 2.35 ± 0.77 min with the switch technique, knot tying took 4.11 ± 0.85 min with the steady technique and 6.40 ± 1.83 min with the switch technique, cutting of the suture took 0.85 ± 0.40 min with the steady technique and 0.98 ± 0.49 min with the switch technique, and the additional time was 3.17 ± 1.96 min with the steady technique and 3.51 ± 2.00 min with the switch technique (Fig. a). Most steps did not significantly differ between the two approaches. However, knot tying, which distinguishes the two techniques, was significantly faster with the steady technique ( p = 0.000043). The total time per anastomosis from the initial needle positioning to the last suture cutting was 13.40 ± 4.06 min with the steady technique and 16.10 ± 4.11 min with the switch technique (Fig. b). The steady technique was thus faster overall, although the difference was not statistically significant ( p = 0.0769). After completion of each anastomosis, participants evaluated different aspects on a questionnaire (steady vs. switch technique). Subjective satisfaction with the anastomoses in general was rated better with the steady technique (7.72 ± 1.85 points vs. 6.67 ± 1.97 points), and the knot technique in particular was rated significantly better with the steady technique (8.67 ± 1.33 points vs. 6.61 ± 1.60 points, p = 0.000269). Symani performance, RoboticScope performance and the combined performance of both systems also showed slightly better evaluations with the steady technique; however, they were consistently at a high level with both techniques, never dropping below 7.72 out of 10 points (Fig. ). Thus, the steady technique was overall preferred by participants, mostly attributed to the knot technique. Anastomosis quality and microsurgical skills Anastomosis quality was assessed using the ALI score (Fig. a), and total errors per anastomosis were determined for each anastomosis with both techniques (Fig. b). On average, 2.61 ± 1.21 errors per anastomosis occurred when using the steady technique compared to 3.0 ± 1.29 errors per anastomosis when using the switch technique (Fig. c).
Furthermore, microsurgical skills were assessed by an experienced microsurgeon according to a modified version of the SAMS score. Most SAMS categories, such as “steadiness”, “instrument handling”, “tissue handling”, “suture placement”, “steps” and “motion”, were consistently evaluated at proficient levels with both techniques, ranging between 3.5 and 5.0 points. However, “knot technique” was evaluated significantly better with the steady technique ( p = 0.039), and “speed” showed a non-significant trend toward better evaluations with this technique ( p = 0.056), which resulted in a significantly improved “overall performance” ( p = 0.027) and “indicative skill” ( p = 0.019) when using the steady technique for microsurgical anastomoses (Fig. ). Overall, anastomosis quality and microsurgical skills were consistently evaluated at high levels with both techniques; however, the steady technique performed slightly better on both scores. Error messages and thread count Technical error messages generated by the Symani interrupting the workflow were quantified for both techniques. Regarding specific error messages, “device exceeded motion range”, “joint of device at pivot stop” and “master outside console workspace” occurred more often with the switch technique, while “master moved too quickly” occurred twice as often with the steady technique and “device at workspace boundary” did not occur with either technique (Fig. a). Overall, 12 error messages were generated during 18 anastomoses using the steady technique and 14 error messages using the switch technique (Fig. b). Moreover, the number of threads torn unintentionally per anastomosis and the total number of threads used per anastomosis were recorded. Notably, the switch technique was associated with twice as many thread ruptures per anastomosis compared to the steady technique (steady: 0.5 ± 0.69 vs. switch: 1.0 ± 0.88) (Fig. c). Nevertheless, the total number of threads used per anastomosis was comparable between both techniques (steady: 1.3 ± 0.59 vs. switch: 1.4 ± 0.78), since most threads were still long enough to be reused after rupture (Fig. d). Altogether, the performance of microanastomoses was more efficient with the steady technique than with the switch technique with regard to workflow interruptions by technical error messages and thread ruptures.
In recent years, major advancements in the development of robotic surgical assistance devices and robotic surgical microscopes have led to the introduction of novel robotic systems specifically designed for open microsurgery into clinical practice. As a result, novel areas of robot-assisted procedures are gradually being investigated, especially in the fields of plastic surgery and microsurgery. The Symani system has already been successfully applied for lymphatic surgery [ – ], reconstructive free flap surgery [ – ] and peripheral nerve surgery , consistently revealing improved surgical ergonomics and microsurgical precision compared to conventional manual approaches. Importantly, in its current configuration, the Symani is used during these procedures only for the performance of the microsurgical anastomoses, while all preparation is performed conventionally. Nevertheless, at the current state of knowledge, surgical time appears to be a specific drawback of robotic procedures, as it was shown to be increased in most studies [ , , ]. To further improve time efficiency, we sought to determine an ideal suturing technique for robot-assisted microsurgical anastomoses without impairing anastomosis quality. Upon preclinical training with the Symani system, a suturing technique that involves switching the thread ends between instruments after each knot was suggested by the manufacturer (switch-thread technique): it prevents collision of the robotic instruments, requires less rotation and bending of the instrument tips, and folds each square knot in the correct orientation more intuitively, making it easier to apply when training with the novel system. However, the suturing technique applied for microanastomoses during conventional manual microsurgery at our institution is performed in a different manner, keeping the long thread end with the needle holder for all three knots and reversing the direction in which the thread is looped around the dilator each time so that the knots tighten correctly (steady-thread technique). Results from this study revealed that knot tying with the steady technique is indeed significantly faster than with the switch technique, which also improved the overall anastomosis time. Consistently, the steady technique was evaluated significantly better by experienced microsurgeons on a numeric rating scale, with high levels of satisfaction with the robotic setup in general using both approaches. Importantly, the quality of microanastomoses assessed by the ALI score was comparably proficient with both techniques, showing no statistically significant differences. This demonstrates that the improvement in surgical time was not achieved at the expense of anastomosis quality. On the contrary, microsurgical skills assessed by the SAMS score were even improved using the steady technique, mostly attributed to significant improvements in the category “knot technique”. On the other hand, workflow interruptions by technical error messages generated by the Symani system occurred more often with the switch technique.
These error messages appear, for example, when the threshold of the workspace is reached with one of the masters, when a master is moved too quickly, or when the motion range of the robotic instruments is exceeded, either with respect to the rotation and bending limits or to the motion range of the robotic arms. Such an event requires a quick resynchronization of the instruments, causing a short delay in the procedure, and should therefore be avoided. Furthermore, workflow interruptions due to thread ruptures occurred twice as often with the switch technique as with the steady technique, potentially requiring the use of a new thread, which causes a further delay in surgical time and increases the cost of suture material. Interestingly, thread ruptures were mostly not caused by excessive force applied while tightening the knots or by accidentally cutting the thread with the scissors built into the needle holder, but repeatedly by sharp edges and corners of the instruments when the thread was looped around the needle holder, where it was prone to entanglement. Since only the needle holder has these edges, while the dilator is very smooth, this constitutes another advantage of the steady technique, in which the thread is kept with the needle holder and only looped around the dilator in alternating orientations. In summary, suturing with the steady-thread technique proved superior to the switch-thread technique in terms of time efficiency, microsurgical skills and error susceptibility, without affecting the quality of the microanastomoses. We therefore suggest the steady-thread technique for robot-assisted microsurgical procedures as an approach to improving the time required for robotic microanastomoses. Nevertheless, since the vascular or nerve anastomosis accounts for only a small portion of the overall surgical time, further research investigating additional concepts to optimize robot-assisted microsurgical procedures is needed in order to fully leverage the benefits of novel robotic systems. Below is the link to the electronic supplementary material. Supplementary file1 (MOV 59697 KB) Supplementary file2 (MOV 87858 KB)
Physiologists as medical scientists: An early warning from the German academic system
3df4ad74-e129-41b2-8e2b-66e045681245
11513198
Physiology[mh]
MEDICAL SCIENTISTS Medical scientists (in some academic communities also referred to as biomedical scientists) are postgraduate investigators qualified in a science, technology, engineering, or mathematics (STEM) subject or in medicine, who are engaged in health research, but who do not participate in patient care. In comparison, clinician scientists are medically qualified, conduct biomedical research, and do treat patients (Table ). Medical scientists provide diverse perspectives and expertise and are thus indispensable for translational research (Forum Gesundheitsforschung, ). However, the concept of “medical scientists” remains poorly recognised, and many physiologists may not even realise that they are medical scientists. Moreover, medical scientists face specific demands in terms of training, career prospects, recognition, and support, all in the context of an increasingly challenging shortage of skilled, medically aware basic scientists in Germany and beyond (European Commission, , ; Langin, ). The German Cardiac Society (DGK) and the German Centre for Cardiovascular Research (DZHK) held a Translational Workshop in Bonn, Germany, in October 2023 to explore this complex topic. This paper is based on the findings of the workshop and reflects on the current status of medical scientists, with a particular focus on the example of cardiovascular research in Germany. CLINICIAN SCIENTISTS: A SUCCESS STORY A key challenge for clinician scientists is that they lack time for research while fulfilling their clinical duties. Until recently, many clinicians could only perform laboratory‐based research during the early stages of their professional development, or while “off duty” later on. Recognising the cost of losing out on research by highly motivated physicians, as of 2022 nearly all 39 medical faculties at German public universities had implemented (or were in the process of implementing) clinician scientist support schemes that offer “buyouts” from clinical duties and a curriculum of cross‐disciplinary training (Stiftung Charité, ; Pittet, ; Medizinischer Fakultätentag, .; Medizinischer Fakultätentag, ). So, could the support programmes available to clinician scientists serve as a blueprint for medical scientist support? MEDICAL SCIENTISTS: THE STATE OF PLAY Medical scientists and clinician scientists go through comparable stages of training (Figure ), which we refer to as basic training (5–6 years of BSc, MSc, MD, or equivalent studies); specialisation (roughly 4–5 years for a PhD in an STEM subject (Jaksztat et al., ) or medical specialisation); and professional independence (3–4 years post PhD or specialisation). However, medical scientists face their own set of challenges. Precarious career prospects : Although specific numbers for physiologists are unavailable, data on physicians and non‐physicians suggest that similar proportions of clinician scientists and medical scientists have permanent positions at German medical faculties (approximately 44% of physicians and 45% of nonphysicians; Forum Gesundheitsforschung, ). However, the situation for the 55% on non‐permanent contracts differs, as many medical scientists (i.e., those without medical training) will not be able to “fall back” on practising medicine if their career in research does not work out. Indeed, data suggest that most doctoral graduates struggle to progress in their academic careers in Germany regardless of discipline. 
In 2023, just 1592 scientists completed a habilitation (a postdoctoral qualification that is seen as a formal requirement for professorship in the German academic system) and just 37% of scientists who habilitated were women (Destatis Statisches Bundesamt, ). Professorships at German medical faculties are also distributed unevenly depending on professional background. A 2017 survey suggested that 8.1% of medically trained staff held a professorial title at German university medical faculties, compared to just 4.1% of nonmedically trained staff. At nonuniversity medical research institutions, nearly a quarter (24.1%) of medically trained staff held a professorship, compared to just 3.4% of nonmedically trained staff (Forum Gesundheitsforschung, ). Assuming that most medical scientists have not had medical training, these data suggest that medical scientists are less likely to achieve the rank of professor than their clinician or clinician scientist colleagues. A contributing factor is the relative scarcity of professorships: across all subjects, there were 200,300 students undertaking doctoral studies and approximately 50,000 professors in Germany in 2021 (a 4:1 ratio (Statista (a), ; Destatis Statisches Bundesamt, )). When accounting for the substantially longer tenures of professorial appointments compared to the typical duration of doctoral studies (say, 25 years vs. ~4.5 years), the realistic chances of any particular doctoral student in Germany eventually attaining a professorship may be roughly 1 in 20. The situation in Germany appears less favourable than in Sweden or the Netherlands, where the ratios of students to professors are around 3 to 1 (Statista (b), .; Silander & Pietilä, ; de Goede et al., ). The fact that very few PhD students will become professors needs to be communicated more proactively to trainees. The Coalition for Next Generation Life Sciences is an example of a group of (predominantly North American) universities and research institutions that collect and publish data on the career outcomes of their graduates ( https://nglscoalition.org ). Encouragingly, funding bodies in Germany and Europe have started to collect similar information. Women are more likely to leave academia : Worryingly, female researchers continue to be more likely to lose out on a career in academia compared to their male counterparts across all disciplines. Although the proportion of female and male doctoral students in Germany is reasonably well balanced (48% vs. 52%, respectively (Destatis Statistiches Bundesamt, )), only 37% of habilitations were awarded to women in 2023 and just 28% of 51,200 full‐time professorships were held by women in 2022 (Destatis Statistiches Bundesamt, ; Destatis Statistiches Bundesamt, ; GWK, ). A report by the DGK suggests that this mirrors trends seen in cardiology, where just 3.4% of those working at the director level are women (Lerchenmüller et al., ), and national statistics that place Germany near the bottom of European Union in terms of the share of women working in research overall (29.4% in 2021) (Destatis Statistiches Bundesamt, ). This trend is not limited to Germany: in the USA, women are underrepresented among faculty in nearly all academic fields, and they are more likely to leave academia at every career stage, frequently feeling “pushed out” due to workplace climate (Spoon et al., ). 
Similarly, female life science researchers in the UK are less likely to progress in their careers or to remain in academia than their male counterparts (Dias Lopes & Wakeling, ). These figures indicate troubling structural problems that disproportionally affect women internationally. Limited structured support : Currently, medical scientist support programmes lag behind the multi‐tiered training and support structures for clinician scientists in Germany. The DFG (the leading German federal funding body for research) and the German Federal Ministry of Education and Research (or BMBF) support 23 early clinician scientist and 8 advanced clinician scientist programmes across Germany (Bundesministerium für Bildung und Forschung, ; Deutsche Forschungsgemeinschaft, )—but not one programme that is specifically focused on medical scientists. To date, the Else Kröner‐Fresenius Stiftung (EKFS) offers the only national funding scheme dedicated to supporting structured training for medical scientists (Else Kröner Fresenius Stiftung, ). As of 2023, 6 EKFS schools for Medical Scientists exist, each supported with €1.1 million of funding for a 4‐year term (Else Kröner Fresenius Stiftung, ). A number of professional bodies have expressed concerns over levels of funding and support for physiological research internationally (Gregorio, ; Rodrigues et al., ; Sengupta & Barman, ). Like clinician scientists, medical scientists of all disciplines are eligible for highly competitive personal fellowships, such as those offered by the DFG (Deutsche Forschungsgemeinschaft, ). The most advanced levels of these fellowships require a commitment by the host faculties to turn supported positions into tenured posts upon completion, which is becoming increasingly difficult (if not backed up by a “professional position,” such as for physicians in university hospitals). With their host institution's permission, medical scientists can also independently apply for “standard research grants” that include their salary, but these are generally not tailored to the requirements of medical scientists. Funding bodies may expect recipients to dedicate 100% of their time to grant‐funded research, precluding other important postdoctoral career development activities such as teaching, academic self‐governance, or structured network initiatives. Here, programmes offering buyouts from grant‐funded research could foster broader academic engagement and networking. This would illustrate commitment on the part of host institutions and (as only part of the principal investigator's salary would have to be sought externally) could increase flexibility in funding requests or even increase funding rates, particularly for medical scientists applying for their first independent research grant. Time limits on career progression : A major challenge for medical scientists in German academia is the strict time limit imposed on their professional development by the “Wissenschaftszeitvertragsgesetz” (the “Act on Fixed‐Term Employment Contracts in Academia” or WissZeitVG ), which, at the time of writing, will soon be updated to restrict nonpermanent employment on intramural funding to just 4 years after obtaining a PhD (Bundesministerium für Bildung und Forschung, ; Davidson et al., ; Bundesministerium für Bildung und Forschung, ). This is in spite of data showing that medical scientists require, on average, 8.6 years from completion of their PhD to attain permanent posts in academia—more than twice the limit allowed in the new proposals (Kordel et al., ). 
While the WissZeitVG includes family‐related provisions (allowing an extension to the time limit of 2 years per child for academics with children), the clock does not reset if a researcher changes university, unlike in other countries with similar constraints on duration of employment (Davidson et al., ). While the updated law is intended to accelerate scientists' career progression and enable a better work–life balance, it will likely worsen the situation for early career medical scientists by increasing workloads, pressure to publish, and competition for already scarce permanent positions, while also reducing the time available for gaining essential independent expertise in research and teaching. As the updated law is not linked to an increase in university funding, it may contribute to a “brain drain,” as researchers leave Germany, academia, or science altogether—and paradoxically cause researchers to delay wider life goals, and thereby negate the intended improvements to work–life balance (Consortium for the National Report on Junior Scholars, ; Zelarayán et al., ). Perhaps reflecting these pressures, the proportion of PhD students in Germany who wish to work in academia fell from 22% in 2017/2018 to 14% in 2021/2022, and up to 80% of postdoctoral researchers will leave academic research and teaching to work in other sectors (Deutsches Zentrum für Hochschul‐ und Wissenschaftsforschung (DZHW), ; Deutsches Zentrum für Hochschul‐ und Wissenschaftsforschung (DZHW), ). Academic employment law in Germany is clearly at odds with reality and will likely continue to undermine medical scientists' career prospects. Medical scientist careers—all or nothing ? These challenges contribute to a feeling that the occupation of “medical scientist” is not a “profession” in the classic sense but a “career,” as medical scientists must demonstrate continual and unrealistically rapid professional progression through ever‐higher ranks in the professional hierarchy if they are to finally obtain secure, permanent employment. As a full professorship is not a realistic prospect for most early career medical scientists, new ideas are needed to safeguard long‐term prospects for medical scientists, both within and beyond academia. However, this is complicated by a lack of professional representation of medical scientists' interests in the broader scientific community. BUILDING A MEDICAL SCIENTIST BRAND Recognition and representation of physiologists and medical scientists remains limited, particularly within Europe (Eisner et al., ; Rodrigues et al., ). Awareness could be improved by building a medical scientist “brand” that represents the common interests of medical scientists. In this regard, physiologists have made important progress by introducing the “specialist physiologist” ( Fachphysiologe ) title. This title was established by the German Physiological Society (DPG) and is awarded to qualified scientists working in the field of physiology after obtaining relevant qualifications and experience in research, publications, teaching, and didactics over a 5‐year period (usually overlapping with PhD training (Deutsche Physiologische Gesellschaft, )). The Fachphysiologe title is intended to demonstrate the medical scientist's ability to perform independent scientific research and training in the field of physiology, with the goal of improving future employability. 
A similar specialisation programme and certificate for physiologists and other medical scientists working in cardiovascular research, under the auspices of national bodies like the DGK and DZHK, could establish a brand (e.g., Medical Scientist—Cardiovascular Research ), raise awareness, and aid training and professional development of medical scientists. A first step could be the development of a common postgraduate training curriculum that can be shared across emerging medical scientist programmes to ensure they are consistent and coherent, and to help build a sense of identity, common purpose, mobility, and cohesion among medical scientists. This could mimic the DGK Academy's continuing medical education (CME)‐certified courses for physicians (though tailored to medical scientists). The various subject‐specific professional societies that offer dedicated training for medical scientists, such as the DPG, DZHK, and DGK, could also work together to support medical scientist course development, improve networking and lobbying, and perhaps develop accreditation criteria for new medical scientist training programmes. Regardless, beyond the need for a medical scientist “brand,” there is a broader need for structured training and support for medical scientists. MEDICAL SCIENTISTS: TRAINING AND SUPPORT Terminology and concepts : It would not be prudent to copy clinician scientist training and support structures for medical scientists on a one‐for‐one basis, as their key needs differ. This begins with exactly when researchers qualify for what is typically “postdoctoral” support. As most clinician scientists in the German system conduct medical doctorate research projects in parallel with their undergraduate education, they therefore generally qualify for postdoctoral support upon completion of their undergraduate education, at an average age of 25.9 years (Statista (c), ). For medical scientists, doctoral research commences after completion of BSc and MSc studies, and the average PhD takes 4.7 years (Consortium for the National Report on Junior Scholars, ). Medical scientists consequently qualify for postdoctoral support at an average age of 31.7 years, 6 years later than clinician scientists (Kordel et al., ). What is more, “postdoctoral” support would become available to medical scientists after, not during, the critical phase of their professional training and specialisation. If support programmes for medical scientists are to match the opportunities available to early and advanced clinician scientists in spirit and not merely in name, they should provide support for early (i.e., during or even before PhD studies) and advanced medical scientists (after completion of their research‐based doctorate; Figure ). Examples of training programmes : Current training programmes for medical scientists can be broadly divided into basic training , specialisation , and development toward professional independence (Figure ). Basic training programmes for nonclinicians in cardiovascular research are scarce; they include the University Medical Centre Göttingen's 2‐year MSc in Cardiovascular Science , and the University of Freiburg's 1‐year “pre‐PhD” in Medical Sciences—Cardiovascular Research . Both introduce STEM scientists to a specific facet of medical research—in this case, the cardiovascular system—a precondition for fruitful doctoral research. 
While specialisation training programmes in cardiovascular research are included as dedicated tracks in many graduate schools, or at the heart of dedicated programmes such as the PhD in Cardiovascular Science at the University of Göttingen, training and support programmes for postdoctoral medical scientists working toward professional independence are also limited at present. They include the Hannover Medical School's Medical Scientist Programme and the University of Freiburg's Hans A. Krebs Medical Scientist Programme. Further details of all these programmes can be found in Appendix . MEDICAL SCIENTISTS: BROADER CAREER PROSPECTS As medical scientists, physiologists possess valuable subject‐related knowledge and transferable skills, and medical scientists have a wide range of career options outside of academic research (Figure ). However, medical scientists are typically not well prepared for careers outside academia, due to limited exposure to other career options during training and the negative connotations that are sometimes associated with such a move. Remedying this requires the provision of information relating to career prospects and trajectories (as championed by the Coalition for Next Generation Life Sciences), as well as structural improvements, such as incorporating trainee rotations in industry, science communication, or the public sector into academic curricula. This is an objective of the European Union's Marie Skłodowska‐Curie training networks, which in 2022 funded 149 doctoral programmes, including 14 industrial doctoral programmes to train PhD candidates outside academia (Marie Skłodowska‐Curie Actions, ). However, given the relatively short duration of PhD studies in Germany and the limited time available for postdoctoral development, the incorporation of additional content to existing doctoral training should be carefully considered – or incorporated into pre‐PhD training (such as the Freiburg model mentioned above and described in Appendix ). CONCLUSION Medical research, education, and innovation depend on the interdisciplinary exchange between experts from various backgrounds, including medical scientists, who constitute a sizable fraction of staff in university and nonuniversity health research institutions. It is time that we adjusted our professional and societal approach to training and supporting medical scientists in order to offer the necessary means for protecting and nurturing this category of research staff in Germany and elsewhere. While progress is being made, much remains to be done to improve the standing, training, and support available for physiologists and other medical scientists. A summary of some actions that professional societies, funding bodies, and universities can carry out to improve support for medical scientists working in the field of cardiovascular research in Germany can be found in Appendix . Awareness and representation of medical scientists must be improved through education and the involvement of professional bodies like the DGK and DZHK. Financial support and specialist training for medical scientists should begin early, ideally during their PhD studies. Above all, there is a need to improve the long‐term career prospects for physiologists and medical scientists in general, with a focus on female scientists. In particular, the predictably negative effects that time restrictions such as those imposed by the German WissZeitVG have on medical scientists' careers must be remedied. 
Overall, more must be done to ensure that medical scientists have a reasonable chance for a stable, long‐term career, within and beyond academia. This paper was written based on discussions at the DGK/DZHK Translational Workshop “Medical Scientists in Cardiovascular Research: contents, structures, challenges, needs” held in Bonn, Germany, on October 4, 2023. The authors contributed to workshop discussions, made suggestions on the manuscript drafted by PK, NK, CM, RBS, KS‐B, and LCZ, and all authors approved the final version. The workshop was financially supported by the DGK and the University Heart Centre Freiburg · Bad Krozingen. KS‐B received research support from Novartis and BionTECH and speaker's honoraria from Novartis. CM is an advisor to Amgen, AstraZeneca, Boehringer Ingelheim, Bristol Myers Squibb, Novo Nordisk, and Servier and received speaker honoraria from AstraZeneca, Bayer, Bristol Myers Squibb, Boehringer Ingelheim, Berlin Chemie, Edwards, Novartis, Novo Nordisk, and Servier. RBS has received lecture fees and advisory board fees from BMS/Pfizer and Bayer outside this work. CN is an employee at Nuvisan ICB GmbH. SSt received a speaker honorarium from NovoNordisk. TZ received funding from Vifor Pharma, is listed as co‐inventor of an international patent on the use of a computing device to estimate the probability of myocardial infarction (International Publication Number WO2022043229A1), and is shareholder of the ART.EMIS GmbH Hamburg. NL participated in the translational workshop as the spokesperson of the nextGENERATION Medical Scientist programme, which is funded by the Else Kröner Fresenius Stiftung. PK is course director of the Freiburg Medical Scientist MSc programme. All other authors have no competing interests to declare. This review article did not involve any data collection from human participants or animals. Therefore, no ethics approval was required.
The effect of the SNAPPS (summarize, narrow, analyze, probe, plan, and select) method versus teacher-centered education on the clinical gynecology skills of midwifery students in Iran
3979a337-802f-4cc5-9132-7a6aad93f1d7
5286214
Gynaecology[mh]
Students in the SNAPPS method groups were educated according to the 6 steps introduced by the developer of this method . In step 1, the student briefly summarizes the patient’s history and findings. In step 2, the student narrows the differential diagnosis to 2 or 3 relevant possibilities. In step 3, the student analyzes the differential diagnosis by comparing and contrasting the possibilities. In step 4, the student probes the preceptor by asking questions about ambiguities and alternative approaches. In step 5, the student makes a plan for the patient’s medical problem. In step 6, the student selects a case-related issue for self-directed learning. On the first day of training, students in the SNAPPS group received information about this method, and their questions about it were answered. On the same day, the preceptor presented a few cases according to the 6 steps of the SNAPPS method. A questionnaire was used to gather the students’ socio-demographic information. A checklist was used to record the students’ final assessment. This checklist was prepared according to the Nursing and Midwifery school evaluation form that is used to evaluate midwifery students’ clinical skills in gynecology; this evaluation form was composed according to the learning theory syllabus of gynecology . The checklist included 56 questions ( ). A 5-point Likert scale was used for scoring, ranging from 0 to 4, where 4 indicated a very high level of skill and 0 an unacceptable level of skill. The assessment in both groups was performed by a preceptor who was not aware of the purpose of this study. The validity of both the socio-demographic questionnaire and the checklist was assessed through content validation. All data were analyzed using IBM SPSS ver. 21.0 (IBM Co., Armonk, NY, USA). An independent t-test was used to compare means between the 2 groups, and categorical data were compared using the chi-square test. The study was approved by the Ethics Committee of Ahvaz Jundishapur University of Medical Sciences (Ref no: AJUMS.REC.1394.83). The mean age of participants was 22.38±1.03 years in the teacher-centered education group and 22.57±0.96 years in the SNAPPS group (P>0.05). The mean grade point average of the students’ final semester was 16.99±0.88 (min=16, max=18.91) and 17.07±0.75 (min=15.5, max=18.56) in the teacher-centered education and SNAPPS groups, respectively (P>0.05). Most students in both groups lived in dormitories (77.8% in the teacher-centered education group and 73.7% in the SNAPPS group; P>0.05). As evident from , the ability to gain the patient’s trust, verbal and nonverbal communication skills and history taking (P=0.001), gynecological history taking (P=0.002), and the total history-taking score (P=0.001) were significantly better in the SNAPPS group than in the teacher-centered education group (P<0.01). Students in the SNAPPS group were also better able to prepare the patient for gynecological examination (P=0.006) and showed better observance of the principles of sexually transmitted disease prevention (P<0.001). The 2 groups did not differ significantly in the students’ ability to examine the external and internal genitalia, to insert a speculum with minimal discomfort, to comply with sterilization procedures in pelvic examination, or to consider preventive measures when assessing patients with vaginitis and cervicitis or when taking a pap smear ( ).
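Since the analysis plan names the exact tests used, a minimal sketch of how the two kinds of comparison would run in Python with SciPy follows. The group sizes and dormitory counts are hypothetical (chosen only to be consistent with the percentages reported above, as the excerpt does not give the n per group), and the study itself used SPSS, so this is an illustration rather than a reproduction of the published analysis.

```python
# Sketch of the two comparisons named in the analysis plan, using SciPy.
# Group sizes n1, n2 are hypothetical -- not reported in this excerpt --
# so the resulting p-values are illustrative only.
from scipy import stats

n1, n2 = 18, 19  # hypothetical group sizes (teacher-centered vs. SNAPPS)

# Independent t-test built from the reported summary statistics (age, mean +/- SD):
t, p = stats.ttest_ind_from_stats(mean1=22.38, std1=1.03, nobs1=n1,
                                  mean2=22.57, std2=0.96, nobs2=n2)
print(f"age comparison: t = {t:.2f}, p = {p:.2f}")  # expect p well above 0.05

# Chi-square test for a categorical variable such as residence
# (dormitory vs. other), with hypothetical counts matching the reported
# percentages (14/18 = 77.8%, 14/19 = 73.7%):
observed = [[14, 4],   # teacher-centered group
            [14, 5]]   # SNAPPS group
chi2, p, dof, _ = stats.chi2_contingency(observed)
print(f"residence comparison: chi2 = {chi2:.2f}, p = {p:.2f}")
```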
The 2 groups did not differ significantly in para-clinical measures, except for the ‘ability to read and interpret the results of para-clinical measures’ (P=0.01) ( ). Raw data are available in . Students in the SNAPPS group were better able to diagnose common diseases (P<0.05) ( ), and the SNAPPS method significantly improved the students’ ability to treat common diseases (P<0.05) ( ). The results of this study showed that SNAPPS increased students’ ability in history taking, in preparing the patient for gynecological examination, and in observing the principles of sexually transmitted disease prevention. The SNAPPS group was also significantly better at reading and interpreting the results of para-clinical measures and at diagnosing and treating common diseases. Other studies have likewise indicated that teacher-centered methods may be inadequate for educating and evaluating students, especially in the clinical setting. The SNAPPS method is a form of constructive learning in which students are treated as thinkers capable of developing new knowledge and teachers act as learning partners, whereas in the traditional method teachers are responsible for conveying information to students and providing the correct answers to students’ questions . Learning preferences, that is, a person’s characteristic patterns of strengths, weaknesses, and preferences in absorbing, processing, and retrieving information, differ among students, and these differences may have affected the results of this study. In conclusion, the results of this study showed that the SNAPPS education method can significantly improve midwifery students’ skills in gynecology, namely history taking and the differential diagnosis and treatment of common diseases.
Undergraduate occupational medicine education in European Medical Schools: better training to meet today’s challenges
5ac68b80-46ab-4ad2-be4b-5936dd7d89c1
11304825
Preventive Medicine[mh]
Despite all efforts made, the specialty of Occupational Medicine (OM) is still undervalued by doctors, and it is not clearly recognized as an influential medical discipline that produces transversal knowledge and skills necessary for other medical specialties . Moreover, among employees the misconception persists that OM represents employers’ interests, and employees are more likely to trust general practitioners (GPs) or other medical doctors than the OM physician . In fact, GPs are trusted by workers who need information or advice regarding work-related health problems, occupational hazards in their company, or their fitness to return to work . Besides, employees in small and medium-sized businesses in many European nations are unable to obtain occupational health services and must therefore rely on their GPs’ knowledge and skills in occupational medicine . Although communication between OM physicians and GPs is thus essential to help GPs detect occupational diseases or advise on return to work , poor interaction and a lack of communication have been reported . Among the root causes, GPs and medical graduates worldwide generally receive very limited training in OM . Since the initial report on OM training in 1974, which pointed out that training in OM was insufficient , several reports examining the state of OM training in different countries have been published around the globe (Table ). More than just academics have attempted to raise awareness of this issue. In 1988, the World Health Organisation (WHO) released a report that raised alarms about the disparities in medical education across students, even within the same region, and later the American College of Occupational and Environmental Medicine (ACOEM) and the International Occupational Medicine Society Collaborative (IOMSC) published reports pointing out the need to improve OM training for undergraduate medical students. However, this was not the specific topic of those studies . For the first time, in 2014, Gehanno et al. conducted a survey examining the undergraduate training of medical students in OM in different European countries . The research was supported by the European Association of Schools of Occupational Medicine (EASOM) and revealed the need to improve undergraduate training in occupational medicine at European universities . The findings also indicated that while most medical schools in Europe offered undergraduate training in OM, there was considerable variability between universities in different European countries and even within the same country; in short, there was a lack of harmonisation in terms of topics covered, number of hours devoted to training, and so forth. Ten years later, however, the field of OM faces new challenges on a global scale, such as the shortage of OM doctors . To overcome these difficulties, medical student training at the undergraduate level is a pivotal component . Education and training are highly valuable tools for changing students’ perceptions of OM . Irrespective of the vocation for occupational health that could be fostered in medical students, adequate training in OM is essential for many reasons. As Lalloo et al.
recently stated in 2024 , every doctor should be competent to recognize occupational diseases and illnesses, to assist their patients in returning to work after illness or injury, and to understand the principles of retaining workers with long-term health conditions in the workplace. In addition, early exposure of medical students to occupational safety and health can help them understand the importance of work as a health outcome . In light of these considerations, it is of interest to examine whether the OM training of medical students in European medical schools has overcome the shortcomings described a decade ago, especially now that countries such as the UK have taken the initiative and led the most recent call for action on this issue with the publication of a new OM Competence Framework for Great Britain medical students . In this context, with the most recent publication on the status of OM in European medical schools dating back a decade , the purpose of this study was to examine the current reality of medical education in OM in European medical schools. Study design A descriptive study was designed to determine the status of OM teaching in Europe. Between 1 March and 1 August 2022, an email survey on “OM Training in European Medical Schools” was sent to all medical schools or faculties ( N = 347) across 28 European countries (Belgium, Bosnia and Herzegovina, Croatia, Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, Malta, Latvia, Moldavia, Montenegro, Netherlands, Norway, Poland, Portugal, Macedonia, Romania, Serbia, Slovenia, Spain, Sweden, Switzerland, Turkey, and UK). Finally, 53 medical schools from 19 European countries returned the completed questionnaire. This represents a response rate of 15.3% (53 out of 347) and covers more than 75,000 European students, as reported by the universities. All these countries were represented in EASOM or had links to members of this association through collaborative activities. The questionnaire used in the present study was identical to the one used by Gehanno et al. (Supplementary material, Table ). The survey design was influenced by prior research on undergraduate OM teaching in France and the UK , refined based on the project members’ input, and then pilot-tested at their respective universities, with subsequent changes. The final version was a 2-page, closed questionnaire, with open-ended questions at the end. Questionnaires were sent via e-mail to the teachers in charge of undergraduate OM teaching at all medical schools, identified through the EASOM network. If a responsible teacher could not be located, the questionnaire was sent to the dean of the medical school. To increase response rates, if no response was forthcoming an email reminder was sent after one month, followed by another reminder after two months. Ethics approval and consent to participate The ethics committee of Aragon (Research Ethics Committee of the Autonomous Community of Aragon, CEICA), Spain, was consulted, but the present study did not require any assessment according to the CEICA. Although consent to participate was also considered unnecessary, since no personal data of any kind were collected, a data management agreement was signed with the University of Zaragoza, Spain, for data protection (I.D: 100621).
Responses were obtained from 19 of the 28 countries invited to participate (68%) (Belgium, Bosnia Herzegovina, Croatia, Denmark, Finland, France, Germany, Greece, Hungary, Italy, Latvia, Netherlands, Norway, Portugal, Macedonia, Romania, Serbia, Slovenia, and Spain). The percentage of medical schools responding in each country was uneven, ranging from 100% of medical schools in countries such as Bosnia Herzegovina, Greece, and Latvia, through 75% in Hungary, 70% in Denmark, 63% in Romania, 50% in Belgium and Slovenia, and 40% in Finland and Serbia, among others. The lowest response rates were obtained in three countries with a strong historical tradition of OM teaching and training: Italy, Germany, and France (Table ). Four Greek universities, one German university, and one Belgian university, six respondents in total (11% of the sample), indicated that OM was not taught in their medical schools. The remaining 47 universities (89% of the sample) reported that they provided formal OM training. 20% of respondents indicated that OM was taught in the first years of the degree, whereas 80% indicated that it was taught in the final years.
The mean number of OM training hours per academic course was 24.3 h, with variability even within the same country; in Spain, for instance, there is a discrepancy of 100 h between the minimum (25 h) and the maximum (125 h). In 28% of cases ( n = 14), OM teaching was limited to 10 h or less, and in up to 46% ( n = 23) of cases it was 20 h or less. In terms of teaching methods, most respondents reported using lectures (98%), followed by seminars (76%), with a proportion using more contemporary approaches such as problem-based learning (61%) and e-learning (57%). Other methods, such as workplace visits (43%), short work placements (30%), project work (30%) and ward-based tuition (13%), were used in less than half of the cases (Table ). In summary, all teaching methods showed an increase compared with the previous study. The most frequently taught topics in OM were occupational respiratory diseases (89%), principles of prevention (89%), occupational health law and ethics (79%) and musculoskeletal disorders (79%) (Table ). These were closely followed by topics such as occupational cancer (74%), stress at work (74%) and occupational hazards for physicians (74%) (Table ). The least frequently taught topics included medico-legal reporting (38%), disability assessment (40%), and the environment and effects of industrial activity (43%), which were at the bottom of the list; consequently, less than half of the students received training in these subjects. Moreover, only around half of the students received training in ergonomics (55%) and in how to collaborate with the OM physician (55%) (Table ). Of the 47 faculties that taught OM, 36 (77%) indicated that they assessed their students with an exam, while 11 (23%) did not require an exam to pass the subject. The preferred method was the multiple-choice test (70%), followed by oral exams (38%) and open questions (38%). When asked whether they felt their opinion was representative of other OM faculties in their country, 55% felt it was. The aim of this study was to provide an updated overview of occupational medicine (OM) education for medical students in European medical schools, and the results give an insight into its current state, with a comparison to the conditions that existed ten years ago. The present survey outcomes are consistent with those of the previous study by Gehanno et al. . Of note is the variability in undergraduate OM training across European countries in the number of hours devoted to OM training, the topics covered, and the compulsory or voluntary nature of the training, among other aspects. Furthermore, there is a general tendency to prioritize classical content (occupational diseases, history of OM) over topics that have recently grown in significance within the field of OM, encompassing collaboration between general practitioners and OM specialists, return to work, and environmental effects. Our results show that the adaptation of training to new contexts and needs is frequently suboptimal. Perhaps the most obvious example is that topics that have acquired prominence in the recent decade, such as occupational cancer and psychosocial risks , are taught in proportions very similar to those of 2014; in other words, one in every four students at European institutions receives no training in these areas.
Another illustrative example from this study is the increase in teaching of OM history from 2014 to the present (55% vs. 48%). In contrast, the environmental impact of industrial activity, the topic most closely related to the climate emergency, has decreased not only in the percentage of schools teaching it (43% vs. 47%) but also in the number of hours devoted to it (1.4 h per week vs. 1.7 h). Furthermore, a comparable trend was observed for another relevant topic, “how to collaborate with the OM physician”: the proportion of respondents who reported instruction on collaborating with the OM physician remained relatively unchanged, with a slight decrease from 57 to 55%. This finding underscores the need for a new shared competency framework for medical students studying OM within European countries, including the UK. Such a framework would standardize competencies and enhance collaboration between medical professionals across Europe . Nevertheless, our findings indicate an increasing use of modern instructional tools and methodologies, particularly learner-centered approaches such as problem-based learning and e-learning, which have been proposed as effective in stimulating students’ interest in OM . This focus on more technological methods has not prevented other approaches from increasing, albeit less than desired: practices that have been shown to be beneficial, such as visits to work environments and work placements , have experienced only a slight increase (43% vs. 38%). It should be borne in mind, however, that the present survey was conducted during the first quarter of 2022, in the context of the COVID-19 pandemic. Moreover, an increase was observed in the average time spent on OM teaching compared to 2014, although even in the most favorable scenario the average time spent on occupational health teaching was less than 30 h during the academic course. However, some medical faculties currently do not include OM in their curricula at all, despite its importance in acquiring core OM competencies . This implies that a proportion of medical students at European universities have limited or no opportunity to study OM during their undergraduate training, and it is reasonable to assume that the resulting lack of knowledge about essential aspects of OM, and of the necessary skills, will have a negative impact on their future professional performance as physicians. It should be noted that this inconsistent, fragmented scenario among countries occurs in a continent that offers the best conditions for academic harmonization, given its geopolitical situation and common academic regulations . The global situation described by other institutions, such as ACOEM and IOMSC in their joint reports of 2017 and 2022, is even more concerning. Hence, our findings show no real improvement over the situation described a decade ago . It must be remembered that basic university training in this area was identified by Green-McKenzie et al. as one of the most critical factors that would motivate a young doctor to pursue a career in OM. Furthermore, these authors have recently reported similar needs in the training of students in United States medical schools and their relationship to the vocational deficit and the consequent decline in occupational and environmental specialists that the United States currently faces.
It is also worth recalling how the COVID-19 pandemic revealed significant deficiencies in the occupational safety and health (OSH) training of healthcare workers. In the early weeks of the epidemic and in the aftermath of the pandemic, healthcare workers with inadequate OSH training were unnecessarily exposed to the COVID-19 virus, resulting in the deaths of a significant number of them . In addition, the pandemic demonstrated the vital role of occupational health and safety professionals in maintaining the functioning of production systems and protecting their workers . Adequate training in occupational and environmental health and safety is essential to prepare new physicians for any new pandemic or crisis that may arise in the future, including those related to climate change. Once again, the lack of OM training in medical schools in Europe must not be underestimated: it is a major problem, as are the decline in the number of occupational health physicians in Europe, the question of their replacement, and the readiness of our doctors to face a possible new health crisis in the future . The results obtained justify an urgent debate on the competencies and knowledge in OM that every doctor should possess upon completing the medical school curriculum. It is necessary to establish a core curriculum for undergraduate training in OM in Europe and to involve OM professional associations as well as international organizations directly active in the OM field. With a substantial sample distributed across Europe, this study’s international viewpoint and extensive information make it a valuable source of data. Despite its limitations, the agreement with Gehanno’s results suggests that the data obtained can be considered trustworthy. However, it is important to note that the findings of this study are limited, as not all European countries were included. In addition, a significant drop in the response rate (15.3% vs. 44.3%) was observed in this second survey. Several possible explanations can be put forward for this relatively low response rate. It is reasonable to assume that most of the non-respondents do not incorporate significant levels of OM teaching into their medical curricula; in other words, the universities that were unwilling to participate may be those that do not offer adequate OM teaching. Indeed, in this second survey, responses were received from a small number of European universities stating that OM training was not offered at all in the undergraduate curriculum of their medical school. Consequently, while data from the current study are compared with those from the 2014 study, it is acknowledged that the composition of participating faculties may have differed between the two time periods. Variations in faculty demographics and expertise could potentially impact the comparability of results, affecting the validity and reliability of the findings. Another potential explanation for this low response rate is the ageing of the OM workforce in recent years, a trend that may also extend to OM teachers in medical schools. As previously highlighted, the OM workforce is ageing, with up to 40% of practitioners over the age of 50 . This situation is of significant concern, as the lack of adequate generational replacement of occupational physicians teaching in medical schools could exacerbate the consequences of suboptimal undergraduate education in occupational medicine.
Moreover, while a 100% response rate was achieved in some surveyed countries, responses from nations such as France, Italy, and Germany were notably scarce, despite the long-standing tradition of undergraduate OM teaching within the medical curriculum in these countries. This implies that a representative sample of all faculties of medicine across European nations was not obtained. The very recent publication in 2024 of a new Competence Framework in Occupational Medicine for the training of new doctors in all UK medical schools , which puts an end to the fragmentation, lack of standardization and inconsistency that had been shown to exist across different UK universities, may be a good example to consider at this time: it was defined after years of study, with the involvement and consensus of all parties concerned, and established from a pragmatic perspective to respond to real needs. Although there may be local differences between medical schools in different European countries or within individual nations, a basic OM competency framework should be created, established, and required for all European countries. It is needed to ensure that every European graduate has “the necessary knowledge and skills to deliver positive OM outcomes for patients, as well as the tools to manage their own resilience and the demands of a career in medicine, whatever their chosen” . In European Union (EU) member countries, such a common basic OM competency framework must be considered not only a necessity but a mandatory requirement, based on European Community legislation on the free movement of professionals . It is imperative that undergraduate OM instruction in European medical schools be updated, harmonized, and standardized. To address this issue, European societies, regulatory agencies, academic institutions, and policy makers must work together promptly. Cooperation from the WHO, the International Commission on Occupational Health (ICOH), the European Union information agency for occupational safety and health (EU-OSHA), the International Labour Organization (ILO), the European Union of Medical Specialists of occupational medicine (UEMS-OM), and other international organizations is also needed. The data indicate that a substantial proportion of European medical schools may be providing suboptimal OM teaching and training to their students. In addition, there is evidence of a significant lack of updating, standardization, and harmonization of OM teaching both between and within European countries. These problems were identified a decade ago by an EASOM team but remain largely unaddressed. There is a need to develop a common framework of core competencies in OM in EU member countries. The establishment of such a framework is of utmost importance to ensure that all European physicians are adequately equipped with core competencies in OM to meet current needs and are prepared for future challenges such as those posed by the COVID pandemic. It should be seen as a mandatory requirement in accordance with European Community legislation on the free movement of professionals within EU countries. OM education at the undergraduate level must no longer be underestimated; it is of great importance and needs to be addressed and improved urgently and definitively. Below is the link to the electronic supplementary material. Supplementary Material 1
Optimizing Prehospital Acute Transfer of Patients With Presumed Stroke Given Economic Constraints
e07f27b5-c1fb-4994-8a9c-ffcc2c295711
11907367
Surgery[mh]
Introduction Reperfusion therapy in the form of intravenous thrombolysis (IVT) and mechanical thrombectomy (MT) has become the standard of care for eligible patients with acute ischemic stroke (AIS) due to anterior circulation large vessel occlusion (LVO). Concurrently with technological advances in medical devices and further refinement of endovascular techniques, the indication for endovascular thrombectomy (EVT) keeps expanding; it now encompasses select patients in the late time window beyond 6 h and up to 24 h from last seen well, patients with vertebrobasilar occlusions, and patients with large core infarcts [ , , , , ]. To reap the benefits of these advancements, patients need access to specialized acute stroke care, and measures that reduce the time from symptom onset to treatment start (OTT) still yield substantial improvements in functional outcomes in the majority of patients treated with acute reperfusion therapies . The lingering inaccessibility and under-utilization of MT across stroke systems of care prompt further implementation of thrombectomy services . To overcome the prevailing shortcomings, it is necessary to either shorten the geographical distance or increase the speed of travel between patients and thrombectomy services, preferably in the most cost-effective way possible. Currently, seven comprehensive stroke centers (CSC) and one thrombectomy-capable stroke center (TSC) serve a population of 10.5 million inhabitants in Sweden. Approximately 60% of patients treated with MT arrive at a thrombectomy center by interhospital transfer from the first admitting IVT-ready hospital . Owing to the current geographical distribution of ambulance helicopters, and in the absence of a nationally coordinated system for airborne transportation of patients with presumed stroke due to LVO, very few patients arrive at thrombectomy centers by ambulance helicopter . The cost-effectiveness of increasing the number of optimally located CSCs or TSCs has been evaluated previously , as has the alternative implementation strategy of increasing the number of optimally located ambulance helicopters while keeping thrombectomy centers fixed, within the same modeling framework of cost-effectiveness analysis . Indeed, these studies have demonstrated how to attain the most cost-effective number and locations of thrombectomy centers and ambulance helicopters, respectively, in separate analyses. Although methodologically and computationally challenging, modeling the optimal number and locations of thrombectomy centers and ambulance helicopters jointly would enable the search for even more cost-effective implementation strategies by delineating all possible combinations within the probable range of cost-effectiveness across the vast solution space; it may also offer insights into the interaction effect of thrombectomy centers and ambulance helicopters on patient outcomes and costs. This study aims to determine the most cost-effective combination of optimally located ambulance helicopters and thrombectomy centers, compared with the current eight thrombectomy centers in Sweden and no ambulance helicopters, in patients with presumed AIS due to LVO. Methods This study optimizes prehospital acute transfer of patients with presumed stroke by combining data from national quality registers in geographic information system network analyses and in an economic modeling framework for decision analysis.
2.1 Data The study material derives from a consolidated dataset of anonymized, individual patient‐level registry data on acute stroke patients spanning the 6‐year period 2012–2017, which includes emergency medical services (EMS) call‐out data from emergency call operator companies, inpatient healthcare episodes and eventual cause of death data from the Swedish National Board of Health and Welfare, and stroke care data from the Swedish stroke registry (RIKSSTROKE) [ , , , ]. It contains 220,267 EMS records on call‐outs to patients with suspected stroke, and 124,484 case records of patients discharged from hospital with a confirmed stroke diagnosis encoded with the tenth revision of the International Classification of Diseases (ICD‐10). A description of patient characteristics and selection criteria has been detailed previously . The study population for analysis consists of 18,793 cases of patients with suspected AIS and potential eligibility for MT treatment. It encompasses all patients with a hospital discharge diagnosis code for stroke due to cerebral infarction (I63) presenting with a National Institute of Health Stroke Scale (NIHSS) score greater than or equal to six (NIHSS ⩾ 6) at hospital admission ( n = 13,355), and all patients presenting with an NIHSS score less than six (NIHSS < 6) at hospital admission who received treatment with MT ( n = 164). The study population additionally encompasses 5247 patient cases of false positives, constituted by patients with intracerebral hemorrhage (I61) ( n = 2945) and stroke mimics ( n = 2302), to reflect the proportion of false positives among patients assessed with the prehospital stroke triage system termed the A2L2 test in Sweden . 2.2 Modeling 2.2.1 Geographic Network Analysis Geographic network analyses are conducted within the software environment of ESRI ArcGIS Desktop 10.6.1 and employ the national road network from the Swedish national road database together with a created network of geodesic distances. The built‐in solvers of ArcGIS employ a range of heuristics to find good solutions; this includes semi‐randomized initial solutions, a vertex substitution heuristic, and a metaheuristic. The set of candidate facilities for locating thrombectomy centers includes the current seven CSCs and one TSC in Sweden and four CSCs in neighboring countries, namely in Copenhagen (Denmark), Oulu (Finland), Oslo (Norway), and Trondheim (Norway). It also includes 55 IVT‐ready hospitals in Sweden, making the total number of candidate facilities 67. The number of candidate heliports for locating ambulance helicopters is 35. Across a catchment area spanning over 530,000 km² of land and water, with an estimated population of 10.5 million inhabitants, each patient case is represented by its pick‐up location in the network analyses . The strategic decision problems for locating thrombectomy centers and ambulance helicopters translate into p ‐median facility location–allocation problems. The network analyses for EMS patient transportation provide solutions for both the Drip‐and‐Ship (DS) and Mothership (MS) organizational paradigms. The model estimates EMS travel times using the quickest path in the road network, calculated with road length and maximum allowed speed for each road link. The network analyses for locating ambulance helicopters provide solutions for the MS paradigm and estimate HEMS travel times using the shortest path in the Euclidean network of geodesic distances. 
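To make the location‐allocation step concrete, the following is a minimal sketch of a p‐median solver using a vertex‐substitution (Teitz–Bart style) heuristic of the kind the ArcGIS solvers employ; it is an illustration only, not the ArcGIS implementation, and the travel‐time matrix, candidate list, and patient list are hypothetical placeholders to be supplied by the caller.

```python
import random

def p_median(travel_time, candidates, patients, p):
    """Vertex-substitution heuristic for the p-median problem: choose p
    facility sites minimising total travel time, with each patient pick-up
    point allocated to its nearest selected facility.

    travel_time[f][i] -- estimated minutes from candidate site f to patient i.
    """
    def total_cost(sites):
        return sum(min(travel_time[f][i] for f in sites) for i in patients)

    sites = random.sample(list(candidates), p)  # semi-randomised initial solution
    best = total_cost(sites)
    improved = True
    while improved:                             # swap until no single swap helps
        improved = False
        for out in list(sites):
            for cand in candidates:
                if cand in sites:
                    continue
                trial = [cand if s == out else s for s in sites]
                cost = total_cost(trial)
                if cost < best:
                    sites, best, improved = trial, cost, True
    return sites, best
```

In the study, one such problem is solved per value of n, over the road network for thrombectomy centers and over the geodesic network for ambulance helicopters.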
Additionally, the model applies maximum constraints on travel time and distance for patient transportation with EMS and helicopter emergency medical services (HEMS), respectively. Moreover, the model requires the allocation of at least 50 MT interventions per year for a candidate facility to qualify as a potential site for a thrombectomy center . The given set of candidate facilities for locating thrombectomy centers and ambulance helicopters makes the number of possible combinations large for some solutions (Appendix ). Thus, results from previous studies motivate limiting the solution space to solving the location‐allocation problem for n = 8, …, 12 thrombectomy centers using the road network, and for n = 5, …, 16 ambulance helicopters using the geodesic network, to find the most cost‐effective combination of optimally located thrombectomy centers and ambulance helicopters . Additionally, the analysis provides solutions for n = 8, …, 12 thrombectomy centers while holding the current 8 thrombectomy centers in Sweden fixed. The model sets the upper limit of the OTT window to 270 min for IVT and 360 min for MT. Furthermore, the model assumes fixed time lapses for some actions and processes in acute stroke care management (Table ). Hence, the maximum allowed travel time for EMS transportation of a patient from the pick‐up location to the nearest IVT‐ready hospital is 170 min. The corresponding upper limits to the nearest thrombectomy center under the DS and MS paradigms are 185 and 240 min, respectively. The maximum travel time at the disposal of HEMS from heliport to the patient pick‐up location, and then onwards to the nearest thrombectomy center, is 227.6 min. This translates into a travel distance of 1024 km at the average cruising speed of 270 km/h. 2.2.2 Costs and Health Effects The patient‐level cost consists of the staffing cost of running thrombectomy center services and the medical equipment costs associated with the respective treatment modality, in addition to the individually estimated distance‐based cost for patient transportation with EMS, by the DS and MS paradigm respectively, and with HEMS. A full breakdown of cost items related to the operability of thrombectomy centers and the medical equipment costs associated with the respective treatment modality has been delineated previously . The estimated patient‐level costs for the 1st and 2nd year post‐stroke according to mRS category are obtained from the literature and take on a societal perspective . Costs are converted into 2021 euros, using the average exchange rate for the year 2021 between the Swedish krona and the euro: €1 = SEK 10.515. Patient age and admission NIHSS score remain fixed in each patient case, while the calculated OTTs for IVT and MT vary across solutions. By applying predictive generalized linear models (GLMs), one for each treatment modality, the model estimates the modified Rankin Scale score at 90 days (mRS‐90d) in each patient case and for all available combinations of mode of transportation, organizational paradigm, and treatment modality in a solution. Moreover, the model selects the mode of transportation, organizational paradigm, and treatment modality that minimize the expected mRS‐90d score in each individual patient case. 
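The per‐patient selection rule just described can be sketched as follows; the constraint limits follow the figures above, while predict_mrs is a hypothetical stand‐in for the fitted treatment‐specific GLMs, and the option dictionaries and the choice of which OTT feeds the prediction are illustrative assumptions rather than the study's exact logic.

```python
MAX_OTT_IVT, MAX_OTT_MT = 270, 360  # upper OTT limits in minutes, as above

def best_pathway(patient, options, predict_mrs):
    """Pick the transport mode / paradigm / treatment minimising expected
    mRS-90d. Each option is a dict with keys 'mode' ('EMS' or 'HEMS'),
    'paradigm' ('DS' or 'MS'), 'treatment' ('IVT', 'MT' or 'IVT+MT'),
    and OTTs in minutes ('ott_ivt', 'ott_mt'; None if not applicable).
    predict_mrs(treatment, patient, ott) returns an expected mRS-90d score.
    """
    feasible = []
    for opt in options:
        if opt["ott_ivt"] is not None and opt["ott_ivt"] > MAX_OTT_IVT:
            continue  # IVT component would fall outside its time window
        if opt["ott_mt"] is not None and opt["ott_mt"] > MAX_OTT_MT:
            continue  # MT component would fall outside its time window
        # For illustration, score MT-containing options on the MT time.
        ott = opt["ott_ivt"] if opt["treatment"] == "IVT" else opt["ott_mt"]
        feasible.append((predict_mrs(opt["treatment"], patient, ott), opt))
    return min(feasible, key=lambda x: x[0]) if feasible else None
```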
Thus, the preferred mode of transportation, organizational paradigm, and the availability and clinical effectiveness of different treatment modalities in individual patients hinge upon the proximity to ambulance helicopters and thrombectomy centers, which varies across solutions. The selected mRS‐90d scores are then converted into utility weights obtained from the literature . Three‐year survival rates are calculated using study population data, according to mRS categories 0–2, 3, 4, and 5. The age‐adjusted, annual survival rate trend in patients with ischemic stroke aligns with that of the Swedish reference population at 3 years post‐stroke. Therefore, the age‐adjusted survival rate trend in the Swedish reference population serves as the basis for calculating survival rates from year four and onward (Table ) . Outcome distributions of the remaining quality‐adjusted life years (QALYs) for patients in each mRS category are obtained with a time‐inhomogeneous, discrete‐time Markov chain (DTMC) . 2.2.3 Measures of Cost‐Effectiveness Within the Decision‐Analytical Framework The selected cost‐effectiveness measures derive from patient‐level costs and QALYs and consist of the net health benefit (NHB), the net monetary benefit (NMB), and the incremental NMB (INMB). These metrics are innately suitable for cost‐effectiveness analysis (CEA) with more than two comparators; fixed‐quantity measures of cost‐effectiveness make cross‐comparisons and ranking of comparators straightforward. Nonetheless, the main focus of the CEA is to compare solutions with various combinations of optimally located thrombectomy centers and ambulance helicopters against the status quo of thrombectomy centers in Sweden. The solution that attains the highest expected INMB is selected as the most cost‐effective combination of optimally located thrombectomy centers and ambulance helicopters in the prehospital acute stroke care system for triage‐positive patients with suspected AIS due to LVO. 2.2.4 Modeling Assumptions and Scenarios The modeling scenarios assume 1229 eligible candidates for treatment with MT per year among an estimated 1708 triage‐positive patient cases, reflecting the national MT rate of 7% of all confirmed cases of patients with AIS in Sweden during the year 2021 . The maximum willingness‐to‐pay (WTP) per QALY gained, set at €80,000, represents the lowest cost per QALY gained among declined reimbursements of treatments in severe health conditions by the Swedish Dental and Pharmaceutical Benefits Agency . The base‐case solution assumes the current number and locations of thrombectomy centers in Sweden as of the year 2023, comprising seven CSCs and one TSC. It corresponds to a thrombectomy center density of 0.77 per one million inhabitants. It has recently been suggested that the most cost‐effective number of optimally located TSCs to complement the CSCs in Sweden is four, which would raise that density to 1.05 per one million inhabitants . The main scenario assumes no predetermined locations of thrombectomy centers and is tasked with determining the most cost‐effective combination of freely located thrombectomy centers and ambulance helicopters. In the secondary scenario, the analysis sets out to determine the most cost‐effective combination of optimally located ambulance helicopters and the 9th, …, 12th optimally located thrombectomy centers, respectively, to complement the current 8 thrombectomy centers. 
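The QALY computation with the time‐inhomogeneous DTMC described in Section 2.2.2 can be illustrated as below; the two‐state (alive/dead) structure, the discount rate, and the example survival and utility figures are assumptions for illustration and do not reproduce the study's parameter values.

```python
def expected_qalys(utility, annual_survival, horizon_years, discount=0.03):
    """Time-inhomogeneous, discrete-time Markov chain over states alive/dead:
    the transition (survival) probability may differ each yearly cycle, e.g.
    registry-based rates for years 1-3 and age-adjusted population-based
    rates thereafter. Returns discounted expected QALYs.

    utility         -- utility weight for the patient's mRS-90d category
    annual_survival -- function year -> P(surviving that year | alive at start)
    """
    alive, qalys = 1.0, 0.0
    for year in range(1, horizon_years + 1):
        alive *= annual_survival(year)               # one Markov transition
        qalys += alive * utility / (1 + discount) ** year
    return qalys

# Illustrative use with made-up numbers: registry-based rates for the first
# three years, then a flat population-based rate.
surv = lambda y: [0.90, 0.95, 0.96][y - 1] if y <= 3 else 0.97
print(expected_qalys(utility=0.74, annual_survival=surv, horizon_years=30))
```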
2.2.5 Deterministic Sensitivity Analysis The deterministic sensitivity analysis (DSA) examines the sensitivity of the results by varying the maximum WTP per QALY gained in the range between €0 and €200,000. 
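The measures in Section 2.2.3 are not written out in the text; for reference, their conventional definitions, with λ denoting the maximum WTP per QALY gained (the quantity varied in the DSA), are:

$$\mathrm{NMB}=\lambda\cdot\mathrm{QALYs}-\mathrm{Cost},\qquad \mathrm{NHB}=\mathrm{QALYs}-\frac{\mathrm{Cost}}{\lambda},\qquad \mathrm{INMB}=\mathrm{NMB}_{\mathrm{solution}}-\mathrm{NMB}_{\mathrm{base}}$$

so the solution maximising the expected INMB at λ = €80,000 is selected as most cost‐effective, and the ranking of solutions may change as λ varies.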
Results 3.1 Accessibility to MT, Organizational Paradigms, and Utilization of HEMS With the current eight thrombectomy centers and no operational ambulance helicopters, it is estimated that 98.5% of all patients with presumed acute stroke have access to treatment with MT within 360 min from symptom onset. In terms of modeled patient outcomes, the DS pathway is preferred over the MS pathway in 29.9% of all patient cases. In comparison, the modeled solution with eight optimally located thrombectomy centers and no ambulance helicopters provides access to MT for 99.8% of all patients with presumed acute stroke, and prefers the DS pathway in 24.5% of all patient cases. With the introduction of five optimally located ambulance helicopters, the preference for the DS pathway decreases to 20.8%, and the ambulance helicopter becomes the preferred mode of transportation in 9.5% of all patient cases. With 12 optimally located thrombectomy centers and 16 optimally located ambulance helicopters, the preference for the DS pathway falls to 15.9%, and ambulance helicopters handle 11.5% of all patient transportations. 3.2 OTTs, Costs, and QALYs The lowest achievable mean OTT to IVT with the fewest located thrombectomy centers and ambulance helicopters was 127 min, in the solution with 10 thrombectomy centers and 10 ambulance helicopters. The corresponding solution achieving the lowest mean OTT to MT comprised 12 thrombectomy centers and 12 ambulance helicopters. 
The maximum QALY production per year (3100 QALYs) was attained by locating 12 thrombectomy centers and 16 ambulance helicopters (Table ). 3.3 Cost‐Effective Combinations of Thrombectomy Centers and Ambulance Helicopters Among all studied combinations of optimally located thrombectomy centers and ambulance helicopters, the solution with 11 thrombectomy centers and 14 ambulance helicopters was the most cost‐effective in comparison with the current eight thrombectomy centers and no ambulance helicopter operability (Figure ). It corresponds to a thrombectomy center density of 1.05 per one million inhabitants and an ambulance helicopter density of 1.34 per one million inhabitants. This translates into a ratio of circa 4:5 between thrombectomy centers and ambulance helicopters. The most cost‐effective solution has an estimated annual INMB close to €13.6 million, which translates into an average INMB per patient of €11,050 (Figure ). In the scenario analysis with the first eight thrombectomy centers locked at the current locations of thrombectomy centers in Sweden, the highest annual NMB was reached with a combination of one additional thrombectomy center and 13 ambulance helicopters, making it the most cost‐effective solution (Figure ). This is equivalent to densities of 0.86 and 1.24 per one million inhabitants for thrombectomy centers and ambulance helicopters, respectively. The solution would generate an annual INMB of €3.8 million, with an average INMB per patient of €3125 (Figure ). The model did not select the CSCs of neighboring countries in any solution. 3.4 Varying the Maximum WTP per QALY Gained When solutions were compared across the maximum WTP per QALY gained range between €0 and €200,000, cost‐effectiveness first emerged for the solution with 11 C‐/TSCs and 13 ambulance helicopters when the maximum WTP per QALY gained reached €57,077. This combination prevailed as the most cost‐effective solution until the maximum WTP per QALY gained reached €77,707, when it was overtaken by the combination comprising 11 C‐/TSCs and 14 ambulance helicopters. That combination remained the most cost‐effective solution until the maximum WTP per QALY gained hit €195,144, when the combination of 12 C‐/TSCs and 15 ambulance helicopters became the most cost‐effective solution for the remaining range up to €200,000. 
Discussion This interdisciplinary study employs applied operational research methodologies and applied health economics to evaluate a wide range of combinations of optimally located thrombectomy centers and ambulance helicopters within the decision‐analytical framework of cost‐effectiveness modeling, using individual, patient‐level registry data and current evidence from the literature. The most cost‐effective solution set the densities of thrombectomy centers and ambulance helicopters to 1.05 and 1.34 per one million inhabitants, respectively. The solution generates substantial health gains in comparison with the current density of 0.77 thrombectomy centers per one million inhabitants in Sweden and no ambulance helicopter operability. 
In the scenario analysis with the first eight thrombectomy centers locked at their current locations, the most cost‐effective solution set the densities of thrombectomy centers and ambulance helicopters at 0.86 and 1.24 per one million inhabitants, respectively. It may be noted that the base‐case scenario constituting the comparator in the cost‐effectiveness analysis does not mirror the current ambulance helicopter operability in the Swedish healthcare system, which comprises 10 operating ambulance helicopters. It has previously been shown that both the number of thrombectomy centers and the number of ambulance helicopters are important factors to consider in the further development of acute stroke care systems for patients with presumed stroke due to LVO and potential eligibility for treatment with MT . This study demonstrates that the choice of locations for thrombectomy centers has a major impact on results too. Results show that the optimal number and locations of thrombectomy centers shift with the optimal number and locations of ambulance helicopters. Thus, designing a cost‐effective acute stroke care system for patients with presumed AIS due to LVO requires the capability to evaluate the potential number and locations of thrombectomy centers and ambulance helicopters in conjunction. Partial optimization of thrombectomy center locations has a decisively adverse impact on the cost‐effectiveness of solutions, as the scenario analysis with the first eight thrombectomy centers locked at their current locations clearly exemplifies. The sensitivity analysis shows that results are sensitive to the maximum WTP per QALY gained: three different solutions interchanged the position of most cost‐effective solution over the studied maximum WTP per QALY range. The comprehensive dataset, connecting data from emergency call operator services with data from national quality registries to create individual, patient‐level registry data, provides the detailed information on each patient case required for real‐world, patient‐level cost‐effectiveness analyses. The interdisciplinary approach of applied health economics and operational research facilitates the comprehensive economic evaluation of acute stroke care systems with regard to thrombectomy centers and ambulance helicopters. While the solvers provided by ArcGIS do not guarantee mathematically optimal solutions, it is unlikely that this has had any major impact on the results; all numerical analyses are valid for the presented solutions. The reliability of the results is limited to the Swedish healthcare setting. The underlying study population data stem from a 6‐year study period between 2012 and 2017, when the indication for MT was limited to the narrow time window of 360 min from symptom onset. Therefore, the results may not reflect the extended indication for thrombectomy or current reperfusion rates and patient outcomes from endovascular therapies. On the basis of the tissue‐clock selection paradigm, the benefit of thrombectomy may extend well beyond the narrow time window for select patients with target mismatch profiles. Indeed, the extent of ischemic injury is not perfectly correlated with the time elapsed since symptom onset, as the "late window paradox" phenomenon, particularly pronounced in patients with large‐core ischemic stroke, shows . However, it does not follow that shorter onset‐to‐treatment time has no impact on functional outcomes . 
Still, most patients benefit from a shortened onset‐to‐treatment time to acute reperfusion therapies. Furthermore, whether patients are fast or slow progressors, or eligible for treatment with IVT, MT, or IVT + MT, remains uncertain in the prehospital phase of acute stroke care. Therefore, the most cost‐effective combination of the optimal number and locations of thrombectomy centers remains valid and applicable to real‐world settings for as long as confirmation of diagnosis, occlusion site(s), collateral status, and other characteristics underlying treatment decisions depends on in‐hospital examination. However, contingent upon the availability of adequate individual‐level patient data, it seems reasonable to hypothesize that incorporating extended indications for thrombectomy and thrombolysis in future analyses would favor Drip‐and‐Ship over Mothership in more patient cases than in the current analysis. This study paves the way for addressing issues of inaccessibility to and under‐utilization of endovascular reperfusion therapies with a comprehensive take on both prehospital modes of transportation and thrombectomy center density and locations, guided by the decision‐analytical framework of cost‐effectiveness analysis. Efforts to improve patient outcomes following stroke are plentiful and span a vast range of disciplines. To keep up with the latest advancements in the field of acute stroke care, and in particular with the fast pace of development of new drugs and medical devices for improved rates of successful reperfusion in patients with LVO AIS, cost‐effectiveness analyses need updating on a regular basis to remain a reliable source of information to support healthcare decision‐making. Therefore, more frequent and regular reassessments of results from cost‐effectiveness analyses in the field of acute stroke care systems are warranted. Conclusion Compared with the current eight thrombectomy center locations in Sweden, and assuming no ambulance helicopter operability, the most cost‐effective combination of optimally located thrombectomy centers and ambulance helicopters comprises 11 optimally located thrombectomy centers and 14 optimally located ambulance helicopters. The corresponding densities are 1.05 thrombectomy centers and 1.34 ambulance helicopters per one million inhabitants, respectively. This constitutes a cost‐effective solution that would generate substantial health gains in patients with AIS due to LVO. Nicklas Ennab Vogel: conceptualization, investigation, methodology, validation, visualization, writing – review and editing, software, project administration, formal analysis, data curation, resources, writing – original draft. Lars‐Åke Levin: investigation, conceptualization, funding acquisition, supervision, writing – review and editing, resources. Tobias Andersson Granberg: writing – review and editing, validation, supervision, funding acquisition, resources. Per Wester: writing – review and editing, supervision. Ethical approval for this study was obtained from The Swedish Ethical Review Authority with approval numbers/IDs: Dnr 2017/487–31 and Dnr 2019–00721. The authors have nothing to report. The authors declare no conflicts of interest. Appendix S1.
Intra‐ and Postoperative Complications in 4565
0b58b120-437f-4dd5-ae07-f7443504ed0c
11794054
Surgical Procedures, Operative[mh]
Introduction Hysterectomies are globally one of the most frequently performed gynaecologic procedures. In the United States, one in three women will have their uterus removed before the age of 60, giving a yearly rate of 400 000 hysterectomies . vNOTES (vaginal natural orifice transluminal endoscopic surgery) hysterectomy is a combination of a vaginal hysterectomy and endoscopy via the vagina. The method combines the benefits of a vaginal approach to the abdomen, with no scars and faster recovery, with the benefits of an endoscopic visual overview . The HALON trial was a non‐inferiority single‐centre RCT comparing vNOTES hysterectomy and laparoscopic hysterectomy, showing no conversions in either arm. Surgical time, pain, and hospital stay were lower in the vNOTES group than in the laparoscopic group. A prospective study of the first 1000 vNOTES procedures registered in the International NOTES Society (iNOTESs) registry, of which 730 were hysterectomies, showed a conversion rate of 0.4% and an overall complication rate of 5.2% after vNOTES hysterectomy (intraoperative 1.4%, postoperative 3.8%) . That study included surgical data from a single surgeon. vNOTES surgery is on the rise within gynaecology but is still a relatively new technique. The aim of this study is to describe the intra‐ and postoperative hysterectomy complications and conversion rates registered in the iNOTESs registry, mirroring clinical reality with both learning curve data and data from experienced surgeons. Methods Data on 4565 hysterectomies performed from 2015 to January 2024, extracted from the iNOTESs registry, were included. The International NOTES Society (iNOTESs) was established in 2015 and holds the international iNOTESs intra‐ and postoperative case registry and complication database. The iNOTESs was founded by research groups, each specialising in surgeries performed via a specific natural orifice. The registry is an initiative in which the founders aim to collect data for all NOTES surgeries. Surgical data are registered prospectively, in de‐identified form, by the operating vNOTES surgeons. The most common procedure performed is vNOTES hysterectomy, followed by vNOTES adnexal surgery. Data regarding, but not limited to, type of procedure, patient demographics, surgical time, conversion rate, and intraoperative and postoperative complications within 6 weeks were extracted, with complications also classified by Clavien Dindo . In the iNOTESs database, each patient is registered with a random unique file number, and no information is registered regarding name, personal ID number, address, etc.; it is not possible to identify any individual in the registry. Only the surgeon has access to the unique file number linked to the patient, after two‐factor security authentication. In case of a complication, the surgeon identifies the patient's unique file number and registers postoperative data in the postoperative complication registry. Only the cases with complications are registered in the postoperative complication database; cases with a normal postoperative outcome are not registered. Consent was obtained from the patients for the use of their data in the registry. All vNOTES surgeons are certified and have participated in a mandatory structured, standardised vNOTES course to be allowed to register their vNOTES cases in the database. The target group for the vNOTES course is surgeons experienced in both vaginal and laparoscopic hysterectomy. 
Thus, the vast majority of the surgeons had passed their learning curves in vaginal and laparoscopic hysterectomy. The course is standardised by an international expert panel and is hosted with the support of Applied Medical in most Western countries; it includes standardised lectures, live surgery, hands‐on simulation model training, and often post‐course proctoring. The surgeons are invited to send in a video of their 10th vNOTES hysterectomy in order to be certified. Log‐in codes to the registry are given to the surgeons only after completion of the surgical vNOTES course. The surgeons reporting data to the registry are continuously reminded to register all of their cases, not just the ones with good or bad outcomes. This is stressed when the surgeon qualifies to receive log‐in codes. Core outcome set: To our knowledge, there is no current core outcome set for surgical outcome after hysterectomy; therefore, no core outcome set has been used. Patient involvement: No patients have been involved in designing or conducting the study. 2.1 Statistics Descriptive data are presented in frequencies ( n ) and percent. All outcomes (dichotomous outcomes with categorised variables) have been analysed with a chi‐squared test. As a sensitivity analysis, stratification for surgical experience (categorised in classes: 0–9, 10–49, 50–99, 100–499, 500–999, 1000+ hysterectomies) was performed. Analysis was performed with the statistical software package IBM SPSS version 29. 
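As a minimal illustration of the analysis described above, a chi‐squared test of complication status against experience strata can be run as follows; the contingency counts are hypothetical and serve only to show the mechanics, not to reproduce the study data (the study itself used SPSS).

```python
from scipy.stats import chi2_contingency

# Hypothetical 2 x 3 contingency table: rows are complication status, columns
# are collapsed surgical-experience strata; counts invented for illustration.
table = [
    [12, 30, 18],     # any intra-/postoperative complication or conversion
    [188, 970, 882],  # no complication or conversion
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
```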
Results In the database, 4565 vNOTES hysterectomies were identified, of which 4084 were performed as Vaginally Assisted NOTES Hysterectomy (VANH) and 240 as Total vNOTES Hysterectomy (TVNH). Table shows background characteristics for patients with and without any intra‐ or postoperative complication or conversion. There was a significant difference in parity, surgeon experience, increased surgical time, and use of irrigation between patients with intra‐ or postoperative complications/conversions and patients without any complications. Duration of surgery was increased in cases with intraoperative or postoperative complications, and the longest duration of surgery was found among surgeries requiring conversion. When stratifying for surgical experience, no significant difference in complications/conversions was found between the different parity groups. It is unclear whether this is due to less power in smaller strata or due to patient selection. A difference was seen in patient selection depending on surgical experience. Among surgeons with a maximum of 10 cases, 81% of the patients had a previous vaginal delivery, 7.5% were nulliparous, and 11.8% had a previous CS. The corresponding rates among surgeons with experience of over 1000 cases were 65% previous vaginal delivery, 19% nulliparous, and 16% previous CS. When stratified for surgical experience, no difference in complications depending on BMI or previous surgery was found, but a difference regarding duration of surgery and the use of suction between patients with or without complications/conversions was found. Table shows data regarding intraoperative complications. In 43 cases, complications occurred when establishing access to the abdomen. In five of these cases, a conversion to a different surgical technique was made due to the complication. Furthermore, in five additional cases, complications with the vaginal access or vNOTES port placement were reported. Twenty‐one cases were reported as injuries to normally localised organs and seven cases as injuries to abnormally localised organs. Three cases were categorised as access‐related haemorrhages, all converted to multiport laparoscopy. Two urological injuries, one gastrointestinal injury, and four patients who bled intraoperatively were reported as access‐related complications. All access‐related urological injuries were to the bladder. The most common specified injury was urinary tract injury, in total 60 cystotomies and 1 ureteric injury, of which 36 were defined as surgery related and 24 as access related, giving a rate of 1.39%. Twenty‐five (42%) of the patients who had a cystotomy had previously undergone a caesarean section. All injuries to organs were repaired intraoperatively. One case of macrohematuria during surgery occurred, but further assessment showed no signs of damage to the bladder or nearby structures. Five complications relating to anaesthesia were documented, three of which were due to insufficient pneumoperitoneum. One patient developed a diffuse erythema without systemic symptoms after administration of anaesthetics and did not undergo surgery due to inadequate anaesthesia. Two patients with anaesthetic complications were scheduled as TVNH, of whom only one was completed as such. Of the TVNH cases, in one surgery pneumoperitoneum could not be reached, and one patient retained too much gas postoperatively after adequate pneumoperitoneum, which had to be emptied manually using an ordinary needle after the vaginal cuff had been closed. Seventy‐two surgeries (1.6%) were converted perioperatively, of which 10 (0.2%) to laparotomy. The reason for conversion was an intraoperative complication in 23 (32%) of cases; 51% of the converted patients had a BMI over 30, and 82% of the conversions occurred within the first 50 cases of the surgeon's learning curve. Table shows data regarding postoperative complications. The rate of postoperative complications was 2.52% ( n = 115). The most common postoperative complications were haemorrhages ( n = 28), vaginal cuff or vault complications ( n = 26), cystitis ( n = 18, all except three treated with oral and/or intravenous antibiotics), and non‐specific infections of other locations treated with antibiotics ( n = 14). No cases of vault dehiscence were reported. The overall infection rate was 0.94%, and 47 (1%) patients needed a re‐intervention under general anaesthesia. No complications at level 4B or higher were reported. No deaths occurred. Three patients were reported as receiving care in an ICU due to single organ failure, categorised as Clavien Dindo 4A . These patients spent 3, 4, and 15 days hospitalised, respectively. Comparison of complications between TVNH and VANH was not performed, since only one patient in the TVNH group was registered as having a postoperative complication. This was classified as grade 3B, with revision due to postoperative haemorrhage 2 weeks post‐surgery. Three patients had both intraoperative and postoperative complications. 
Their mean surgical time was 125 min, and all three were performed as VANH; two of the patients were obese (BMI 30 and 37), both had two previous abdominal surgeries, and the surgeons' experience was 10–50 previous vNOTES hysterectomies. The postoperative complications in these three cases were categorised as Clavien Dindo levels one and two. Table shows data regarding the vNOTES surgeons. The vNOTES hysterectomies were performed by 201 surgeons, of whom 9.5% had performed more than 50 vNOTES cases. As shown in Table , the data consist of both learning curve data (30% of hysterectomies) and data from experienced surgeons (70% of hysterectomies). Half of the hysterectomies resulting in a cystotomy were performed by inexperienced surgeons (previous vNOTES experience < 50), 24 patients were operated on by surgeons with intermediate experience (previous vNOTES experience < 500), and 6 patients were operated on by surgeons with experience of > 500 vNOTES hysterectomies. One main surgeon performed 30% of all hysterectomies ( n = 1364), with 21 intraoperative and 43 postoperative complications, giving a total intra‐ and postoperative complication rate of 4.7%. The vast majority of these complications were registered prior to 2020 (15 intraoperative and 33 postoperative). Until 2020, the aforementioned surgeon had performed a total of 861 hysterectomies with a total intra‐ and postoperative complication rate of 5.6%. In the years 2020–2024, the same complication rate was reduced to 3.2%. The intra‐ and postoperative complication rate among all other surgeons was 4.9% (107 intraoperative and 51 postoperative complications). 3.1 Main Findings We present the largest prospectively collected data set to date, showing rates of 3.2% intraoperative and 2.5% postoperative complications after 4565 registered vNOTES hysterectomies. The vNOTES hysterectomies were performed by 201 surgeons, of whom 9.5% had performed more than 50 vNOTES cases, representing 70% ( n = 3181) of the registered cases in the registry. The remaining approximately 30% ( n = 1319) of the hysterectomies mainly represent learning curve data from 90% of the included surgeons. Half of the cystotomies were performed by inexperienced surgeons, and the rate of complications decreased with increasing experience, despite more experienced surgeons operating on a higher proportion of patients with previous CS or nulliparity. 3.2 Strengths and Limitations The population of this study ( n = 4565) is the largest one yet evaluating vNOTES hysterectomy. Since the method was introduced recently, limitations in pre‐existing scientific research remain, and it is therefore of high importance to summarise a larger study population to generate an overview. In addition, the study was conducted in numerous countries, making it a valuable multicentre study with a beneficial variety of operating surgeons ( n = 201). The intraoperative complications are registered at the same instance as the patient is registered for the first time in the database; therefore, the risk of missing or incorrect data is low. The postoperative complications are registered 6 weeks postoperatively, or when the complication occurs, and the surgeon needs to log in again to the database in order to register them. There is therefore most likely a small under‐registration of postoperative complications. We assume that the vast majority of major postoperative complications will be filled in. Supporting this, the rate of Clavien Dindo 3 complications in our study is in line with a recent RCT comparing same‐day or next‐day discharge after laparoscopic hysterectomy . 
Patients having minor postoperative complications, for example urinary tract infections, might seek medical attention from their general practitioner and therefore not be registered in the postoperative registry. The iNOTESs questionnaire contains inquiries on various aspects relevant to analysing new operative methods. A weakness of the current database is that not all patient and surgical variables associated with complications are registered, such as indication, smoking, diabetes or uterus weight; it is therefore not possible to analyse predictive factors for intra- or postoperative complications. The vNOTES surgeons are requested to fill in all of their operations, not just the uncomplicated ones or the ones with complications. Despite this request, a potential bias could be that vNOTES surgeons do not want to report their complications and only fill in the uneventful hysterectomies. The risk of selection bias, however, can go in both directions, with surgeons also adding only patients with complications. 3.3 Interpretation The rates of intra- and postoperative complications reported in the vNOTES registry are in line with corresponding rates for other hysterectomy techniques . The HALON trial was a single-centre blinded non-inferiority RCT comparing vNOTES to LH. No difference was seen in readmission, postoperative infection or intraoperative complications, although fewer postoperative complications in total were found in the vNOTES group; 82% of postoperative complications were Clavien Dindo level 1 or 2. A retrospective study of 2000 vNOTES operations found an overall complication rate of 4.4% with a conversion rate of 0.4%. Two systematic reviews comparing vNOTES hysterectomy and laparoscopic hysterectomy show lower postoperative complication rates, less blood transfusion and no difference in intraoperative complication rates or conversions; the authors concluded that vNOTES may have advantages over conventional laparoscopic hysterectomy techniques . The most common intraoperative complication was cystotomy (1.39%), and 42% of the patients who had a cystotomy had previously undergone a caesarean section. All cystotomies were repaired peroperatively, and none had postoperative complications. The study by Neumann , reporting data from all VH performed at a hospital in Denmark, showed a cystotomy rate of 2.3%, and other studies have shown a range from 1.6% to 1.9% . The risk of cystotomy in VH has been reported to be higher than that of laparoscopic hysterectomy, but with a lower risk of ureteric injury . A systematic review showed an incidence of cystotomy of 0.28% in over 144 000 benign gynaecological laparoscopic hysterectomies . In contrast, a systematic review by Wei showed a lower risk of urinary tract injury in VH versus LH. The review included six cohorts representing 52 492 women undergoing VH and showed weighted pooled mean injury rates of 295 cystotomies and 122 ureteric injuries per 100 000 cases. The corresponding data from LH included 15 cohorts with 50 114 women, with weighted pooled mean injury rates of 997 cystotomies and 262 ureteric injuries per 100 000 cases. The risk of cystotomy in VH and vNOTES should be equivalent given the similarity of the entrance method. However, in the vNOTES procedure, in cases with a difficult vaginal entrance (multiple CS, nulliparity, adhesions, large uteri or myoma), the entrance can be performed endoscopically via the vagina.
The Alexis ring is placed in the pouch of Douglas and under the vaginal mucosa anteriorly, but the peritoneum in the vesicouterine pouch is not yet opened. Pneumovagina and pneumoperitoneum are created, and the vesicouterine pouch is opened under direct endoscopic visualisation. The possibility of creating an anterior colpotomy under direct visualisation could possibly reduce the risk of cystotomies compared to a standard VH. No firm conclusion about ureteric injury can be drawn, although it is plausible that ureteric injuries are less common in vNOTES than in other surgeries, since only one such injury occurred in 4565 hysterectomies. The Alexis ring presses the ureter laterally, away from the surgical instruments. Also, when performing a vNOTES hysterectomy, the specimen is pushed cranially, anteriorly and medially, away from the pelvic sidewall. Surgical advancement has led to a reduction in abdominal hysterectomy (AH) and a shift towards LH and RALH. Several surgical guidelines recommend a vaginal entrance to the abdomen when feasible, as it is associated with shorter surgical time, fewer complications and the quickest recovery . Despite this, the incidence of vaginal hysterectomy is declining; in Sweden, only 11% of hysterectomies are performed as a VH . The Swedish Federation of Obstetrics and Gynecology acknowledges vNOTES as an alternative to TLH and VH, with vNOTES giving an advantage over VH regarding adnexal surgery, when lateral visualisation is needed . Comparative guidelines by the UK committee have declared vNOTES hysterectomy a successful procedure but state criteria of extended caution when carrying out vNOTES procedures, as it is a relatively new surgical procedure and is viewed to have similar complication and readmission rates to other methods. A large RCT comparing vNOTES with LH or VH, aiming to include 1000 patients, has recently started to include patients and will in the future give further evidence regarding surgical outcome after vNOTES hysterectomy . Advantages of VH do exist considering the decreased risk of vaginal vault dehiscence compared to LH , although the risk of hematomas is increased when performing surgery vaginally. Given the properties of vNOTES, with the possibility of meticulous endoscopic haemostasis, vNOTES could have decreased rates of vault hematomas compared to VH and a decreased rate of dehiscence compared to LH, as the vault is sutured vaginally. Supporting this theory, and consistent with previous research, rates of infected vault hematomas are similar or lower in vNOTES (0.24%) compared to vaginal hysterectomy (2.2%) ; vNOTES consequently poses no increased risk of infected vault hematomas. There seems to be no evidence of vNOTES leading to an increased risk of infection compared to any other route of surgery, but rather a possibility of a decreased infectious burden. For reasons stated above, vNOTES can be considered a valid alternative when choosing an operative method for benign hysterectomies.
Conclusion This prospective international database study has the largest multicentre study population of vNOTES hysterectomies to date, performed by over 200 surgeons. The data consist of both learning curve data (30%) and data from experienced surgeons (70%). The intra- and postoperative complication and infection rates reported are lower than or in the same range as those of other minimally invasive techniques, and the conversion rate to laparotomy was very low (0.2%). A.S., J.W., L.B.F., J.S., A.M., S.E., M.H., J.V., D.H., A.L. and J.B. contributed to the design, background material research and writing of the paper and operated on the patients. A.S. and A.L. contributed to the statistical design and calculations. All authors have approved the final version and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The Swedish National Ethical Board approved the study, reference number 2023–04433‐01, dated 2023‐09‐23. Jan Baekelandt, Andrea Stuart, Johanna Wagenius, Alvaro Montealegre, Michael Hartmann and Jona Vercammen declare consultancy for Applied Medical.
Identifying KLF14 as a potential regulatory factor in liver regeneration through transcriptomic and metabolomic
ee67a2af-0d94-4fa1-89dc-7fe9a04dac04
11876588
Surgical Procedures, Operative[mh]
The liver is a unique organ with remarkable regenerative capacity. Liver regeneration is a crucial physiological process in which the liver restores its function through cell proliferation and tissue reconstruction following injury or partial hepatectomy . Alveolar echinococcosis is a zoonotic disease with a global geographical distribution, caused by Echinococcus multilocularis . Ex-vivo liver resection and autotransplantation (ELRA) and partial hepatectomy (PH) are the primary surgical methods for treating alveolar echinococcosis , and the liver’s regenerative ability post-surgery is critical for patient recovery and long-term survival . However, despite the well-recognized clinical importance of liver regeneration, its underlying molecular mechanisms, particularly regarding metabolic and transcriptional regulation, remain poorly understood. During liver regeneration, hepatocytes must undergo rapid proliferation, metabolic reprogramming, and tissue repair within a short time, processes that are tightly regulated by multiple signaling pathways and metabolic routes . Krüppel-like factors (KLF) are zinc finger domain-containing transcription factors involved in embryonic liver development . Among them, KLF14 is associated with cell proliferation and liver metabolism, reducing the secretion of pro-inflammatory cytokines and lipid accumulation . Moreover, previous studies have shown that signaling pathways such as PI3K-AKT and MAPK play crucial roles in liver cell proliferation and survival, while metabolic pathways involving amino acids, lipids, and glucose metabolism are indispensable in the liver regeneration process , . However, the spatiotemporal dynamics of these processes during liver regeneration, and how transcriptional and metabolic coordination promotes regeneration, remain understudied. To elucidate the molecular mechanisms of liver regeneration following ELRA and PH, this study employed a combination of transcriptomics and untargeted metabolomics (LC–MS) to analyze the dynamic changes in differentially expressed genes and metabolites at various time points during the regeneration process. KLF14 was identified as a key gene, showing dynamic expression patterns across different stages of regeneration and correlating with key metabolic pathways. Transcriptomic data revealed that genes associated with KLF14 were enriched in pathways related to cell proliferation and immune regulation. Furthermore, metabolomic analysis highlighted shifts in metabolites involved in lipid, amino acid, and glucose metabolism, underscoring the metabolic reprogramming that accompanies liver regeneration. These findings suggest that KLF14 may play an important role in regulating liver regeneration. Study subjects A total of 39 patients were enrolled in this study: 13 patients who underwent partial hepatectomy (PH) and 13 patients who underwent ex-vivo liver resection and autotransplantation (ELRA), along with 13 healthy controls. The general information of the participating population and the stages of infection are shown in Tables and . All patients were provided with standardized meals during hospitalization, both preoperatively and postoperatively, ensuring consistency in nutrient intake. Meals were designed to meet clinical nutritional requirements, with controlled macronutrient and micronutrient compositions. Blood samples were collected after an overnight fast (≥ 8 h).
Inclusion criteria: all patients were adults, regardless of gender; participants underwent PH or ELRA based on their clinical treatment plans. All patients underwent comprehensive physiological and pathological evaluations before surgery, including liver ultrasound, contrast-enhanced CT or MRI imaging, liver function tests, coagulation tests, viral hepatitis screening, and liver fibrosis scoring. All patients were clinically diagnosed with alveolar echinococcosis preoperatively, with the diagnosis pathologically confirmed postoperatively. Exclusion criteria: patients with severe underlying liver diseases; those with severe cardiovascular disease, kidney disease, diabetes, or immune system disorders; patients with acute infections, hemorrhagic disorders, hepatorenal syndrome, or other acute complications; patients who had received immunosuppressants or long-term steroid therapy before surgery; and pregnant or breastfeeding women. All patients provided written informed consent. The study strictly adhered to the Declaration of Helsinki and was approved by the Ethics Committee of the First Affiliated Hospital of Xinjiang Medical University (approval number: 231124-04). Postoperative follow-up was conducted on postoperative days 1 and 5. On day 1, the initial recovery status was assessed, and peripheral blood samples were collected. On day 5, liver function recovery was further evaluated, potential complications were monitored, and peripheral blood samples were collected again. All samples were processed immediately after collection and stored at −80 °C for transcriptomic and metabolomic analyses. Transcriptomic sequencing and data preprocessing To analyze changes in differentially expressed genes (DEGs) during liver regeneration, blood samples from 3 patients who underwent PH, 3 patients who underwent ELRA, and 3 healthy controls were randomly selected for total RNA extraction using Trizol reagent. The quality of the extracted RNA was measured using a NanoDrop 2000 spectrophotometer (Thermo Scientific), and RNA integrity was confirmed with an Agilent 2100 Bioanalyzer. High-quality RNA samples were used to construct transcriptomic sequencing libraries, followed by RNA sequencing on the Illumina NovaSeq 6000 platform. Library construction was performed using the Illumina TruSeq RNA Library Preparation Kit (Illumina, San Diego, CA) according to the manufacturer’s instructions. The constructed cDNA libraries were subjected to paired-end sequencing on the Illumina NovaSeq 6000 platform, with a sequencing depth of approximately 40–50 million reads per sample. The raw reads were first subjected to quality control using FastQC software (v0.11.9), and Trimmomatic software (v0.39) was used to remove low-quality reads and adapter sequences. The clean reads were aligned to the human reference genome (GRCh38) using HISAT2 software (v2.1.0), and featureCounts was employed to quantify gene expression for each sample. Metabolomics analysis Plasma samples were pre-processed to remove proteins and other interfering substances, and metabolites were analyzed using untargeted liquid chromatography-mass spectrometry (LC–MS). Metabolomic data collection was performed using ultra-high-performance liquid chromatography (UHPLC) coupled with a Thermo Fisher Q Exactive mass spectrometer. An ACQUITY UPLC HSS T3 column (100 mm × 2.1 mm, 1.8 µm) was used as the separation column, with the column temperature set at 40 °C.
The mobile phase consisted of solvent A (0.1% formic acid in water) and solvent B (0.1% formic acid in acetonitrile). A gradient elution method was used, with a flow rate of 0.3 mL/min and a sample injection volume of 5 µL. The mass spectrometer operated in both positive and negative ion modes, utilizing full scan and data-dependent acquisition (DDA) modes, with the mass range set to m/z 100–1500. The LC–MS data were pre-processed and metabolites quantified using CD3.3 data processing software. Metabolite identification was performed by comparing the high-resolution MS/MS spectra against the mzCloud and mzVault databases, as well as the MassList primary database. Differential expression and enrichment analysis Gene expression differences between preoperative and postoperative time points were analyzed using DESeq2 software (v1.30.0). Differentially expressed genes (DEGs) were defined with a threshold of p value < 0.05 and |log2FoldChange| ≥ 1. Partial least squares discriminant analysis (PLS-DA) was applied to the metabolomic data, and differential metabolites were selected based on the Variable Importance in the Projection (VIP) value, with VIP > 1 and p < 0.05 as the criteria for significance. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses for DEGs were conducted using the clusterProfiler package. Time-series clustering and co-expression analysis Time-series clustering of DEGs was performed using the Mfuzz software package to identify gene expression patterns related to liver regeneration at different postoperative time points. Weighted Gene Co-expression Network Analysis (WGCNA) was used to construct a co-expression network for the 5000 genes with the greatest expression variability, and gene modules significantly associated with liver regeneration were identified. Real-time quantitative polymerase chain reaction (RT-qPCR) Total RNA was extracted from blood samples of the remaining 10 patients who underwent PH, 10 patients who underwent ELRA, and 10 healthy controls using Trizol reagent (Thermo Fisher Scientific). The concentration and purity of the RNA were measured using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific). The extracted RNA was then reverse transcribed into cDNA using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems). RT-qPCR was performed using the SYBR Green detection system on an Applied Biosystems 7500 real-time PCR system. The specific primer sequences are listed in Table S1. Relative expression levels of the target genes were calculated using the 2^−ΔΔCt method, with GAPDH serving as the internal control. Western blot Blood samples of the remaining 10 patients who underwent PH, 10 patients who underwent ELRA, and 10 healthy controls were incubated with cold RIPA buffer containing protease inhibitors for 30 min. The lysed samples were centrifuged at 14,000 rpm for 15 min at 4 °C, and the supernatant containing total protein was collected. Protein concentration for each sample was quantified using a BCA Protein Assay Kit (Thermo Fisher Scientific). Protein samples (20 µg) were loaded onto a 10% SDS-PAGE gel for electrophoresis and subsequently transferred to a pre-activated PVDF membrane (Millipore). The membrane was blocked in 5% non-fat milk at room temperature for 1 h. Primary antibodies against KLF14 (Abcam, 1:1000 dilution), AKT (Abcam, 1:1000 dilution), p-AKT (Abcam, 1:1000 dilution), PI3K (Abcam, 1:1000 dilution) and p-PI3K (ABclonal, 1:2000 dilution) were incubated with the membrane overnight at 4 °C.
After washing, the membrane was incubated with horseradish peroxidase (HRP)-conjugated secondary antibodies (Abcam, 1:5000 dilution) at room temperature for 1 h. Enhanced chemiluminescence (ECL, Thermo Fisher Scientific) was used for signal detection, and the signals were captured using the Bio-Rad ChemiDoc MP imaging system. Protein expression levels were normalized to β-actin as the internal control, and ImageJ software was used for quantitative analysis of the protein bands. Statistical analysis Bioinformatics analysis was performed using R 4.2, and statistical analysis of experimental data was carried out using GraphPad Prism 9 software. Data are presented as mean ± standard deviation (SD). Differences between time points were analyzed using one-way analysis of variance (ANOVA) or t-tests. A p-value of less than 0.05 was considered statistically significant. Ethics approval This study was approved by the Ethics Committee of the First Affiliated Hospital of Xinjiang Medical University (No. 231124-04).
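To make the differential-expression step concrete, a minimal R sketch is given below, assuming a featureCounts gene-by-sample matrix (counts) and a sample table (coldata) with a timepoint column; the object names and factor levels are illustrative placeholders, not taken from the study.

```r
library(DESeq2)

# counts: gene-by-sample matrix from featureCounts; coldata: sample annotations.
# The "timepoint" levels (preop, POD1, POD5) are hypothetical placeholders.
dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData   = coldata,
                              design    = ~ timepoint)
dds <- DESeq(dds)

# Contrast postoperative day 1 against the preoperative baseline.
res <- as.data.frame(results(dds, contrast = c("timepoint", "POD1", "preop")))

# DEG thresholds stated above: p < 0.05 and |log2FoldChange| >= 1.
degs <- subset(res, pvalue < 0.05 & abs(log2FoldChange) >= 1)
```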
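The time-series clustering and co-expression steps can be sketched in the same spirit; here eset (time-point-averaged expression as an ExpressionSet) and expr (a samples-by-genes matrix restricted to the most variable genes) are hypothetical inputs, and the module-size parameter is an illustrative default rather than a value reported by the study.

```r
library(Mfuzz)
library(WGCNA)

# Fuzzy c-means clustering of gene expression across the sampled time points,
# with c = 8 clusters as reported for both the ELRA and PH series.
eset <- standardise(eset)           # z-score each gene across time points
m    <- mestimate(eset)             # estimate the fuzzifier parameter
cl   <- mfuzz(eset, c = 8, m = m)   # soft cluster memberships for C1..C8

# WGCNA co-expression modules for the most variable genes.
sft <- pickSoftThreshold(expr)      # choose a soft-thresholding power
net <- blockwiseModules(expr,
                        power         = sft$powerEstimate,
                        TOMType       = "unsigned",
                        minModuleSize = 30)
table(net$colors)                   # module assignments (e.g. the blue module)
```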
Identification of differentially expressed genes We compared gene expression levels at different postoperative time points and identified a large number of DEGs related to liver regeneration. In the ELRA group, 3574 DEGs were identified on postoperative day 1 compared to preoperative levels (Fig. A). Additionally, 3269 DEGs were identified when comparing postoperative day 5 to day 1 (Fig. B). In the PH group, 1619 DEGs were identified on postoperative day 1 compared to preoperative levels (Fig. C), and 896 DEGs were identified when comparing postoperative day 5 to day 1 (Fig. D). Further analysis revealed 36 common genes shared between the ELRA and PH groups (Fig. E, Table S1). The common genes were enriched in several biological processes related to immune response and cell migration, such as humoral immune response, neutrophil migration, and leukocyte-mediated immunity, indicating that the immune system plays a significant role in liver regeneration (Fig. A).
The KEGG pathway enrichment showed that the common genes were heavily reliant on metabolic reprogramming, as seen from the strong enrichment in Metabolic pathways, Nitrogen metabolism, and Fructose and mannose metabolism (Fig. B). Time-series clustering analysis Using the Mfuzz software, time-series clustering analysis of the transcriptomic data was performed to further reveal gene expression patterns at different time points. The analysis identified eight distinct gene clusters (C1 to C8) during liver regeneration following ELRA (Fig. ). We found that cluster C8 is closely associated with the liver regeneration process post-ELRA: genes in cluster C8 were upregulated on postoperative day 1 and gradually decreased by day 5, suggesting that these genes may play key roles in the early stages of liver regeneration. Additionally, eight distinct gene clusters (C1 to C8) were identified during liver regeneration following PH (Fig. ). Cluster C1 was found to be associated with liver regeneration after PH: genes in cluster C1 were significantly upregulated on postoperative day 1 and showed a decline in expression by day 5, indicating that these genes may be involved in the early recovery process after partial hepatectomy. KEGG enrichment analysis of the C8 genes in ELRA, which are significantly associated with early liver regeneration, revealed pathways such as the PI3K-Akt, MAPK, and Rap1 signaling pathways (Fig. A). The KEGG analysis of the C1 genes in PH, which are more associated with later stages of liver regeneration, showed strong enrichment in metabolic pathways and purine metabolism (Fig. B). Gene co-expression network analysis To further understand the synergistic role of genes during liver regeneration, we employed Weighted Gene Co-expression Network Analysis (WGCNA) to construct a co-expression network. The hierarchical clustering dendrogram, combined with dynamic tree cutting, identified 12 gene co-expression modules (shown in different colors). These modules represent groups of genes with similar expression patterns throughout the liver regeneration process (Fig. A, Table S2). The heatmap of module-trait relationships (Fig. B) provided valuable insights into the correlation between gene modules and clinical traits. We found that genes in the blue module were significantly negatively correlated with the liver regeneration process. Importantly, by comparing genes from the blue module with the common genes, the C8 genes in ELRA, and the C1 genes in PH, we further identified KLF14 as a key regulatory gene involved in liver regeneration (Fig. C). This suggests a potential association between KLF14 and the PI3K-AKT signaling pathway, highlighting its potential role as a regulator of the liver regeneration process. Metabolomics analysis results Metabolomics analysis using untargeted liquid chromatography-mass spectrometry (LC–MS) revealed significant changes in metabolite expression during liver regeneration following ELRA and PH. On postoperative day 1 after ELRA, 196 differentially expressed metabolites were identified compared to preoperative levels (Figure S1A, B), including arachidic acid, serotonin, and taurine. When comparing postoperative day 5 to day 1, 33 differentially expressed metabolites were identified (Figure S1C, D), including punicic acid, kojic acid, and L-lysine. Correlation analysis showed that KLF14 is significantly positively correlated with kojic acid and significantly negatively correlated with serotonin (Figure S1E).
Enrichment analysis revealed that the differentially expressed metabolites are mainly involved in lipid metabolism, amino acid metabolism, and energy metabolism (Figure S2A, B, C). For PH, 82 differentially expressed metabolites were identified on postoperative day 1 compared to preoperative levels (Figure S3A, B), including deoxycholic acid, LPC (18:1), and arachidonic acid. Additionally, 75 differentially expressed metabolites were identified when comparing postoperative day 5 to day 1 (Figure S3C, D), including corticosterone, prostaglandins, and 5-hydroxytryptophan. Correlation analysis showed that KLF14 is significantly positively correlated with prostaglandin H1 and significantly negatively correlated with LPC 18:1 (Figure S3E). Enrichment analysis revealed that these differentially expressed metabolites are also mainly involved in lipid metabolism, amino acid metabolism, and energy metabolism (Figure S4A, B, C). Verification of KLF14 and signaling pathways To verify the role of KLF14 in liver regeneration, we examined the expression levels of KLF14 and its associated signaling pathways using RT-qPCR and Western blot techniques. RT-qPCR results showed that KLF14, AKT, and PI3K were significantly upregulated on postoperative days 1 and 3 in both ELRA and PH, and their expression decreased by day 5 (Fig. ). Western blot analysis further confirmed the activation of KLF14 in the PI3K-AKT signaling pathway on postoperative day 1 in both ELRA and PH, with a marked reduction in activity by day 5 (Fig. , Figure S5).
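For reference, the relative-expression calculation behind these RT-qPCR results is the standard 2^−ΔΔCt (Livak) method named in the Methods, with GAPDH as the internal control; the Ct values in the sketch below are invented purely for illustration.

```r
# Fold change by the 2^-ddCt method; all Ct values are illustrative.
fold_change <- function(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl) {
  dct_sample  <- ct_target - ct_ref             # normalise sample to GAPDH
  dct_control <- ct_target_ctrl - ct_ref_ctrl   # healthy-control baseline
  2^-(dct_sample - dct_control)                 # fold change vs. controls
}

# Example: a hypothetical KLF14 measurement on postoperative day 1.
fold_change(ct_target = 24.1, ct_ref = 18.0,
            ct_target_ctrl = 26.3, ct_ref_ctrl = 18.2)
#> 4  (about 4-fold upregulation relative to healthy controls)
```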
This study systematically integrated transcriptomic and metabolomic data to reveal the dynamic changes in gene expression and metabolite profiles during liver regeneration following ELRA and PH, identifying key molecular mechanisms and metabolic reprogramming closely related to liver regeneration. We found distinct differences in gene expression patterns and metabolic characteristics between ELRA and PH during the early and late regeneration stages, indicating that liver regeneration involves time-dependent molecular regulation. Firstly, differential analysis revealed a large number of DEGs after both ELRA and PH, but the gene expression patterns differed significantly. Compared to ELRA, the regenerative response following PH showed smaller fluctuations in gene expression, suggesting that the physiological mechanisms of liver regeneration after partial hepatectomy are more stable, with a critical dependence on energy metabolism. Further analysis of the 36 common genes indicated enrichment in biological processes related to immune response and cell migration, particularly humoral immune response, neutrophil migration, and leukocyte-mediated immune responses, highlighting the important regulatory role of the immune system in liver regeneration , . Previous studies have also demonstrated that liver regeneration depends not only on local cell proliferation but also on coordinated regulation by the systemic immune system to prevent tissue damage and infection and to promote resolution of inflammation , . Additionally, KEGG pathway enrichment analysis revealed that these common genes were closely related to metabolic reprogramming, especially in key pathways such as nitrogen metabolism and fructose and mannose metabolism , . These results indicate that liver regeneration is not just a local repair process but also involves widespread systemic metabolic regulation to support cell proliferation and restoration of organ function . Time-series clustering analysis with Mfuzz revealed the dynamic patterns of gene expression at different time points, further delineating the spatiotemporal regulation of liver regeneration. In the ELRA group, genes in cluster C8 were significantly upregulated on postoperative day 1 and gradually downregulated by day 5, suggesting that these genes play a central role in the early stages of liver regeneration. These genes are involved in key signaling pathways such as PI3K-AKT, MAPK, and Rap1, indicating their crucial roles in regulating cell proliferation, survival, and metabolic reprogramming , . In contrast, in PH, genes in cluster C1 were significantly upregulated on postoperative day 1, with decreased expression by day 5, indicating their involvement in energy metabolism and tissue repair , , with metabolic demands different from those in ELRA. The significant enrichment of metabolic pathways, including purine metabolism, in PH highlights energy supply and cellular metabolism as key limiting factors in the regeneration process following partial hepatectomy – .
To further understand the synergistic roles of these genes, we constructed a co-expression network using WGCNA, identifying 12 gene modules with significant co-expression relationships. Among them, the blue module was negatively correlated with liver regeneration, suggesting that these genes may be suppressed or involved in negative feedback regulation after liver injury. By comparing the blue module with the common genes, the C8 cluster genes in ELRA, and the C1 cluster genes in PH, we identified KLF14 within the blue module, providing insights into its potential involvement in liver regeneration. KLF14 is a known transcription factor involved in regulating metabolism and cell proliferation , . This study further supported its role in liver regeneration through the PI3K-AKT signaling pathway. The upregulation of KLF14 was closely associated with activation of the PI3K-AKT pathway, which has been widely reported to play a critical role in controlling cell proliferation, survival, and metabolism . Thus, KLF14 may promote liver regeneration and repair by regulating these processes. Metabolomics analysis further revealed significant changes in metabolites during liver regeneration following ELRA and PH. After ELRA, metabolites such as arachidonic acid, serotonin, and taurine were significantly upregulated in the early phase, suggesting their involvement in inflammation regulation and cell proliferation , . By postoperative day 5, metabolites such as punicic acid and L-lysine showed significant changes, reflecting evolving metabolic demands during liver regeneration. These metabolites are related to lipid metabolism and amino acid metabolism, indicating that metabolic regulation during regeneration involves not only energy supply but also membrane synthesis and protein production , . After PH, the upregulation of metabolites such as deoxycholic acid and LPC (18:1) suggests that bile acid and lipid metabolism play a role in liver repair, while changes in corticosterone and prostaglandin levels on postoperative day 5 reflect continued immune regulation and tissue repair – . These metabolic changes highlight the importance of metabolic reprogramming during liver regeneration, especially across different stages of energy metabolism and inflammation regulation , , . To validate the function of KLF14 in liver regeneration, we assessed its expression levels and associated signaling pathways using RT-qPCR and Western blot analysis. RT-qPCR results showed that KLF14 and components of the PI3K-AKT signaling pathway were significantly upregulated on postoperative day 1 in both ELRA and PH, with a reduction in expression by day 5, highlighting the importance of these genes in early regeneration. Western blot analysis further confirmed the dynamic expression of KLF14 in liver regeneration, with a marked increase in pathway activity on postoperative day 1. This finding provides new evidence supporting the critical role of KLF14 as a regulatory factor in liver regeneration, suggesting its potential as a therapeutic target for promoting cell proliferation and metabolic remodeling. This study has several limitations. It included a small number of ELRA and PH patients, and the small sample size may affect the generalizability and statistical robustness of the findings. These conclusions should therefore be considered preliminary. The observed associations, particularly the involvement of KLF14, require further validation in larger cohorts to assess their generalizability and statistical significance.
The samples were primarily derived from PBMCs rather than directly from liver tissue. Blood samples reflect systemic changes in gene expression, which may not always correlate with local changes in the liver; changes in liver-specific cell types, such as hepatocytes, are not directly captured in blood samples. Important regulatory genes specific to the liver’s regenerative processes might therefore be underrepresented or not detected, and blood sampling lacks the spatial resolution needed to assess specific regions of the liver or to track dynamic changes in gene expression at different stages of regeneration. Additionally, this study focused on three time points: preoperative, postoperative day 1, and postoperative day 5, which do not comprehensively capture the dynamic changes throughout the entire liver regeneration process. The stage of infection may have influenced baseline liver function and regenerative capacity, with more advanced stages potentially altering both the inflammatory response and metabolic reprogramming. Although we validated KLF14 expression through RT-qPCR and Western blot, functional studies such as gene knockout or overexpression experiments in vitro or in vivo are still lacking to directly verify its precise role in liver regeneration. We acknowledge that complete dietary control outside the hospital environment is challenging, and unaccounted dietary differences might contribute to minor variations in metabolite profiles. However, the observed trends in key metabolic pathways were consistent across patients and time points, supporting the robustness of the findings. Future research should expand the sample size, increase the number of time points for analysis, conduct functional validation experiments, and incorporate liver tissue samples for more in-depth investigation. Through integrated transcriptomic and metabolomic analyses, we systematically revealed the complex gene and metabolic regulatory networks involved in liver regeneration following ex-vivo liver resection and autotransplantation and partial hepatectomy. The identification of KLF14 as a key regulatory factor offers new insights into the molecular mechanisms of liver regeneration, particularly in the regulation of the PI3K-AKT signaling pathway. This research not only provides important molecular targets for fundamental studies on liver regeneration but also holds potential clinical value for the development of future interventions to promote liver regeneration. Supplementary Information 1. Supplementary Information 2. Supplementary Information 3. Supplementary Information 4. Supplementary Information 5. Supplementary Information 6. Supplementary Information 7. Supplementary Information 8.
Identifying liver cirrhosis in patients with chronic hepatitis B: an interpretable machine learning algorithm based on LSM
e8082b7a-69ad-469d-a4ec-f5bc30d51b02
11924261
Digestive System[mh]
Introduction
Liver cirrhosis (LC) is a pathological condition marked by the gradual hardening of liver tissue, a consequence of prolonged inflammation . Chronic hepatitis B (CHB) virus infection is recognized as a key contributor to the development of LC. Despite advances in CHB treatment, the prognosis for patients with severe cirrhosis remains a significant concern . Cirrhosis is often diagnosed incidentally, typically at a stage when liver function has already suffered significant impairment. Fortunately, LC is a reversible condition . Even patients in the decompensated stage can potentially achieve stabilization or even reversal through timely management of underlying risk factors . Undeniably, timely prevention and treatment of hepatitis B virus (HBV)-related cirrhosis are crucial.

Although liver biopsy is considered the gold standard for diagnosing liver fibrosis and cirrhosis in chronic liver disease, it is plagued by limitations such as sampling errors, high costs, patient discomfort and invasiveness . These challenges have spurred the need for non-invasive serological diagnostic alternatives in clinical practice. Non-invasive methods like the γ-glutamyl transferase-to-platelet ratio (GPR), aspartate aminotransferase-to-platelet ratio index (APRI) and fibrosis-4 (FIB-4) indices offer advantages in terms of simplicity, cost-effectiveness and reproducibility. However, these methods often produce conflicting results, with relatively high false negative and false positive rates . Liver stiffness measurement (LSM) by transient elastography (TE) has emerged as a highly precise and non-invasive technique, particularly effective in identifying advanced fibrosis and cirrhosis. TE can swiftly evaluate liver tissue properties without causing discomfort or complications to patients, playing a crucial role in monitoring fibrosis progression . LSM can serve as a valuable complementary tool for diagnosing cirrhosis in patients with CHB . Nonetheless, it is essential to underscore that the LSM threshold for diagnosing cirrhosis should be adjusted based on whether the CHB patient's bilirubin or alanine aminotransferase levels are within the normal range . Relying solely on a single threshold for diagnosing cirrhosis in CHB patients may lead to misdiagnosis. Combining LSM with other traditional indicators can provide a more comprehensive evaluation of cirrhosis, thereby enhancing diagnostic accuracy. However, the substantial expense associated with acquiring and maintaining FibroScan devices hinders their availability in low- and middle-income countries, posing challenges in implementing LSM as a standard approach for cirrhosis surveillance in resource-limited settings or primary healthcare facilities . Therefore, comparing the diagnostic performance of models with and without LSM can help determine the substitutability of traditional indicators when LSM is not available.

In recent years, there has been a notable increase in research on liver fibrosis among patients with chronic liver disease. However, these studies predominantly rely on traditional logistic regression (LR), with relatively few applications of machine learning techniques . While machine learning has demonstrated tremendous potential in the medical field, its 'black box' nature still poses challenges for interpretation . In this study, we construct multiple machine learning models that integrate LSM with traditional indicators and compare them with a traditional LR model.
After selecting the optimal model, we employ Shapley additive explanations (SHAP) to enhance the model's transparency and interpretability. Additionally, we examine the clinical application value of combining LSM with traditional indicators, of using LSM alone, and of using traditional indicators alone, based on the optimal model. Furthermore, we compare the diagnostic performance of LSM combined with traditional indicators against conventional serological markers. These investigations aim to provide scientific support to help healthcare professionals optimize monitoring and management strategies for CHB patients.

Materials and methods
2.1. Study participants
In the initial cohort, 5639 CHB patients treated at Dalian Public Health Clinical Center from October 2015 to April 2024 were included in the analysis. The inclusion criteria were: (1) a clear diagnosis of CHB; (2) LSM performed; (3) a definitive histopathological examination or clinical diagnosis indicating the presence or absence of abnormalities in liver morphology and structure, including LC. The exclusion criteria were: (1) a diagnosis of liver cancer or other malignancies; (2) viral co-infections or systemic diseases impacting the liver (such as HIV infection, autoimmune liver disease, etc.); (3) a diagnosis of fatty liver; (4) missing data. Based on these criteria, 1609 CHB patients who underwent LSM were ultimately enrolled in the study. The 1122 CHB patients treated from October 2015 to December 2021 were included in the training and internal validation sets, among whom 348 were diagnosed with LC. Data from CHB patients treated between January 2022 and April 2024 were used as an independent external validation set. All extracted variables were obtainable from the hospital information system. Data extraction was performed independently by one of the authors and verified for accuracy by another author. The entire study process is shown in .

2.2. Diagnostic criteria
The definition of CHB conforms to the 2022 guidelines for the prevention and treatment of CHB . Chronic HBV infection is defined as the presence of HBsAg and/or HBV DNA positivity for more than 6 months; the presence of HBsAg for at least 6 months establishes the chronicity of infection. Chronic hepatitis B is defined as a chronic inflammatory liver disease caused by persistent HBV infection. These definitions are also consistent with the AASLD 2018 Hepatitis B Guidance . The diagnosis of HBV-related cirrhosis should meet the following criteria: (1) the patient is currently HBsAg positive, or HBsAg negative and anti-HBc positive with a clear history of chronic HBV infection (a history of being HBsAg positive for >6 months), with other aetiologies ruled out; (2) the diagnosis of cirrhosis is established through either histological confirmation or clinical diagnosis, supported by repeated and consistent findings on abdominal ultrasound indicative of cirrhosis, along with corroborative evidence such as thrombocytopenia, oesophageal/gastric varices or additional imaging findings from CT or MRI suggestive of cirrhosis . In Chinese patients, liver tissue pathology diagnosis is less common, with diagnosis relying mainly on clinical and imaging examinations (abdominal ultrasound, CT or MRI).
Clinical manifestations, including splenomegaly, portal vein dilation, ascites and other signs, also play a significant role in the clinical diagnosis of cirrhosis .

2.3. Measurement of liver stiffness
LSM was performed using the FibroScan device, which assesses liver stiffness using ultrasound technology . Patients underwent the examination while fasting, lying on a bed as the physician positioned the probe beneath the ribcage to emit sound waves for assessment. Following the examination, FibroScan produced results based on the collected data. Depending on the patient's physique, the device offered different probes: the M probe for most patients and the XL probe for obese patients. All LSM values, expressed in kPa, were obtained by experienced operators following the manufacturer's protocol. To ensure reliable LSM values, the following conditions had to be met: at least 10 valid measurements, a success rate exceeding 60%, and an interquartile range-to-median ratio not exceeding 30%.

2.4. Routine available serum algorithms
FIB-4, APRI and GPR are commonly used serological indices, calculated as follows:

$$\mathrm{FIB\text{-}4}=\frac{\mathrm{Age}\,(\mathrm{years})\times \mathrm{AST}\,(\mathrm{U/L})}{\mathrm{PLT}\,(10^{9}/\mathrm{L})\times \sqrt{\mathrm{ALT}\,(\mathrm{U/L})}} \qquad (1)$$

$$\mathrm{APRI}=\frac{\mathrm{AST}\,(\mathrm{U/L})/\mathrm{ULN}}{\mathrm{PLT}\,(10^{9}/\mathrm{L})}\times 100 \qquad (2)$$

$$\mathrm{GPR}=\frac{\mathrm{GGT}\,(\mathrm{U/L})/\mathrm{ULN}}{\mathrm{PLT}\,(10^{9}/\mathrm{L})}\times 100 \qquad (3)$$

The ULN for AST, ALT and GGT is defined as 40 U/L, 40 U/L and 60 U/L, respectively. AST, ALT, GGT, PLT and ULN stand for aspartate aminotransferase, alanine aminotransferase, γ-glutamyl transferase, platelet count and upper limit of normal, respectively.
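These indices are simple arithmetic on routine laboratory values. Purely as an illustration (the study published no code, and the patient values below are hypothetical), a minimal Python sketch:

```python
import numpy as np

# Upper limits of normal used in the study (U/L)
ULN_AST, ULN_GGT = 40.0, 60.0

def fib4(age_years, ast, alt, plt):
    """FIB-4 = (Age x AST) / (PLT x sqrt(ALT)); PLT in 10^9/L."""
    return (age_years * ast) / (plt * np.sqrt(alt))

def apri(ast, plt):
    """APRI = (AST / ULN) / PLT x 100."""
    return (ast / ULN_AST) / plt * 100.0

def gpr(ggt, plt):
    """GPR = (GGT / ULN) / PLT x 100."""
    return (ggt / ULN_GGT) / plt * 100.0

# Hypothetical 50-year-old: AST 80 U/L, ALT 60 U/L, GGT 90 U/L,
# platelets 120 x 10^9/L.
print(fib4(50, 80, 60, 120))  # ~4.30
print(apri(80, 120))          # ~1.67
print(gpr(90, 120))           # ~1.25
```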
2.5. Statistical analysis
2.5.1. Data processing
Statistical analysis was conducted using SPSS 26.0 (SPSS Inc., Chicago, IL) and R 4.3.3 (R Foundation for Statistical Computing, Vienna, Austria). The Kolmogorov–Smirnov test was used to check whether variables followed a normal distribution. Continuous variables were expressed as mean with standard deviation (SD), while categorical variables were presented as numbers (percentages). Student's t-test or the Mann–Whitney U-test was employed to compare continuous variables, and Pearson's chi-squared test or Fisher's exact test was used to compare categorical variables. All tests were two-tailed, with a significance level set at p < .05. Variables with more than 20% missing data were excluded from the analysis, while those with less than 20% missing data were imputed using the missForest method . Multicollinearity among variables was assessed using the variance inflation factor (VIF); a VIF greater than 5 indicated multicollinearity, and a VIF greater than 10 indicated severe multicollinearity .

2.5.2. Feature selection
Feature selection was performed by integrating random forest-recursive feature elimination (RF-RFE) with the least absolute shrinkage and selection operator (LASSO). For the LASSO, implemented with the 'glmnet' package in R, the optimal regularization parameter was chosen via 10-fold cross-validation. This process entails randomly dividing the data into 10 subsets, where each subset serves as the validation set once while the remaining subsets form the training set in each iteration. Various regularization parameters (λ) are tested during training, and model performance is assessed on the validation set. The λ yielding the best performance across all folds is selected as the final parameter for the LASSO. Conversely, RF-RFE achieves a similar objective using the 'caret' package, systematically eliminating features by configuring recursive feature elimination control parameters. This method employs accuracy as the evaluation metric and continues iterating until it reaches a predefined number of features or achieves satisfactory performance.

2.5.3. Development and evaluation of predictive models
The study subjects were randomly allocated to training and internal validation sets at a 7:3 ratio. Machine learning models including LR, artificial neural network (ANN), support vector machine (SVM), random forest (RF), k-nearest neighbors (KNN) and eXtreme Gradient Boosting (XGBoost) were built. Ten-fold cross-validation and grid search techniques were used to optimize each model's parameters; after multiple iterations, the refined parameters were adopted as the optimal configuration for that model. Receiver operating characteristic (ROC) curves were used to evaluate the models' diagnostic accuracy and discriminative power, and the DeLong test was used to compare AUC values. Calibration curves and decision curve analysis were performed to assess the models' predictive capability and clinical applicability. The performance metrics for model evaluation included accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and F1-score. Additionally, SHAP was used to further reveal the impact and contributions of the feature variables .
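The preprocessing and modelling steps in sections 2.5.1–2.5.3 were carried out in SPSS and R; the sketches below are rough Python analogues, not the study's code. First, for the imputation and collinearity screening of section 2.5.1 (missForest is an R package, so random-forest-based iterative imputation is only a stand-in; a pandas DataFrame df of continuous predictors is assumed):

```python
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import IterativeImputer
from statsmodels.stats.outliers_influence import variance_inflation_factor

# missForest-style imputation: iterative imputation with RF regressors.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    random_state=0)
df = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

# VIF screening: > 5 suggests multicollinearity, > 10 severe.
X_with_const = df.assign(const=1.0).to_numpy()  # append intercept column
vif = pd.Series(
    [variance_inflation_factor(X_with_const, i) for i in range(df.shape[1])],
    index=df.columns)
print(vif.sort_values(ascending=False))
```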
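Next, for the feature selection of section 2.5.2 (done in R with glmnet and caret): below, an L1-penalized logistic regression with 10-fold cross-validation stands in for glmnet's binomial LASSO, and RFECV with a random forest stands in for caret's RF-RFE. X, y and feature_names are assumed to exist; none of this is the authors' code.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

# L1-penalized logistic regression with 10-fold CV; features with
# non-zero coefficients at the selected penalty are retained.
Xs = StandardScaler().fit_transform(X)
lasso = LogisticRegressionCV(Cs=20, cv=10, penalty="l1",
                             solver="liblinear", scoring="accuracy",
                             random_state=0).fit(Xs, y)
lasso_keep = {f for f, c in zip(feature_names, lasso.coef_.ravel()) if c != 0}

# Recursive feature elimination with a random forest, scored by accuracy.
rfe = RFECV(RandomForestClassifier(n_estimators=500, random_state=0),
            cv=10, scoring="accuracy").fit(X, y)
rfe_keep = {f for f, kept in zip(feature_names, rfe.support_) if kept}

# The study retained the features selected by both procedures.
selected = sorted(lasso_keep & rfe_keep)
print(selected)
```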
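Finally, for section 2.5.3, a compact sketch of the 7:3 split, grid search with 10-fold cross-validation, and the reported metrics for a single model (RF); the parameter grid is invented for illustration, and X_sel denotes X restricted to the features kept above:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             roc_auc_score)
from sklearn.model_selection import GridSearchCV, train_test_split

# 7:3 split into training and internal validation sets.
X_tr, X_val, y_tr, y_val = train_test_split(
    X_sel, y, test_size=0.3, stratify=y, random_state=0)

# Grid search with 10-fold CV; this grid is illustrative only.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [300, 500], "max_depth": [None, 6, 10]},
    cv=10, scoring="roc_auc").fit(X_tr, y_tr)

best = grid.best_estimator_
prob = best.predict_proba(X_val)[:, 1]
pred = (prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_val, pred).ravel()
print("AUC        :", roc_auc_score(y_val, prob))
print("Accuracy   :", accuracy_score(y_val, pred))
print("Sensitivity:", tp / (tp + fn))
print("Specificity:", tn / (tn + fp))
print("PPV        :", tp / (tp + fp))
print("NPV        :", tn / (tn + fn))
print("F1-score   :", f1_score(y_val, pred))
```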
Results
3.1. Comparison of demographic and clinical characteristics between the LC and non-LC group
This study enrolled 1122 CHB patients for model training and internal validation, among whom 348 were diagnosed with LC. The characteristics of patients with and without LC are presented in . The mean age of the patients was 46.54 (11.83) years, with males comprising 60.8%. The study revealed statistically significant differences between the two groups across various parameters including age, gender, LSM, platelet, leukocyte, erythrocyte, haemoglobin, NLR, PDW, LMR, albumin, AST/ALT, LDH, total bilirubin, prealbumin, direct bilirubin, GGT, total protein, GFR, glucose, cholesterol, HDL-C, LDL-C, triglyceride, apolipoprotein A, apolipoprotein B, bile acids, cholinesterase, HBsAg, HBV DNA, FIB-4, APRI and GPR (p < .05). Compared to CHB patients without LC, those with LC were older (p < .001), had higher LSM (p < .001) and had lower platelet counts (p < .001). The detailed baseline characteristics of the external validation set, which includes 365 CHB patients without cirrhosis and 122 with cirrhosis, are presented in .

3.2. Feature selection
The LASSO algorithm constrains model complexity through regularization, enabling feature selection during the regression model fitting process . The R package performs cross-validation over a specified range of λ values, yielding two key parameters: lambda.min and lambda.1se.
Lambda.min is the λ value that minimizes the cross-validation error, while lambda.1se is the largest λ whose cross-validation error is within one standard error of the minimum. Opting for lambda.min offers the advantage of achieving the best model fit, thereby attaining superior predictive performance. Similarly, the RF-RFE method performs feature selection by employing a strategy based on an RF model, iteratively training the model to assess the importance of each feature and recursively eliminating irrelevant ones . As shown in , we ultimately selected the feature variables common to the LASSO and RF-RFE algorithms to build the predictive model. These variables, listed in sequence, are LSM, platelet, age, leukocyte, glucose, cholinesterase, AST/ALT, apolipoprotein A, apolipoprotein B, HDL-C, GGT, PDW, triglyceride, uric acid, NLR, prealbumin, haemoglobin and erythrocyte.

3.3. Model performance and comparison
Based on the feature selection results, we constructed a traditional LR model and five machine learning models: ANN, RF, SVM, KNN and XGBoost. The ROC curves of these models are illustrated in . In the training set, the RF (AUC = 0.982) and XGBoost (AUC = 0.954) models exhibited superior discriminative ability, followed by SVM (AUC = 0.912), KNN (AUC = 0.898), LR (AUC = 0.875) and ANN (AUC = 0.867). In the internal validation set, the XGBoost (AUC = 0.891) and RF (AUC = 0.889) models exhibited superior discriminative ability, followed by SVM (AUC = 0.872), LR (AUC = 0.861), ANN (AUC = 0.860) and KNN (AUC = 0.842). In the external validation set, the RF model had the highest AUC (0.890), followed by XGBoost (AUC = 0.889), ANN (AUC = 0.881), SVM (AUC = 0.872), LR (AUC = 0.866) and KNN (AUC = 0.853). A detailed comparison of specific performance metrics for each model is presented in . In the training set, the RF model ranks first in accuracy, sensitivity, specificity, PPV, NPV and F1-score. In the internal validation set, the RF model ranks first in accuracy, specificity, PPV and F1-score, while it ranks third in sensitivity and NPV. In the external validation set, the RF model also shows strong performance across all metrics. The confusion matrices for each model in the training and validation sets are provided in . We plotted calibration curves and DCA curves based on the training and validation sets; the former assess the accuracy and reliability of the model predictions, while the latter evaluate the potential clinical utility of the models across different threshold ranges. The calibration curves for the six models are depicted in . Except for the ANN model, the models demonstrate good calibration performance. The DCA curves, as illustrated in , show that the models achieve higher net benefits than the 'all-intervention' or 'no-intervention' strategies within a wide range of thresholds. Across different threshold probability ranges, the RF and XGBoost models demonstrate more pronounced clinical benefits than the other models. Combining these results, the RF model exhibits superior diagnostic capability and clinical utility, making it the optimal model for diagnosing LC in CHB patients.
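The calibration and decision-curve assessments can be reproduced in outline as follows. The paper does not spell out its net-benefit formula, so the standard decision-curve definition is assumed here; prob and y_val carry over from the earlier sketch.

```python
import numpy as np
from sklearn.calibration import calibration_curve

y_arr = np.asarray(y_val)

# Calibration: observed event fraction vs. mean predicted probability.
frac_pos, mean_pred = calibration_curve(y_arr, prob, n_bins=10)

def net_benefit(y_true, p, pt):
    """Net benefit at threshold probability pt (standard DCA definition)."""
    pred = p >= pt
    n = len(y_true)
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - fp / n * pt / (1.0 - pt)

thresholds = np.linspace(0.05, 0.60, 12)
model_nb = [net_benefit(y_arr, prob, t) for t in thresholds]
treat_all = [net_benefit(y_arr, np.ones_like(prob), t) for t in thresholds]
# "Treat none" has net benefit 0 at every threshold; a model is clinically
# useful where its curve lies above both reference strategies.
```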
3.4. Diagnostic efficacy comparison of different indicator combinations based on the RF model
3.4.1. LSM and 17 traditional indicators vs. serum biomarkers
The RF model emerged as the superior choice after evaluating the modelling performance of the various machine learning methods. Building on this, we developed four RF models to evaluate their diagnostic efficacy for LC: a comprehensive model combining LSM with the 17 traditional indicators, and three models based on the APRI, GPR and FIB-4 serological biomarkers, respectively. The results ( ) showed that the comprehensive model combining LSM and the 17 traditional indicators had the best diagnostic performance. Among the serological indicator models, the FIB-4 model performed best, followed by GPR, while APRI had relatively lower diagnostic efficacy.

3.4.2. LSM and 17 traditional indicators vs. LSM-only vs. 17 traditional indicators
After determining the optimal RF model, we conducted modelling for three scenarios: (1) LSM alone, (2) the 17 traditional indicators alone and (3) LSM combined with the traditional indicators. The results of these models are presented in . The incorporation of traditional indicators significantly enhanced model performance, with the combined model achieving a higher AUC than the LSM-only approach. Furthermore, in the absence of LSM, the diagnostic performance of the traditional 17-indicator model was comparable to that of the combined model. This indicates that traditional indicators remain highly effective for diagnosing LC when LSM is unavailable, outperforming the LSM-only model.

3.4.3. FIB-4 vs. LSM-only vs. 17 traditional indicators
To further evaluate the performance of FIB-4, we compared it separately with models using only LSM and only the 17 traditional indicators. In the training set, the FIB-4 model demonstrated superior performance compared to the LSM-only model. In the internal validation set, the LSM-only model exhibited comparable performance to the FIB-4 model. However, the LSM-only model surpassed the FIB-4 model in the external validation set. Notably, the model integrating the 17 traditional indicators consistently outperformed the FIB-4 model in all sets ( ). The evaluation of performance metrics for the diverse indicator combinations is presented in .

3.5. Model interpretation
To gain a more intuitive understanding of the relationship between the model and its feature factors, SHAP was employed to interpret the RF model. presents SHAP values for all features based on the RF model. Feature importance is ranked from top to bottom, with LSM scoring the highest importance, followed by platelet, age, leukocyte, glucose, cholinesterase, AST/ALT, apolipoprotein A, apolipoprotein B, HDL-C, GGT, PDW, triglyceride, uric acid, NLR, prealbumin, haemoglobin and erythrocyte. displays the distribution of SHAP values for each feature across different data points or samples: an instance with a relatively high variable value appears as a yellow point, whereas a relatively low variable value appears as a purple point. SHAP values illustrate the contribution of each feature to the target variable, whether positive or negative. Additionally, two individual SHAP force plots are provided. illustrates the SHAP force plot for diagnosing LC in a patient with CHB, showing that all features support the diagnosis of LC in this patient; the longer the yellow arrow of a feature, the greater its contribution to supporting the diagnosis. displays the SHAP force plot for excluding LC in a patient with CHB, in which glucose supports the diagnosis of LC while the other 17 features do not.
Yellow arrows support the diagnosis of LC in CHB patients; purple arrows do not support the diagnosis of LC in CHB patients.
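The SHAP analysis itself can be reproduced in outline with the shap package; best and feature_names carry over from the earlier sketches, X_val is assumed to be a NumPy array, and the exact return shape of shap_values varies across shap versions, so this is a sketch rather than the authors' code.

```python
import shap

explainer = shap.TreeExplainer(best)        # best: fitted RF classifier
shap_values = explainer.shap_values(X_val)  # per-class list in older shap

# Global importance: beeswarm-style summary for the cirrhosis class.
shap.summary_plot(shap_values[1], X_val, feature_names=feature_names)

# Local explanation for one patient: a force plot, with features pushing
# the prediction toward (yellow) or away from (purple) cirrhosis.
shap.force_plot(explainer.expected_value[1], shap_values[1][0, :],
                X_val[0, :], feature_names=feature_names, matplotlib=True)
```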
Discussion
Cirrhosis ranks among the top ten causes of mortality worldwide, placing a significant burden on public health. In China, HBV infection remains the primary cause of cirrhosis . The 2030 Sustainable Development Agenda has set clear goals for eradicating viral hepatitis, and the World Health Organization has devised strategies to achieve these objectives . However, many CHB patients still lack adequate attention, which increases the risk of developing LC. Therefore, it is necessary to improve the diagnostic accuracy for cirrhosis among individuals with CHB. LSM is a direct method for assessing liver stiffness, proving effective in diagnosing liver fibrosis and cirrhosis in chronic liver diseases. However, the accuracy of LSM can be affected by several factors, including obesity, respiratory status and the skill level of the operator . These factors may introduce biases in LSM measurements, thereby compromising the reliability of the diagnosis. Research has demonstrated that both the stiffness of the surrounding liver tissue and the degree of liver inflammation significantly influence the diagnostic accuracy of LSM . Traditional clinical indicators such as ALT, AST and platelet count provide insights into liver function and inflammatory status; however, their diagnostic efficacy is limited when used in isolation. Therefore, a comprehensive assessment using various indicators is a more reliable approach. Many studies have shown that combining LSM with various clinical indicators and scores significantly improves diagnostic accuracy for assessing liver fibrosis and predicting related complications in patients with chronic liver diseases. Fan et al. conducted a precise evaluation of liver fibrosis in patients with CHB by combining LSM with aMAP scores, which reflect the potential risk of hepatocellular carcinoma development; the joint utilization of aMAP and LSM has shown strong diagnostic efficacy in identifying liver fibrosis among CHB patients. Additionally, Fan et al. developed a machine learning model that integrates clinical indicators with LSM to identify fibrosis associated with metabolic dysfunction-associated steatotic liver disease (MASLD, formerly known as NAFLD). Sanyal et al. demonstrated that combining LSM with clinical biomarkers significantly enhances the accuracy of identifying advanced fibrosis or cirrhosis in patients with non-alcoholic fatty liver disease (NAFLD), whereas using only LSM or FIB-4 may slightly decrease predictive accuracy. Some studies have combined elements of FIB-4 with LSM to predict incident complications of portal hypertension (PH) in individuals with compensated liver disease , and this approach has also demonstrated good predictive ability. In this study, we integrated LSM with 17 traditional indicators to identify LC in CHB patients. The results demonstrated that each model achieved an AUC of over 0.80. In comparison, the diagnostic performance of LSM or the traditional indicators alone was weaker. Notably, even in the absence of LSM, the selected 17 traditional clinical indicators were still effective in identifying LC in CHB patients.
This finding has substantial clinical significance, especially in resource-constrained or equipment-limited environments, where clinicians can utilize these traditional indicators as a dependable approach for initial evaluation. In the field of non-invasive LC diagnosis, serum-based indices such as FIB-4, APRI and GPR are widely used in patients with various types of hepatitis due to their simplicity and low cost. While these methods have proven useful in clinical practice, they have notable limitations in accuracy and sensitivity. We therefore aimed to develop a more accurate and sensitive diagnostic model by integrating LSM with 17 traditional indicators. Previous studies have robustly confirmed the high diagnostic value of FIB-4 in identifying cirrhosis . Similarly, research has shown that GPR outperforms FIB-4 and APRI, particularly in staging liver fibrosis in CHB patients . These findings form a crucial basis for assessing the diagnostic efficacy of integrating LSM with multiple indicators against traditional serum markers like FIB-4, APRI and GPR. Among the three serum-based indices, FIB-4 exhibited the best diagnostic performance for LC, followed by GPR, while APRI showed relatively weaker efficacy. To further evaluate the diagnostic performance of FIB-4, we compared it with LSM alone and with the 17 traditional indicators. The results indicated that in the internal validation, the AUC of FIB-4 was comparable to that of LSM alone; however, in the external validation, LSM alone significantly outperformed FIB-4. This finding aligns with previous studies, further underscoring the unique value of LSM in diagnosing LC. Notably, the 17 traditional indicators yielded an AUC significantly higher than that of LSM alone or FIB-4, demonstrating that the integration of multiple indicators can substantially enhance the accuracy of LC diagnosis. In recent years, machine learning has been widely applied in the field of chronic liver disease. However, due to its 'black-box' nature, its interpretability is relatively poor, making it difficult to explain why specific predictions are made for individual patients . In this study, we utilized SHAP analysis to provide a detailed interpretation of the optimal model. SHAP analysis enables us to assess the contribution of different variables to the model's predictive outcomes. The results revealed that the top three most important variables are LSM, platelet and age. As CHB progresses, liver fibrosis gradually increases, leading to progressive hardening of the liver; higher LSM is associated with an increased risk of cirrhosis . Patients with LC often have PH and hypersplenism, which increase platelet sequestration and destruction in the spleen, leading to thrombocytopenia. If a CHB patient exhibits abnormalities in platelet count or changes in platelet function, LC should be considered as a potential diagnosis . Age is a major risk factor for chronic liver disease, and advanced liver disease is more common in older adults. Even after liver injury in young patients, the compensatory mechanism of hepatocyte activation may become impaired with increasing age, leading to the development of serious liver disease . Additionally, changes in indicators such as leukocyte, glucose, cholinesterase, AST/ALT, apolipoprotein A, apolipoprotein B, HDL-C, GGT, PDW, triglyceride, uric acid, NLR, prealbumin, haemoglobin and erythrocyte can also help in identifying cirrhosis in CHB patients.
Therefore, CHB patients should pay particular attention to fluctuations in these indicators. Early diagnosis and intervention are crucial for CHB patients: CHB is a progressive liver disease, and once it has progressed to cirrhosis, reversing the condition is challenging. The proposed model can enhance the accuracy of diagnosing cirrhosis in CHB patients, aiding clinicians in making informed decisions. Because it can be based on traditional indicators alone, the model remains effective even in resource-limited environments where LSM is not accessible. However, this study has limitations. It is a single-centre study conducted exclusively at a hospital in Dalian, which introduces potential selection bias; external validation from other centres is therefore necessary to improve the model's generalizability. Furthermore, the current model cannot capture dynamic changes in key indicators, which may restrict its application in the dynamic monitoring and management of CHB patients. Future research should incorporate longitudinal data, explore model performance at different time points, and analyse the impact of changes in key indicators over time on outcomes. Longitudinal data would also facilitate the evaluation of the model's dynamic adjustment capability, potentially further enhancing its accuracy and practicality, especially in the long-term management of CHB patients.

Conclusions
This study utilized machine learning methods to select the optimal model from six candidates: the RF model integrating LSM with 17 conventional indicators. This approach significantly improved the diagnostic accuracy of LC in CHB patients. The RF model integrating LSM and the 17 traditional indicators demonstrated superior diagnostic performance compared to traditional serological markers such as FIB-4, APRI and GPR, and it outperformed the use of LSM or the 17 traditional indicators alone. Even in the absence of LSM, the selected 17 traditional clinical indicators effectively identified LC in CHB patients, highlighting their potential utility in resource-limited settings. Despite these significant findings, future research should focus on validating this model in multicentre settings to enhance its generalizability. Additionally, incorporating longitudinal data could further explore the model's application in dynamically assessing disease progression in CHB patients, thereby providing more comprehensive support for clinical decision-making. In summary, the combination of LSM and traditional indicators offers an efficient and reliable tool for diagnosing LC, holding substantial clinical value.
Successful staged surgery for advanced esophageal cancer after conversion pancreatoduodenectomy with pancreaticogastrostomy
d7a6ff57-ff8c-4092-90c8-815ffcccdefa
11923029
Surgical Procedures, Operative[mh]
The number of reports of multiple primary cancers is increasing because of advancements in diagnostic imaging and technology , and the incidence of these cancers in patients with esophageal cancer is reportedly 5–36% [ – ]. The most frequently co-occurring primary cancers are head and neck, gastric, and lung cancers, a pattern explained by field cancerization . However, with a frequency of 0.1–5%, double primary cancer of the esophagus and pancreas is rarely reported . Subtotal esophagectomy (SE) and pancreatoduodenectomy (PD) are widely considered the most invasive and difficult procedures in gastrointestinal surgery. When both are performed in one patient, reconstructing the gastrointestinal tract and preserving the circulation of the reconstructed organs are expected to be particularly difficult because of the altered anatomy. In such cases, staged surgery can be a beneficial option to distribute the surgical burden on the patient . Herein, we report the successful two-staged surgical treatment, as part of multidisciplinary oncological management, of advanced esophageal cancer in a patient who had previously undergone conversion PD with pancreaticogastrostomy for advanced pancreatic head cancer. This very rare two-staged procedure comprised video-assisted thoracoscopic SE followed by pedicled jejunal reconstruction with microvascular anastomosis.

A 60-year-old man was referred from another hospital with a complaint of difficulty in swallowing food. He was diagnosed with clinical stage III (T3, N1, M0) esophageal squamous cell carcinoma in the middle thoracic esophagus according to the TNM classification, 8th edition (Fig. a). The patient had undergone subtotal stomach-preserving PD (SSPPD) for advanced pancreatic cancer at our hospital 3 years earlier. At age 57, he had been diagnosed with clinical stage III (T4, N0, M0) pancreatic cancer in the pancreatic head according to the TNM classification, 8th edition. He had severe obstructive jaundice, so a metallic biliary stent was placed, and endoscopic ultrasonography-guided hepaticogastrostomy was performed first. After about 7 months of chemotherapy with modified folinic acid, 5-fluorouracil, irinotecan, and oxaliplatin (mFOLFIRINOX), the tumor had shrunk and become resectable with preservation of the superior mesenteric vessels (Fig. b). The patient then underwent SSPPD as conversion surgery, with pancreaticogastrostomy chosen to reduce the risk of pancreatic fistula and postoperative bleeding and to ensure long-term pancreatic duct patency (Fig. c). The operative time was 10 h 53 min, and blood loss was 2370 mL. He was discharged on postoperative day 22 with no complications. The final stage of the pancreatic cancer was III (T3, N0 [0/23], M0, Grade 2). Tegafur, gimeracil, and oteracil (TS-1) were administered as postoperative adjuvant chemotherapy for 6 months. Because preoperative chemotherapy is the standard treatment for advanced esophageal cancer , we first chose neoadjuvant chemotherapy for the esophageal cancer detected 3 years after SSPPD (DCF: docetaxel 35 mg/m2, cisplatin 40 mg/m2, fluorouracil 400 mg/m2) . After two courses of neoadjuvant chemotherapy, treatment efficacy was determined to be a partial response for the esophageal cancer (61% reduction) based on the Response Evaluation Criteria in Solid Tumors, version 1.1 (Fig. a) . Surgical planning raised some anatomical challenges.
The gastroduodenal artery, which gives rise to the right gastroepiploic artery, had been ligated during the SSPPD, precluding use of the stomach as a conduit for any type of esophagectomy. Extensive adhesiolysis could also be anticipated because the superior mesenteric vessels, which give rise to the middle colic vessels, had been dissected during the SSPPD. We therefore chose a two-staged surgery to reduce surgical invasiveness without compromising curability. We planned esophagectomy and systematic lymph-node dissection as the first stage and gastrointestinal reconstruction with a pedicled jejunum and microvascular anastomosis as the second stage.

In the first operation, after percutaneous endoscopic gastrostomy (PEG), the patient underwent thoracoscopic SE with mediastinal lymphadenectomy in the prone position. The esophageal cancer showed a tendency to invade the right inferior pulmonary vein, but R0 resection was possible (Fig. b). Subsequently, cervical manipulation was performed in the supine position, along with cervical lymphadenectomy and cervical esophagostomy (Fig. c). The operative time was 5 h 15 min, and blood loss was 30 mL. The postoperative course was generally uneventful, and the patient was discharged home on the 21st postoperative day.

Fifty-six days after the first-stage operation, the second-stage operation was performed. During abdominal manipulation via a median incision, extensive peritoneal adhesions were encountered. Abdominal lymph-node dissection was performed, followed by pedicled jejunal reconstruction. To preserve the circulation of the remnant stomach bearing the pancreaticogastrostomy, the left gastroepiploic artery and short gastric arteries were preserved, whereas abdominal lymphadenectomy was otherwise performed as usual for thoracic esophageal cancer. When the mesentery of the jejunum was unfolded and inspected, the first jejunal artery was found to have been transected during the PD, and the second jejunal artery flowed into the gastrojejunal anastomosis (Fig. a). During reconstruction of the pedicled jejunum, considering the blood flow in the afferent loop, including the previous gastrojejunal bypass and Braun's anastomosis, the left branch of the third jejunal artery was used as a feeder to the afferent loop (Fig. b). The right branch was included on the side of the elevated jejunum, and the mesenteric incision was made as long as possible by sacrificing one arcade to extend the elevation distance (Fig. c). Incisions were made in the anterior chest and left neck to create a subcutaneous tunnel, and the jejunum was elevated approximately 160 cm via the subcutaneous route; the fourth jejunal vein was first anastomosed to the right internal thoracic vein, and then the fifth jejunal artery was anastomosed to the right internal thoracic artery (Fig. a). After additional resection of the cervical esophagus, an end-to-side anastomosis was performed between the esophagus and the elevated jejunum, as was a functional end-to-side anastomosis between the afferent loop and the elevated jejunum. In addition, a side-to-side anastomosis was made between the pedicled jejunum and the remnant stomach to allow a postoperative endoscopic approach to the stomach, and the hepaticogastrostomy tube was removed. The operative time was 8 h 55 min, and blood loss was 210 mL.
The patient developed mild leakage at the esophagojejunal anastomosis postoperatively, which quickly resolved with conservative treatment (Fig. b). The pathological result indicated esophageal squamous cell carcinoma, T3N0M0, final stage III, and the pathological response was Grade 1a. At 8 months after surgery, the patient remains recurrence free from both cancers. Postoperatively, nutritional support using the PEG was continued for 3 months, after which the tube was removed because the patient was able to maintain adequate oral intake. Multiple primary cancer refers to the diagnosis of more than one malignancy in the same patient, either simultaneously or sequentially. In patients with multiple cancers that include esophageal cancer, a highly curative treatment such as that in the present case may be provided by choosing a staged operation, if necessary, in view of the operation time and degree of invasiveness. To achieve a better therapeutic effect, careful preoperative surgical planning is necessary, along with a multidisciplinary treatment plan that includes high-intensity preoperative chemotherapy and nutritional therapy. The overlap between malignant esophageal disease requiring SE and hepatobiliary pancreatic disease requiring PD is very rare. Any additional digestive surgery, especially a resectional procedure, performed after a prior SE or PD can be extremely difficult. Cases of simultaneous or metachronous SE and PD in the same patient are therefore very interesting from both a scientific and a practical viewpoint. Divided chronologically, there are three possible clinical scenarios: the two diseases are noted simultaneously, PD is needed after SE, or SE is needed after PD. For the first clinical scenario, there are only 11 reported cases of esophageal and biliopancreatic tumors occurring simultaneously and requiring distal esophagectomy (DE) or SE together with PD (Table ). All of these cases required very long operative times and tended toward heavy bleeding, indicating that these are highly invasive procedures. When two malignant lesions are noted at the same time, as in this scenario, it is possible to decide preoperatively between simultaneous resection and multi-staged resection, so there are multiple options for the treatment strategy. As shown in Table , five of the 11 cases underwent two-staged surgery, and six underwent simultaneous resection. However, in almost all cases in which simultaneous resection was performed, DE was carried out via a transhiatal approach with Ivor Lewis reconstruction [ – ]. In contrast, staged surgery tended to be chosen for cases requiring a more invasive thoracic approach, such as SE and McKeown esophagectomy [ – ]. Of the five patients who underwent staged surgery, four had SE in the first operation and PD in the second. In some cases, this sequence was chosen because the order was determined by factors affecting tumor progression and prognosis. There are also reports that SE was performed first because of concerns about postoperative complications, especially pancreatic fistula after PD. Another advantage of splitting the thoracic and abdominal procedures into a two-staged surgery is that the first operation can focus on tumor resection and lymph-node dissection, whereas the second operation focuses on gastrointestinal reconstruction along with abdominal dissection.
In the second and third clinical scenarios, the need for SE and PD arises metachronously. The treatment strategy is constrained by the need for atypical procedures with respect to lymph-node dissection and reconstruction, because these two operations significantly alter the normal anatomy of the upper abdomen. The second clinical scenario, in which PD is required after SE, is not rare, and a relatively large number of cases are scattered throughout the literature [ – ]. As shown in Table , in almost all cases of staged resection for simultaneous cancers, the sequence of SE first, followed by PD, was chosen, which confirms that this surgical sequence is feasible. However, the third clinical scenario, in which disease requiring SE is noted after PD, has been reported extremely infrequently, with only five cases, including our case and one case of benign disease (Table ). Only three cases of SE, including the present case, required cervical gastrointestinal anastomosis, which is more invasive than DE, and all were reported in 2019 or later. Two important oncological points are worth mentioning in our case. One is that both the pancreatic head and esophageal cancers were highly advanced, and high-intensity chemotherapy regimens, such as mFOLFIRINOX, TS-1, and DCF, were used in the perioperative period for each disease, with the earlier PD being a conversion surgery, whereas radical lymph-node dissection was required for the later SE. The other is that the PD reconstruction performed prior to SE was a pancreaticogastrostomy; to our knowledge, this combination has never been reported before. If the esophageal cancer is early stage, as reported by Morikawe et al. , one-stage SE and reconstruction using robotic surgery may be possible, or a single resection and reconstruction may be possible with DE for esophagogastric junction cancer. If the cancer is advanced, as in the present case, and long-term chemotherapy is required, we believe that choosing a two-staged surgery improves oncologic safety. Because Child's reconstruction after PD leaves a shortage of jejunum available for reconstruction, the right hemicolon is generally considered suitable for esophageal reconstruction. A colon graft can easily be brought up to the neck without microvascular anastomosis, making it a favorable procedure. However, jejunal interposition offers functional advantages, because it more closely resembles the esophagus, and the versatility of the jejunal flap is useful in solving various complex scenarios. In addition, in our case, extensive colonic adhesions near the hepatic and splenic flexures argued against colonic reconstruction. To the best of our knowledge, this is the first report to describe minimally invasive SE for advanced esophageal squamous cell carcinoma after SSPPD with pancreaticogastrostomy for advanced pancreatic head cancer. The decision on surgical technique in this patient was very difficult, and we planned a two-staged surgery after a thorough preoperative review. In conclusion, the procedure reported here may be recommended as an option for staged resection and reconstruction in patients with advanced esophageal cancer after PD. The use of complex surgery in the treatment of cancer patients has made reoperations challenging.
However, staged surgery options exist, and meticulous preoperative planning and intraoperative judgment can help the surgeon perform an extensive, oncologically sound procedure successfully.
HER2-low breast cancer shows a lower immune response compared to HER2-negative cases
Breast cancer treatment decisions are based, among other patient and tumor characteristics, on the expression of the estrogen receptor (ER), the progesterone receptor (PR) and the human epidermal growth factor receptor 2 (HER2). HER2 is a protein encoded by the erythroblastic oncogene B (ERBB2) gene. Amplification of this oncogene, leading to overexpression of the HER2 protein, plays a role in the development of different breast cancer subtypes by promoting the growth of cancer cells. In daily clinical practice, the HER2 status of breast cancer is classified dichotomously as either negative or positive to select patients for HER2-targeted therapy. This is usually determined via immunohistochemistry (IHC) and in situ hybridization (ISH). IHC protein expression is classified as negative (0), weak or partial (1+), moderate (2+) or strongly positive (3+) according to international guidelines. Cases with negative (0) or weak expression (1+) are considered HER2-negative (HER2-) and patients with strong expression (3+) are considered HER2-positive (HER2+). Cases with moderate protein expression (2+) need an additional reflex test, such as an ISH assay, to differentiate between HER2- (ISH without amplification) and HER2+ (ISH with amplification). Currently, HER2+ patients are eligible for targeted treatment against the HER2 receptor, while patients without HER2 amplification will not receive HER2 blockade. With the introduction of novel HER2-targeting agents in recent years, including antibody–drug conjugates, the clinical relevance of the HER2 classification system is shifting, since patients with low levels of HER2 expression (HER2low) could also derive therapeutic benefit from these agents. Antibody–drug conjugates are delivered inside cancer cells by targeting the few HER2 receptors on the cells. An advantage of these drugs is their high drug-to-antibody ratio: multiple cytotoxic agents are bound to one antibody molecule. The payload is membrane permeable, making it possible to release the cytotoxic agent and kill adjacent cells that do not express the HER2 receptor via the bystander effect. A phase Ib study by Modi et al. showed that 37% of patients with HER2low metastatic breast cancer had a partial response after treatment with trastuzumab-deruxtecan. The HER2low category represents tumors with an IHC score of 1+ or 2+ without amplification. According to this definition, HER2- includes only those patients with an IHC score of 0. In HER2low breast cancer cases, the number of receptors is low compared with cases with HER2 amplification. Overall, it is estimated that around 55% of all breast cancers are HER2low. Hence, it is important to gain more insight into the biology of cancers with low HER2 expression, since this subgroup might be of clinical relevance. Since HER2low is a relatively new term, data on the clinicopathologic characteristics and prognostic impact of HER2low breast cancer are limited. Previous research indicated that HER2low tumors are more often ER+ and that they tend to have a higher histologic grade and a higher proliferation rate compared with HER2- tumors. Study results on the prognostic impact are thus far inconsistent. To further understand the biology of the HER2low group, various biomarkers need to be analyzed. So far, an analysis of a PAM50 assay (including 50 breast cancer-related genes) of 3600 patients by Schettini et al.
elucidated that hormone receptor-positive/HER2low tumors had a higher ERBB2 expression level than HER2- tumors. Several studies have shown that stromal tumor-infiltrating lymphocytes (TILs) have prognostic and predictive value in breast cancer, with a higher density of TILs correlating with a better outcome. High numbers of TILs are associated with triple-negative and HER2+ breast cancer. However, it is unknown whether there is a difference in the density of TILs between HER2- and HER2low breast cancer. Therefore, the aim of this study was to analyze whether there is a relation between several clinicopathologic characteristics, including TILs, a large gene-expression dataset and HER2 expression (HER2- versus HER2low), stratified for ER status.
General patient and tumor characteristics
From the 720 tumor samples within the cohort, cases with missing (invasive tumor) tissue or missing hormone receptor data (n = 101) were excluded. Furthermore, HER2+ samples (n = 90) were excluded, resulting in a final dataset of 529 samples with either HER2- or HER2low breast cancer. Several patient and tumor characteristics of these 529 patients were analyzed and compared between the HER2- and HER2low cancers (Table ). Overall, this cohort included 305 patients with ER+ tumors (58%) and 224 with ER- tumors (42%). Most tumors were HER2- (n = 429, 81%); the remaining 100 tumors (19%) were HER2low. In total, 98 patients received adjuvant chemotherapy (e.g., anthracyclines or anthracycline-containing therapy) and 66 patients received adjuvant hormonal therapy (e.g., tamoxifen, LHRH/tamoxifen) according to historical procedures in the Netherlands.
Clinicopathologic differences between HER2- and HER2low breast cancer
The median number of HER2 copies (n = 2), as determined by SISH, was similar in both groups. However, HER2low tumors had a significantly higher HER2 copy number than HER2- tumors (P ≤ 0.001; Fig. A), in both ER+ and ER- tumors. In the ER+ cohort, there was no significant association between any of the clinicopathologic features and HER2 status, except for the HER2 copy numbers. The level of ER expression by immunohistochemistry (% of positive tumor cells) did not differ between ER+HER2- and ER+HER2low cases (P = 0.54; t-test). Within the ER- cohort, HER2low breast cancer was significantly associated with increased regional nodal positivity, a lower density of TILs and lower expression of Ki-67 and epidermal growth factor receptor (EGFR) compared with HER2- cases (P < 0.001, P = 0.034, P = 0.031 and P = 0.046, respectively; Table ). To analyze which of these four characteristics are independently associated with HER2 status, a multivariate logistic regression analysis was performed for the ER- cohort. After multivariate analysis, only the density of TILs remained significantly associated with HER2low status (P = 0.035).
Gene expression differences between HER2- and HER2low breast cancer
Overall, HER2low cases had higher mRNA expression of ERBB2 than HER2- cases (P < 0.001; Fig. B). There was no significant difference in the expression of ER-pathway-related genes, neither in the ER+ cohort (ER+HER2- versus ER+HER2low) nor in the ER- cohort (ER-HER2- versus ER-HER2low). From the 5000 most variably expressed genes, five probe-sets (4 unique genes) showed significantly higher gene-expression levels (FDR P < 0.05) in the HER2low group compared with the HER2- group within the ER+ cohort (Fig. ).
The four genes were ERBB2, Era Like 12S Mitochondrial RRNA Chaperone 1 (ERAL1), Mediator Complex Subunit 24 (MED24) and Post-GPI Attachment to Proteins Phospholipase 3 (PGAP3). The higher expression was also seen visually within the ER- cohort, although none of these four genes showed a statistically significant difference in expression between HER2- and HER2low cancers. Interdependence of these genes was analyzed by assessing their chromosomal locations. Both MED24 and PGAP3 are located on the same amplicon as ERBB2. PGAP3 (chr17:39,671,122–39,688,057) is located directly 3' of ERBB2 (chr17:39,700,064–39,728,658), and both lie on the 17q12 cytogenetic band. MED24 (chr17:40,019,104–40,054,408) is located around 400 kb 5' of ERBB2 on cytogenetic band 17q21.1. ERAL1 (chr17:28,855,016–28,861,061) is located on cytogenetic band 17q11.2, 3' of ERBB2. Furthermore, Pearson correlation coefficients were calculated to evaluate the relation between ERBB2 and the other genes. The ERBB2/ERAL1 correlation was 0.370 for ER+ cases and 0.294 for ER- cases. The correlation between ERBB2 and MED24 was 0.464 and 0.260 for ER+ and ER-, respectively. For PGAP3, two significant probe-sets were found, and the correlation was 0.606 (ER+) and 0.552 (ER-) for ERBB2/PGAP3 (55616_at), and 0.688 (ER+) and 0.687 (ER-) for ERBB2/PGAP3 (221811_at). Additionally, there was a trend for the level of ERBB2 expression to increase with the number of HER2 copies (determined by SISH) (P < 0.001; Fig. C). However, no statistical significance was found between the mRNA ERBB2 expression levels and the HER2 copy numbers (P = 0.573).
Functional pathway enrichment in HER2- and HER2low breast cancer
To gain more insight into the biological difference between HER2- and HER2low breast cancer, a more global gene-expression analysis was performed. For this analysis, all differentially expressed genes with an uncorrected univariate p-value below 0.05 were collected and analyzed for enriched shared biology. In total, 1197 genes for the ER+ cohort and 977 genes for the ER- cohort differed significantly between the HER2 groups. Functional annotation clustering of the significant genes within the ER+ cohort revealed an immune-related cluster with an enrichment score of 10.93. Within this cluster, a gene ontology biological process pathway related to the adaptive immune response was found (P = 6.8E-10), involving 31 genes (Supplementary Table ). Furthermore, another immune-related gene ontology biological process pathway was detected, independent of the immune cluster. This pathway is named immune response and involved 69 genes (P = 1.5E-15; Supplementary Table ). The expression levels of these genes were higher in the HER2- group than in the HER2low group. Some potentially interesting genes retrieved from these pathways are known to contribute to increased immunity, for example by regulating T-cell activation, improving T-cell proliferation or supporting T-cell-mediated killing. Within the ER- cohort, no enriched immunity pathways were detected.
Survival data of patients with HER2- versus HER2low breast cancer
From the dataset of 529 patients, additional patients were excluded based on missing clinical data (n = 44), receipt of adjuvant systemic therapy (n = 115), or positive nodal or distant metastasis status at diagnosis (n = 52), leaving 318 patients for survival analysis.
Median follow-up was 82 months for the ER+ cases and 64 months for the ER- cases. Regarding overall survival, the Kaplan–Meier survival curves were not significantly different for patients with HER2- and HER2low breast cancer, neither within the ER+ cohort nor in the ER- cohort (P = 0.295 and P = 0.618, respectively; Fig. ). Furthermore, there were no differences regarding disease-free survival (P = 0.664 for ER+ cases and P = 0.391 for ER- cases) or metastasis-free survival (P = 0.615 for ER+ cases and P = 0.941 for ER- cases) between patients with HER2- and HER2low breast cancer (Kaplan–Meier curves not shown).
With the introduction of novel antibody–drug conjugates, which can also target tumors with low levels of HER2 expression, there is a need for a more granular HER2 classification system instead of a dichotomous division into negative or positive. We aimed to analyze whether HER2low primary breast cancers differ from HER2- tumors with respect to clinicopathologic characteristics, gene expression and survival. In this study, we excluded HER2+ cases, since this breast cancer subtype is already known to have a distinct biology. Overall, most breast cancers in our study were scored as HER2- (n = 429); the remaining patients (n = 100) were HER2low. This finding is in contrast with previous literature reporting that around 55% of all breast cancers are HER2low. This difference could be explained by the relatively large proportion of ER- cases in our study and/or the use of breast cancer tissue from a historical cohort. HER2low breast cancers showed a significantly higher number of HER2 copies and higher ERBB2 gene expression compared with HER2- tumors, which is in line with previous studies. Within the ER+ cohort (n = 305), gene-expression analyses showed that HER2low tumors had several differentially expressed genes compared with HER2- cases. These included ERBB2, ERAL1, MED24 and PGAP3, which were all expressed at higher levels in the HER2low group. Based on genomic co-localization and the correlation coefficients of ERBB2, MED24 and PGAP3, the higher expression levels are likely the result of amplification of a common chromosomal region. For ERAL1, the expression levels show little support for this co-amplification, indicating that it is likely regulated otherwise. Previous research has shown that most of these genes are linked to worse prognosis for breast cancer patients. MED24 has been reported to have a function in the growth of breast cancer cells. PGAP3 was identified as a promoter of growth and metastasis in triple-negative breast cancer. ERAL1 is a mitochondrial RNA chaperone which has not previously been associated with breast cancer prognosis. However, ERAL1 is involved in the formation of the 28S small mitochondrial ribosomal protein (MRPS28), and that protein has been shown to be involved in breast cancer proliferation and metastasis. In this ER+ cohort, pathway analyses additionally showed enrichment of immune-related genes in the HER2- group compared with the HER2low group. In ER- cases (n = 224), HER2low status was significantly associated with increased regional nodal positivity, a lower density of TILs and lower protein expression of Ki-67 and EGFR compared with HER2- cases. After multivariate analysis, only the density of TILs remained significantly associated with HER2low status. This suggests that ER-/HER2- tumors have a more basal-like profile than ER-/HER2low tumors. In line with the ER+ cohort, gene-expression analyses of the ER- cohort also showed a trend toward higher expression of ERAL1, MED24 and PGAP3 in the HER2low cases, although this was not significant. No enriched immunity pathways were detected within the ER- cohort. Literature regarding HER2low in relation to gene expression is very scarce. Schettini et al.
analyzed a set of 55 genes, of which 34 showed a significant difference between HER2low and HER2- breast cancer within the ER+ cohort. In our study, no genes (after correction for multiple testing) were found to be statistically significant within the ER- cohort, which is in line with Schettini et al., who also did not find any significant gene differences within the ER- group. Furthermore, HER2- tumors were more enriched in immune-related genes, which is concordant with the study of Schettini et al. reporting that HER2- tumors are more basal-like, using the PAM50 assay, than HER2low tumors. The higher expression of EGFR in ER-HER2- tumors compared with ER-HER2low tumors in the univariate analysis of our study also supports a more basal-like, TNBC-like aspect of HER2- cases. Overall, our results suggest that HER2low breast cancer is associated with a limited immune response compared with HER2- breast cancer, as shown by the gene-expression data of the ER+ cohort and the TIL score of the ER- cohort. Several previous studies have reported that high levels of TILs are associated with a higher probability of treatment response and an improved outcome. In our study, no difference in survival was observed between these two HER2 groups, neither in the ER+ nor in the ER- cohort. Previous literature is inconsistent with respect to outcome. In line with our findings, various studies reported no difference in overall survival between HER2- and HER2low breast cancer patients. Denkert et al. reported that ER+/HER2low breast cancers have a lower pathological complete response rate after neoadjuvant chemotherapy than ER+/HER2- cancers. Furthermore, they concluded that patients with HER2low breast cancer have a better prognosis than HER2- cases in the ER- cohort. Other studies have reported that HER2low is associated with worse prognosis in ER+ breast cancer. This is the first study to analyze the HER2low status of breast cancer in relation to the density of TILs and a large gene-expression dataset. Furthermore, we used a well-documented cohort of patients. Since ER expression is regarded as a key factor for tumor biology and outcome, we stratified for ER status. However, this study also has some limitations. First, the dataset was based on a historical multicenter cohort of patients for which gene-expression data were generated for different research questions. Although it is a cohort with a long follow-up period, there was a relatively large proportion of dropout, resulting in a relatively short median follow-up time. Especially for the ER+ cohort, it is known that a longer follow-up time is needed to detect disease recurrence. In addition, the age of the samples could have influenced tissue quality and thus the HER2 protein expression levels, as reflected in the relatively low proportion of HER2low cases in our series. However, this would rather result in an underestimation of our findings, since the HER2- cases might include some HER2low cases whose HER2 protein expression levels have decreased slightly. Finally, scoring of HER2 status and density of TILs was performed according to international guidelines, but inter-observer variability has been reported. In addition, HER2 expression was scored on TMAs, so the heterogeneity of the tumors could not be completely depicted. Future mechanistic studies could elucidate the mechanism of poor immune infiltration into HER2low tumors.
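To make the multivariate step reported above concrete, the sketch below shows how HER2 status could be regressed on the four univariately significant features in the ER- cohort. This is an illustrative re-expression in Python, not the authors' code (the analysis was run in SPSS); the data frame and column names ("tils", "nodal_positive", "ki67", "egfr", "her2_low") are hypothetical.

import pandas as pd
import statsmodels.api as sm

def her2low_logit(df: pd.DataFrame):
    # Covariates: the four features that were univariately significant in the
    # ER- cohort; the outcome is coded 0 = HER2-, 1 = HER2low.
    X = sm.add_constant(df[["tils", "nodal_positive", "ki67", "egfr"]])
    y = df["her2_low"]
    return sm.Logit(y, X).fit(disp=False)

# Usage with a hypothetical data frame of ER- cases:
# result = her2low_logit(er_negative_cases)
# print(result.summary())  # per-covariate coefficients and Wald p-values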
In summary, in the ER+ cohort, we observed that HER2low tumors had a different gene-expression pattern compared with HER2- cancers, including genes that are associated with depletion of immunity. In ER- cases, HER2low cancers had a lower density of TILs than HER2- cases. Although immunity is regarded as an important prognostic factor in breast cancer, we did not observe a difference in survival between HER2low and HER2- patients, neither in the ER+ nor in the ER- cohort. Future research based on large, more recent cohorts of patients could further elucidate the clinical relevance of HER2low in relation to immunity.
General patient and tumor characteristics
This retrospective study was based on a well-documented cohort of primary breast cancer patients for whom cancer tissues were available on tissue microarrays and for whom Affymetrix data were available. Patients were diagnosed between 1982 and 2003 in multiple centers across the Netherlands. Coded leftover patient material was used in accordance with the Code of Conduct of the Federation of Medical Scientific Societies in the Netherlands. According to these national guidelines, this work was not subject to the Medical Research Involving Human Subjects Act (WMO; METC 02.593). In total, formalin-fixed, paraffin-embedded breast cancer tissue from 720 tumors was analyzed. Clinical data and tumor characteristics were partly collected from medical charts and pathology reports. These included age, menopausal status, pTNM classification, treatment and outcome data (overall survival, disease-free survival and metastasis-free survival). Central pathology review of whole sections was performed to assess histologic grade, histologic subtype, vascular invasion, mitotic activity index and density of TILs. Histologic grading was determined using the Nottingham modified Bloom and Richardson scoring system. The percentage of stromal TILs was scored on hematoxylin and eosin-stained whole slides according to the recommendations of the International TILs Working Group. Figure illustrates examples of breast cancers with a low, intermediate or high density of TILs.
Tissue microarray scoring
Breast cancer tissue from all patients was available in triplicate on tissue microarrays. Sections of 4 µm were cut (Micron HM340E) and mounted on Superfrost Plus slides (Menzel-Glaser, Braunschweig, Germany). Protein expression of ER, PR, HER2, Ki-67 and EGFR on invasive tumor cells was scored manually by two observers in a central lab. The ER and PR status was reported as negative or positive, using a cut-off of 10% stained cells, according to the Dutch treatment guidelines. Ki-67 expression for this cohort was categorized as low (≤ 10%), intermediate (11–25%) or high (≥ 26%). For this study, tissue microarrays were immunohistochemically stained with the 4B5 anti-HER-2/neu antibody (Ventana BenchMark ULTRA, ROCHE), using cell lines and human tissues as internal controls. The NanoZoomer 2.0-HT (Hamamatsu Photonics K.K.) was used to digitize the slides. HER2 status was scored according to the most recent guidelines from the American Society of Clinical Oncology/College of American Pathologists (ASCO/CAP). HER2- breast carcinomas were defined by an IHC score of 0, whereas HER2low carcinomas were defined by an IHC score of 1+ or 2+ without amplification (Fig. ). HER2+ was assigned according to international guidelines as IHC 2+ with amplification or IHC 3+.
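The grouping rules in this paragraph, together with the SISH amplification cut-off described in the next paragraph (≥ 6 HER2 copies per cell) and the rule for combining triplicate cores, reduce to a small decision function. The sketch below is our own illustrative encoding in Python, not part of the study:

from typing import Optional

def classify_her2(ihc_score: int, her2_copies: Optional[float] = None) -> str:
    # IHC 0 -> HER2-; IHC 1+ or non-amplified 2+ -> HER2low;
    # amplified 2+ or 3+ -> HER2+. Amplification: >= 6 copies/cell by SISH.
    if ihc_score == 0:
        return "HER2-"
    if ihc_score == 1:
        return "HER2low"
    if ihc_score == 2:
        if her2_copies is None:
            raise ValueError("IHC 2+ requires a reflex ISH copy number")
        return "HER2+" if her2_copies >= 6 else "HER2low"
    if ihc_score == 3:
        return "HER2+"
    raise ValueError(f"invalid IHC score: {ihc_score}")

def combine_cores(ihc_scores: list) -> int:
    # Triplicate TMA cores were combined by taking the core with the
    # highest level of expression in case of discrepancy.
    return max(ihc_scores)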
HER2 silver-enhanced in situ hybridization (SISH) was performed using the VENTANA HER2 Dual ISH DNA Probe Cocktail assay (Ventana BenchMark ULTRA, ROCHE). A HER2 copy number of < 6 per cell was considered HER2 non-amplified, and ≥ 6 copies per cell was considered HER2-amplified. The triplicate IHC and ISH scores were combined into a final HER2 score, using the core with the highest level of expression in case of discrepancy. Patients without assessable invasive tumor tissue were excluded.
Statistical analysis
The statistical analysis was performed using IBM SPSS Statistics version 26. Pearson chi-square or Fisher's exact tests were used to investigate differences between HER2- and HER2low cases for the categorical variables, stratified for ER status. For the continuous variables, a Mann–Whitney U-test was performed. A linear trend test was performed for categorical variables with a minimum of four categories. Multivariate logistic regression analysis was performed to analyze whether relevant, univariately significant variables were independently associated with HER2 status. For survival analysis, overall survival, disease-free survival and metastasis-free survival were used as endpoints. Overall survival was defined as the time from diagnosis to the date of death or the last date the patient was known to be alive. Disease-free survival was defined as the time from diagnosis to the date of disease recurrence, last follow-up or death (of any cause). Disease recurrence was defined by a positive biopsy within either the ipsilateral breast or axillary nodes. Metastasis-free survival was defined as the time from diagnosis to the date of distant disease recurrence, last follow-up or death. Kaplan–Meier curves of the survival data were visualized using the Mantel–Cox method. Differences in outcome between the HER2 subgroups were evaluated by log-rank tests, where a two-sided p-value below 0.05 was considered statistically significant. Gene-expression levels were derived from existing in-house data; samples were run on both U133A and HGU133Plus2.0 chips. The samples have been described previously and are available via the Gene Expression Omnibus ( http://www.ncbi.nlm.nih.gov/geo/ ) with accession codes GSE2034, GSE5327, GSE12276 and GSE27830. Raw data were normalized using fRMA, and samples from both platforms were combined using probe-sets common to both chip types. ComBat was used to correct for batch effects resulting from the use of data from two different platforms. Next, the top 5000 most variable (highest standard deviation) genes were used for further analysis. Differentially expressed genes were identified using the non-parametric Mann–Whitney U-test in STATA v14 (StataCorp, College Station, TX, USA). Chromosome locations of the genes were retrieved via http://genome.ucsc.edu and cytogenetic band locations were retrieved from www.genecards.org . An overview of the expression of the genes is presented using the heatmapper.ca expression tool. Furthermore, a functional pathway analysis was performed using DAVID Bioinformatics Resources 6.8 to investigate the global role of differentially expressed genes in HER2low breast cancer, analyzed for enriched shared biology.
Ethics approval and consent to participate
This work was approved, and the need for informed consent was waived, by the Medical Ethics Committee of the Erasmus MC (MEC 02.953). The committee ruled that the rules laid down in the Medical Research Involving Human Subjects Act do not apply to this work; therefore, there was no need for informed consent. The study was performed in accordance with the Declaration of Helsinki.
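As an illustration of the two analysis steps just described (the per-gene Mann–Whitney screen on the 5000 most variable probe-sets with multiple-testing correction, and the Kaplan–Meier/log-rank survival comparison), a minimal Python sketch follows. The original analyses were run in SPSS and STATA; the array shapes and variable names here are assumptions, and the FDR step uses Benjamini–Hochberg as one common choice of correction.

import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def de_screen(expr: np.ndarray, her2_low: np.ndarray, top_n: int = 5000):
    # expr: probe-sets x samples matrix; her2_low: boolean mask per sample.
    # Keep the top_n most variable probe-sets (highest standard deviation).
    keep = np.argsort(expr.std(axis=1))[::-1][:top_n]
    pvals = np.array([
        mannwhitneyu(expr[i, her2_low], expr[i, ~her2_low]).pvalue
        for i in keep
    ])
    # FDR correction (Benjamini-Hochberg), significance at corrected P < 0.05.
    reject, fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return keep[reject], fdr[reject]

def survival_compare(time: np.ndarray, event: np.ndarray, her2_low: np.ndarray):
    # One Kaplan-Meier curve per HER2 group plus a log-rank test p-value.
    for label, mask in (("HER2-", ~her2_low), ("HER2low", her2_low)):
        KaplanMeierFitter().fit(time[mask], event[mask], label=label).plot()
    return logrank_test(
        time[her2_low], time[~her2_low],
        event_observed_A=event[her2_low],
        event_observed_B=event[~her2_low],
    ).p_value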
Education in focus: Significant improvements in student learning and satisfaction with ophthalmology teaching delivered using a blended learning approach
The necessities brought about by the COVID-19 pandemic required an inevitable shift towards online and distance learning to address the challenges posed by government directives and the need for social distancing while continuing health professions education (HPE). Reviews that investigated developments in medical education in response to the COVID-19 pandemic highlighted that, in the immediate response, the majority of interventions described a pivot to online learning. However, the need for continued clinical contact remained, and it was often replaced in curricula with remote, distance or telehealth formats. In the Best Evidence in Medical Education (BEME) rapid review, BEME 63, the authors identified a significant focus on sharing experiences rather than robust evaluation or research enquiry, and fewer than 50% of the studies reviewed described educational outcomes. BEME Guide no. 64 acknowledged that online learning will undoubtedly continue to be a feature of medical education long after the pandemic, but encouraged educators to make a deliberate and thoughtful selection of strategies and to consider the differential impacts of these approaches. BEME Guide no. 71 recognised the limitations of remote learning, including the loss of social interactions, lack of hands-on experiences and challenges with technology; nevertheless, the authors recommended its continued use in higher education because of the flexibility it offers and highlighted practical advice to optimize the online environment. Among the educational interventions adopted in response to the COVID-19 pandemic, the flipped classroom (FC) has been reported to be efficacious in responding to these extraordinary challenges in medical education. The FC, a form of blended learning and an instructional strategy, seeks to enhance student engagement and learning. It involves students completing readings autonomously outside of scheduled class time and participating in live problem-solving activities during class time. In undergraduate ophthalmology education, studies by Diel et al. found high levels of satisfaction with an FC approach and reported no changes in knowledge acquisition, and a reduction in students' pressure to perform, course burden and anxiety, along with increased confidence in triaging common eye complaints. In our initial educational response to the pandemic at our university, we implemented a remote online flipped classroom (OFC) approach to facilitate delivery of an ophthalmology clinical attachment for medical students, and evaluated students' perceptions and satisfaction with the Course Evaluation Questionnaire (CEQ). The CEQ is used globally to determine undergraduate student satisfaction and to identify areas for improvement. There is substantial evidence supporting its reliability and validity with undergraduate and medical students, and it has been utilised in ophthalmology interventions evaluating the FC. However, the efficacy of the FC for ophthalmology education in a completely virtual setting is still insufficiently measured. We investigated student satisfaction using the CEQ following the introduction of a remote online FC, necessitated by the COVID-19 pandemic, compared with our usual delivery format, which provided a blend of didactic lectures and clinical skills sessions. Our results contradicted existing literature on the effectiveness of a flipped-classroom approach in delivering ophthalmology content to medical students.
Previous studies indicated a preference among students for the flipped classroom over the traditional lecture method, citing its benefits in developing problem-solving, creative thinking and teamwork skills. We identified significant levels of dissatisfaction with problem solving, communication, staff motivation and provision of feedback. As the constraints imposed by government directives and the necessity for social distancing eased, we sought to re-design the ophthalmology module to incorporate learnings from our previous findings and the evidence-based recommendations from BEME reviews 63, 64, 69 and 71. For subsequent iterations of the ophthalmology module, an educational strategy was adopted that combined online learning and in-person seminars with practical patient-centred sessions. It was anticipated that this blended learning approach would result in improved levels of student satisfaction and knowledge gain. In this study, we investigated how a blend of traditional classroom-based and remote FC learning approaches, combined with in-person practical elements including direct patient contact, would impact student satisfaction as measured by the CEQ, and compared these results with those previously reported for the complete OFC delivery of ophthalmology content.
Study populations
Participants in this study were 4th-year senior cycle medical students enrolled at RCSI on an ophthalmology clinical attachment that takes place 20 times during the academic year. All students undertaking the ophthalmology clinical attachment module were invited to participate. This study was reviewed and approved by the Research and Ethics Committee (REC) of RCSI, University of Medicine and Health Sciences, and was conducted according to the principles expressed in the Declaration of Helsinki. Written informed consent was obtained from all participants (REC 202006015).
Group 1: Online flipped classroom (OFC) group (2019/2020 ophthalmology module)
As a result of the global pandemic, an online distance module was devised for students participating in the ophthalmology clinical attachment in the 2019/2020 academic year. As previously described, these students (total n = 114) engaged in a curriculum solely dependent on an online flipped classroom (OFC) and, for the purposes of the current study, functioned as our comparison group. Recruitment period for this study cohort: 19th October 2020 to 18th December 2020.
Group 2: Blended learning (BL) group (2020/2021 ophthalmology module)
The blended learning (BL) delivery of the 4th-year ophthalmology clinical attachment began on 5th October 2020 in the Royal Victoria Eye and Ear Hospital (RVEEH) and finished on 26th April 2021. Recruitment period for this study cohort: 5th April 2021 to 28th May 2021. Students were assigned into groups (10–12 in each) by the SARA (Student, Academic & Regulatory Affairs) office, RCSI, before commencing their clinical attachment week. BL students attended on-site teaching sessions, remote online FC sessions and in-person patient-centred clinical skills teaching sessions.
Module description
The aims of the ophthalmology module were to enable the students to develop the clinical knowledge and skills to assess any patient presenting with an eye disorder and to formulate an appropriate differential diagnosis and management or referral plan.
Our objective was to ensure constructive alignment of the module with the existing learning outcomes despite the change in delivery, whilst responding to the changes precipitated by the COVID-19 pandemic regarding social distancing.
On-site in-person teaching: students attended sessions on the anatomy of the eye, history taking in ophthalmology, patient-based teaching and clinical skills, each consisting of a 60-minute small-group teaching session (face-to-face lecture with a 15-minute question and answer session) led by an ophthalmologist.
Online flipped classroom: students were asked to watch pre-recorded video lectures (cataract, glaucoma, diabetic retinopathy, AMD) online in advance of a one-hour interactive session led by an ophthalmologist. This was supplemented by slide sets of the didactic lecture material without audio. After the pre-class lecture, students attended a synchronous, online, live interactive session on the same topic. These Blackboard Collaborate (BBc) sessions included problem solving, clinical vignettes and MCQs relating to the recorded lecture. Additional BBc sessions covered three other key topics: red eye, sudden loss of vision and change in appearance. Facilitators prepared clinical cases and related MCQs that addressed learning outcomes and promoted engagement during the interactive online session. The facilitator encouraged problem solving using the poll feature of BBc, which promoted both discussion and active learning.
Clinical skills: students attended in-person practical clinical skills sessions covering examination of the following: Snellen visual acuity, direct ophthalmoscopy, eye movements, pupil reactions, visual fields to confrontation, cover test for strabismus and external eye examination with a pen torch.
Patient-centred teaching: students engaged in in-person, patient-led practical teaching sessions, which consisted of taking patient histories, reading patient charts, examining patients and a discussion of the outcomes of the consultation facilitated by the supervising clinical tutor. Students also attended outpatient clinics as observers, listening to patient histories, examining clinical signs and discussing patient cases with the attending doctors. Knowledge was tested upon completion of the module via a multiple-choice question (MCQ) exam. Clinical competency (skills) was assessed by a practical examination of fundoscopy skills.
Digital training
Blackboard Collaborate (BBc) has previously been shown to have utility as a platform to support nursing students' placement learning. Several studies have highlighted the importance of training to develop students' digital literacy and facilitate engagement with this form of technology. To support this, guides to the use of BBc were prepared and provided to the students ahead of the online module. Digital training was provided to ophthalmology faculty, along with support guides for the use of the BBc platform.
Instrument and data collection
To investigate student perceptions and satisfaction, all students (n = 257) were invited to complete the CEQ36 online via Survey Monkey. Each item of the questionnaire is answered using a standard 5-point Likert scale, with levels of agreement ranging from "strongly agree" (scoring a "1") to "strongly disagree" (scoring a "5"). The CEQ36 measures six constructs established as important learning environment features within the context of higher education, and these are presented in .
In addition to the CEQ36 data, final anonymised MCQ exam scores were obtained for each student in the study. Statistical analysis Descriptive statistics were used to describe the characteristics of the two groups (OFC (n = 28) vs BL (n = 59)) and Chi-square test/Fisher exact test, or independent samples t-test used to explore differences between the groups. The scores of the MCQ final exam were compared using independent samples t test. The questionnaire data given to students were analysed using Mann-Whitney-U tests, to explore potential differences between the groups. During analysis responses for the ‘agree’ and ‘strongly agree’ categories were combined, similarly responses for ‘disagree’ and ‘strongly disagree’ category were combined. All statistical analyses were performed in GraphPad Prism V5 or Stata v13. Participants in this study were 4 th year senior cycle medical students enrolled in RCSI on an Ophthalmology clinical attachment that takes place 20 times during the academic year. All students undertaking the ophthalmology clinical attachment module were invited to participate. This study was reviewed and approved by the Research and Ethics Committee (REC) of the RCSI, University of Medicine and Health Sciences and was conducted according to the principles expressed in the Declaration of Helsinki. Written informed consent was obtained from all participants (REC 202006015). Group 1: Online flipped classroom (OFC) group (2019/2020 ophthalmology module) As a result of the global pandemic an online distance module was devised for students participating in the ophthalmology clinical attachment for the 2019/2020 academic year. As previously described these students (total n = 114) engaged in a curriculum solely dependent on an online flipped classroom (OFC) and for the purposes of the current study functioned as our comparison group . Recruitment period for this study cohort: 19 th October 2020 to 18 th December 2020. Group 2: Blended Learning (BL) group (2020/2021 ophthalmology module) The Blended learning (BL) of the 4 th year ophthalmology clinical attachment began on the 5 th of October 2020 in the Royal Victoria Eye and Ear Hospital (RVEEH) and finished 26 th April 2021. Recruitment period for this study cohort: 5 th April 2021 to 28 th May 2021. Students were assigned into groups by the SARA (Student, Academic & Regulatory Affairs) office RCSI (10–12 in each) before commencing their clinical attachment week. BL students attended on site teaching sessions, remote online FC sessions and in-person patient centred clinical skills teaching sessions. Module description The aims of the ophthalmology module were to enable the students to develop the clinical knowledge and skills to assess any patient presenting with an eye disorder and to formulate an appropriate differential diagnosis and management or referral plan. Our objective was to ensure constructive alignment of the module with the existing learning outcomes despite the change in delivery whilst responding to the changes precipitated the by COVID-19 pandemic regarding social distancing . On site in-person teaching : students attended sessions on the Anatomy of the Eye, History taking in ophthalmology, Patient based teaching and Clinical skills, which each consisted of 60-min small group teaching sessions (face to face lectures with 15-minute question and answer session) led by an Ophthalmologist. 
A total of 257 undergraduate medical students who received the BL delivery of the ophthalmology clinical attachment were invited to participate in this study. Of these, 59 students (23%) agreed to take part in the study and completed an online CEQ. A total of 114 students who had received OFC delivery of ophthalmology content the year prior attended online tutorials as described previously . Of these, 28 agreed to participate (25%). The demographic distribution of the participants is presented in . There was no evidence of a difference in gender or age between the OFC and BL groups for the classes as a whole (column 1 v 3), or between students in the OFC group or the BL group who participated in the online surveys (column 2 v 4).

Student perceptions

graphically summarizes the responses from the students regarding the six constructs established as important learning environment features within the context of higher education: Good Teaching (GT), Generic Skills (GS), Appropriate Assessment (AA), Appropriate Workload (AW), Clear Goals and Standards (CG), and Emphasis on Independence (IN) . Overall, students indicated a preference for the BL over the OFC approach. We observed significant differences between the responses of the OFC and BL groups regarding the learning experience, perceived value of the flipped classroom, teaching process, skill development and the evaluation system outlined in . Due to the small number of respondents in some categories, the Strongly Agree and Agree categories, and likewise the Strongly Disagree and Disagree categories, were combined for analysis. Furthermore, also owing to the small number of respondents, the margin of error varied substantially, ranging from 5.7% to 12.6% for estimates of Agree/Strongly Agree in the BL group, and from 16.7% to 18.5% for estimates of Agree/Strongly Agree in the OFC group.

Good Teaching scale

We observed that the BL delivery approach resulted in significantly greater levels of student satisfaction on the GT scale compared to the OFC approach. Specifically, the BL group felt that the teaching staff motivated students to do their best (Q4, p<0.001), put a lot of time into commenting on students' work (Q9, p = 0.004) and made a real effort to understand difficulties that students may be having with their work (Q20, p = 0.001). However, having adjusted for multiple comparisons, Q9 was no longer significant. Furthermore, compared to the OFC group, the BL students felt that faculty were extremely good at explaining course content (Q23, p = 0.05) and that they made significant efforts to make the subjects interesting (Q25, p = 0.013). Critically, we observed a significant improvement in student perceptions regarding the course trying to get the best out of its students among the BL group compared to the OFC student group (Q33, p = 0.001).
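The reported margins are consistent with a standard 95% normal-approximation (Wald) interval for a proportion, ME = z·sqrt(p(1−p)/n). A minimal sketch of that calculation, assuming this is the formula underlying the figures above:

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% normal-approximation (Wald) margin of error for a proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Worst case (p = 0.5) for the two survey samples in this study.
for label, n in [("BL (n = 59)", 59), ("OFC (n = 28)", 28)]:
    print(label, f"max ME = {100 * margin_of_error(0.5, n):.1f}%")
# BL: ~12.8% at p = 0.5, shrinking for more extreme proportions;
# OFC: ~18.5% at p = 0.5 — in line with the upper ends of the ranges above.
```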
However, having adjusted for multiple comparisons, only the improvement in student perceptions regarding the course trying to get the best out of its students among the BL group compared to the OFC student group remained statistically significant (Q33, p = 0.033).

Clear Goals and Standards scale

There was no evidence of a difference in student perceptions on the goals and standards (CG) scale, specifically about what was expected from them (Q18), about the standard of work required (Q1) and faculty expectations of students being made clear (Q35). Overall, there was no evidence of a difference in goals and standards after Bonferroni adjustment; however, before adjustment there was some evidence that the BL group was significantly more satisfied with the CG than the OFC group. Specifically, students felt that they had a clear idea of what was going on and what was expected from them (Q8, p = 0.006) and that the aims and objectives of the course were made very clear (Q24, p = 0.027).

Generic Skills scale

There was no evidence of a difference in student perceptions about the capacity of the OFC or BL course to improve their written communication skills (Q13). In contrast to the OFC course, students who participated in the BL course showed some evidence of a difference prior to Bonferroni adjustment, finding that the course helped develop their problem-solving skills (Q2, p = 0.001), sharpened their analytical skills (Q6, p = 0.023), developed their ability to work as a team member (Q11, p<0.001), improved their confidence about tackling unfamiliar problems (Q12, p = 0.029) and developed their ability to plan work (Q28, p = 0.003). Developing their problem-solving skills (Q2) and developing their ability to work as a team member (Q11) remained statistically significant following adjustment for multiple comparisons.

Appropriate Assessment scale

There was no evidence of a difference between the OFC and BL groups in student perceptions of the impression that staff are more interested in testing what students have memorised (Q17) or ask too many questions about facts (Q26). Additionally, there was no difference between the OFC and BL groups in their perceptions of the form in which feedback was given (Q29) or that just by working hard around exam times they could get through the course (Q32). We observed significantly greater levels of student satisfaction among the BL group with the impression that faculty can learn from students (Q7, p = 0.001) compared to the OFC group. Furthermore, the BL group indicated that doing well on the course required more than just a good memory (Q10, p = 0.007).

Appropriate Workload scale

There was no evidence of a difference in student perceptions in relation to the workload (Q5), the number of topics covered in the syllabus (Q14), the amount of time given to learn (Q19), the pressure felt by students (Q27) or how the volume of work affects comprehension of topics (Q36).

Emphasis on Independence scale

There was no evidence of a difference in student perceptions between the OFC and BL groups on the IN scale regarding opportunities to choose the particular areas you want to study (Q3), that the course encouraged them to pursue their academic interests (Q15) or their opportunities to discuss how they were going to learn with lecturers (Q30). However, we observed that the BL group were significantly more satisfied with elements of the IN scale compared to the OFC group.
Specifically, students in the BL group felt they had greater levels of choice regarding how they would learn (Q16, p = 0.005), the work they had to do (Q21, p = 0.031) and the ways in which they were assessed (Q34, p = 0.004).

Questions regarding the value of the flipped classroom

Previous studies have highlighted questions within the CEQ survey which provide insights into the perceived value of the flipped classroom . The FC scale questions overlap with the GT and GS scales, specifically questions 2, 4, 5, 11, 12, 13 and 28. Student survey responses indicated a significant level of student satisfaction with the online flipped classroom approach as part of the revised BL curriculum. As mentioned above, students in the BL group felt that there were more opportunities to improve their problem-solving skills (Q2, p = 0.01) and that staff did more to motivate them (Q4, p<0.001). In addition, compared to the OFC group, they felt the course helped develop their ability to work as a team member (Q11, p<0.001), to tackle unfamiliar problems (Q12, p = 0.029) and to plan their own work (Q28, p = 0.003). When asked to rate the statement "Overall, I am satisfied with the quality of this course", there was no evidence of a difference in the rating between the OFC and the BL group (Q37).

Comparison of overall student performance on final multiple-choice exam

Next, we compared students' exam scores before and after the educational intervention, for all students in the OFC (n = 114) and BL (n = 257) groups and for the students in the OFC (n = 28) and BL (n = 59) groups who responded to the survey. Students answered 20 ophthalmology multiple-choice questions (MCQ) as part of completing the course. Each question had the same weight, and the total score was converted to a 0–100 scale. An independent samples t-test was used to compare the differences between the two groups. This analysis of the final exam MCQ score showed no statistical difference between the OFC and BL groups (p = 0.0560). Comparison of the final exam MCQ score for survey responders between the OFC and BL groups likewise found no evidence of a statistical difference in the score achieved. Overall, this indicates that BL did not negatively influence knowledge gain.
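A minimal sketch of this scoring and comparison, assuming equal item weights and a two-sided independent t-test; the answer counts and the batch of p-values shown are invented placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Correct answers out of 20 equally weighted MCQs (placeholder data).
ofc_correct = rng.integers(8, 21, size=114)   # OFC cohort, n = 114
bl_correct = rng.integers(8, 21, size=257)    # BL cohort, n = 257

# Convert raw counts to the 0-100 scale used in the study.
ofc_scaled = ofc_correct / 20 * 100
bl_scaled = bl_correct / 20 * 100

# Independent samples t-test on the scaled scores.
t_stat, p_value = stats.ttest_ind(ofc_scaled, bl_scaled)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Bonferroni adjustment as applied to the per-item CEQ comparisons:
# multiply each raw p-value by the number of comparisons, capped at 1.
# (The four p-values and the comparison count here are illustrative.)
raw_p = np.array([0.001, 0.004, 0.023, 0.05])
print(np.minimum(raw_p * raw_p.size, 1.0))
```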
The imperatives of the COVID-19 pandemic mandated an inevitable transition to online and distance learning, addressing the challenges posed by government directives and social distancing requirements. In the aftermath of the COVID-19 pandemic, blended learning has become an accepted approach in health professions education . While student safety and wellbeing were paramount, removing medical students from the clinical context to minimise risk associated with the COVID-19 pandemic was not a feasible long-term strategy. BL involves both face-to-face and online learning components and was therefore an advantageous approach to health professions education during the pandemic, as it offers the best of both modes . In this study we wanted to assess student satisfaction with a revised ophthalmology module adopting a BL format, which included online learning and in-person seminars combined with practical patient-centred sessions.
Our goal was to compare BL with the previous delivery format, which relied solely on a remote online flipped classroom to facilitate continued delivery of the ophthalmology module. It was hypothesised that, as an educational intervention, the blended learning approach would continue to facilitate delivery of content while maintaining or improving levels of student satisfaction and knowledge gain, as determined by a CEQ and an MCQ examination. Learner satisfaction is a multidimensional construct and is related to an individual's subjective assessment . Student satisfaction hinges on the efficacy of educational courses and the individual's enthusiasm and enjoyment in the learning process . Blended learning offers learners more choice through multimodal delivery of course content, and BL has been shown to yield more favourable knowledge outcomes than traditional learning in HPE . While the analysis of the final exam MCQ scores in our study showed that there were no statistical differences between the OFC and BL groups, the BL group showed higher satisfaction with the choice provided by the BL approach in how they learn, the work they completed, and the methods of assessment (Emphasis on Independence scale). The practical constraints of the OFC approach gave learners fewer freedoms and less choice in their educational journey. Chick et al. suggested that innovative technology, including the FC, could play an essential role in bridging the educational gap during the unprecedented COVID-19 pandemic . The FC has been demonstrated to be accessible and user friendly , and was favourable among ophthalmology residents, with reported improvement in test scores . The OFC approach has been utilised by many academics who found it was well received by students and, in some instances, resulted in similar or enhanced knowledge gain compared to the traditional delivery of teaching . However, our previous study's findings were in contrast to this literature, and we found significant dissatisfaction with the online flipped classroom approach . Given that our initial rapid response to the challenges of delivering content during the pandemic relied significantly on a remote OFC approach, we sought to determine student perceptions of this model. Overall, students reported a lack of satisfaction with this model, citing a lack of staff motivation, difficulties determining the standard of work required and a lack of development of critical thinking and problem solving as issues with the OFC approach for remote ophthalmology teaching . We believe that a lack of faculty preparedness , digital fatigue and student uncertainty may also have contributed to student dissatisfaction with the OFC approach . In this study, compared to the OFC group, the BL students felt staff excelled at explaining course content (Q23, p = 0.05) and made significant efforts to make the subjects engaging (Q25, p = 0.013). The variety in the BL approach offers more choice and enables learners to engage with material through various mediums, and this was also reflected in the BL group's satisfaction with the choices regarding how they would learn and be assessed. Additionally, the multimodal delivery of course content in BL appears to have addressed some of the technological challenges faced by staff in the initial response to the pandemic.
Student satisfaction is also associated with an individual's interaction with their peers and with faculty , and in a national review in the UK, having a "social life and meeting people" was acknowledged as a crucial factor contributing to overall satisfaction . Our results demonstrated that BL produced significantly greater levels of student satisfaction on the Good Teaching scale compared to the OFC approach, specifically on items relating to staff motivating students to do their best and making an effort to understand student difficulties. We noted markedly higher levels of student satisfaction within the BL group with survey items relating to students feeling motivated by staff (Q4), working as part of a team (Q11), and how the course tried to get the best out of students (Q33). HPE involves hands-on learning and elements of teamwork and effective communication. Online learning has been associated with poor engagement and, during the COVID-19 pandemic, reduced interpersonal interaction , and has also been associated with lower levels of preparedness and a lack of hands-on training . Our revised BL curriculum, combining online learning with in-person seminars and practical patient-centred sessions, improved students' self-reported problem solving, analytical skills and ability to work as part of a team.

Study limitations

The interpretations drawn from our investigations should be considered within the context of the limitations inherent to this study. One such limitation is the relatively low level of student engagement observed, which led to suboptimal response rates and smaller sample sizes. This resulted in varying margins of error around estimates, and results should be interpreted with caution. The ongoing global pandemic during the participant recruitment phase is a factor that may have limited the number of study participants. Furthermore, our study encompasses participants from two distinct iterations of clinical attachments spanning two academic years. It is noteworthy, however, that an analysis of each student cohort, as well as of those who actively engaged in the study, found no statistically significant disparities in characteristics/demographics.

The COVID-19 pandemic compelled an inevitable shift to online and distance learning to address challenges posed by government mandates and social distancing requirements. However, in post-pandemic HPE, it is crucial to assess the effectiveness and learner perceptions of online and distance learning interventions.
In line with recent BEME reviews, we implemented a revised curriculum which blended traditional classroom-based and remote learning approaches with in-person practical elements, including direct patient contact with mitigated risk. We provided support and training for both faculty and students to increase digital proficiency and engagement, as online elements continue to be a central feature of medical education. These changes resulted in significant increases in student satisfaction. Our study revealed a substantial student preference for blended learning (BL) over the online flipped classroom (OFC) approach, with comparable student performance on MCQ examinations. Importantly, this study presents a unique insight into the repercussions of introducing an educational intervention centred on blended learning amidst the pandemic. This insight focusses on student satisfaction and the enhancement of learning experiences, underlining the distinctive value of our research. These findings indicate a preference for reintegrating in-person and patient engagement activities in post-pandemic health professions education.

S1 Appendix. Ophthalmology module learning outcomes. (DOCX)
Freshwater ecosystems are essential for global biodiversity, providing habitats for a wide range of organisms and ecosystem services crucial for human well-being . However, these ecosystems face increasing threats from pollution, particularly heavy metals, which represent a significant global environmental issue . Anthropogenic activities such as mining, industry, agriculture, and urbanization have contributed to the accumulation of heavy metals, including lead, cadmium, mercury, copper, and manganese, in freshwater bodies . These pollutants cause long-lasting toxic effects, such as bioaccumulation within trophic networks and the disruption of critical ecological functions, including those performed by zooplankton communities . Zooplankton communities play a fundamental role in freshwater aquatic ecosystems, transferring energy captured by primary producers like microalgae to higher trophic level species such as fish . Among the diverse zooplankton in Chilean Northern Patagonian lakes, there is a Cladocera species, Daphnia pulex ( D. pulex ) Leydig, 1860 . D. pulex primarily feeds on microalgae, abundant in mesotrophic or eutrophic lakes. In Northern Patagonia, lakes, such as Lake Llanquihue, with high nutrient and contaminant levels, contrast sharply with oligotrophic lakes like Lake Icalma, characterized by low nutrient concentrations and relatively pristine conditions . Lakes in Chilean Northern Patagonia, characterized by low nutrient levels (oligotrophic), were classified based on their morphology, physicochemical properties, biodiversity, and environmental conditions during the 1980s and 1990s . However, in recent decades, the expansion of agriculture, livestock, forestry, and aquaculture has led to increased soil and water pollution, largely due to intensive and unsustainable resource management practices . Lake Llanquihue is notably affected by heavy metal pollution, including copper and manganese, primarily from agricultural runoff, aquaculture activities, and urban discharge . In contrast, Lake Icalma remains relatively untouched, serving as an ideal reference site for studying natural ecosystems . Currently, several tools are available for assessing anthropogenic impacts on freshwater ecosystems . One effective approach is using bioindicator species, such as D. pulex , extensively studied for its sensitivity to environmental stressors through ecological and ecotoxicological methods . In addition, D. pulex has become a model species in molecular research, including environmental genomics, proteomics, and epigenetics. Its phenotypic plasticity in response to environmental changes has driven significant research into this organism, highlighting its value in scientific studies . Environmental proteomics has proven crucial in linking protein diversity to ecological functions in aquatic ecosystems . While an organism’s genome establishes its inherent traits, proteins drive the dynamic and adaptive processes essential for survival in changing environments. In aquatic ecosystems, proteins undergo specific modifications in response to environmental shifts, making proteomics an indispensable tool for exploring these systems . Research on protein expression in aquatic ecosystems impacted by human activities has been relatively limited, despite its relevance for understanding how these activities affect species. In this study, a proteomic approach was used to evaluate the responses of D. 
pulex to different levels of heavy metal pollution in two Northern Patagonian lakes with contrasting water qualities. Comparing these unique and fragile ecosystems provides valuable information on the effects of human activities on biodiversity and ecological functions. This work highlights the need to integrate molecular and ecological approaches to advance understanding and mitigation of aquatic biodiversity loss, providing an essential perspective for the conservation of these freshwater ecosystems.

2.1. Protein Extraction and Proteomics

The protein concentrations in D. pulex from the two Northern Patagonian lakes under study were measured as 150.41 µg/mL for Icalma and 152.23 µg/mL for Llanquihue. Before the proteomic analysis, the samples were adjusted to a uniform concentration of 150 µg/mL. Proteomic analysis identified a total of 1247 proteins , with 17 significantly (p ≤ 0.05) upregulated proteins and 181 downregulated proteins in Llanquihue compared to Icalma. Of these 181 downregulated proteins, only 6 were analyzed in this study, selected for their roles in oxidative stress, reactive oxygen species (ROS), decreased ATP production, and exoskeleton stability. The upregulated proteins in D. pulex individuals collected in Llanquihue were involved in the response of this species to environmental stress, including calcium-transporting ATPase, V-type proton ATPase subunit E, tubulin alpha chain, two variants of heat shock 70 kDa protein cognate 4, fructose-bisphosphate aldolase, heat shock proteins 83 and 90, and superoxide dismutase. These proteins were more abundant in individuals from Llanquihue and were associated with the muscular system (e.g., myosin regulatory light chain), carbohydrate metabolism (e.g., glyceraldehyde-3-phosphate dehydrogenase, isocitrate dehydrogenase [NADP]), and physiological processes such as ovarian maturation (vitellogenin) . In contrast, the 181 downregulated proteins included those associated with the response to environmental stressors (e.g., cytochrome C oxidase subunit 5A, cytochrome C oxidase subunit 2, NADH ubiquinone oxidoreductase 75 kDa subunit, NADH dehydrogenase [ubiquinone] flavoprotein 1), chitin-related proteins (chitin-binding type-2), and proteins involved in energy metabolism (ATP synthase subunit gamma) .

2.2. Relationships of Up- and Downregulated Proteins with Physicochemical and Ecological Factors in Northern Patagonian Lakes

The proteomic profile of D. pulex was investigated in two contrasting environments: the oligotrophic Lake Icalma and the anthropized Lake Llanquihue. We observed that protein abundance was significantly influenced by total dissolved solids (TDS), calcium (Ca), total nitrogen (N), electrical conductivity (EC), manganese (Mn), pH, copper (Cu), and phosphate concentration . Downregulated proteins correlated negatively with TDS, EC, the heavy metals Mn and Cu, total N, and phosphate. Conversely, upregulated proteins correlated positively with TDS, total N, EC, phosphate, and concentrations of Mn, Cu, and iron (Fe). However, these upregulated proteins were negatively correlated with pH, Ca concentration, and ecological parameters such as the specific abundance (Ni') and evenness (J') diversity indicators. PCA results indicated that PC1 accounted for 94.3% and PC2 for 2.6% of the data variability . The proteomic and chemical profiles distinctly separated the two lakes.
Llanquihue showed a positive correlation with upregulated proteins and chemical parameters (phosphate, total N, Cu, Mn, TDS, EC) but a negative correlation with downregulated proteins, ecological parameters (Ni' and J'), and other chemical variables (pH, temperature, Ca).
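A minimal sketch of the kind of ordination reported here, assuming a small matrix of standardized chemical variables per replicate; the values are invented placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder matrix: rows = replicate samples, columns = variables
# (TDS, EC, Cu, Mn, total N, phosphate, pH, Ca), invented for illustration.
X = np.array([
    [120.0, 150.0, 0.08, 0.05, 0.90, 0.30, 7.1, 4.0],   # Llanquihue-like
    [118.0, 148.0, 0.07, 0.05, 0.85, 0.28, 7.2, 4.2],
    [ 35.0,  40.0, 0.01, 0.01, 0.20, 0.05, 7.8, 9.0],   # Icalma-like
    [ 33.0,  42.0, 0.01, 0.01, 0.22, 0.06, 7.9, 8.8],
])

# Standardize so variables on different scales contribute comparably.
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)

# Proportion of variance explained by PC1 and PC2 (the study reports
# 94.3% and 2.6% for its own data).
print(pca.explained_variance_ratio_)
# Loadings indicate which variables drive the separation between lakes.
print(pca.components_)
```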
Proteomics studies provide qualitative and quantitative information about the cell and tissue proteins of freshwater species under anthropogenic pressure by identifying molecular markers of differential protein expression resulting from the effect of xenobiotic elements in the aquatic environment . It is important to emphasize from the outset that, given the breadth of information generated in this study, we discuss only those proteins for which the scientific literature provides evidence of an association with the response of different organisms to anthropized environments, which illustrates the robustness of this type of analysis. However, all the data from this study will be freely available to anyone interested in evaluating potential markers of anthropization. This study showed evidence of environmental stress due to economic and human activities in D. pulex. Our findings indicated an upregulation of the calcium-transporting ATPase, a protein known in D. pulex to be associated with heavy metals, including Cu, dissolved in a polluted environment . The significant upregulation of this protein suggests a response to elevated Cu concentrations, similar to the report by Liorti et al. regarding Lake Ontario. In addition, we found that the V-type proton ATPase protein was upregulated in D. pulex individuals collected in Llanquihue. This protein is involved in similar cellular processes , and it has been shown to contribute to heavy metal tolerance in several species, such as Saccharomyces cerevisiae , Tamarix hispida , and other plant species like Mesembryanthemum crystallinum . Furthermore, Cucumis sativus plants treated with high concentrations of Cu and nickel (Ni) showed pronounced upregulation of certain transcript isoforms encoded by this ATPase gene. Interestingly, that study concluded that the isoforms CsVHA-c1, CsVHA-c2, and CsVHP1;1 were essential elements in the mechanisms involved in the adaptation of cucumber plants to Cu toxicity . Although the two species belong to distinct evolutionary lineages, the overexpression of this protein in D. pulex ( , and ) and Cucumis sativus suggests a common evolutionary defense mechanism for tolerating environments contaminated with heavy metals. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) catalyzes one of the most important reactions in glycolysis, the pathway that breaks down glucose to obtain energy and carbon molecules . Studies have observed that GAPDH plays a crucial role in the adaptation and tolerance of plants and aquatic organisms in contaminated environments, highlighting that exposure to high concentrations of heavy metals induces the production of reactive oxygen species (ROS). Among enzymatic responses, GAPDH is overexpressed as part of the antioxidant and detoxification response in plants and aquatic organisms. This overexpression is vital for managing oxidative stress and protecting cells from damage , which is consistent with our finding of significant GAPDH upregulation in D. pulex individuals collected at the Llanquihue sampling site. Isocitrate dehydrogenase is an important enzyme in the Krebs cycle, as it catalyzes the conversion of isocitrate to alpha-ketoglutarate, producing NADPH or NADH in the process .
A study that analyzed the stress state in Rana sylvatica found increased enzyme activity and NADPH production associated with the upregulation of isocitrate dehydrogenase, suggesting that this behavior may enhance antioxidant activity and defense against oxidative stress . Our analysis similarly showed upregulation of isocitrate dehydrogenase, which reflects the significant oxidative stress experienced by D. pulex individuals collected in Llanquihue. One of the most important components of the cell cytoskeleton is the alpha-tubulin chain, which forms part of the microtubules essential for cell division, intracellular transport, and cell motility . Studies have shown that the alpha-tubulin chain plays a crucial role in the adaptation and tolerance of D. pulex in aquatic environments contaminated by heavy metals and affected by the quality of food resources . Exposure to high concentrations of heavy metals induces the production of reactive oxygen species (ROS). Thus, the overexpressed alpha-tubulin chain found in this study may be explained as part of the antioxidant and detoxification response of D. pulex individuals. This overexpression is crucial for managing oxidative stress and protecting cells from ROS damage . In addition, it has been observed that exposure to a common heavy metal can mediate several key life history responses in D. pulex , including somatic growth rate and survival rates . These findings suggest a protective role of the alpha-tubulin chain in the adaptation and survival of D. pulex under the high concentrations of heavy metals found in Llanquihue. Heat shock proteins are essential as molecular chaperones, enabling the correct folding of recently synthesized and misfolded proteins resulting from cellular stress factors. These proteins were chosen as bioindicators for the early detection of cellular distress due to their significance in cellular functions . Expression levels of the 70 kDa heat shock protein (HSP70) have been reported to increase linearly in the presence of heavy metals like cadmium, arsenic, nickel, and copper . The HSP70 upregulation observed in this study ( , and ) agrees with the previously mentioned cellular response of organisms exposed to heavy metal concentrations. Consequently, the heavy metal concentrations in Llanquihue might have triggered the upregulation of this protein as a defense mechanism in D. pulex individuals. In addition, we found a significant upregulation of superoxide dismutase in Llanquihue, which can be attributed to the high Cu concentrations previously reported in this lake , and which could constitute a powerful biomarker under these contaminated conditions. Superoxide dismutase is a potent antioxidant important in cellular defense against oxidative stress, with a high catalytic rate and high stability against physicochemical stress . Our results agree with those of Lyu et al. , in which D. magna was exposed to high concentrations of Cu/zinc and superoxide dismutase mRNA expression increased significantly (was upregulated) after 48 h of exposure to high Cu concentrations. They concluded that this gene constitutes a biomarker of oxidative stress for this heavy metal, and the enzyme was shown to exhibit 88% sequence similarity with that of D. pulex. At the same time, our study also indicated downregulation of certain proteins . D.
pulex individuals collected from Llanquihue exhibited a downregulation of cytochrome C oxidase protein subunits (cytochrome C oxidase subunit 5A, cytochrome C oxidase subunit 2). This was observed in the anthropized Lake Llanquihue, which has been reported to have high concentrations of Cu and Mn . These metals can compete with oxygen at COX active sites, inhibiting the functioning of this complex and preventing the generation of the proton gradient. Consequently, ATP synthase cannot function efficiently, decreasing ATP synthesis and energy generation in D. pulex cells . In organisms exposed to high concentrations of metals, downregulation of cytochrome oxidase may function as an adaptive defense mechanism . This decrease in enzyme activity may enable cells to conserve energy and reduce the production of reactive oxygen species (ROS), thus mitigating oxidative damage caused by exposure to environmental pollutants such as copper and cadmium, which may be beneficial for cell survival under prolonged stress, as observed by Niemuth et al. . This is supported by the fact that the cytochrome oxidase subunits (COX) form part of complex IV, a key component of the electron transport chain located within the inner mitochondrial membrane . This complex plays a pivotal role in translocating protons across the membrane, thereby creating an electrochemical gradient that the ATP synthase enzyme utilizes to synthesize adenosine triphosphate (ATP), a fundamental source of energy . In addition, cytochrome C oxidase activity can be downregulated by various mechanisms; for example, oxygen availability and the presence of specific inhibitors, such as high concentrations of metals, can affect its function, as explained by Muyssen et al. and Ukhueduan et al. . Likewise, we found that D. pulex individuals from Llanquihue exhibited a downregulation of NADH ubiquinone subunits (NADH ubiquinone oxidoreductase 75 kDa subunit, NADH dehydrogenase [ubiquinone] flavoprotein 1). In the context of the electron transport chain, these complex I subunits facilitate the transfer of electrons from NADH to coenzyme Q, resulting in the translocation of protons across the inner mitochondrial membrane. This process contributes to the electrochemical gradient that, in turn, enables the enzyme ATP synthase to synthesize ATP and produce energy . Our findings align with the research of Niemuth et al. and Ukhueduan et al. , who conducted ecotoxicological experiments under controlled conditions to study the effects of heavy metal toxicity on Daphnia . Accordingly, they highlighted that cellular metabolism involving NADH serves as a natural source of balanced ROS production; however, high concentrations of Cu and Mn in aquatic environments can lead to an overproduction of ROS, thereby disrupting the delicate balance within the mitochondria. In D. pulex individuals that developed in a lake under anthropogenic pressure (Llanquihue), a downregulation of the ATP synthase subunit gamma protein was observed. This downregulation suggests a potential depletion of mitochondrial ATP, which could result in organelle dysfunction due to increased endogenous ROS production, consequently affecting various physiological processes . Future studies could benefit from assessing the mitochondrial structure of D. pulex under environmental stress to further test the mitochondrial dysfunction hypothesis. In Llanquihue, D.
pulex individuals exhibited carapace instability coinciding with the downregulation of specific proteins such as chitin-binding type-2. This phenomenon aligns with prior findings by Otte et al. and Becker et al. , who identified inadequate food quality or exposure to environmental stress as factors that impair chitin production in Daphnia . Chitin is an important polysaccharide of the cuticle, exoskeleton, and other structures in arthropods like D. pulex . This molecule is essential for forming the cell walls of fungi and the exoskeleton of arthropods such as D. pulex , providing them with physical protection in the environment where they develop . Furthermore, our results from previous studies revealed significantly lower Ca concentrations in Llanquihue than in Icalma . Given the essential role of Ca in the formation of invertebrate exoskeletons, these reduced levels may negatively impact calcium-demanding zooplanktonic crustaceans like D. pulex . D. pulex , due to its periodic molting, exhibits a high demand for Ca, and reductions in Ca concentrations can lead to decreased reproduction and body size. During the sampling in this study, daphnids collected from Llanquihue appeared smaller in body size than those from Icalma. Although these differences were not quantified in our study, body size should be measured in future studies in light of the results obtained. Consequently, D. pulex may need to increase energy consumption to enhance Ca absorption, reallocating energy away from growth. This phenomenon could be closely associated with the downregulated abundance of chitin-binding type-2, a protein involved in the morphological changes of the carapace . Importantly, this protein’s abundance positively correlated with Ca concentration , further supporting this mechanism. Furthermore, the notably elevated concentration of total dissolved solids (TDS) previously reported in Llanquihue had a significant influence on the up- and downregulation of proteins in D. pulex . Accordingly, Chapman et al. stated that the toxicity of TDS in freshwater ecosystems is primarily attributable to specific combinations and concentrations of ions (e.g., sodium, potassium, calcium, magnesium, chloride, sulfate, and bicarbonate). Moreover, Weber and Pirow reported that D. pulex is physiologically sensitive to changes in water ion balance, which affects its ion- and osmoregulatory processes, in accordance with the protein expression associated with the high TDS concentration observed in our study . Finally, our hypothesis is supported by the high proportion of data variability explained by the PCA (96.9%) , signifying that freshwater quality affected the development of D. pulex at the molecular level, particularly in the anthropized Lake Llanquihue. This impact can be largely attributed to TDS, EC, nutrient levels (P and N), and heavy metal concentrations. Ecological parameters were also considered due to their substantial contribution to data variability. 4.1. Study Area and Sampling The populations of the water flea D. pulex in Lake Icalma (La Araucanía region) and Lake Llanquihue (Los Lagos region), two Northern Patagonian lakes, were compared . Lake Icalma (38°48′ S and 71°17′ W) is classified as an oligotrophic lake with no anthropogenic interference. Lake Llanquihue (41°08′ S and 72°47′ W), under intense anthropogenic pressure, is classified as mesotrophic due to high nutrient inputs from agriculture, livestock, forestry, and aquaculture . 
Molecular profiling of D. pulex individuals from the two Northern Patagonian lakes was conducted in March 2022. This month has been reported to be one of the periods of highest abundance of D. pulex . Zooplankton samples, including D. pulex , were taken at sampling point I3 at Icalma (38°48′21″ S; 71°17′0.7″ W) and sampling point LL3 at Llanquihue (41°19′17.5″ S; 72°57′53.5″ W) . The collection procedure was repeated until 60 D. pulex individuals were obtained at each site. Zooplankton samples were collected at a depth of 20 m using a Nansen net (Hydro-Bios; Altenholz, Schleswig-Holstein, Germany) 20 cm in diameter with a 200 µm mesh opening, following the methodology described by De los Ríos-Escalante and Woelfl et al. . 4.2. Physicochemical Properties of Study Sites The physicochemical data utilized in this study were obtained from measurements conducted in parallel during a study previously published by Norambuena et al. . In that study, water samples were collected from the same sampling points (I3 in Lake Icalma and LL3 in Lake Llanquihue) to evaluate parameters such as total dissolved solids (TDS), calcium (Ca 2+ ), iron (Fe), manganese (Mn), copper (Cu), total nitrogen and phosphorus, temperature, and electrical conductivity (EC). These variables were selected due to their relevance in characterizing lacustrine environments and their potential influence on biological communities. The analyses were conducted following standardized protocols described in detail in Norambuena et al. . Water samples were collected using a Van Dorn device at 10 m depth and stored at 4 °C until laboratory analysis. Dissolved metal concentrations were determined using a Hanna Hi801 UV-Vis spectrophotometer in the 340 to 900 nm range according to the methodologies detailed therein, while general parameters such as EC, TDS, and temperature were measured using WTW Multi 340i multiparameter probes, following the Standard Methods for the Examination of Water and Wastewater (APHA, AWWA, and WEF) . These data were incorporated into the present study to establish relationships between environmental conditions and the proteomic profiles of D. pulex . 4.3. Protein Extraction and Quantification D. pulex samples were collected from the Icalma and Llanquihue lakes from December 2021 to March 2022 and transported in a cooler at 4 °C. Oxygen was provided to keep the individuals alive. Once in the laboratory, protein extraction was performed immediately. To obtain the proteome of the species, cell membrane protein extraction was carried out using the commercial Mem-PER™ Plus Membrane Protein Extraction Kit (Thermo Fisher Scientific™; Waltham, MA, USA), following protocol number two, recommended for cell suspensions. D. pulex specimens were captured from integrated samples at site I3 in Icalma and site LL3 in Llanquihue for protein expression and identification analysis. The protocol involved re-suspending cells in 1.5 mL of cell washing solution (CLS), transferring them to 2 mL vials, centrifuging at 300× g for 5 min, discarding the supernatant, adding 0.75 mL of permeabilization buffer (TP) to the pellet, and homogenizing by vortexing, followed by incubating the suspensions for 20 min at 4 °C with constant agitation. After this incubation with constant shaking, the suspension of permeabilized cells was centrifuged for 15 min at 16,000× g , and the supernatant (cytosolic proteins) was carefully removed and discarded. 
Subsequently, 0.5 mL of solubilization buffer (TS) was added to the obtained pellet, homogenized by pipetting, and incubated at 4 °C for 30 min with constant agitation. After this, the samples were centrifuged at 16,000× g for 15 min at 4 °C, and the supernatant (membrane proteins) was carefully collected and stored at −80 °C until use. Protein concentration was determined by the bicinchoninic acid (BCA) assay using the Pierce™ BCA Protein Assay Kit (Thermo Fisher Scientific™; Waltham, MA, USA). 4.4. Proteomic Analysis 4.4.1. Chemicals and Instrumentation Iodoacetamide (IAA), DL-dithiothreitol (DTT), acetonitrile (ACN), and formic acid (FA) were purchased from Sigma (St. Louis, MO, USA), while trypsin (bovine pancreas) was purchased from Promega (Madison, WI, USA). Ultrapure water was prepared using a Millipore purification system (Billerica, MA, USA). An Ultimate 3000 nano UHPLC system was coupled via an ESI nanospray source to a Q Exactive HF mass spectrometer (Thermo Fisher Scientific™; Waltham, MA, USA). 4.4.2. Sample Information Total protein extracts from groups I3 (60 individuals) and LL3 (60 individuals) were digested with trypsin, identified, and quantified using a nanoLC-MS/MS platform. 4.4.3. Sample Preparation The sample buffer was exchanged with ammonium bicarbonate, and the samples had a final concentration of 1 μg/μL. Then, 60 μL of each sample was transferred to a new Eppendorf tube. After reduction with DTT (10 mM, 56 °C, 1 h) and alkylation with IAA (20 mM, room temperature in the dark, 1 h), the samples were centrifuged (12,000 rpm, 4 °C, 10 min) and washed once with 50 mM ammonium bicarbonate. Trypsin was added to the protein solution at a trypsin-to-protein ratio of 1:50, along with 50 mM ammonium bicarbonate (100 μL), and the mixture was incubated overnight at 37 °C. Finally, the samples were centrifuged at 12,000 rpm at 4 °C for 10 min. Then, 100 μL of 50 mM ammonium bicarbonate was added to the device and centrifuged, and this step was repeated once. The extracted peptides were lyophilized to dryness and resuspended in 20 μL of 0.1% formic acid in preparation for LC-MS/MS analysis. 4.4.4. NanoLC Nanoflow UPLC: Ultimate 3000 nano UHPLC system (Thermo Fisher Scientific, USA); nanocolumn: trapping column (PepMap C18, 100 Å, 100 μm × 2 cm, 5 μm) and an analytical column (PepMap C18, 100 Å, 75 μm × 50 cm, 2 μm); loaded sample amount: 1 μg; mobile phase: A: 0.1% formic acid in water; B: 0.1% formic acid in 80% acetonitrile. Total flow rate: 250 nL/min; LC linear gradient: from 2% to 8% buffer B in 3 min, from 8% to 20% buffer B in 56 min, from 20% to 40% buffer B in 37 min, and finally from 40% to 90% buffer B in 4 min. 4.4.5. Mass Spectrometry The full scan was conducted from 300 to 1650 m/z at a resolution of 60,000 at 200 m/z , with an automatic gain control target of 3 × 10 6 . The MS/MS scan was performed in Top 20 mode, which involved selecting the top 20 precursor ions for fragmentation, using the following parameters: a resolution of 15,000 at 200 m/z , an automatic gain control target of 1 × 10 5 , a maximum injection time of 19 ms, normalized collision energy at 28%, an isolation window of 1.4 Th, and dynamic exclusion for 30 s. 4.4.6. Proteome Data Analysis The six raw MS files were analyzed and searched against the D. pulex UniProt protein database, corresponding to the species of the samples, using MaxQuant (version 1.6.2.6). 
The protein modification parameters included cysteine (C) carbamidomethylation as a fixed modification and methionine (M) oxidation as a variable modification. Enzyme specificity was set to trypsin, allowing for up to two missed cleavages. The precursor ion mass tolerance was set at 10 ppm and the MS/MS tolerance at 0.6 Da. 4.5. Statistical Analysis To compare the expression of proteins in the two lakes, Icalma and Llanquihue, a paired t -test with a significance level of p ≤ 0.05 was used in all cases. Statistical analyses for comparison were performed using the Microsoft Excel package (Microsoft Office 365). Protein profile visualization was conducted using the Python programming language ( https://www.python.org/ ; accessed on 3 April 2024) in conjunction with the Matplotlib library ( https://matplotlib.org/ ; accessed on 3 April 2024). The data obtained, including physicochemical measurements from a previous study and protein intensities (label-free quantification, LFQ), were checked for normality using the Shapiro–Wilk test and for homogeneity of variance using the Levene test. Significant differences were analyzed using a parametric two-way ANOVA at a 95% confidence level, followed by the post-hoc Tukey honestly significant difference (HSD) test. Detected significant relationships were further analyzed using Pearson’s correlation analysis, with significance accepted at p ≤ 0.05. A principal component analysis (PCA) was performed using the Factoextra package in the R software (version 4.3.2) to identify the variables that explained the variability in the data. All statistical tests were conducted using the R Foundation for Statistical Computing, Version 3.6.3 (R Development Core Team, 2009–2018). 
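To make the workflow above concrete, the following is a minimal Python sketch of the per-protein paired comparison and the PCA step. It is illustrative only: the input file name, replicate layout, and column names are assumptions made for the example and do not correspond to the authors' actual data files (the original analyses were run in Excel, Python/Matplotlib, and R).

```python
# Minimal sketch of the per-protein paired comparison and PCA described above.
# Assumes a hypothetical file "lfq_intensities.csv" with one row per protein
# and three replicate LFQ columns per lake (I3_1..I3_3, LL3_1..LL3_3);
# file and column names are illustrative, not the authors' actual data.
import pandas as pd
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("lfq_intensities.csv", index_col="protein")
icalma = df[["I3_1", "I3_2", "I3_3"]]
llanquihue = df[["LL3_1", "LL3_2", "LL3_3"]]

results = []
for protein in df.index:
    a = icalma.loc[protein].to_numpy()
    b = llanquihue.loc[protein].to_numpy()
    _, p_norm = stats.shapiro(a - b)   # normality check on paired differences
    t, p = stats.ttest_rel(a, b)       # paired t-test, as in Section 4.5
    results.append({"protein": protein, "t": t, "p": p, "p_norm": p_norm})

res = pd.DataFrame(results)
print(f"{(res['p'] <= 0.05).sum()} proteins differ at p <= 0.05")

# PCA on standardized sample profiles (the six samples as observations)
X = StandardScaler().fit_transform(df.T.to_numpy())
pca = PCA(n_components=2).fit(X)
print("Variance explained by PC1+PC2:", pca.explained_variance_ratio_.sum())
```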
Our study showed that anthropogenic pressure significantly impacted the proteome of Daphnia pulex , resulting in the differential expression of 17 significantly upregulated and 181 downregulated proteins. Of these 198 proteins, only 13 were analyzed in detail, evidencing significant anthropogenic pressure in Llanquihue affecting zooplankton such as D. pulex compared to individuals collected in Icalma. The observed up- and downregulated proteins indicate cellular stress that compromises physiological functions, particularly cellular metabolism and exoskeleton composition. This study provides valuable information on the effects of anthropogenic pressure on freshwater ecosystems and highlights the importance of incorporating a molecular–ecological approach into environmental monitoring and management strategies. Further studies on the genetic divergence of the genes encoding the examined proteins will shed light on the influence of environmental change due to anthropogenic stress on gene expression. Our findings have implications for the conservation and restoration of freshwater ecosystems by identifying which anthropogenic variables affect zooplankton species and how this stress manifests. This will facilitate the advancement of sustainable, eco-friendly management practices that support human economic activity in this region and help prevent further pollution of the lakes of Chilean Northern Patagonia.
Comparative study between Hugo™ RAS and Intuitive da Vinci Xi systems in different gynecologic surgeries: a single-institution perspective study
88963be8-37c6-477e-9c8a-90a1f5de10ef
11889050
Robotic Surgical Procedures[mh]
Minimally invasive surgery (MIS) has been the mainstay of surgical procedures for most gynecologic diseases, with increasing application of robotic surgery globally since the introduction of the Intuitive™ da Vinci system in 2003. The surgical platform provides precise manipulation, tremor-filtering effects, and magnified surgical fields with less blood loss, shorter hospital stays, and rapid recovery . Robotic cohorts show a higher incidence of adhesive disease and morbid obesity, and larger uteri, yet fewer intraoperative complications than conventional open abdominal hysterectomy and vaginal hysterectomy cohorts, as well as shorter hospital stays and fewer postoperative complications than laparoscopic-assisted vaginal hysterectomy, vaginal hysterectomy, and abdominal hysterectomy cohorts . By 2024, the da Vinci robotic series had dominated the robotic surgical market for almost 20 years, with nearly 9210 platforms installed in 72 countries worldwide. Innovative surgical platforms were introduced into the market in 2019. The Medtronic Hugo™ RAS first gained EU approval in 2021, and 100 platforms had been installed, with more than 10,000 procedures performed, by July 2024 . As of September 2024, this novel surgical platform was used for gynecologic procedures in only seven centers in East Asia, one in Taiwan and all others in Japan. By August 2024, the procedure had been performed in 51 cases at our institute (TTMHH), and in 23, 14, 26, 68, 6, and 17 cases at the Kitasato Institute, Sapporo Medical University, Tottori University Hospital, Yamanashi Pref. Central Hospital, Fujita Medical University, and Kyoto University Hospital, respectively. Clinical studies were announced in the USA by Medtronic in May 2024. The innovative open console (Fig. ) enables surgeons to communicate with operating room personnel effortlessly and effectively during the entire procedure. Furthermore, the ergonomic design is more comfortable and less fatiguing for surgeons compared to an immersive console design, especially during complicated, long surgeries. Another key feature is the four independent arm carts, which enable adjustment to various configurations adaptable to different clinical scenarios and easy movement between different operating rooms, thereby enhancing the versatility of the platform. Few reports on gynecologic applications of the new surgical platform, apart from pelvic organ prolapse (POP) procedures, have been published in the past 2 years . Herein, we describe our first 40 cases of Hugo™ RAS procedures in different gynecologic surgeries and compare them to the established Intuitive da Vinci Xi system to evaluate feasibility, safety, and perioperative results. Study design and participants Electronic records of patients who underwent Hugo™ RAS for various gynecologic indications between March 2023 and July 2024 were retrospectively analyzed, and the results were compared to those using the da Vinci Xi system with matched demographics and disease characteristics. The study was approved by the Institutional Review Board of Tung’s Taichung Metroharbor Hospital, and the requirement for informed consent was waived. Main outcome measures To investigate the feasibility of the Hugo™ RAS in different gynecologic diseases, the length of stay, blood loss, surgical time, and perioperative complication rates were compared to those of the established Intuitive da Vinci Xi system. 
Statistical analysis To reduce treatment discrepancies between Hugo robotic surgery and da Vinci robotic surgery, propensity score matching was used to pair the two surgical methods based on age, height, weight, and the weight of the excised surgical specimens. Descriptive statistics were used to describe demographic variables. Continuous variables are presented as mean and range and were compared using Student’s t test. Categorical variables are expressed numerically as percentages and were compared using the Chi-square test or Fisher’s exact test, as appropriate. Between May 2023 and July 2024, 40 women, including 4, 4, 6, 20, 2, 3, and 1 cases of adenomyosis, cervical cancer, endometrial cancer, uterine leiomyomas, adenomyosis combined with uterine myomas, cesarean scar defect, and ovarian cancer, respectively, underwent hysterectomy (both benign and malignant, n = 25), myomectomy ( n = 12), or scar defect repair ( n = 3) (Table ), with a mean uterine weight of 338.6 g (62–1975 g). The mean blood loss was 317.68 ml, which was slightly higher in patients undergoing myomectomy (520.83 ml) and hysterectomy with myomas (502.5 ml), but comparable in the other groups. The mean surgical time, docking time, and console time were 279.92 min (137–628), 7 min (2–20), and 131.7 min (25–455), respectively (Table ). After propensity score matching, compared to the 111 patients who underwent da Vinci surgery in our series, there were no differences in blood loss [235.42 ml (0–3100) vs. 342.57 ml (10–1800), p = 0.198], incidence of perioperative complications [25 (22.52%) vs. 9 (24.32%), p > 0.99, within 7 days (Table ); 15 (13.51%) vs. 5 (13.51%), p > 0.99, within 30 days (Table S2)], or length of hospital stay [3.51 days (2–9) vs. 4.59 days (2–52), p = 0.426]. However, the surgical time was longer with Hugo™ RAS (228.23 min [84–483] vs. 290.05 min [137–628], p < 0.011), especially for the endometrial cancer staging procedure (381 vs. 324.86 min, p < 0.025) (Table ). 
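As an illustration of the matching step described in the statistical analysis, the following is a minimal Python sketch of 1:1 nearest-neighbor propensity score matching on the covariates named above. The data file, column names, and greedy matching strategy are assumptions made for this example; the study does not report the specific matching implementation it used.

```python
# Minimal 1:1 nearest-neighbor propensity score matching sketch.
# Assumes a hypothetical file "cohort.csv" with columns: group (1 = Hugo,
# 0 = da Vinci), age, height, weight, specimen_weight. The comparison group
# is assumed to be at least as large as the treated group, as in this series.
import pandas as pd
from sklearn.linear_model import LogisticRegression

covariates = ["age", "height", "weight", "specimen_weight"]
df = pd.read_csv("cohort.csv")

# Propensity score: estimated probability of Hugo treatment given covariates
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["group"])
df["ps"] = model.predict_proba(df[covariates])[:, 1]

hugo = df[df["group"] == 1]
available = df[df["group"] == 0].copy()

# Greedy nearest-neighbor matching on the propensity score, without replacement
matches = []
for idx, row in hugo.iterrows():
    j = (available["ps"] - row["ps"]).abs().idxmin()
    matches.append({"hugo_idx": idx, "davinci_idx": j})
    available = available.drop(j)

matched_pairs = pd.DataFrame(matches)
print(f"{len(matched_pairs)} matched pairs formed")
```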
Main findings There were no differences in the length of hospital stay, blood loss, or perioperative complication rates between the Hugo™ RAS and the established da Vinci Xi system in our study cohorts. A variety of procedures, including single-port hysterectomy, a three-arm setting, and ovarian interval debulking surgery (IDS) combined with HIPEC, were performed in addition to conventional simple hysterectomy, radical hysterectomy, endometrial staging procedures, and myomectomy, all with favorable perioperative outcomes. The feasibility of the platform has been clearly demonstrated for almost all common surgical indications in the gynecology field. The main obstacle to be tackled is the limited selection of instruments, with only monopolar scissors, bipolar coagulation, and Cadiere forceps to choose from. Although there is a built-in pedal for the LigaSure™ (Medtronic), it was not available during the period when this study was carried out. A slight lag between instrument changes can sometimes hinder a seamless procedure. Interpretation of findings The novel Hugo™ RAS can perform most gynecologic procedures well, and with the experience gained along the way, only minor modifications of the preset arm configuration are required to fit different situations. We found that with the compact setting originally suggested by Gueli Alletti et al. (Fig. ), the longer instruments and the limited space for the assistant at the Palmer point might hinder the smoothness of the procedure, whereas a slightly lower placement of the fourth trocar (the straight port placement) coupled with a modified butterfly configuration (Fig. ) may greatly decrease the incidence of arm collisions during complex procedures. Although its original design is not specifically for single-port applications, we demonstrated the feasibility of single-arm hysterectomy in two cases (Fig. ). Further evaluation of modifications to the preset angles and arm placement is necessary to carry out such procedures smoothly in the future. Interval debulking with complete resection of the ovarian tumor (R0), coupled with intraoperative hyperthermic intraperitoneal chemotherapy (HIPEC), was successfully performed in a case of stage IIIB ovarian cancer, which we believe is the first reported case using the new surgical platform in this scenario. A slightly longer surgical time is expected when adapting to a new robotic platform, with its completely different docking procedure and limited instrument options, especially for more complicated procedures. However, for more commonly performed surgeries, such as simple hysterectomy and myomectomy, there was no difference in surgical time between our study cohort and the da Vinci cohort. With a different surgical console design and separately docked arms, the more surgeon-friendly platform reduces strain on the upper musculoskeletal system and wrists, and easy adjustment of the console angle during operative time enables a wider surgical field to be reached. Shifting from the da Vinci platform to the Hugo™ RAS is easy, with a short learning curve, as demonstrated in our study, in which surgical time, blood loss, and hospital stay were mostly comparable to the da Vinci platform. With the innovation of different surgical designs, future platforms are anticipated to be more surgeon-friendly and easier to use, with comparable results and cost-effectiveness in terms of unit price. 
Strengths and limitations This is a small retrospective study in a single institution, and one surgeon’s experience with this novel surgical platform inevitably introduced recall bias when retrieving data from previous cases using the da Vinci Xi system, owing to the long elapsed time and the retrospective nature of the study. However, every procedure and all demographic details were documented precisely in the first 40 cases using the Hugo™ RAS. The configuration was modified according to different clinical situations and new settings were adapted to various scenarios, which might provide more practical and valuable experience compared with the recommendations of in vivo studies, as suggested by Gueli Alletti et al. . The author has been practicing da Vinci surgery since 2013, with a total of 351 different procedures carried out as of June 2024, and has been a proctor for the TR300 da Vinci accredited advanced course as well as an instructor for IRCAD Taiwan (an MIS training center). He has also been the sole proctor for Hugo™ RAS gynecologic surgery in Greater China since August 2024. There was not a single case of conversion to laparotomy in either the da Vinci cohort or the Hugo™ RAS cohort during the study period. 
The Hugo™ RAS is feasible and safe in most gynecologic procedures with more options of instruments anticipated in the future.
Electroplated double-crowns on implants and teeth after up to 12 years – a retrospective clinical study
b41d9903-8024-40f6-8269-79e0c3a8c921
11790550
Dentistry[mh]
Removable dental prostheses retained by double-crown attachments are based on the principle of primary crowns cemented to the abutment teeth and secondary components connected to the prosthesis. They are successfully used to rehabilitate partially or completely edentulous jaws, although survival rates of abutment teeth vary widely (60.6–100% after 4–10 years) . Double-crown systems, in contrast to fixed restorations or clip-retained prostheses, offer the advantage that even compromised teeth with an uncertain prognosis can be included as abutments, as repair and extension measures are very easy to carry out. While providing a feeling of fixed teeth for the patient, double-crown-retained prostheses still offer optimal conditions for oral hygiene measures. A severely reduced dentition can be supported by strategically placed implants, as a symmetrical distribution of abutments can reduce the failure rate of natural abutments and contribute to the stability of the reconstruction; the combination of teeth and implants appears to be a viable method for attaching double-crown-retained prostheses . In this context, a recent meta-analysis calculated an estimated survival rate of 98.8% for implant abutments and 95.4% for natural abutments with telescopic crowns . Especially when using double-crowns and combining implants and teeth, the use of electroplated ("galvanic") crowns can be advantageous. The special fabrication method of these crowns enables intraoral luting of the secondary crowns and framework, which leads to a very accurate, passive fit, even if a combination of “mobile” teeth and rigid implants is used for the attachment. While some data are currently available on the use of electroplated double-crowns , the number of studies remains limited with regard to electroplated double-crowns on a combination of implants and teeth. The majority of studies on electroplated double-crowns were conducted in academic research settings , and only two studies used a precious metal alloy instead of ceramics for the primary crowns . With regard to oral health-related quality of life (OHRQoL), Stober et al. and Liebermann et al. have demonstrated an improvement following treatment with double-crown-retained prostheses . However, further data from clinical long-term studies are still required to confirm these findings. This also applies to the subjective chewing ability associated with this type of prosthesis. The occurrence of complications such as acrylic fractures, chipping, and endodontic or implant-related problems can be stressful for both the dentist and the patient, as it involves additional visits to the dental office. Manufacturing-related and expected retention loss of double-crown prostheses often takes several years to appear. A review of the data on technical complications from existing clinical studies, also summarized by Moliner-Mourelle et al., reveals that the most common complications are screw loosening, veneering fractures, and, less frequently, the need for recementation . This appears to be independent of the number of implants/natural abutments . Biological complications were more likely to be moderate to severe compared to technical complications. These included, for instance, peri-implant infections, caries, and pulpitis, but also implant and tooth loss . 
The aim of this retrospective clinical study was to assess the clinical outcomes of implant-supported and combined tooth-implant-supported removable prostheses retained by electroplated double-crowns with primary crowns made from a precious metal alloy after 1 to 12 years in a private dental practice. Kaplan–Meier survival rates for implants and abutment teeth were calculated and the overall success of the prostheses was analyzed. Additionally, we evaluated the OHRQoL and subjective masticatory function of patients with these types of prostheses. Trial design This retrospective clinical cohort study was approved by the Ethics Committee (EK Nr. 123/15) of the Medical Faculty of RWTH Aachen University and was conducted following the ethical standards of the Declaration of Helsinki. Clinical follow-up was performed by authors JSK, TK, and a dental assistant in a private dental practice in Hamburg (Germany) between May 2015 and June 2016. The study was registered in the German Clinical Trials Register (DRKS00033746; date of registration 29/02/2024). Patients All patients with an implant-supported or combined tooth-implant-supported electroplated double-crown removable dental prosthesis (ED-RDP) (Fig. a–f) were invited to participate in this retrospective study as part of their regular follow-up. The inclusion criteria were:
- prosthesis in place for at least one year
- minimum age of 18 years
- a general medical condition that allowed a clinical dental examination
- informed consent given prior to inclusion
Exclusion criteria were:
- pregnancy
- inability to give informed consent
- any reason contraindicating a clinical dental examination (e.g. severe psychological disorders)
Prosthodontic treatment The patients investigated in this study had experienced extensive tooth loss in the past, and a fixed reconstruction was not an option. Following a joint decision with the patients, it was agreed that an ED-RDP should be placed on implants or on a combination of implants and teeth. Remaining teeth were conservatively pre-treated where necessary. The aim was to have at least six abutments (implants and/or natural teeth) in the maxilla and four in the mandible. Most of the patients had a few remaining but hopeless teeth that had to be extracted (Fig. a). In these cases, six implants were inserted into the upper and/or four into the lower jaw (Fig. b). Other patients had one to four teeth that could be retained, and additional strategically positioned implants were inserted to achieve quadrangular support of the prosthesis. Implant surgery was performed by author JK and two maxillofacial surgeons in 23 patients, and two patients had already received implants alio loco . Bone augmentation and maxillary sinus membrane elevation were performed prior to implant placement whenever necessary. A xenogeneic bone graft material (Bio-Oss, Geistlich Pharma AG, Switzerland) was used in these cases. The majority of patients received conical, self-tapping Camlog or Conelog Screw-line implants (Camlog Biotechnologies GmbH, Basel, Switzerland). One patient received cylindrical Straumann Bone Level implants (Straumann GmbH, Basel, Switzerland). After 4 to 6 months of submerged healing, the implants were uncovered and temporary gingiva formers were placed. Natural abutment teeth were prepared with a 2 mm circumferential reduction and a pronounced chamfer. Temporary restorations were placed. 
After a further healing period of approximately 6 weeks, an impression of the implants (and prepared teeth, if any) was taken with a polyether material (Impregum, 3 M Deutschland GmbH, Neuss, Germany) and an open custom impression tray. The primary crowns were fabricated using the classic method (wax-up technique, cast in a gold alloy, milled to a convergence angle of 0°, Fig. b). After the try-in of these primary crowns, a fixation impression was taken together with implant impression posts, and the inner copings for the implants were fabricated. The secondary crowns were then electroplated directly onto the primary crowns. The prosthetic framework, also called the tertiary framework, was manufactured from a cobalt-chromium alloy. After checking the accuracy of fit of all components, the screw-retained implant abutments were definitively placed and the primary crowns were cemented to all abutments (implants and teeth, Fig. c) with a zinc-phosphate cement (Harvard cement, Harvard Dental International, Hoppegarten, Germany). The electroplated secondary crowns were then cemented intraorally into the tertiary structure with an autopolymerizable luting composite (AGC Cem, Wieland Dental, Pforzheim, Germany) to achieve an optimal “passive” fit (Fig. d, e). A second registration of the maxillomandibular relationship with the incorporated framework was performed, followed by another fixation impression. Patients then received a new provisional prosthesis to cover the primary crowns. The final superstructures were completed in the dental laboratory and placed in the patients' mouths (Fig. f–i). Author JK performed the prosthetic treatment for all patients. Patients were seen two to four times per year for check-up appointments and professional dental cleaning. Follow-up examinations All investigators were calibrated prior to the study. After at least one year of wear, patients were clinically examined by authors JSK, TK, and a dental assistant from the private practice between May 2015 and June 2016. Patients were asked about their medical history and were examined extraorally. The intraoral examination included the following parameters:
- Modified plaque and gingiva index .
- Probing depth at four sites (distal, buccal, mesial, lingual/palatal) with a periodontal probe.
- Recording of biological and technical complications associated with minimal (1), moderate (2), or extensive (3) treatments, such as: (1) treatment of pressure sores, occlusal adjustments, relining, retightening or insertion of a new abutment screw; (2) fillings, root planing/periodontitis therapy, peri-implantitis therapy, primary crown recementation, fracture repair (acrylic parts), acrylic tooth renewal; (3) endodontic treatment, tooth or implant removal, post-and-core revision due to fracture, remake of the framework due to fracture .
Peri-implant health was defined according to Berglundh et al. : absence of erythema, bleeding on probing, swelling, and suppuration, with crestal bone level changes taken into account. Peri-implant mucositis was defined as follows : bleeding on gentle probing, with inflammation limited to the mucosa. Additionally, patients completed a German short form of the Oral Health Impact Profile (OHIP-G14) on OHRQoL. The OHIP-G14 included specific items concerning psychological, physical, and social limitations and discomfort, as well as pain related to the dental prosthesis, which were answered on a 5-point Likert scale ranging from 0 = "never" to 4 = "very often". Higher scores indicate poorer OHRQoL. Subjective masticatory function was assessed using a visual analog scale (VAS). 
It covered different types of food (soft to hard consistency, e.g. bread, meat, carrots), with 0 (far left end of the scale) meaning "cannot chew at all" and 10 (far right end of the scale) meaning "can chew without problems". Statistics Statistical analysis was performed using IBM SPSS (version 29, IBM). Descriptive statistics included means, standard deviations, and frequencies for all study parameters. Depending on the specific question, additional analyses were performed at the patient and abutment (tooth/implant) level. Kaplan–Meier analysis was used to calculate cumulative survival and success rates of abutments and prostheses. The severe complications "tooth or implant loss" and "endodontic treatment" were used to calculate success rates. To identify possible risk factors, the occurrence of "severe complications" was plotted against the parameters "age", "type of treatment", "number of abutments", and "type of antagonist treatment" using the Chi-square test. Furthermore, the dichotomous parameters "gender" and "study jaw" were compared. Patients < 65 years and ≥ 65 years, patients with solely implant-supported prostheses and combined tooth- and implant-supported restorations, restorations on a maximum of 4 abutments and on more than 5 abutments, and patients with fixed or removable restorations in the opposing jaw were compared. These potential variables were plotted against the dichotomous parameter "severe complications" and tested for dependence using the Chi-square test. Additionally, the occurrence of different complications was examined in general and within the same subgroups. After testing for normal distribution using the Shapiro–Wilk test, the mean values of the subgroups were compared using the Mann–Whitney U test. When analyzing the clinical parameters "probing depths", "gingiva index", and "plaque index", only the highest index value per tooth/implant was considered. Mean comparisons were made between the subgroups described above in the same manner. In addition, the results were compared between natural abutments and implants. Means, standard deviations, and medians were calculated for both the OHIP total score and the individual questions. The chewing ability for the different foods was evaluated analogously. As a global parameter, a patient-specific mean value for masticatory function was calculated from the sum of the individual ratings. Subgroup comparisons were performed according to the previously described procedures. In addition, linear relationships were tested between the parameters "OHIP", "masticatory function", "observation time", and "number of complications". Due to the non-normal distribution of the data, the Spearman-Rho correlation test was applied. 
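For illustration, the subgroup testing described above can be sketched in a few lines of Python. This is not the analysis code used in the study (which was run in SPSS); the input file and column names are hypothetical placeholders.

```python
# Sketch of the subgroup analyses: Chi-square test of "severe complications"
# against a dichotomized risk factor, plus a Mann-Whitney U comparison of a
# clinical parameter. "patients.csv" and its columns are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency, fisher_exact, mannwhitneyu

df = pd.read_csv("patients.csv")
df["age_group"] = (df["age"] >= 65).map({True: ">=65", False: "<65"})

# 2x2 table: age group vs. occurrence of a severe complication
table = pd.crosstab(df["age_group"], df["severe_complication"])
chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():          # small expected counts: Fisher's exact test
    _, p = fisher_exact(table)
print(f"Severe complications vs. age group: p = {p:.3f}")

# Mann-Whitney U test, e.g. for the highest probing depth per patient
younger = df.loc[df["age_group"] == "<65", "max_probing_depth"]
older = df.loc[df["age_group"] == ">=65", "max_probing_depth"]
u, p_mw = mannwhitneyu(younger, older, alternative="two-sided")
print(f"Probing depth by age group: U = {u:.1f}, p = {p_mw:.3f}")
```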
prosthesis in place for at least one year minimum age of 18 years a general medical condition that allowed a clinical dental examination informed consent was given prior to inclusion Exclusion criteria were. pregnancy inability to give informed consent any reason contraindicating a clinical dental examination (e.g. severe psychological disorders) The patients who were investigated in this study had experienced extensive tooth loss in the past and a fixed reconstruction was not an option. Following a joint decision with the patients, it was agreed that an ED-RDP should be incorporated on implants or a combination of implants and teeth. Remaining teeth were conservatively pre-treated where necessary. The aim was to have at least six abutments (implants/and or natural) in the maxilla and four in the mandible. Most of the patients had a few remaining but hopeless teeth that had to be extracted (Fig. a). In these cases six implants were inserted into the upper and/or four into the lower jaw (Fig. b). Other patients had one to four teeth that could be retained, and additional strategically positioned implants were inserted to achieve a quadrangular support of the prosthesis. Implant surgery was performed by author JK and two maxillofacial surgeons in 23 patients, and two patients had already received implants alio loco . Bone augmentation and maxillary sinus membrane elevation were performed prior to implant placement whenever necessary. An allogeneic bone graft material (Bio-Oss, Geistlich Pharma AG, Switzerland) was used in these cases. The majority of patients received conical, self-tapping Camlog or Conelog Screw-line implants (Camlog Biotechnologies GmbH, Basel, Switzerland). One patient received cylindrical Straumann Bone Level implants (Straumann GmbH, Basel, Switzerland). After 4 to 6 months of submerged healing, the implants were uncovered and temporary gingiva formers were placed. Natural abutment teeth were prepared with a 2 mm circumferential reduction and a pronounced chamfer. Temporary restorations were placed. After a further healing period of approximately 6 weeks, an impression of the implants (and prepared teeth, if any) was taken with a polyether material (Impregum, 3 M Deutschland GmbH, Neuss, Germany) and an open custom impression tray. The primary crowns were fabricated using the classic method (wax-up technique, cast in a gold alloy, milled to a convergence angle of 0°, Fig. b). After the try-in of these primary crowns, a fixation impression was taken together with implant impression posts, and the inner copings for the implants were fabricated. The secondary crowns were then electroplated directly onto the primary crowns. The prosthetic framework, also called the tertiary framework, was manufactured from a cobalt-chromium alloy. After checking the accuracy of fit of all components, the screw-retained implant abutments were definitively placed and the primary crowns were cemented to all abutments (implants and teeth, Fig. c) with a Zinc-phosphate cement (Harvard cement, Harvard Dental International, Hoppegarten, Germany). The electroplated secondary crowns were then cemented intraorally into the tertiary structure with an autopolymerizable luting composite (AGC Cem, Wieland Dental, Pforzheim, Germany) to achieve an optimal “passive” fit (Fig. d, e). A second registration of the maxillomandibular relationship with the incorporated framework was performed, followed by another fixation impression. 
Patients then received a new provisional prosthesis to cover the primary crowns. The final superstructures were completed in the dental laboratory and placed in the patients' mouths (Fig. f–i). Author JK performed the prosthetic treatment for all patients. Patients were seen two to four times per year for check-up appointments and professional dental cleaning. All investigators were calibrated prior to the study. After at least one year of wearing, patients were clinically examined by authors JSK, TK and a dental assistant from the private practice between May 2015 and June 2016. Patients were asked about their medical history and were examined extraorally. The intraoral examination included the following parameters: modified plaque and gingiva index; probing depth at four sites (distal, buccal, mesial, lingual/palatal) with a periodontal probe; and recording of biological and technical complications associated with minimal (1), moderate (2) or extensive (3) treatments, such as (1) treatment of pressure sores, occlusal adjustments, relining, retightening or insertion of a new abutment screw; (2) fillings, root planing/periodontitis therapy, peri-implantitis therapy, primary crown recementation, fracture repair (acrylic parts), acrylic tooth renewal; (3) endodontic treatment, tooth or implant removal, post-and-core revision due to fracture, remake of the framework due to fracture. Peri-implant health was defined according to Berglundh et al.: no bone loss beyond initial crestal bone level changes, and absence of erythema, bleeding on probing, swelling and suppuration. Peri-implant mucositis was defined as follows: bleeding on gentle probing, with inflammation limited to the mucosa. Additionally, patients completed a German short form of the OHIP (OHIP-G14) on OHRQoL. The OHIP-G14 included specific items concerning psychic, physical and social limitations and discomfort as well as pain related to the dental prosthesis, which were answered on a 5-point Likert scale ranging from 0 = "never" to 4 = "very often". Higher scores indicate poorer OHRQoL. Subjective masticatory function was assessed using a visual analog scale (VAS), as described above.
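As an illustration of the index analysis described in the Statistics subsection, the following minimal Python sketch keeps only the highest index value per abutment and compares two subgroups with the Mann–Whitney U test. All values are hypothetical and the availability of scipy is assumed; this is not the study's actual SPSS analysis.

from scipy.stats import mannwhitneyu

# Hypothetical per-site plaque index readings (0-3), four sites per abutment
abutments_group_a = [[1, 2, 1, 0], [0, 1, 1, 1], [2, 2, 1, 2]]  # e.g. patients >= 65 years
abutments_group_b = [[0, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0]]  # e.g. patients < 65 years

# Only the highest index value per tooth/implant is considered
max_a = [max(sites) for sites in abutments_group_a]
max_b = [max(sites) for sites in abutments_group_b]

u_stat, p_value = mannwhitneyu(max_a, max_b, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")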
Included patients Twenty-five patients (mean age 68.4 ± 9.9 years, 60% female) with 25 ED-RDPs and a mean wearing period of 4.9 ± 3.0 years were examined during the period from May 2015 to April 2016. Seventeen of the prostheses were in the maxilla and eight in the mandible. In total, 139 abutments were used for the electroplated double-crowns. These included 106 implants and 33 natural abutments. Fifteen restorations were solely implant-supported (n(implants) = 81), and ten restorations were tooth-implant-supported (n(implants) = 45, n(teeth) = 33) (Fig. ). Survival and success analysis All 106 implants were included in the analysis. As all patients originated from the same practice and represent all patients who have ever received an ED-RDP at this practice, it can be assumed that no data loss occurred, including instances of unregistered implant loss. With a minimum of 1.3 years and a maximum of 11.8 years, the mean time in place of the implants was 5.2 ± 3.1 years. Altogether, two maxillary implants were lost in two patients after 8 and 9 years due to peri-implantitis, both in the group with solely implant-supported RDPs. This corresponds to a cumulative post-loading implant survival rate of 90% at 10 years according to the Kaplan–Meier analysis (Fig. ). Early failures, i.e. implant losses before loading, did not occur. The survival rate of the natural abutments was 100%. Prosthesis survival was 100%. At follow-up, none of the prostheses had been replaced. A total of six of the 25 RDPs (24%) were associated with at least one severe complication (implant loss or irreversible pulpitis), and the corresponding Kaplan–Meier cumulative success rates were 81% at 5 years and 36% at 8 years (Fig. ). The first observed major complication occurred after 0.6 years. The longest observation period without major complications was 9.6 years. On average, the prostheses had been successfully in use for 4.3 ± 2.8 years, meaning that no major complications had occurred up to that point.
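The survival and success rates above come from Kaplan–Meier (product-limit) estimation. As a hedged illustration of that estimator only — the per-implant event times below are hypothetical, not the study data — a minimal self-contained Python version is:

def kaplan_meier(durations, events):
    # Product-limit estimator: S(t) = prod over event times t_i <= t of (1 - d_i / n_i),
    # where d_i = number of events at t_i and n_i = number still at risk at t_i.
    pairs = sorted(zip(durations, events))
    n = len(pairs)
    survival = 1.0
    curve = [(0.0, 1.0)]
    i = 0
    while i < n:
        t = pairs[i][0]
        ties = [e for tt, e in pairs if tt == t]
        if sum(ties):                              # events (e.g. implant losses) at time t
            survival *= 1.0 - sum(ties) / (n - i)  # n - i = number at risk at time t
            curve.append((t, survival))
        i += len(ties)
    return curve

# Hypothetical data: years in function; event = 1 for loss, 0 = censored at follow-up
print(kaplan_meier([1.3, 3.0, 5.2, 8.0, 9.0, 11.8], [0, 0, 0, 1, 1, 0]))

Censored cases contribute to the at-risk denominator only until they drop out, which is why a few late events can still produce large downward steps in the curve — the same effect behind a success rate of 36% at 8 years being driven by only six severe complications.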
Technical and biological complications Table provides an overview of all recorded technical and biological complications that occurred after prosthesis placement. The most common technical complications were decementation of the primary crowns (n = 13) and wear of the prosthetic teeth (n = 11), followed by nine cases of necessary relining. Gingivitis or mucositis was observed in 14 cases. Two implants were lost after eight and nine years; they had previously shown signs of peri-implantitis. These cases, along with four cases requiring endodontic treatment (in one case followed by post-and-core treatment), were considered severe complications and were used to calculate the success rate. With the help of the Chi-square test, the number of abutments could be identified as a risk factor for the occurrence of severe complications. While only two out of 19 (10.5%) subjects with five or more abutments experienced severe complications, four out of six (66.7%) subjects with a maximum of four abutments experienced severe complications (p = 0.005). An analysis of all complications also showed an increased incidence in patients with a maximum of four abutments. While subjects with five or more abutments had an average of 2.4 different complications, the other group had an average of 4.3. However, this difference was not statistically significant. No association was found between the other potential risk factors "age", "location", "gender", "type of opposing dentition" and "solely implant-supported vs. combined tooth-implant-supported" and the occurrence of severe complications. Probing depths, gingiva and plaque index The mean probing depths were 3.0 ± 0.9 mm for the natural abutments and 3.7 ± 1.3 mm for the implants. The mean values of the gingival and plaque indices were 1.3 ± 0.8 and 1.1 ± 0.8 for natural teeth and 1.2 ± 0.7 and 1.1 ± 1.0 for implants, respectively. While the values for the gingival and plaque indices were comparable for teeth and implants, the implants showed a statistically significantly higher probing depth than the natural teeth (p = 0.008). The subgroup comparison of these clinical parameters revealed a significantly higher plaque index in the group of patients ≥ 65 years of age (p = 0.049) and on prostheses with a maximum of four abutments (p = 0.001). An increased probing depth was observed in male patients (p = 0.018) and in the upper jaw (p = 0.018). Of the six patients with prostheses on a maximum of four abutments, five were in the "≥ 65 years" group. However, a statistical relationship between the parameters "age" and "number of abutments" could not be established. Oral-health related quality of life As part of the follow-up for this study, the patients' OHRQoL was assessed using the OHIP-G14. One of the patients did not complete the OHIP questionnaire. The mean OHIP total score of the patients was 3.2 ± 4.2. The sum scores ranged from 0 to 13 with a median of 1. Eleven of the 24 patients (45.8%) did not report any limitations, while another eight patients reported a maximum OHIP sum score of 4, indicating only a slight reduction in quality of life. Slight limitations were observed in five of the 24 respondents. Two patients reported frequent to very frequent difficulties with word articulation. In addition, two patients reported that their life had become generally less satisfying in recent times in connection with their prostheses. Another patient reported very frequent pain in the oral cavity, but otherwise did not report any reduction in quality of life.
In a comparison of the subgroups, there were more reports of impairments in the group with combined tooth-implant-supported ED-RDPs. The mean OHIP score for this group was 5.2 ± 5.0, whereas the mean score for the "solely implant" group was 1.7 ± 2.9. This difference was statistically significant (p = 0.039). The comparison of the other subgroups according to the parameters "age", "gender", "study jaw", "number of abutments" and "opposing jaw" showed no significant differences in the OHIP total score. Furthermore, no correlation was identified between the observation period and the OHIP sum score. Subjective masticatory function In terms of subjective masticatory function, the average score was 9.4 ± 0.8 out of a possible 10. Figure gives an overview of the corresponding box plot analysis. Only four of the respondents scored below 9, with one outlier scoring an average of 6.5. In the individual categories, chewing function was rated worst for carrots (9.1 ± 1.9) and meat (9.1 ± 1.6). With ratings of 2.3 and 2.2, the "outlier" here indicated considerable difficulty in chewing carrots and meat. The subgroup comparison according to the above-mentioned parameters showed no statistically significant differences in masticatory function. The correlation analysis showed statistically significant negative correlations between subjective masticatory function and the OHIP total score (p = 0.043), as well as between the number of complications and general masticatory function (p = 0.017). There was no negative correlation between "prosthesis age/time in function" and masticatory function.
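A sketch of how the OHIP-G14 sum score and its Spearman correlation with the VAS chewing score could be computed (hypothetical responses; scipy assumed available; item coding follows the 0-4 Likert scale described in the Methods):

from scipy.stats import spearmanr

# Hypothetical OHIP-G14 answers: 14 items per patient, each 0 ("never") to 4 ("very often")
ohip_items = [
    [0] * 14,                                    # no limitations
    [1, 0, 0, 2, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],  # slight limitations
    [2, 1, 1, 3, 2, 1, 0, 1, 1, 0, 1, 0, 0, 0],
]
ohip_sum = [sum(items) for items in ohip_items]  # range 0-56; higher = poorer OHRQoL

# Hypothetical patient-specific mean VAS chewing scores (0-10)
vas_mean = [9.8, 9.4, 6.5]

rho, p = spearmanr(ohip_sum, vas_mean)
print(f"Spearman's rho = {rho:.2f}, p = {p:.3f}")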
Solely implant-supported or combined tooth-implant-supported ED-RDPs showed satisfactory clinical results, and all of the prostheses were still functioning at the time of the follow-up. Patients attended regular follow-up visits and professional teeth cleaning appointments. In particular, with regard to the potential correlation between RDPs and peri-implantitis, these systematic follow-up appointments seem to be of high importance. When severe complications were included in the calculation of a Kaplan–Meier success rate for the prostheses, the results were 81% at five years, which is comparable to the results of a recent study by Klotz et al., and 36% at eight years. The authors of the mentioned study, however, employed broader criteria to calculate their success rate. In contrast, the rather low cumulative success rates observed in our study were based on only six severe complications: two implant losses and four necessary endodontic treatments (one abutment fracture and three cases of irreversible pulpitis). Despite these complications, the prostheses remained functional, and patients experienced only minor limitations. These complications should be seen in the broader context of prosthetic rehabilitation, as higher rates of tooth loss and caries have been observed with, e.g., conventional telescopic-crown prostheses and a combination of natural abutments and implants. RDPs with electroplated double-crowns are complex and costly to fabricate. Nevertheless, they may offer significant advantages (inclusion of compromised teeth, passive fit, high retention) depending on individual patient situations and preferences. In contrast, overdentures with simpler attachments, such as stud or ball attachments, represent a relatively straightforward and cost-effective solution.
However, they often necessitate regular maintenance due to the rapid wear of their components, which leads to a decrease in the prosthodontic success rate. Moreover, the efficacy of such solitary attachments depends on specific conditions: for instance, vertical height loss should be only minimal, and the implants should be as parallel as possible. An additional, less expensive alternative exists in the form of implant-supported bar-retained overdentures. Recent clinical data with four to six implants show high implant survival rates of up to 100% after ten years; however, 19 overdentures had to be replaced due to severe wear of the teeth and denture base. The cumulative implant survival of 90% and the prosthetic survival rate of 100% observed in this dental practice study are comparable to the results found in studies conducted in university settings. The authors of these university studies report survival rates of 93.3% and 96.2% for solely implant-supported restorations, and 97.7% and 100% for combined tooth-implant-supported restorations, after 5 and 8 years, respectively. This suggests that the outcomes of this complex treatment modality are consistent across different types of clinical environments. There were, however, some biological and technical complications (Table ), but these were usually resolved with little or moderate effort, so the overall maintenance effort can be considered acceptable. Decementation was the most common technical complication for the primary crowns, all of which were cemented with zinc phosphate cement. Today, the use of zinc phosphate cement is viewed rather critically, as its disadvantages, such as poor mechanical properties, outweigh its advantages. A retrospective study found that 75% of the 577 patient cases examined after 15 years showed decementation of primary crowns cemented with either zinc phosphate cement or glass ionomer cement. The results of in vitro studies examining the retention of various cements on implant abutments are, however, inconsistent. Zinc phosphate cement has demonstrated high retention values in some of these studies, indicating that retention is not solely dependent on the cement itself but is also influenced by additional factors, including the abutment design, crown material, surface treatment, and the specific study design. Our analyses indicated that four or fewer abutments led to a higher total number of complications and a higher number of severe complications in comparison to the patient group with five or more abutments. As already described for conventional combined or tooth-implant-supported double-crown prostheses, an increased incidence of severe complications could be observed for prostheses on up to four abutments. However, this association has not yet been demonstrated for prostheses retained by electroplated double-crowns, and the aforementioned meta-analysis was also unable to identify such a correlation. Other risk factors could not be identified in our study. Peri-implantitis was diagnosed in two cases, and the affected implants had to be removed after eight and nine years. Considering the prevalence of peri-implantitis at the patient level reported in the current literature (nearly 20%), the number of cases recorded in this study was surprisingly low. The mean implant probing depth was 3.7 ± 1.3 mm; however, this value alone cannot provide any information about peri-implant health.
The current consensus and accepted practice is that it is not possible to assess peri-implant health based solely on a range of probing depths. Although mucositis could be detected in several cases, the clinical findings did not provide a justifiable indication for radiography, and radiography without a suspected diagnosis was not part of the study protocol. Furthermore, it should be noted that, due to the retrospective nature of the study, no baseline data were available for comparison. The mean probing depth of 3.0 ± 0.9 mm for the natural abutments can be considered to be associated with healthy periodontal conditions. There was no tooth loss associated with the ED-RDPs, but endodontic problems requiring root canal treatment occurred in three cases. In their systematic review, Moldovan et al. found very different rates of tooth loss (5.5–51.7%) in double-crown restorations. The rates of necessary root canal treatment and tooth fractures were also highly variable, ranging from 0.6 to 13.9% and 0.4 to 4.4%, respectively. Patients generally reported few problems with their prostheses and indicated that the prostheses provided very good retention, even after prolonged wear. In clinical studies, different groups were able to demonstrate the positive influence of double-crown prostheses on OHRQoL. In this study, nearly 50% of patients reported no limitations at all in the OHIP questionnaire. The generally low OHIP scores correlate with high levels of satisfaction. In this context, no differences were observed regarding the duration of prosthesis use. It is, however, noteworthy that patients who received solely implant-supported restorations exhibited greater satisfaction than those who received tooth-implant-supported restorations. Again, due to the retrospective nature of the study, patients were not interviewed before receiving the prosthesis, so it cannot be determined whether the high OHRQoL can actually be attributed to the treatment with an ED-RDP. The majority of patients also rated their chewing function as very satisfactory. Even the ability to eat hard foods such as carrots and apples or relatively hard-to-chew meat was rated an average of 9.1 on the VAS. With regard to masticatory function, many recent studies have compared conventional full dentures with implant-supported dentures in the edentulous mandible and found that both subjective and objective masticatory function improved significantly after implant placement. Of course, these results can only be extrapolated to our data to a limited extent, because we do not know if or how the number of abutments and the opposing dentition play a role in this context. However, a recent meta-analysis showed that chewing performance correlates with the number of natural teeth and functional tooth pairs, among other factors. In our analysis, we found that worse masticatory function correlated with a higher OHIP sum score. However, further analysis did not show any effect of time in function or prosthesis age on subjective masticatory function. Limitations There are some limitations to this study that should be addressed at this point. A retrospective clinical study with a small sample size has clear methodological and practical limitations. Its scientific validity is restricted, especially in terms of causal relationships or ensuring generalizability.
Furthermore, the collection of retrospective data is often susceptible to bias, potentially incomplete records, and temporal variability, which can further compromise the reliability of the findings. Consequently, interpretation should be approached with caution. The small sample size of only 25 patients in our study reduces statistical power, increases the risk of random findings, and makes it challenging to detect significant differences or effects. A sample size calculation was not performed. Individual results and statistical outliers lead to a larger scatter of the results due to their statistical overrepresentation. The duration of prosthesis wear differed considerably, so that patients with only one year of function were also included in the analysis. An analysis of possible peri-implant bone loss was not possible due to a lack of standardized baseline radiographs. A potential loss of retention of the prosthesis was only asked about and was not measured in an objective way. To enhance the strength of the evidence, larger prospective studies with robust sample sizes and controlled data collection are essential. In this retrospective study, ED-RDPs, either combined tooth-implant-supported or solely implant-supported, showed satisfactory results. Only two implants were lost, both in the "solely implant" group, whereas no implants failed in the "tooth-implant" group. No natural abutments were lost. In general, the biological and technical complications that did arise in both groups were minimal to moderate in severity and were easily managed. Survival and success rates of implants and ED-RDPs were comparable to those reported in the literature from university-based studies. Subjective masticatory function was rated high, and patients reported a very high OHRQoL. With regard to the aforementioned point, patients in the "solely implant" group exhibited a greater degree of satisfaction. Thus, despite the demanding fabrication process, RDPs retained with electroplated double-crowns can represent a reliable and patient-specific solution.
Novel CD123 polyaptamer hydrogel edited by Cas9/sgRNA for AML-targeted therapy
c5fd0882-99f3-4df9-bdcc-5089c8375d4c
8205012
Pharmacology[mh]
Acute myeloid leukemia (AML) is one of the most common types of leukemia worldwide, derived from myeloid progenitor cells, with an average 5-year survival of ∼28% (Ehx et al., ; Liu, ; Xuan & Liu, ). At present, the major treatments for AML include induction chemotherapy and hematopoietic stem cell transplantation (HSCT) (Carter et al., ; Appelbaum, ). However, despite the constant emergence of new treatment options, relapse remains common and is one of the most challenging aspects of contemporary AML care. Between one-third and one-half of patients relapse after a transient remission, with a median survival of less than 6 months and an estimated 5-year disease-free survival (DFS) of 20–40% (Pasquer et al., ). The major cause of AML relapse is residual AML cells. Current chemotherapeutic agents are unable to distinguish tumors from normal tissues (Wu et al., ) and therefore damage normal cells, including CD123-positive normal cells, as well. These features may generate serious side effects that limit drug intensity and the duration of chemotherapy and reduce therapeutic efficacy, resulting in treatment failure and relapse (Liu, ; Rowe, ). Targeted therapy, one of the most effective approaches to cancer treatment, is a promising method for AML. Targeted therapy can deliver drugs to, and selectively inhibit, AML cells while sparing their normal counterparts (Stuani et al., ). The alpha chain of the interleukin-3 receptor (IL3R-α), designated CD123, has been validated as a mainstream target for AML. CD123 is highly expressed in several hematologic neoplasms, including AML, acute lymphoblastic leukemia (ALL), blastic plasmacytoid dendritic cell neoplasm (BPDCN), hairy cell leukemia and certain lymphomas, but is expressed at a low level or absent on normal hematopoietic stem cells. Furthermore, CD123 is also highly expressed on leukemic stem cells (LSCs), the origin of AML cells. Notably, patients whose blasts express high levels of CD123 show poorer outcomes and higher relapse rates. In addition, CD123 is a significant factor controlling the proliferation, growth and differentiation of AML cells via the activation of signaling pathways such as JAK/STAT (Shi et al., ; Bulaeva et al., ; Lane, ; Sugita & Guzman, ). At present, CD123-targeting therapies for AML are in advanced preclinical and clinical development and exhibit robust anti-leukemic activity, including antibody–drug conjugates (ADC), bispecific T-cell engagers (BiTE) and chimeric antigen receptor T-cell immunotherapy (CAR-T) (Gill, ; Slade and Uy, ). However, several disadvantages limit the clinical application of these therapeutics: (1) potentially fatal side effects: because CD123 is also expressed at a low level on some epithelial cells and monocytes, these therapies may cause side effects such as cytokine storm, capillary leak syndrome, hepatic transaminase elevation, hypoalbuminemia and myelosuppression (Cartellieri et al., ; Sun et al., ); (2) immunogenicity: antibodies may provoke the immune system of some patients and generate adverse effects (Togami et al., ); (3) complex design and high cost: being sensitive to temperature, pH and repeated freeze–thaw cycles, antibodies readily lose their function, and these drugs are laborious to prepare, have limited drug-loading capacity and are costly. Therefore, a novel CD123-based targeted therapy for AML is urgently needed. Aptamers, another class of targeting molecules, have shown powerful clinical potential for AML targeted therapy (Zhu & Chen, ).
Aptamers are single-stranded oligonucleotides (DNA or RNA) that fold into specific three-dimensional (3D) structures to recognize their targets with high specificity and affinity (Wan et al., ). Compared with antibodies, aptamers have obvious advantages: high specificity and affinity for their targets, easy synthesis, limited pharmaceutical cost, multiple modification strategies, lack of immunogenicity, easy penetration into tumor tissues, and easy storage and transportation (Sun & Zu, ). Recent works by Dianping Tang et al. have fully demonstrated the merits of aptamers in bio-applications (Qiu et al., ; Zeng et al., ; Lv et al., ; Lu et al., ). To date, many aptamers have been generated and have shown great preclinical potential. In our previous study, we generated the first CD123 thioaptamer, termed SS30, which binds CD123 with high specificity and affinity (Hu et al., ; Zhao et al., ). SS30 could impair the function of the CD123 molecule by competing with IL-3 for binding to CD123, and could down-regulate the expression of p-AKT and p-STAT5, resulting in an inhibitory effect on CD123+ AML tumor cells in vitro and in vivo. Our data fully validated SS30 as a novel anti-tumor agent with therapeutic potential for AML. However, owing to its small size, SS30 does not show strong retention in the body. To increase the retention of SS30 in the body, a DNA hydrogel, an important class of DNA materials, was chosen. DNA hydrogels retain the biological function of DNA while fusing the structure and function of hydrogel materials, presenting good biocompatibility, adjustable biodegradability and controllable mechanical properties (Li et al., ; Zhang et al., ). Moreover, to cleave the DNA hydrogel precisely and release intact aptamer for its inhibitory function, CRISPR-associated protein 9 (Cas9) was utilized. Cas9 has been widely used in the context of gene editing, such as in therapeutics and agricultural products (Lee et al., ). Jaiwoo Lee et al. successfully designed a rolling circle amplification (RCA)-based DNA hydrogel that can release PD-1 aptamers via Cas9/sgRNA-mediated specific editing (Lee et al., ). This hydrogel not only exhibits prolonged retention at tumor sites in vivo but also modulates the activity of immune cells in the tumor microenvironment. Here, in this study, we first generated a DNA hydrogel composed of SS30 using the RCA method, termed SS30 polyaptamer hydrogel (SSFH), and Cas9/sgRNA was used to release SS30 from SSFH in a sustained manner at the site of administration. It was demonstrated that this DNA hydrogel could prolong the circulation time and retention of SS30 and enhance the SS30 concentration in tumor tissues. In addition, this DNA hydrogel had a marked inhibitory effect on CD123+ AML tumor cells in vitro and in vivo, and it prolonged the survival of model animals. Most importantly, once the induced cytokines increased, injection of the complementary sequence of the aptamer could relieve this effect immediately. Overall, the present in vitro and in vivo data as well as mechanistic studies fully validate that a DNA hydrogel made of SS30 is a novel anti-tumor agent with therapeutic potential for AML. Reagents The RCA template and primers were synthesized by Sangon Biotech (Shanghai, China). The T4 DNA ligase kit was purchased from Sangon Biotech (Shanghai, China). phi29 DNA polymerase (100 units/ml) was purchased from Thermo Scientific (Waltham, MA).
RPMI-1640 medium was obtained from Hyclone (Thermo Scientific, Waltham, MA). Fetal bovine serum was purchased from Gibco (Invitrogen, Carlsbad, CA). The GeneArt precision gRNA synthesis kit was obtained from Thermo Scientific (Waltham, MA). Cas9 protein was purchased from PNA BIO INC (Newbury Park, CA). RNase (0.25 mg/ml) was obtained from Qiagen (Valencia, CA). The CCK8 assay kit (ab228554) was purchased from Abcam (Cambridge, UK). The CellTiter 96® AQueous One Solution Cell Proliferation Assay (MTS) was obtained from Promega (Madison, WI). The BrdU Cell Proliferation ELISA Kit (colorimetric) (ab126556) was purchased from Abcam (Cambridge, UK). Cell lines and culture The human B-cell precursor leukemia cell line RCH-ACV was obtained from the Cell Culture Center of Peking Union Medical College (Beijing, China). The human acute myelocytic leukemia cell line Molm-13 was purchased from ATCC (Manassas, VA). Cells were cultured in RPMI-1640 medium supplemented with 10% fetal bovine serum (FBS, Gibco, Carlsbad, CA) and a mixture of penicillin/streptomycin. Additionally, 5 ng/ml IL-3 was added to the cell culture medium. Cells were cultured at 37 °C in a humidified atmosphere with 5% CO₂. All experiments were performed on cells in the exponential growth phase. Construction of SSFH To generate the SS30-formed hydrogel (SSFH), rolling circle amplification (RCA) templates were first designed that could yield the CD123 aptamer SS30 upon Cas9/sgRNA-specific cleavage. Each template contained the SS30 aptamer sequence and an sgRNA target sequence. The sequences were as follows: template 1, 5′-CCGCCCAAATCCCTAAGAGGCAGGGAGTTCGCTAGTAGCTACGGGACCAGACACAGTACACACGCA CCCTGAAGTTCATCTGCACCACC-3′; template 2, 5′-AACTTCA CCGCCCAAATCCCTAAGAGGCAGGGAGTTCGCTAGTAGCTACGGGACCAGACACAGTACACACGCA GGTGGTGCAGATG-3′, where the template for SS30 is underlined and that for the sgRNA target sequence is bold. Before the RCA process, the pre-circular RCA template had to be generated. 0.5 mM of each DNA template (Sangon Biotech, Beijing, China) was mixed with primer in hybridization buffer (1 mM EDTA, 10 mM Tris-HCl, 100 mM NaCl, pH 8.0). The primer for template 1 was 5′-GTCGCTCGGTGGTGCAGATGAA-3′; the primer for template 2 was 5′-TCCCCTGAAGTTCATCTGCACC-3′. After hybridization, the mixture was reacted with T4 DNA ligase (Sangon Biotech, Beijing, China) in T4 DNA ligase buffer (500 mM Tris-HCl; 100 mM MgCl₂; 50 mM DTT; 10 mM ATP; pH 7.6 at 25 °C) overnight at 16 °C to close the nick. To inactivate the T4 DNA ligase, the product was heated at 70 °C for 10 min. The SSFH was synthesized as follows: the pre-circular RCA template was mixed with 50 mM Tris-HCl pH 7.5, 10 mM MgCl₂, 10 mM (NH₄)₂SO₄, 4 mM DTT, and phi29 DNA polymerase (100 units/ml; Thermo Scientific, Waltham, MA) for 48 h at 42 °C, and the phi29 DNA polymerase was inactivated for 10 min at 65 °C. All products were centrifuged at 12,000 rpm for 15 min, and the precipitate was re-suspended in triple-distilled water (TDW). Templates 1 and 2 were mixed at a 1:1 weight ratio for further study. The precise cleavage of SSFH by Cas9/sgRNA sgRNA was synthesized using a GeneArt precision gRNA synthesis kit (Thermo Scientific). The sgRNA sequence was: 5′-GGUGGUGCAGAUGAACUUCA GUUUUAGAGCUAGAAAUAGCAAGUUAAAAUAAGGCUAGUCCGUUAUCAACUUGAAAAAGUGGCACCGAGUCGGUGCUUUU-3′ (the target sequence is underlined). Then, to evaluate whether the Cas9/sgRNA complex was assembled, Cas9 protein was first fixed on COOH-modified magnetic beads via an NHS/EDC reaction.
In brief, 6 × 10⁵ carboxylated magnetic beads were washed twice with 200 μl MES (100 mM, pH 5.0) at room temperature. Then, the beads were activated with 100 μl 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide hydrochloride (EDC) (20 mg/ml) and 100 μl N-hydroxysuccinimide (NHS) (20 mg/ml) for 15 min with gentle stirring. Next, the beads were washed with linking buffer (5.3 ml 0.2 M sodium dihydrogen phosphate and 94.7 ml 0.2 M sodium hydrogen phosphate). About 5 μg Cas9 protein (5 mg/ml, PNA BIO INC, Newbury Park, CA) was added to the beads and incubated at room temperature for 2 h. Finally, the beads were washed three times with PBS buffer and incubated with FAM-sgRNA (sgRNA chemically modified with FAM). Flow cytometry was applied to assess the fluorescent signals. Beads coated with bovine serum albumin (BSA) and blank beads were used as negative controls. Further, to evaluate the formation of SSFH/Cas9/sgRNA, Cas9 was fixed on beads and sgRNA was added to form Cas9/sgRNA. Then, the Cas9/sgRNA beads were incubated with FAM-labeled SSFH and analyzed by flow cytometry. Blank beads were used as a negative control. The zeta potential of Cas9/sgRNA was evaluated with a particle size analyzer. To induce Cas9-mediated cleavage, first, to evaluate the diameter change after Cas9 cleavage, SSFH (10 mM) was mixed with 6 × 10⁵ Cas9/sgRNA-coated beads. After incubation, the beads were removed under a magnetic field and the DNA mixture was assessed with a particle size analyzer. Then, to further observe the cleavage effect, Cas9/sgRNA complexes were assembled: Cas9 protein (5 mg/ml, PNA BIO INC, Newbury Park, CA) was mixed with sgRNA to form Cas9/sgRNA complexes. Then, different ratios of the SSFH/Cas9/sgRNA complex were mixed and incubated at 37 °C for a range of times. The reaction mixtures were treated with RNase (0.25 mg/ml; Qiagen, Valencia, CA) to eliminate the sgRNA. 1% agarose gel electrophoresis was applied to assess the cleavage of the mixtures. The band density was analyzed using GeneTools software from Syngene (Frederick, MD). Assessment of swelling rate To calculate the swelling ratio, SSFH was first freeze-dried. The freeze-dried SSFH powder was mixed with triple-distilled water with or without Cas9/sgRNA. The weights were measured at various time points, ranging from 1 h to 10 h. The swelling rate was calculated by the following equation: swelling rate = [(W1 − W2)/W1] × 100%, where W1 represents the weight of the hydrated gel and W2 is the weight of the freeze-dried SSFH powder (see the code sketch below). Binding ability evaluation The RCA template was subjected to PCR amplification. To evaluate binding ability, primers were labeled with FAM. By flow cytometry: 1 × 10⁵ cells (Molm-13 and RCH-ACV) were incubated with 50 nM SS30, SSFH or cleavage mixtures at 37 °C for 30 min. Cells were washed with PBS and analyzed by flow cytometry. The mean fluorescence intensities (MFI) of FAM were analyzed. By confocal microscopy: 1 × 10⁵ cells (Molm-13 and RCH-ACV) were incubated with 5 μM SS30, SSFH or cleavage mixtures at 37 °C for 30 min. Cells were washed with PBS and analyzed by confocal microscopy. Competing assay 1 × 10⁵ cells (Molm-13) were incubated in a 96-well plate and washed twice with PBS. 50 nM FAM-labeled CD123 antibody and increasing concentrations of free SS30 or SSFH/Cas9 were incubated with the cells at 37 °C for 30 min. Cells were centrifuged, and the supernatant was collected and analyzed by fluorometer. The mean fluorescence intensities (MFI) of FAM were analyzed.
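The swelling-rate equation defined in the subsection above translates directly into code. A minimal sketch (the weights in mg are hypothetical, not measured values from this study):

def swelling_rate_percent(w_hydrated, w_dry):
    # swelling rate = [(W1 - W2) / W1] x 100%, with W1 = hydrated gel weight
    # and W2 = freeze-dried SSFH powder weight, as defined in the Methods
    return (w_hydrated - w_dry) / w_hydrated * 100.0

# Hypothetical time course for a 10 mg freeze-dried sample (time in h, hydrated weight in mg)
for t, w1 in [(1, 42.0), (4, 66.0), (10, 71.5)]:
    print(f"{t} h: {swelling_rate_percent(w1, 10.0):.1f}%")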
Evaluation of anti-cancer ability in vitro Cell viability evaluation by CCK8 assay: 1 × 10⁵ cells (Molm-13) were collected and seeded in a 96-well plate. Cells were washed twice with PBS to remove FBS. Cells were treated with PBS, SS30 (20 mM), SSFH (20 mM), SSFH/Cas9/sgRNA (20 mM), or random library (5′-TGCGTGTGTAGTGTGTCTG-(N28)-CTCTTAGGGATTTGGGCGG-3′, 20 mM) at 37 °C for 6 h. Cells were then washed with PBS and cultured for a further 48 h. The supernatant was collected and cell proliferation was detected with the CCK8 kit according to the manufacturer's standard protocol. The statistical difference was compared to the PBS group (* indicates p < .05; ** indicates p < .01). Cell viability evaluation by MTS assay: 1 × 10⁵ cells (Molm-13) were collected and seeded in a 96-well plate. Cells were washed twice with PBS to remove FBS. Cells were treated with PBS, SS30 (20 mM), SSFH (20 mM), SSFH/Cas9/sgRNA (20 mM), or random library (20 mM) at 37 °C for 6 h. Cells were then washed with PBS and cultured for a further 48 h. MTS buffer was added to each well and incubated for a further 2–4 h. The absorbance was read at 490 nm. The statistical difference was compared to the PBS group (* indicates p < .05; ** indicates p < .01). Cell proliferation evaluation by BrdU assay: 1 × 10⁵ cells (Molm-13) were collected and seeded in a 96-well plate. Cells were washed twice with PBS to remove FBS. Cells were treated with PBS, SS30 (20 mM), SSFH (20 mM), SSFH/Cas9/sgRNA (20 mM), or random library (20 mM) at 37 °C for 6 h. Cells were then washed with PBS and cultured for a further 48 h. 100 μl of supernatant was added to the reaction plate at 37 °C for 120 min. Then, the plate was washed with washing buffer, and 100 μl of the first antibody buffer was added and incubated at 37 °C for 60 min. Next, the plate was washed with washing buffer and 100 μl of substrate working buffer was added at 37 °C for 15 min. 100 μl of stop solution was mixed in, and the absorbance was read at 450 nm. The statistical difference was compared to the PBS group (* indicates p < .05; ** indicates p < .01). Apoptosis and cell death analysis of AML cell lines 1 × 10⁶ cells (Molm-13) were collected and seeded in a 6-well plate. Cells were washed twice with PBS to remove FBS. Cells were treated with PBS, SS30 (20 mM), SSFH (20 mM), SSFH/Cas9/sgRNA (20 mM), or random library (20 mM) at 37 °C for 6 h. Cells were then washed with PBS and cultured for a further 48 h. Cells were incubated with TUNEL buffer and assessed according to the manufacturer's standard protocol. Assessment of retention ability of SSFH in vivo The protocol of the animal study in this paper was reviewed and approved by the Ethics Committee of Xi'an Jiaotong University Affiliated Children's Hospital (Xi'an Children's Hospital, Xi'an, China), no. C2018004. Eight-week-old female BALB/c mice were purchased from the Xi'an Jiaotong University Lab Animal Center (Xi'an) and raised under pathogen-free conditions. 2 × 10⁷ in vitro-propagated Molm-13 cells were injected into both flanks of BALB/c mice. FAM-labeled SS30 (5 μM), SSFH (5 μM), and SSFH/Cas9/sgRNA (5 μM) were injected into the hind legs of the BALB/c mice. The retention of each sample at the injection site was observed by monitoring the FAM signal using an IVIS® Spectrum CT (PerkinElmer, Waltham, MA).
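For the viability assays above, the readout is an absorbance value per well that is normalized to the PBS control and compared statistically against it. A hedged sketch (A490 replicate values are hypothetical; scipy assumed available; Welch's t-test is used here as one reasonable choice for the pairwise comparison, not necessarily the authors' exact procedure):

from statistics import mean
from scipy.stats import ttest_ind

# Hypothetical A490 readings, three replicate wells per treatment group
a490 = {
    "PBS":             [1.02, 0.98, 1.05],
    "SS30":            [0.74, 0.78, 0.71],
    "SSFH/Cas9/sgRNA": [0.58, 0.61, 0.55],
    "random library":  [1.00, 0.97, 1.04],
}

for group, values in a490.items():
    viability = mean(values) / mean(a490["PBS"]) * 100       # % of PBS control
    t, p = ttest_ind(values, a490["PBS"], equal_var=False)   # Welch's t-test vs PBS
    print(f"{group}: {viability:.0f}% viability, p = {p:.3f}")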
Evaluation of anti-cancer effects of SSFH/Cas9 in vivo To assess whether SSFH/Cas9 could inhibit AML proliferation and prolong retention in vivo, 2 × 10⁷ in vitro-propagated Molm-13 cells were injected subcutaneously into the flank of BALB/c mice to generate the mouse xenograft tumor model. Fourteen days later, treatments were initiated. The mice were divided into four groups, with six in each group: ① treated with saline once a day; ② treated with SS30 every 3 days (2 mM/kg); ③ treated with SSFH every 3 days (2 mM/kg); ④ treated with SSFH/Cas9/sgRNA once every 3 days (2 mM/kg). The agents were administered through tail vein injection. The tumor volume was calculated using the following equation: tumor volume = tumor length × tumor width²/2. Body weights and survival rates were assessed and calculated. Subsequently, when the tumor volume exceeded 2000 mm³, or when mice lost over 15% of their pretreatment body weight, the mice were euthanized by cervical dislocation. Tumor tissues were collected and subjected to JAK2/STAT5 staining to evaluate the anti-cancer effects of SSFH. Cytokine storm rescue ability The mouse xenograft tumor models were divided into three groups and injected with saline, excess SSFH/Cas9 or CD123 antibody to induce cytokine storm, respectively. Seven days later, blood samples were collected and inflammatory cytokines (TNF-α, IL-1, IL-6, IL-12, IFN-α, IFN-β, IFN-γ, and IL-8) were measured. When the SSFH/Cas9/sgRNA group and the CD123 antibody group exhibited an obvious difference compared with the saline group, agent injections were ended. Then, mice in the saline group and the CD123 antibody group were injected with saline, and mice in the SSFH/Cas9/sgRNA group were injected with the complementary sequence of SS30 (5′-CCGCCCAAATCCCTAAGAGGCAGGGAGTTCGCTAGTAGCTACGGGACCAGACACAGTACACACGCA-3′). Three days later, blood samples of the mice were collected and inflammatory cytokines were measured. Statistical analysis All statistical analyses were performed using SPSS 11.0 software (SPSS, Chicago, IL) provided by Xi'an Jiaotong University. All numerical data are expressed as the mean ± standard deviation. Differences between groups were examined with Student's two-tailed t-test or one-way ANOVA; ANOVA was followed by Tukey's post hoc test. Independent-sample t-tests were used to analyze variance across the experimental design. Kaplan–Meier analysis was used to analyze overall survival. p-values of < .05 were considered statistically significant (* indicates p < .05, ** indicates p < .01).
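The tumor-volume equation and the humane endpoints stated above can be expressed as a small sketch (the caliper and weight readings are hypothetical):

def tumor_volume_mm3(length_mm, width_mm):
    # tumor volume = tumor length x tumor width^2 / 2, as in the equation above
    return length_mm * width_mm ** 2 / 2

def reached_endpoint(volume_mm3, weight_g, pretreatment_weight_g):
    # euthanasia criteria stated above: volume > 2000 mm^3 or > 15% body-weight loss
    return volume_mm3 > 2000 or weight_g < 0.85 * pretreatment_weight_g

print(tumor_volume_mm3(18.0, 12.0))          # hypothetical calipers -> 1296.0 mm^3
print(reached_endpoint(1296.0, 18.0, 22.0))  # True: more than 15% weight loss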
Cell viability evaluation by MTS assay: 1 × 10^5 Molm-13 cells were seeded in a 96-well plate and washed twice with PBS to remove FBS. Cells were treated with PBS, SS30 (20 mM), SSFH (20 mM), SSFH/Cas9/sgRNA (20 mM), or the random library (20 mM) at 37 °C for 6 h, washed with PBS, and cultured for a further 48 h. MTS reagent was then added to each well and incubated for 2–4 h, and absorbance was read at 490 nm. Statistical differences were assessed against the PBS group (* indicates p < .05; ** indicates p < .01).
Cell proliferation evaluation by BrdU assay: 1 × 10^5 Molm-13 cells were seeded in a 96-well plate, washed twice with PBS to remove FBS, treated with PBS, SS30 (20 mM), SSFH (20 mM), SSFH/Cas9/sgRNA (20 mM), or the random library (20 mM) at 37 °C for 6 h, washed with PBS, and cultured for a further 48 h. About 100 μl of supernatant was transferred to the reaction plate and incubated at 37 °C for 120 min. The plate was washed with washing buffer, 100 μl of primary antibody solution was added, and the plate was incubated at 37 °C for 60 min. After further washing, 100 μl of substrate working solution was added for 15 min at 37 °C, 100 μl of stop solution was mixed in, and absorbance was read at 450 nm. Statistical differences were assessed against the PBS group (* indicates p < .05; ** indicates p < .01).
For the TUNEL assay, about 1 × 10^6 Molm-13 cells were seeded in a 6-well plate, washed twice with PBS to remove FBS, treated with PBS, SS30 (20 mM), SSFH (20 mM), SSFH/Cas9/sgRNA (20 mM), or the random library (20 mM) at 37 °C for 6 h, washed with PBS, and cultured for a further 48 h. Cells were then incubated with TUNEL reagent and assessed according to the manufacturer's standard protocol.
The protocol of the animal study was reviewed and approved by the Ethics Committee of Xi'an Jiaotong University Affiliated Children's Hospital (Xi'an Children's Hospital, Xi'an, China), no. C2018004. Eight-week-old female BALB/c mice were purchased from the Xi'an Jiaotong University Lab Animal Center (Xi'an) and raised under pathogen-free conditions. For retention experiments, 2 × 10^7 in vitro-propagated Molm-13 cells were injected into both flanks of BALB/c mice, and FAM-labeled SS30 (5 μM), SSFH (5 μM), and SSFH/Cas9/sgRNA (5 μM) were injected into the hind legs. The retention of each sample at the injection site was observed by monitoring the FAM signal with an IVIS® Spectrum CT (PerkinElmer, Waltham, MA). To assess whether SSFH/Cas9 could inhibit AML proliferation and prolong retention in vivo, 2 × 10^7 in vitro-propagated Molm-13 cells were injected subcutaneously into the flank of BALB/c mice to generate a xenograft tumor model. Fourteen days later, treatments were initiated. The mice were divided into four groups of six: ① saline once a day; ② SS30 every 3 days (2 mM/kg); ③ SSFH every 3 days (2 mM/kg); ④ SSFH/Cas9/sgRNA every 3 days (2 mM/kg). Agents were administered by tail vein injection. Tumor volume was calculated as tumor volume = tumor length × tumor width^2 / 2. Body weights and survival rates were recorded. Mice were euthanized by cervical dislocation when the tumor volume exceeded 2000 mm^3 or when they had lost more than 15% of their pretreatment body weight. Tumor tissues were collected and subjected to JAK2/STAT5 staining to evaluate the anti-cancer effects of SSFH.
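The tumor-volume formula and humane endpoints just described translate directly into a small helper; the following is a sketch for illustration (the function names are ours), with the formula and thresholds taken from the Methods:

```python
# Tumor volume per the Methods: length x width^2 / 2 (mm and mm^3).
def tumor_volume(length_mm: float, width_mm: float) -> float:
    return length_mm * width_mm ** 2 / 2.0

# Humane endpoint per the Methods: volume > 2000 mm^3 or > 15% weight loss.
def reached_endpoint(volume_mm3: float, weight_g: float, baseline_g: float) -> bool:
    weight_loss = (baseline_g - weight_g) / baseline_g
    return volume_mm3 > 2000.0 or weight_loss > 0.15

print(tumor_volume(14.0, 10.0))              # 700.0
print(reached_endpoint(2100.0, 20.0, 22.0))  # True (volume endpoint reached)
```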
For the cytokine storm experiment, the mouse xenograft tumor models were divided into three groups and injected with saline, excess SSFH/Cas9/sgRNA, or CD123 antibody, respectively, to induce cytokine storm. Seven days later, blood samples were collected and inflammatory cytokines (TNF-α, IL-1, IL-6, IL-12, IFN-α, IFN-β, IFN-γ, and IL-8) were measured. Once the SSFH/Cas9/sgRNA and CD123 antibody groups showed a clear difference from the saline group, agent injections were stopped. Mice in the saline and CD123 antibody groups were then injected with saline, and mice in the SSFH/Cas9/sgRNA group were injected with the complementary sequence of SS30 (5′-CCGCCCAAATCCCTAAGAGGCAGGGAGTTCGCTAGTAGCTACGGGACCAGACACAGTACACACGCA-3′). Three days later, blood samples were collected and inflammatory cytokines were measured again.
All statistical analyses were performed with SPSS 11.0 (SPSS, Chicago, IL), licensed through Xi'an Jiaotong University. Numerical data are expressed as mean ± standard deviation. Differences between two groups were examined with two-tailed independent-samples Student's t-tests; comparisons across several groups used one-way ANOVA followed by Tukey's post hoc test. Kaplan–Meier analysis was used to analyze overall survival. p-values < .05 were considered statistically significant (* indicates p < .05, ** indicates p < .01).
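The study used SPSS 11.0; for illustration, the same comparisons can be reproduced with open-source Python tooling (scipy and statsmodels). This is a sketch under that substitution, and the group values below are placeholders, not study data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder viability values (% of control) for three groups.
pbs       = np.array([100.0, 98.0, 103.0, 101.0])
ss30      = np.array([82.0, 79.0, 85.0, 80.0])
ssfh_cas9 = np.array([61.0, 58.0, 64.0, 60.0])

# Two-tailed independent-samples t-test: treatment vs. PBS control.
t, p = stats.ttest_ind(ss30, pbs)
print(f"t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA across all groups, followed by Tukey's post hoc test.
f, p_anova = stats.f_oneway(pbs, ss30, ssfh_cas9)
print(f"F = {f:.2f}, p = {p_anova:.4f}")
values = np.concatenate([pbs, ss30, ssfh_cas9])
groups = ["PBS"] * 4 + ["SS30"] * 4 + ["SSFH/Cas9"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```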
Construction and application of SSFH hydrogel
To construct a Cas9-cleavable SS30 polyaptamer hydrogel (SSFH), two RCA templates were first designed, as illustrated in the scheme. RCA template 1 contained SS30 and sgRNA target sequence 1; RCA template 2 contained SS30 and sgRNA target sequence 2; the two sgRNA target sequences are complementary. The composition of each RCA template is illustrated in the corresponding figure. One RCA template generates long repeats of the SS30 aptamer and the sgRNA target sequence, and the other generates long repeats of the SS30 aptamer and the corresponding sgRNA target-binding sequence. After amplification, the two RCA products therefore hybridize via the sgRNA target sequences, forming a gel termed SSFH. Because this hybridization creates a double-stranded sgRNA target site, in the presence of Cas9 the sgRNA can guide the enzyme to cleave SSFH specifically, releasing SS30. Accordingly, when SSFH and Cas9/sgRNA are injected together, Cas9/sgRNA cuts SSFH, and the released SS30 aptamer can compete with IL-3 for binding to CD123; bound SS30 interferes with JAK2/STAT5 signaling and thereby inhibits cell proliferation (Hercus et al., 2014).
Characterization of Cas9-cleavable SS30 polyaptamer hydrogel
To confirm that SSFH had formed and could be cleaved by Cas9, its physicochemical behavior was evaluated first. To verify assembly of the Cas9/sgRNA complex, Cas9 protein was fixed on beads and incubated with FAM-labeled sgRNA, and fluorescence was assessed by flow cytometry. sgRNA bound clearly to Cas9-coated beads but not to blank or BSA-coated beads, indicating successful Cas9/sgRNA formation; the zeta potential of the complex was −5.33 mV. To assess the SSFH/Cas9/sgRNA complex, Cas9/sgRNA was fixed on beads and incubated with FAM-SSFH: SSFH bound strongly to the Cas9/sgRNA beads but not to blank beads, indicating formation of the SSFH/Cas9/sgRNA complex. Further, to detect Cas9-mediated cleavage of SSFH, the SSFH mixture was analyzed for particle diameter after incubation with Cas9/sgRNA. Before cleavage, the average diameter of SSFH was 567 nm; after 1 h of incubation with Cas9/sgRNA a smaller population of about 217 nm appeared, and after 2 h the diameter of the mixture had decreased significantly. These data indicate successful cleavage of SSFH by Cas9. Furthermore, in the absence of Cas9/sgRNA, SSFH remained at the bottom of an inverted tube, a hallmark of gel behavior, whereas after incubation with Cas9/sgRNA the mixture flowed to the underside of the lid upon inversion. The swelling behavior of the hydrogel also differed in the absence and presence of Cas9/sgRNA. SSFH was first freeze-dried, the lyophilized powder was mixed with triple-distilled water with or without Cas9/sgRNA, and weights were recorded at various time points. The lyophilized SSFH powder rehydrated directly into a hydrogel, with a swelling ratio of 659.47 ± 23.4% over 2 h, whereas after incubation with Cas9 no such swelling was observed. Finally, to evaluate whether Cas9 could cleave SSFH into single SS30 units, SSFH, Cas9, and sgRNA were mixed at different ratios and incubated at 37 °C for a range of times, and the products were assessed by 1% agarose gel electrophoresis. More SS30 was generated after 2 days of incubation than after 1 day, and at an SSFH:Cas9:sgRNA weight ratio of 8:3:1 with incubation over 2 days, SSFH was cleaved completely into single SS30 (full liberation of the CD123 aptamer).
Binding ability evaluation
To assess binding ability, 50 nM FAM-labeled SS30, SSFH, or SSFH/Cas9/sgRNA cleavage mixtures were incubated with Molm-13 or RCH-ACV cells, respectively; the cells were examined by confocal microscopy and fluorescence intensity was quantified by flow cytometry. As shown in the corresponding figure, SSFH produced a comparatively low signal on CD123-positive Molm-13 cells relative to SS30 and the SSFH/Cas9/sgRNA cleavage mixture, probably because SS30 is constrained and not exposed within the SSFH structure. Free SS30 and the SSFH/Cas9/sgRNA mixture both bound strongly to CD123-positive Molm-13 cells, indicating successful release of SS30 from SSFH/Cas9/sgRNA with retained binding ability; none of the agents bound to CD123-negative RCH-ACV cells. The confocal microscopy results were consistent: SSFH produced no evident fluorescence in Molm-13 cells, whereas the free SS30 and SSFH/Cas9/sgRNA groups showed significant fluorescence signals (p < .05), and none of SS30, SSFH/Cas9/sgRNA, or SSFH showed an obvious signal in RCH-ACV cells. These results suggest that SSFH/Cas9/sgRNA maintains CD123 binding ability after Cas9 cleavage.
Binding target confirmation
To confirm that the target of SSFH/Cas9/sgRNA is CD123 rather than other proteins expressed on CD123-positive cells, a competition assay was performed: Molm-13 cells were first incubated with 50 nM FAM-labeled CD123 antibody, increasing concentrations of SSFH or SSFH/Cas9 were added, and the fluorescence intensity of the supernatant was measured, as shown in the corresponding figure.
With increasing concentrations of SSFH/Cas9, the FAM fluorescence intensity of the supernatant increased, as it did in the free SS30 group, indicating competition between the CD123 antibody and SSFH/Cas9. These data suggest that the target of SSFH/Cas9/sgRNA is still CD123.
SSFH/Cas9 inhibits the proliferation of CD123-positive cells in vitro
Since SS30 can inhibit the proliferation of CD123-positive cells, the potential anti-proliferative effect of SSFH/Cas9 was evaluated by incubating Molm-13 cells with PBS, SS30, SSFH, SSFH/Cas9/sgRNA, or the random library at 37 °C for 6 h, washing with PBS, and culturing for a further 48 h. Cell viability was assessed with both CCK8 and MTS assays, and cell proliferation with a BrdU ELISA. In both the MTS and CCK8 assays, SS30 and SSFH/Cas9/sgRNA significantly decreased the viability of Molm-13 cells compared with the PBS group (SS30: p < .05; SSFH/Cas9: p < .01), whereas SSFH did not noticeably affect viability. The BrdU assay gave consistent results: SS30 and SSFH/Cas9/sgRNA significantly inhibited Molm-13 cell proliferation (SS30: p < .05; SSFH/Cas9: p < .01), whereas SSFH showed no obvious inhibitory activity, owing to its constrained structure. Moreover, to assess whether SSFH/Cas9 preserves the apoptosis-inducing activity of SS30 in CD123-positive cells, Molm-13 cells were treated with PBS, SS30, SSFH, or SSFH/Cas9/sgRNA at 37 °C for 6 h, cultured for a further 48 h, and subjected to a TUNEL assay. Compared with the PBS group, both SS30 and SSFH/Cas9 significantly induced Molm-13 cell apoptosis, whereas SSFH did not differ significantly from PBS. These results indicate that SSFH/Cas9 effectively releases SS30 without altering its anti-cancer activity.
SSFH/Cas9 could prolong retention time in vivo
One purpose of designing SSFH was to prolong the retention time of SS30 in vivo. To assess whether SSFH stays longer at the subcutaneous injection site, SS30, SSFH, and SSFH/Cas9 were injected into mice and visualized with a Cy5.5-tagged complementary probe. The three agents differed in their retention at the injection site: both SSFH and SSFH/Cas9 were retained longer than the free SS30 aptamer. The fluorescence signal of the SS30 group disappeared rapidly within 1 day post-injection, whereas the SSFH and SSFH/Cas9 groups showed a prolonged signal after administration. These results indicate successful retention of SSFH in vivo, which may enable controllable administration, prolong circulation time, enhance drug efficacy, and reduce drug consumption.
SSFH/Cas9/sgRNA could inhibit CD123-positive tumors selectively in vivo
We next evaluated whether SSFH/Cas9 can inhibit CD123-positive cells in vivo. Molm-13 cells were injected subcutaneously into the flank of BALB/c mice to generate a xenograft tumor model. Fourteen days later, the mice were divided into four groups (① saline; ② SS30; ③ SSFH; ④ SSFH/Cas9/sgRNA) and treatments were administered every 3 days. The mouse survival rates are presented in the corresponding figure.
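Kaplan–Meier analysis of such survival data can be sketched as follows, here with the Python lifelines package in place of SPSS and with placeholder survival times; the added log-rank comparison is our choice, since the paper does not name the test used:

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Placeholder survival times in days; 1 = death observed for every animal.
saline_days  = [18, 20, 22, 24, 25, 27]
treated_days = [30, 34, 38, 41, 45, 52]
events = [1, 1, 1, 1, 1, 1]

kmf = KaplanMeierFitter()
kmf.fit(saline_days, event_observed=events, label="saline")
ax = kmf.plot_survival_function()
kmf.fit(treated_days, event_observed=events, label="SSFH/Cas9/sgRNA")
kmf.plot_survival_function(ax=ax)

result = logrank_test(saline_days, treated_days,
                      event_observed_A=events, event_observed_B=events)
print(f"log-rank p = {result.p_value:.4f}")
```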
Compared with the saline group, SSFH did not significantly prolong survival, since SS30 in SSFH cannot be released without Cas9/sgRNA. In contrast, SS30 and SSFH/Cas9 both prolonged survival relative to saline, owing to the anti-cancer activity of free SS30 (p < .05), and mice in the SSFH/Cas9 group survived longer than those given free SS30 (p < .05), consistent with longer retention and a longer window of action. Tumor volume and body weight were also recorded to assess the anti-cancer activity of the different agents. As shown in the corresponding figure, body weights in the saline and SSFH groups increased rapidly because of fast tumor growth, whereas the free SS30 and SSFH/Cas9 groups remained stable owing to much slower tumor growth. Interestingly, compared with the free SS30 group, the SSFH/Cas9 group exhibited smaller tumor volumes, possibly because the longer internal circulation and retention time extended SS30-mediated inhibition. To assess whether SSFH/Cas9 acts by blocking the IL-3/CD123 interaction and thereby interfering with the JAK2/STAT5 pathway, tumor tissues were collected and subjected to JAK2/STAT5 staining. The expression of p-JAK2 and p-STAT5 was significantly decreased in the SS30 and SSFH/Cas9 groups compared with the saline and SSFH groups, indicating successful inhibition of the JAK2/STAT5 pathway.
SSFH/Cas9/sgRNA system could rescue cytokine storm
Because CD123 is also expressed on some normal cells, such as cytokine-secreting monocytes, CD123-targeting agents may damage these cells and induce a cytokine storm, which can be fatal in the clinic; no effective rescue method for cytokine storm induced by CD123-targeted molecules has been available. To assess whether the SSFH/Cas9/sgRNA system offers a promising approach, mouse xenograft tumor models were divided into three groups and injected with saline, excess SSFH/Cas9/sgRNA, or CD123 antibody to induce cytokine storm. Blood samples were collected and inflammatory cytokines (TNF-α, IL-1, IL-6, IL-12, IFN-α, IFN-β, IFN-γ, and IL-8) were measured. Once the SSFH/Cas9/sgRNA and CD123 antibody groups showed a clear difference from the saline group, the complementary sequence of SS30 was injected into the SSFH/Cas9/sgRNA group and inflammatory cytokines were evaluated 3 days later. As shown in the corresponding figure, after excessive injection both SSFH/Cas9/sgRNA and CD123 antibody induced higher cytokine concentrations than saline (p < .05). After injection of the complementary sequence of SS30 into the SSFH/Cas9/sgRNA group, cytokine levels decreased rapidly to levels comparable to the saline group, whereas cytokine levels in the CD123 antibody group did not return to normal even though antibody injections had been stopped. These data indicate that the SSFH/Cas9/sgRNA system allows safer administration by alleviating such side effects.
Based on our previous study, the CD123 thioaptamer SS30 can inhibit CD123-positive AML cells via the JAK2/STAT5 signaling pathway by blocking the interaction between CD123 and IL-3. However, the molecular weight of SS30 is small, so it is rapidly cleared by the kidney, which may weaken its anti-cancer effects and shorten the dosing interval. The present study therefore reports a DNA hydrogel built from SS30 whose release can be controlled by Cas9 cleavage. This DNA hydrogel, termed SSFH, was validated to release abundant SS30 after incubation with Cas9/sgRNA. The released SS30 inhibited CD123-positive cell proliferation in vitro, and the hydrogel prolonged retention time and subsequently inhibited tumor growth in vivo. Aptamers, a class of novel targeting molecules, are single-stranded DNA, RNA, or modified nucleic acid sequences generated by systematic evolution of ligands by exponential enrichment (SELEX) technology (Darmostuk et al.). Compared with antibodies, aptamers can be raised against a wide variety of targets, including proteins, whole cells, and ions, and they adopt unique three-dimensional structures that bind their targets with high specificity and affinity (Sun & Zu). Since aptamers were first reported in the early 1990s, a large number have been developed, and they have been widely used as diagnostic and therapeutic targeting ligands because of their clear advantages: small molecular weight, stable structure, chemically modifiable groups, fast blood clearance, and non-immunogenicity. Interestingly, some aptamers can regulate protein function and may inhibit the biological activity of their targets by binding to key protein domains (Kim et al.; Chousterman et al.; Khan et al.). In our previous work, we developed the CD123 aptamer SS30 and showed that it binds CD123 with high specificity and affinity, interferes with the interaction between IL-3 and CD123, and thereby blocks JAK2/STAT5 signaling, resulting in inhibition of cell proliferation in vitro and in vivo. However, because of its small size, SS30 is rapidly cleared by the kidney, and prolonging its circulation time to enhance its anti-cancer activity remained the main issue. In this study, we generated for the first time a CD123 polyaptamer hydrogel (SSFH) that can be cut by Cas9/sgRNA. The Cas9/sgRNA system is currently used mainly for gene editing; employing it for controlled aptamer release may offer advantages over previously reported aptamer delivery systems such as PLGA-conjugated aptamers. First, the gel structure of SSFH resists nucleases: SSFH contains abundant SS30, most of which is folded inside and protected, so its effect can last longer than with other delivery systems. Second, the specific cleavage of SSFH by Cas9 reduces off-target cleavage. Previous studies have explored cleavage by restriction enzymes.
However, several reports have shown that restriction enzyme-mediated cleavage can suffer from nonspecific cutting and inconsistencies in the tertiary structure of the released aptamers, likely because the recognition regions comprise only 4–8 base pairs. In contrast, Cas9 requires a 20-base-pair double-stranded DNA target matched by its sgRNA, so Cas9/sgRNA offers an exquisitely specific cutting system with higher precision (Chousterman et al.). Additionally, the release of aptamers from the SSFH hydrogel does not depend on chemical reactions, avoiding several chemical processing steps such as synthesis and purification; the in situ biological release of aptamers from SSFH eliminates the multi-step manufacturing required by other polymer-based aptamer delivery approaches. Notably, because CD123 is also expressed on some normal cells, CD123-targeted therapy can cause side effects such as cytokine storm (Tong et al.). Cytokine storm syndrome (CSS) caused by CD123-targeted therapy is currently a major fatal factor in clinical treatment and severely limits therapeutic benefit (Chousterman et al.). CSS is a serious, life-threatening condition characterized by systemic inflammation, hyperferritinemia, hemodynamic instability, and multiple organ failure (MOF). Common clinical features include persistent fever, splenomegaly, hepatomegaly with hepatic insufficiency, lymphadenopathy, coagulopathy, cytopenia, skin rash, and neurological symptoms (Kaur et al.). The hallmark of CSS is an uncontrolled, dysfunctional immune response involving continuous activation and expansion of lymphocytes and macrophages, which secrete large amounts of cytokines. Current treatments for CSS mainly comprise IFN-γ antibodies, IL-6 inhibitors, JAK inhibitors, IL-6 blocking therapy, and recombinant human IL-1Rα (Bayat et al.). Aptamers offer clear advantages over antibodies in this setting. They are single-stranded oligonucleotides generated by in vitro selection via the SELEX process (Mirau et al.); they fold into three-dimensional structures that recognize and bind their targets with high specificity and affinity, including proteins, cells, small molecules, and even metal ions against which antibodies are difficult to raise. Most importantly, the complementary sequence of an aptamer can be used as an antidote: once an aptamer has produced an unwanted effect, injection of its complementary sequence immediately forms a duplex and destroys the aptamer's three-dimensional structure (Krissanaprasit; Tabuchi; Zhao et al.). Thus, to address the side effects of CD123-targeted molecules, we assessed whether the complementary sequence of SS30 could alleviate cytokine storm caused by SSFH. As shown in the corresponding figure, both SSFH and CD123 antibody caused a marked increase in inflammatory cytokines. After injection of the SS30 complementary sequence into the SSFH group, however, the levels of inflammatory cytokines decreased rapidly to those of the saline group, whereas the CD123 antibody group remained elevated until 3 days later. This indicates that aptamers are more controllable than antibodies.
However, the Cas9/sgRNA-edited CD123 polyaptamer hydrogel still has some limitations. Its retention time in vivo lasts a few days but still needs to be improved, and the possible immunogenicity of Cas9, a major component of the system, should be addressed. In this study, we designed an RCA-based DNA hydrogel, SSFH, that releases the CD123 aptamer SS30 through Cas9/sgRNA-mediated specific editing. The DNA hydrogel SSFH was shown to prolong retention time at tumor sites in vivo, which may enhance the anti-cancer activity of SS30. Because Cas9/sgRNA provides a precise editing system, our approach may be broadly applicable to various aptamers and single-stranded DNA oligonucleotides. Although only the CD123 aptamer was tested here, the successful retention and cleavage indicate that Cas9-edited controlled release from RCA products could be used for the sustained release of other aptamers from DNA hydrogels. Cas9/sgRNA-mediated specific editing of RCA-based DNA hydrogels may open a new avenue for anti-cancer strategies and aptamer applications.
Online resources on ankle sprains
Acute ankle sprain (ASD) is the most common musculoskeletal injury in the physically active population, accounting for up to 40% of injuries. Sprains make up 75% of all ankle injuries, 85% of which are caused by an inversion trauma. Once a sprain has occurred, the probability of a repeat trauma doubles. High treatment costs and long-term consequences arising from the first sprain place a burden on the healthcare system. If the injury is not treated appropriately, chronic instability of the upper ankle joint can develop in up to 40% of cases. This considerably restricts those affected in everyday life and leads to a reduction in sporting activities. Good patient education is essential to avoid irreversible damage. Only about half of patients seek medical help after an acute lateral ligament injury. More than 60% of all adults search online for health-related information; with a peak incidence of ASD at 15 to 19 years of age, the use of online resources in this group must be assumed to be even higher. Identifying high-quality, trustworthy websites is difficult for those affected, and the range on offer is very diverse. The sources consumed influence the patient's subsequent actions and can substantially shape the doctor-patient relationship and compliance through the expectations they raise. The aims of this study were to examine the available information materials on ASD for quality and readability (as of June 2023) and to assess their effective usability by patients with the help of a user survey.
Website retrieval. Almost 95% of all web users use the search engines Google and Bing, with Google accounting for 88% of these searches in May 2023. With a market share of 96%, Google ranks first for web searches on mobile devices. The Google Ads program was used to define the search terms. Based on data from previous years, the average number of clicks and impressions to be expected per search term combination in one month was calculated. The most common search terms were then compared against each other on Google Trends over five-year and one-year periods. After these terms were entered into a history-free browser, the first 25 websites per term were downloaded, excluding all advertisements. After duplicates and non-functioning links were removed, further exclusion criteria were applied: websites with restricted access, video or off-topic content, PowerPoint presentations, and scientific papers were not included.
25-item score. Based on current guidelines for the management of ASD and lateral ligament injuries, a list of the 25 most important facts was compiled (online supplement: eTable 1) and reviewed by 5 independent trauma surgeons. Each item can be scored with one point or none.
EQIP36 score. The modified version of the Ensuring Quality Information for Patients (EQIP) instrument was used to assess the quality of written patient information in the categories of general quality, content, transparency, and structure. The answers, with the values "yes", "no", and "not applicable", are converted into a score of up to 100 points:
$$\text{EQIP score} = \frac{\text{number of "yes" answers} \times 100}{36 - \text{number of "not applicable" answers}}.$$
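Expressed as code, the conversion reads as follows (a minimal sketch; the function name is ours):

```python
# Modified EQIP36 conversion: "yes" items score, "not applicable" items
# shrink the denominator, yielding a value between 0 and 100.
def eqip36_score(yes: int, not_applicable: int) -> float:
    return yes * 100.0 / (36 - not_applicable)

print(eqip36_score(yes=18, not_applicable=4))  # 56.25
```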
Readability analysis. Readability was assessed with the text analysis tool Wortliga, which evaluates the comprehensibility of texts based on the "Hamburg comprehensibility concept" and, in addition to its own readability index, measures further metrics such as the Flesch-Kincaid index. A higher score corresponds to better readability. Word and sentence counts, mean sentence length, numbers of letters, characters, and paragraphs, as well as language melody and calculated reading time, were also recorded.
Source and authorship of the websites. All websites were categorized in 2 steps. The classification by source recorded whether the information material comes from a commercial provider, comes from the healthcare sector, or is not associated with it. For the classification by author, the following subgroups were defined: (1) academic research, (2) professional associations and specialist societies, (3) hospitals, (4) magazines and journals, (5) encyclopedias, (6) physicians and ankle specialists, (7) pharmacies, (8) health insurers, (9) physiotherapy practices, (10) providers of medication or medical products (such as bandages, orthoses, and tapes), (11) information portals, and (12) sporting goods manufacturers.
Examination for bias. Websites with a commercial background and paid advertising can be prioritized in search results, which can distort the information landscape and influence quality. Sources containing advertising for profit-oriented products or services were classified as commercially biased. The presence of links to social media was likewise checked.
Overall rating. The scores achieved in the 25-item score, the EQIP36 score, and the readability analysis each contributed one third to the overall rating.
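The overall rating can be written out as a one-liner. Rescaling the 25-item score to a 0-100 range (multiplying by 4) is our inference rather than an explicit statement in the text, but it exactly reproduces the mean overall score reported in the Results:

```python
# Overall rating: 25-item score, EQIP36 score, and readability each weigh 1/3.
def overall_score(items25: float, eqip36: float, readability: float) -> float:
    return (items25 * 4 + eqip36 + readability) / 3  # items25 rescaled to 0-100

# Plugging in the reported mean component scores recovers the reported mean
# overall score: (9.9*4 + 53.0 + 52.4) / 3 = 48.3.
print(round(overall_score(9.9, 53.0, 52.4), 1))  # 48.3
```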
User survey. To test the calculated results against real-world use, the 3 best-rated websites were given to users for evaluation. Participants were asked to answer questions on content, structure, and comprehensibility, with their prior medical knowledge taken into account (online supplement).
A forecast of the clicks and impressions to be expected for search terms in Germany in June 2023 was obtained (online supplement: eTable 2). The search term "Fuß verstaucht" was the most popular, with 914.6 expected clicks, followed by "Sprunggelenk umgeknickt" with 910.9 and "Knöchel verstaucht" with 837.9 clicks, then "Sprunggelenk verstaucht" with 230.7 clicks and "Fuß umgeknickt" with 148.1 clicks. Technical as well as colloquial formulations were less common. The search terms named above were selected for the further procedure. Following the described approach, 125 websites each were retrieved from Google and Bing on 16 June 2023. After the exclusion criteria were applied, 77 online resources could be included in the study (see figure). Independent information portals and websites of medication and medical product providers were represented most frequently. Of the included sources, 59 (77%) were assigned to the healthcare sector and 30 (39%) were recorded as commercial providers.
48 websites (62%) were associated with advertising, and a majority of 63 (82%) contained links to social media. The mean result on the 25-item score was 9.9 ± 4.4 points (± standard deviation), with completeness of content varying widely, from 1 to 20 points. Websites not associated with the healthcare system scored slightly below average, at 8.7 ± 4.7, compared with 10.3 ± 4.2 for resources from the healthcare sector. Information sources from commercial providers received a poorer average rating of 8.6 ± 3.6. No association was found with the presence of advertising or social media links. The most frequently mentioned fact about ASD, found on 88% (68/77) of websites, was the definition of the PECH scheme (the German equivalent of RICE: rest, ice, compression, elevation) for initial treatment of the injury (online supplement: eTable 1). The mean EQIP36 score was 53.0 ± 10.8 of a maximum 100 points (online supplement: eTable 3). The category "content" performed worst at 40%, followed by "identification" at 44%; websites achieved their best values, 55% of the possible points, in the category "structure". With an average readability of 52.4 ± 9.9 out of 100, readability was in the middle range: 17 easily comprehensible websites (60 to 100 points) could be distinguished from 58 moderately readable (30 to 60 points) and 2 poorly readable sources (below 30 points). The language level was perceived as predominantly balanced, with a mean sentence length of 13.3 ± 2.3 words. The average reading time and the numbers of words, sentences, letters, characters, and paragraphs varied widely (see table). For information texts aimed at the general population, a Flesch-Kincaid index above 60 is recommended, corresponding to the language level of 13- to 15-year-old students in grades 8 to 9. This level, at 52.9 ± 6.4 points, was met by only 14% (11/77) of websites. With a Spearman coefficient of −0.246, there was a negative monotonic relationship between readability and content; accordingly, resources not associated with the healthcare system showed slightly above-average readability at 57.3 ± 10.1. With an average of 48.3 ± 8.6 points in the overall ranking, 77% of the websites fell in the range between 40 and 60 points (see figure). The 3 best-rated websites were from Apothekenumschau, Techniker Krankenkasse, and IQWiG-Gesundheitsinformationen. Besides a large variance in the values, it was observed in 4 of 5 cases that the highest-quality website among the first 10 suggested search results was listed first in the search and received an above-average overall ranking score (see figure). Of these first 10 websites, the majority belonged to the healthcare system (86%) and contained links to social media (78%). The proportion with advertising was in the midfield at 48%, 32% of it profit-oriented. 75 participants took part in the survey, of whom 53 reported prior medical knowledge; 75% had already engaged with the topic because of an ankle sprain of their own. With a balanced sex distribution, responses were recorded in all age groups from 20 to 74 years.
The survey yielded an average satisfaction with the websites of 5.1 ± 0.4 on a 7-point Likert scale; the website rated best in this study also achieved the highest satisfaction in the survey, with a value of 5.5 (79%). While laypersons rated the websites at 5.5 ± 0.5, participants with prior medical knowledge were considerably more critical, at 4.9 ± 0.4 (see figure). The analyzed websites vary greatly among themselves and represent a representative sample of the currently available online information sources. The 25-item score revealed substantial gaps in content; no website contained all the required facts. The EQIP36 score showed that a large proportion of the information provided was insufficiently structured and lacked transparency about its origin. With an overall medium readability, the recommended reading level was reached by only 14% of websites. The analysis thus clearly shows that existing online resources on ASD have deficiencies in content quality, readability, and structure. This limits effective use by patients and can have a negative effect on self-efficacy. Although none of the examined websites came close to the possible total score, satisfaction in the survey proved greater than expected. This leaves open the question of whether the available online resources, taken as a whole, are not fundamentally bad, or whether patients are simply not used to anything better. It is possible for patients to inform themselves comprehensively on the topic; however, this requires more than one source, which constitutes an obstacle.
Limitations. This web analysis represents a snapshot, and other good websites may exist outside these conditions; a change in the results in the near future is possible. Comparison with similar studies from recent years, however, suggests a persistent lack of high-quality health information on the internet.
With a peak incidence between 15 and 19 years of age, the population mainly affected by ASD is very young. This may lead to a change in how people inform themselves privately. Although videos were excluded from this study, new studies are focusing on them. Based on these results, the website Kompetenznetzwerk Fuß was created, which fulfills the requirements set out here as fully as possible. It was supplemented with photos and videos intended to help patients perform strengthening exercises for the ankle independently at home. According to the rating system used here, the website achieves a value of 90 of 100 points, with a fully met 25-item score and EQIP36 score and an easy readability of 69. With a Flesch-Kincaid score of 60, its readability likewise meets the recommended reading level and was achieved with the help of the previously mentioned AI-based text analysis tool.
A higher score was difficult to achieve while preserving completeness of content. Nevertheless, the website created here was rated considerably better than the next-best website, which scored 71 points. The quality of websites on ankle sprains varies considerably, and none meets all the necessary requirements. Poor quality and impaired readability limit the effective use of websites. Publishers of digital health information should take these deficiencies into account in order to increase patients' self-efficacy and improve long-term outcomes. Additional tables and the user survey questionnaire are available as online supplementary material.
Effect of temporary freezing on postmortem protein degradation patterns
A precise determination of time since death, also referred to as the postmortem interval (PMI), is a primary task in routine forensic work. The PMI is defined as the time elapsed since death, i.e., in forensic cases the time between the onset of death and the finding of the body. In recent years, it has become increasingly evident that analysis of postmortem degradation patterns of muscle proteins is a promising new tool complementing the currently available methods for PMI estimation, especially because it extends the methodical scope into the mid- and long-term PMI range. Using established SDS-PAGE-based Western blot technology, this approach has demonstrated that the degradation events of some muscle proteins reliably correlate with postmortem timespans in animal models and in humans. Relevant information comes both from the degradation of the native protein itself and from degradation-derived split products emerging at distinct time points after death. However, earlier studies on the subject have also indicated that individual circumstances of death and several internal and environmental factors can influence the decomposition process, thus complicating PMI estimation. Comparable restrictions are well known from established methods of PMI determination, for instance, the cooling rate-based nomogram method commonly applied in the early postmortem phase, the evaluation of colonization by arthropods (forensic entomology), mainly applicable in later postmortem phases, and the analysis of volatile organic compound (VOC) species. For all established and currently applied PMI estimation methods, it is clearly apparent that temperature in particular is a fundamental influencing factor. To ensure the precision and applicability of the protein degradation-based method across a broad variety of forensic cases, recent research has already investigated the influence of several factors and individual traits, including sex, age, body mass, and temperature. Significant correlations were found between postmortem protein degradation and age and body mass (as expressed by BMI), whereas sex did not exert measurable effects. Also regarding the influence of mass and volume, our own research was able to exclude that porcine postmortem protein degradation is significantly altered depending on whether muscle samples are taken from dismembered limbs or from limbs still attached to the whole animal. Expectedly, evidence was obtained that temperature acts as a major, if not the most important, factor in muscle protein degradation. Studies on animals and humans in forensic research, as well as meat science studies, have shown that autolytic and decomposition dynamics are accelerated at higher temperatures but delayed under cool conditions. A prominent gap in our knowledge of temperature effects on postmortem degradation of muscle proteins still exists with regard to the degree of bias introduced by shorter or longer exposure or storage at temperatures below 0 °C. A freeze–thaw history is not uncommon in forensic practice: frozen corpses are found outdoors under freezing conditions, and body parts of crime victims have been hidden in freezers. In such cases, it is often hard, if not impossible, to determine the time since death. As with other information on the subject, some data on how a freeze–thaw treatment acts on muscle proteins have come from research on meat quality.
Studies investigating aspects of meat quality, including tenderness, water-holding capacity, and optimal meat storage conditions for different livestock species, observed distinct changes in myofibrillar proteins and proteases [ – ]. When comparing muscle samples stored at temperatures above the freezing point with samples subjected to the “superchilling” conservation method (involving maintenance at − 0.5 °C to − 4 °C), researchers found that the degradation of some proteins was significantly slower under the latter regime. Another study demonstrated that freeze–thaw treatment advances the degradation of some proteins (e.g., desmin) while leaving others unaffected (e.g., troponin T). Given this background, there is still a need for research to clarify to what extent a freeze–thaw process influences the decomposition of the muscle proteins employed in protein degradation–based PMI determination. To address this task, this study uses a standardized protein degradation model that was implemented in previous work. We compare two sets of dismembered pig hind limbs ( n = 6 each), one of which was left to degrade in a climate chamber at 30 °C immediately after slaughter, whereas the second set was freeze-stored at − 20 °C for 4 months before being subjected to degradation under the same conditions. During each experiment, samples were taken from the M. biceps femoris at regular intervals, and degradation patterns of selected proteins (vinculin, alpha tubulin, alpha actinin, glyceraldehyde-3-phosphate dehydrogenase (GAPDH)) were analyzed via SDS-PAGE followed by Western blotting. Proteins were selected according to their previously characterized differences in degradation rates. These degradation events occur either rather early (within 1 or 2 days) or in later postmortem phases (several days) and lie within the observed time course of the present study. The results are expected to improve our understanding of whether and how a freeze–thaw process interferes with PMI determination based on muscle protein degradation, thus adding to the robustness of this forensic methodology. Experimental design and sampling procedure Six commercial sub-adult crossbreed pigs (German Large White × German Landrace, 4 months old, 50 ± 4 kg, 4 male, 2 female) were used for the experiment. To minimize possible bias from factory farming practices, experimental animals fed on quality feed from local production were obtained from controlled species-appropriate husbandry. Animals were killed in a certified slaughterhouse according to standard procedures by captive bolt stunning and subsequent exsanguination. Both hind limbs of each individual were separated by dissection with professional butcher cutlery immediately after death, giving a total of n = 12 limbs. A first set of samples from the M. biceps femoris (= 0 h reference samples) was taken from all hind limbs still in the slaughterhouse immediately after death, prior to transportation to the lab. For further processing, hind limbs were allocated to two treatment groups ( n = 6) with different experimental setups. The first group (referred to as “non-frozen”) was, without further delay, transferred to a climate chamber and incubated under constant conditions (temperature 30 ± 2 °C; humidity 50 ± 5% rH). Muscle samples were regularly collected at a total of 18 pre-defined time points after death: 0, 4, 8, 12, 16, 24, 32, 40, 48, 56, 64, 72, 80, 96, 112, 128, 144, 160 h postmortem (hpm).
The second group of hind limbs (referred to as “pre-frozen”) was first subjected to long-term storage in a deep-freeze room at a constant − 20 °C for 4 months and then transferred to and stored in the + 30 °C climate chamber under the same conditions applied to the non-frozen limbs. Sampling also followed the time schedule of the non-frozen limbs, although with two modifications: (i) The pre-frozen limbs were sampled in a still-frozen condition, directly after the transfer to the climate chamber (for technical modifications, see below). This provided the 0*h aot baseline samples (h aot = hours after onset of thawing), containing all marker proteins in the onset-of-thawing state. (ii) Since full adaptation to + 30 °C took approximately 24 h (cf. Fig. ), sampling of pre-frozen limbs was prolonged for another day. This enabled us to take two additional samples (at 168 and 184 h aot , respectively) in order to ensure high comparability of the two experimental groups. Note: Sampling times of the pre-frozen group are expressed in hours after onset of thawing [h aot ]. However, in the interests of simplicity, in the present work, h aot is often treated as equivalent to hpm. Sampling in both treatment groups was performed according to the following standardized procedure: An incision was made through the skin and the underlying fascial layer using a surgical scalpel, and muscle samples approx. 5 × 5 × 5 mm in size were excised from the M. biceps femoris at a depth of 2 cm within the belly of the muscle. A minimum distance of 2 cm was kept between successive sampling sites. Muscle samples were snap frozen and stored in liquid nitrogen until further processing. As the still-frozen condition of the limbs at 0*h aot prevented scalpel incisions, these samples were obtained with the help of a power drill. After discarding skin and fatty tissue, borings of muscle tissue from a depth of 2 cm were collected and transferred to liquid nitrogen. Temperature measurements During both experimental time courses (non-frozen and pre-frozen treatment groups), environmental conditions in the climate chamber, as well as temperature data inside one hind limb, as measured by a puncture sensor, were documented throughout the entire sampling time (cf. Fig. ). Temperature measurements of frozen limbs start with the onset of the thawing/warming process, after 4 months of storage at − 20 °C. Temperatures of non-frozen hind limbs were documented directly after death and after placement within the climate chamber. Temperature data show that after approx. 12 h the non-frozen hind limbs had cooled down to ambient temperature, whereas the frozen hind limbs took approx. 24 h. After approx. 24 h, both treatment groups were under similar temperature conditions. Notably, during the placement of the hind limbs inside the chamber and the setting up of the data loggers for temperature measurements, the chamber was open and the temperature dropped. It took approximately one and a half hours for the chamber to adjust the temperature again. Transient drops in environmental temperature also occurred during sample collection due to opening of the climate chamber. Sample processing Muscle samples were homogenized by cryogenic grinding and subsequent sonication with ultrasound (2 × 100 Ws/sample). A 10 × vol/wt RIPA buffer was used as lysis and extraction buffer, containing a protease inhibitor cocktail (SIGMA) to prevent further protein degradation. Homogenized samples were centrifuged at 1,000 × g
for 10 min, and the supernatants were transferred to separate tubes and stored at − 20 °C until further use. Total protein concentrations in the samples were measured using a Pierce BCA-Assay Kit (Thermo Fisher Scientific Inc.) and diluted to protein-specific values with double-distilled water (30 µg for vinculin and alpha tubulin, 15 µg for alpha actinin, 10 µg for GAPDH). SDS-PAGE and Western blotting SDS-PAGE was performed according to the protocol of Laemmli with some adaptations. Electrophoresis was run on 5% stacking gels (acrylamide/N,N′-bismethylene acrylamide = 37.5:1, 0.1% SDS, 0.125% TEMED, 0.075% APS, 125 mM Tris HCl, pH 6.8) and 10% polyacrylamide resolving gels (acrylamide/N,N′-bismethylene acrylamide = 37.5:1, 0.1% SDS, 0.05% TEMED, 0.05% APS, 375 mM Tris HCl, pH 8.8). The running buffer contained 25 mM Tris pH 8.3, 195 mM glycine, 2 mM EDTA, and 0.1% SDS. Samples diluted to adequate total protein content (10–30 µg) were denatured at 90 °C for 5 min prior to insertion into the stacking gel wells (volume 20 µl). Electrophoresis was performed at a constant voltage of 150 V until the dye front reached the bottom of the resolving gel (duration approximately 2 h). Following electrophoresis, proteins were transferred from the gels onto polyvinylidene fluoride (PVDF) membranes in transfer buffer containing 192 mM glycine, 20% methanol, and 25 mM Tris pH 8.3. Electroblotting was run at a constant current of 250 mA for 75 min. Membranes were then stored at − 20 °C until further use. For Western blotting, membranes were blocked for 1 h in a blocking buffer containing PBST (137 mM NaCl, 10 mM Na 2 HPO 4 anhydrous, 2.7 mM KCl, 1.8 mM KH 2 PO 4 , 0.05% Tween) including 1% bovine serum albumin (BSA; albumin bovine fraction V, pH 7.0) and then incubated for 1 h with the following primary antisera: mouse monoclonal anti-vinculin (7F9, Santa Cruz Biotechnology, 1:1000), mouse monoclonal anti-α-actinin (H-2, Santa Cruz Biotechnology, 1:1000), mouse monoclonal anti-α-tubulin (12G10, DSHB, 1:500), and mouse monoclonal anti-GAPDH (6C5, Santa Cruz Biotechnology, 1:1500). HRP-conjugated polyclonal goat anti-mouse immunoglobulin (Dako, 1:10,000) was applied as the secondary antibody. All antibodies were diluted in blocking buffer. After each antibody application, membranes were extensively washed and rinsed in PBST (3 × 10 min). HRP-mediated specific antibody binding was visualized with chemiluminescence substrate (Roti®-Lumin plus, Carl Roth) and photographed using an iBright CL1000 Imaging System (Thermo Fisher Scientific). Data interpretation and statistics Band intensity of all proteins was measured using the gel analysis tool of the ImageJ software (v.1.48 NIH, National Institutes of Health, USA). Histograms of the tonal distribution of the images were plotted, and the areas underneath the graphs were measured according to the program’s standard protocol. Band patterns of the 0-hpm samples were considered the native form of the protein and used as a control in both experimental series. All signals with ≥ 1% relative density (compared to the respective dominant control band) were considered a present protein band; all signals < 1% of the respective control band were considered background. This enabled binarization of the results, yielding binary information on the absence (0) or presence (1) of proteins and their degradation products.
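The binarization rule just described is easy to express in a few lines of code. The following is a minimal sketch, not the authors' analysis script: it assumes band areas have already been exported from ImageJ, and all names and values are invented for illustration.

```python
# Minimal sketch of the band-binarization rule: a band is scored present (1)
# if its densitometric area is >= 1% of the dominant band of the 0-hpm
# control, otherwise absent (0). Values below are illustrative only.

def binarize_bands(band_areas, control_area, threshold=0.01):
    """Convert raw densitometry areas into presence/absence calls.

    band_areas   -- dict mapping sampling time (hpm) to measured band area
    control_area -- area of the dominant band in the 0-hpm control sample
    threshold    -- relative-density cutoff (1% of the control by default)
    """
    return {hpm: 1 if area / control_area >= threshold else 0
            for hpm, area in band_areas.items()}

# Hypothetical ImageJ-derived areas for one protein band in one limb
areas = {0: 1520.0, 24: 1340.5, 48: 610.2, 96: 18.0, 160: 0.8}
print(binarize_bands(areas, control_area=1520.0))
# {0: 1, 24: 1, 48: 1, 96: 1, 160: 0}
```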
The abundance of bands per time point and the respective PMI [hpm] were then statistically analyzed, and logistic regressions were calculated for all significant correlations of protein changes, with a significance level above 0.95. This makes it possible to predict the PMI at which the presence of a specific degradation product can be expected in a significant number of cases (confidence threshold = 95%), or the time point at which the native protein (or splice variant) is completely degraded. The method also provides an indication of when a change is more likely to have occurred than not (using P = 50% as a threshold). In addition, bivariate correlations between the chronology of protein degradation events and the PMI were calculated using Spearman’s rank correlation coefficient (Spearman’s ρ ). For all proteins that gave a full set of data, i.e., those exhibiting distinct degradation events in all investigated hind limbs within the investigated time period of 160 hpm, an analysis of variance (ANOVA) was performed to evaluate possible differences between treatment groups. Since certain proteins showed no degradation in some of the hind limbs, their data had to be excluded from the ANOVA, because this type of analysis requires a “last time point at which an individual protein was present”. Simply using the last sampling point would most likely misrepresent the actual outcome, because the protein band might persist long after the investigated period. Statistical analyses were performed using the SPSS Statistics 26 software (IBM, USA), MS Excel 2016, and RStudio (PBC).
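To illustrate how such regressions yield the P = 95% and P = 50% time points reported below, here is a hedged re-implementation in Python (the authors worked in SPSS, Excel, and R; this sketch and its presence calls are purely illustrative):

```python
# Sketch: fit a logistic regression of band presence (1/0) against time
# postmortem and invert it to find the times at which presence probability
# equals 95% and 50%. The presence calls below are invented, not study data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression

hpm = np.array([0, 8, 24, 48, 64, 80, 96, 112, 128, 144, 160]).reshape(-1, 1)
present = np.array([1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0])  # pooled limb calls

rho, pval = spearmanr(hpm.ravel(), present)  # mirrors the Spearman step above

# A very large C makes this effectively an unpenalized maximum-likelihood fit
model = LogisticRegression(C=1e6, max_iter=1000).fit(hpm, present)
b0, b1 = model.intercept_[0], model.coef_[0, 0]

def hpm_at_probability(p):
    """Invert the fitted logit: the time at which P(band present) equals p."""
    return (np.log(p / (1 - p)) - b0) / b1

print(f"Spearman rho = {rho:.3f} (p = {pval:.4f})")
print(f"band present with >= 95% likelihood until ~{hpm_at_probability(0.95):.1f} hpm")
print(f"presence more likely than absence until ~{hpm_at_probability(0.50):.1f} hpm")
```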
Morphological observations During the decomposition process, the hind limbs of both experimental groups (non-frozen vs. pre-frozen) exhibited similar characteristic morphological changes (cf. Fig. ). Gas-driven bloating of the limb tissue mass and discharge of yellowish foam at incision sites were observed from 72 hpm onward (Fig. a, c) and could be detected until the end of the experiment in both groups. Discoloration of the skin, including black and greenish spots, was first detectable at 96 hpm, especially in the tarsal regions, in both non-frozen (Fig. b) and pre-frozen hind limbs (Fig. d). Characteristics of postmortem protein degradation Western blot results from all baseline samples — i.e., the 0-h samples taken immediately after slaughter from both groups and the 0*h aot samples taken at onset of thawing from the pre-frozen group — showed no qualitative differences in protein degradation patterns (Fig. ), apart from slight variances in band intensities of some proteins. Data from the temperature-monitored pre-frozen hind limb show that it took approximately 7 h until it was fully thawed, followed by a steep further increase in temperature until adaptation to the environmental temperature in the climate chamber (30 °C). Non-frozen hind limbs took about 12 h to adapt from physiological temperature (38/39 °C) at slaughter to climate chamber temperature. Thus, both treatment groups were at similar temperature conditions from approximately 24 h onwards. Temperature measurements were terminated after 75 hpm/h aot , once the data had shown constant conditions over an extended period (Fig. ). In general, marker protein degradation in the two experimental groups followed similar qualitative patterns. Although minor temporal differences were observed in some proteins, statistical analysis showed no significant differences between pre-frozen and non-frozen hind limbs (Fig. , Table ). In detail, vinculin presented complete degradation within the time of the experiment, concerning both the 117-kDa native protein band and an additional band at approx. 135 kDa representing the splice variant meta-vinculin (Figs. a and ). In both experimental groups, the presence of the native vinculin band (non-frozen: ρ = 0.867, p < 0.01; pre-frozen: ρ = 0.890, p < 0.01), and even more so the presence of the meta-vinculin band (non-frozen: ρ = 0.867, p < 0.01; pre-frozen: ρ = 0.890, p < 0.01), significantly correlate with the PMI (Tab. ).
Regressions revealed that the native protein band was significantly present until 92.9 hpm in pre-frozen hind limbs and until 102.1 hpm in non-frozen hind limbs (> 95% likelihood). The splice variant was present until 69.1 hpm in pre-frozen legs and until 56.4 hpm in non-frozen legs (Fig. a, b; Table ). In addition, analysis of variance (ANOVA) revealed no significant difference ( p > 0.05) in degradation behavior between pre-frozen and non-frozen hind limbs for both vinculin and its splice variant meta-vinculin (Table ). At distinct times in the progressing degradation, vinculin split products became detectable (Fig. a). In addition to the native band and the meta-vinculin band, a protein band of approx. 90 kDa appeared throughout the early postmortem period, again significantly correlated with the PMI (non-frozen: ρ = 0.874, p < 0.01; pre-frozen: ρ = 0.894, p < 0.01) (Tab. ). Logistic regressions showed that this protein band was detectable in early postmortem stages until 118.5 hpm in non-frozen hind limbs and until 110.6 hpm in pre-frozen hind limbs with a likelihood of > 95% (Fig. c; Table ). Furthermore, the P = 50% value of the 90-kDa vinculin band was statistically reached at 74.7 hpm in non-frozen hind limbs and at 84.1 hpm in pre-frozen limbs (Table ), which indicates the time point from which the occurrence/presence of this split product is more likely than its absence. Nevertheless, a comparison via ANOVA again showed no significant difference ( p > 0.05) between treatment groups (non-frozen versus pre-frozen) (Table ). More varied circumstances characterize a set of further vinculin split products (Fig. a). All investigated samples exhibited bands at approximately 80 kDa, 76 kDa, 70 kDa, and 63 kDa, which were also identified as vinculin degradation products. Their presence was somewhat more varied, both along the time axis and between experimental groups. Irrespective of the experimental group, all these degradation products (except 63-kDa vinculin) showed no significant correlation between their presence and the PMI. Alpha tubulin showed a single native protein band at about 49 kDa which remained stable until it fully degraded in the intermediate phase of the experiment (from approx. 64 hpm onwards) without giving rise to any split products (Figs. b and ). This complete degradation occurred in all examined limbs except one of the non-frozen limbs, in which the native alpha tubulin band remained stable over the entire period of observation (thus analysis of variance was invalid). Logistic regressions revealed similar P = 50% values for both treatment groups: 102.4 hpm for non-frozen and 108.2 hpm for pre-frozen hind limbs. Both P = 95% values exceeded the investigated time period of 160 hpm (Fig. f; Table ). However, there is a highly significant correlation between the presence of native alpha tubulin and the PMI in both pre-frozen and non-frozen limbs ( ρ = 0.795, p < 0.01 and ρ = 0.905, p < 0.01, respectively) (Tab. ). Alpha actinin presented a relatively stable native band at a molecular weight of about 100 kDa, the intensity of which faded with increasing PMI in 4 of 6 non-frozen limbs and in all pre-frozen limbs until complete disappearance (Figs. c and ). In two of the non-frozen legs, this protein remained stable over the investigated time, preventing ANOVA between treatment groups. All investigated limbs exhibited several degradation products of alpha actinin, with molecular weights at approx.
80 kDa, 70 kDa, and 60 kDa, in the middle and late phases of the experiment (from 48 hpm onwards) (Fig. c). Notably, these degradation products appeared later and persisted longer in pre-frozen limbs than in non-frozen limbs. There is a highly significant correlation between the presence of native alpha actinin and the PMI in both pre-frozen and non-frozen limbs ( ρ = 0.873, p < 0.01 and ρ = 0.833, p < 0.01, respectively). Except for the 80-kDa band, the presence of the alpha actinin degradation products also correlates strongly with the PMI (Tab. ). Logistic regressions showed that P = 50% values of native alpha actinin were reached at 135.2 hpm for non-frozen and at 116.6 hpm for pre-frozen hind limbs (Fig. d; Table ). The alpha actinin degradation product of approx. 60 kDa showed a presence probability of 50% at 84.5 hpm in non-frozen and at 100.7 hpm in pre-frozen pig legs (Fig. e; Table ). In addition, ANOVA showed no significant difference ( p > 0.05) between non-frozen and pre-frozen hind limbs (Table ). GAPDH displayed a stable native protein band at approximately 35 kDa in all limbs until it degraded in the later phase of the experiment, at about 144 hpm (Figs. d and ). Statistical analysis revealed P = 50% values of 154.3 hpm in non-frozen hind limbs and of 137.2 hpm in pre-frozen legs (Fig. g; Table ). Additionally, two degradation products of GAPDH at molecular weights of 25 kDa and 23 kDa were observed, although with varying incidence and timing between the experimental groups. These products were detectable in 5 of the 6 non-frozen limbs (in the time from 4 to 112 hpm), but in only 2 of the 6 pre-frozen limbs (irregularly during the first 72 h), the remaining limbs containing none of them at all. Neither native GAPDH nor its degradation products showed a significant correlation of presence with the PMI, with the exception of native GAPDH in the non-frozen limbs ( ρ = 0.858, p < 0.01) (Tab. ).
In this study, we employed a previously implemented protein degradation–based method of PMI determination [ , , ] in a pig model to examine the effects of a freeze and thaw history of the analyzed tissue. Degradation patterns of pre-identified muscle proteins from porcine hind limbs subjected to controlled decay either immediately after slaughter, or after 4 months of freeze storage at − 20 °C, were monitored and compared. Experimental animals from species-appropriate husbandry, a dense sampling scheme also taking into account the 24-h thawing and warming phase of the freeze-stored muscle, and target proteins already known to be appropriate for the method provide for predictable and reproducible results in the porcine test system used.
We provide new information on the degradation behavior of mammalian muscle proteins in two relevant directions: First and foremost, the results of this experiment provide — to our knowledge for the first time — clear indication that transient freezing of muscle tissue does not significantly confound subsequent protein degradation under warm (+ 30 °C) conditions, as compared to the immediate degradation of fresh, non-frozen controls. Neither the patterns of gross-morphological changes (Fig. ) nor the degradation behavior and kinetics of the proteins chosen for the experiment (vinculin, alpha tubulin, alpha actinin, GAPDH) differed substantially between the two experimental settings (Figs. , , and ; Table ; Fig. ). Statistical analysis showed no specific trend as to whether protein changes are more likely delayed in pre-frozen hind limbs or in the non-frozen control limbs (Fig. ; Table ). The proteins alpha tubulin and alpha actinin remained stable in some of the non-frozen samples but not in the pre-frozen samples. Whether the stability of these proteins in individual legs indicates a trend towards delayed or accelerated degradation processes in pre-frozen tissue remains a matter of speculation; however, we did not observe a similar outcome in other proteins. Furthermore, the results demonstrate an evident correlation between protein changes and the PMI, regardless of the treatment (Tab. ). When applicable, an analysis of variance (ANOVA: single factor) was performed and revealed no significant difference between treatment groups (non-frozen versus pre-frozen) in any of the tested markers. In particular, the protein changes of meta-vinculin, native vinculin, the 90-kDa vinculin degradation product, and the 60-kDa alpha actinin degradation product occurred at specific time points and showed no significant difference ( p > 0.05), whether hind limbs were pre-frozen or not (Table ). These findings help to increase the validity and reliability of the protein degradation–based method of PMI determination in forensic research and routine casework, if only because it is often necessary to freeze tissue samples for analysis at a later time. The present work clearly indicates that postmortem protein decomposition is arrested in the frozen state. After thawing, however, protein decomposition continues, regardless of and uninfluenced by the earlier freezing process. This is especially important for forensic cases found outdoors during or right after freezing seasons, and when bodies are hidden in a freezer after a crime. In addition, and no less importantly, the present results demonstrate for the first time that skeletal muscle is also a valid substrate for protein degradation–based PMI determination in cases with a freeze–thaw history. By showing that the degradation behavior of the selected proteins is robust against freezing, these results help to clarify an as yet largely undecided issue. Previous work targeting a variety of proteins from both muscle and inner organs had delivered a mixture of evidence for and against a freeze–thaw influence, also with differences between organ-specific isotypes. This is best exemplified by meat quality studies, some of which are principally in line with the present results, while others, investigating a set of sarcoplasmic and myofibrillar proteins, rather indicate that protein degradation is altered after a freeze–thaw cycle [ – ]. An ambiguous picture also resulted from the comparison of protein isotypes from porcine muscle and brain.
The cerebral isotypes, including those of alpha tubulin and GAPDH, behaved in a largely similar fashion to the muscle isotypes, regardless of whether they derived from pre-frozen or non-frozen brain tissue. Time courses of change, however, differed considerably when compared to those of the proteins’ muscle isotypes, cerebral alpha tubulin being fully degraded already after 10 hpm, whereas cerebral GAPDH proved stable over the entire investigated time of approximately 50 hpm. A similarly heterogeneous picture, although depicting an exceptional position of muscle, has been drawn by studies of thawing effects at the metabolomics level comparing multi-tissue samples (plasma, gut, kidney, liver, pancreas), also including muscle. Compared to muscle, thawed non-muscle samples were found to be characterized by higher levels of amino acids and other metabolites, likely the result of enhanced protein degradation, and a higher susceptibility to oxidation. In line with this, recent work from our own lab had shown that muscle proteins including vinculin, alpha tubulin, and GAPDH exhibit similar degradation patterns irrespective of whether they are analyzed fresh or after a brief freeze–thaw cycle of 1 week, thus supporting the target isotype choice for the present work. Second, and not directly relating to the freeze–thaw context, the present results also expand upon current knowledge of the general relationship between muscle protein degradation and temperature. Research to date has generally confirmed that the postmortal progression of human protein degradation in the thermal range of 4–37 °C (i.e., at the temperatures usually prevailing in the warm temperate zone) correlates tightly with ambient temperature [ – , – ]. However, information on protein behavior within this range is still unequally distributed, evidence from the range’s upper segment (corresponding to warmer/hot climates) being largely missing. The present work is the first to provide robust data on how four proteins relevant to PMI determination degrade in an environment at + 30 °C, at least one step further toward the full set of reliable temperature correcting factors, which is crucial to provide the protein degradation–based method of PMI determination with a worldwide scope of application. Degradation patterns of muscle proteins relevant to PMI determination in a porcine model system are not critically influenced by whether the analyzed tissue passes through a 4-month period of freezer storage at stable temperature. The findings give a promising perspective for a broad applicability of protein degradation–based PMI determination, while some caveats remain: the present experiment was based on a single freeze–thaw cycle only. This certainly mimics the conditions encountered by lab freezer–stored tissue, but does not fully apply to the erratically recurring freeze–thaw events to which corpses may be subjected under real outdoor conditions. Verification in both thermally more fluctuating animal and human cadaveric models, importantly including field work, must follow to validate the present results. Below is the link to the electronic supplementary material. Fig. S1 Morphological changes during the decomposition process of non-frozen (a) and pre-frozen (b) hind limbs over a time period of 160 hpm/h aot . Both experimental groups show similar changes over the investigated time and at certain time points. (PDF 459 kb)
Fig. S2 Representative Western blots of vinculin, GAPDH, alpha actinin, and alpha tubulin, depicting protein bands of non-frozen hind limbs at 0 hpm, and of hind limbs intended for freezing, both before freezing (pre-freeze) and at the onset of thawing (post-freeze). Results show no qualitative differences between the freeze-thawed samples and their pre-freeze references. Note that post-freeze samples exhibit no signs of degradation apart from slight fading of some of the protein bands (e.g., alpha tubulin). (PNG 93 kb) High resolution image (TIF 233 kb) Tab. S1 Bivariate correlations between the chronology of protein degradation events and the PMI, calculated using Spearman’s rank correlation coefficient (Spearman’s ρ and corresponding p value). All native proteins (except GAPDH of non-frozen hind limbs) and their specific degradation products show a significant correlation (Spearman’s ρ ≥ 0.75) between protein changes and PMI. (JPG 122 kb)
Pharmacogenomic markers of glucocorticoid response in congenital adrenal hyperplasia
9af99000-61f4-4fcb-8171-c6113d6bd945
9767328
Pharmacology[mh]
Glucocorticoid (GC) replacement is the mainstay treatment for 21-hydroxylase deficiency (21-OHD), the most common cause of congenital adrenal hyperplasia (CAH), a group of autosomal recessive disorders affecting cortisol biosynthesis. The inability to restore the physiological cortisol secretion rhythm compromises the outcomes. Therefore, many challenges remain for the management of these patients. The variety in clinical responses to treatment with GC reflects the variation in GC sensitivity between individuals; while some patients are known to rapidly develop adverse effects during corticotherapy, others show good tolerance. Furthermore, reduced glucocorticoid sensitivity has been associated with a more favourable metabolic profile, while glucocorticoid hypersensitivity might be involved in the pathogenesis of the metabolic syndrome and mood disorders. Genetic factors associated with GC sensitivity contribute to individuals’ responses to GC and to disease predisposition. The GC response and the pituitary negative feedback are regulated through GC binding to its receptor (GR), encoded by the NR3C1 gene. Differences in healthy individuals’ responses to GC are, at least in part, genetically determined by NR3C1 gene polymorphisms. Novel insights into the basis of GC sensitivity point to an important role for GR gene variants. Two reports suggest that polymorphisms of the GR gene may be associated with metabolic profiles in 21-OHD patients: the Bcl I GR polymorphism, which is associated with increased GC sensitivity, was linked to increased cardiovascular risk among adult CAH subjects. Conversely, the 9β variant, which was associated with healthier metabolic profiles among paediatric subjects with CAH, is expected to increase GC resistance. Specific arrangements of NR3C1 gene polymorphisms may play a role in the interindividual variability in response to GC treatment among 21-OHD subjects. We investigated whether single nucleotide polymorphisms (SNPs) previously linked to increased GC resistance or sensitivity were associated with different GC responses in a cohort of 21-OHD subjects. We investigated six SNPs of the GR gene and estimated each minor allele frequency (MAF) and the haplotypes in CAH subjects (n = 102). The comparative analysis of MAF against a control group of healthy subjects (n = 163) was previously studied and is shown in . GR genotype-phenotype associations were then explored after a very-low-dose dexamethasone suppression test (VLD-DST). The complete study flowchart is shown in . Ethics The study was approved by the Ethics Committee of the Institution (CAAE-1.172.019). Patients and their legal guardians signed an informed written consent. Participants The population of interest comprised classic 21-OHD patients under GC replacement and regular monitoring at the Federal University of Minas Gerais, Hospital of Clinics (HC-UFMG). The diagnosis of the 21-OHD classic form was based on clinical and biochemical evaluation, according to medical records. Salt-wasting (SW) subjects who presented well-documented hyponatremia and hyperpotassaemia in the neonatal period were selected; some SW adult patients were not on fludrocortisone replacement. Simple virilizing (SV) subjects were diagnosed by the public neonatal screening program of the state of Minas Gerais and were not on fludrocortisone replacement. Patients with chronic disease or taking medication other than hormone replacement for CAH were not included.
A sample of 102 unrelated subjects, with a median age of 8.95 years [interquartile range (IQR) 2.13–17.95], 73 of them female (71.6%), was enrolled in the GR-genotyping study. NR3C1 genotyping The human GR gene ( NR3C1 ) is located within a single linkage disequilibrium block on chromosome 5 (5q31.3). It spans 157,582 bases and comprises 13 non-coding first exons, which act as alternative promoters, and 8 protein-coding exons, numbered 2–9. Among the several SNPs of this gene, the ones linked to the GC response were studied here: Tth111 I, ER22 , 23EK and 9β are associated with GC resistance, while N 363S and Bcl I are associated with increased GC sensitivity. Genotyping methods were described elsewhere. Clinical assessment The final VLD-DST sample (n = 28) was selected based on the classification of the 102 individuals’ genotypes, according to the GC sensitivity or resistance of the GR gene variants as established in previous reports. Thus, only patients with profiles of increased GC resistance (n = 18) or increased sensitivity (n = 10) were selected. Increased resistance or sensitivity was assumed by selecting the genotypes with 100% likelihood estimation of GC resistance or sensitivity in the haplotype distribution of this population. Subjects with 21-OHD who were in good health were recruited to undergo the VLD-DST when serum 17-hydroxyprogesterone (17OHP) was > 1,000 ng/dL (by immune chemiluminometric assay) on the eve of the test. After consent, the GC in use was discontinued according to the respective biological half-life of each drug: 4 days for hydrocortisone, 6 days for prednisone, and 7 days for dexamethasone. There was no alteration in the administration schedule of fludrocortisone, given once daily in the morning. After GC withdrawal, participants were carefully monitored by phone; they were instructed to report any symptoms of disease and to resume glucocorticoid use immediately. A single trained observer obtained standard anthropometric measurements with the patients wearing appropriate clothes. Body mass index (BMI) was calculated as weight (kg)/height (m)², and body surface area (BSA) was calculated as [weight (kg) × 4 + 7]/[90 + weight (kg)]. Very low-dose dexamethasone suppression test protocol The VLD-DST was used primarily to assess the GC negative feedback at the level of the anterior pituitary, as it only partially suppresses cortisol levels. Standard VLD-DST protocols were adapted to assess individual GC sensitivity in this population. Adrenocorticotropic hormone (ACTH) levels are intricately entwined with cortisol levels, even in CAH subjects. Here, decreasing cortisol concentrations along the test were considered a suppressive test response. Participants who did not present any decline in cortisol levels were classified as non-suppressors. The test was carried out at the HC-UFMG Laboratory Medical Service. Serum cortisol (without fasting) was measured at 8:00 (baseline) and two and four hours after very low doses (20 and 40 μg/m²) of intravenous dexamethasone disodium phosphate in bolus (Decadron® 2 mg/mL, Aché Laboratórios Farmacêuticos S.A., Brazil). Venepuncture was performed by an experienced nurse, who also prepared the solution: 1 mL of the product diluted in 19 mL of 0.9% saline solution to a final concentration of 100 μg/mL.
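The anthropometric and dosing arithmetic of this protocol is straightforward to express in code. The sketch below is illustrative only: the function names and the example patient are assumptions, not study data, while the formulas (BMI, the weight-based BSA formula, and the 100 μg/mL working solution) are the ones stated above.

```python
# Illustrative dosing arithmetic for the VLD-DST protocol described above.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def bsa(weight_kg: float) -> float:
    """Body surface area via the weight-based formula cited above:
    [weight (kg) x 4 + 7] / [90 + weight (kg)], in m^2."""
    return (weight_kg * 4 + 7) / (90 + weight_kg)

def dexamethasone_volume_ml(dose_ug_per_m2: float, weight_kg: float,
                            solution_ug_per_ml: float = 100.0) -> float:
    """Volume of the diluted 100 ug/mL dexamethasone solution to inject."""
    return dose_ug_per_m2 * bsa(weight_kg) / solution_ug_per_ml

# Hypothetical 30-kg child: volumes for the 20 and 40 ug/m^2 test steps
for dose in (20, 40):
    print(f"{dose} ug/m2 -> {dexamethasone_volume_ml(dose, 30.0):.2f} mL")
```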
Serum cortisol assay Blood samples were analysed in the same institution for serum cortisol by a competitive chemiluminescent enzyme immunoassay on the Integrated VITROS™ 5600 Microwell platform (Johnson & Johnson, High Wycombe, UK, 2009). The manufacturer’s reference range for serum basal cortisol is 4.46–22.7 μg/dL. The stated detection limit (sensitivity) is > 0.1 μg/dL and the working range is 0.16–61.6 μg/dL. The mean cross-reactivity of cortisol with biological steroid precursors is < 0.5%. The intraindividual cortisol variability of the 20 μg/m²-DST in healthy volunteers was 4.3% in a previous study. Statistical analysis The genotypes were estimated computationally with the Haplo.stats package (version 1.6.3), available in R, version 3.5.1. This software uses a consistent maximum likelihood estimation algorithm (haplo-em function) and calculates specific values for the haplotypes (haplo-score function), considering p<0.05, and further estimates carrier probability assuming a diallelic model of inheritance. Cases eligible for the genotype-phenotype association study were selected by assuming this population’s haplotype distribution and then selecting only genotypes with 100% likelihood estimation. GC sensitivity through the VLD-DST was studied according to age (<10, 10–20 or > 20 years old), designated sex (female or male), CAH clinical form (SW or SV), the current use of fludrocortisone (among SW), and GC dose. For uniform analysis, long-acting GC were transformed into equivalent doses of hydrocortisone, as follows: prednisone, mg × 4; dexamethasone, mg × 30. The non-parametric Kruskal-Wallis (KW) test was used to compare medians and the chi-square (χ²) statistic for mean values. Baseline and dexamethasone-suppressed serum cortisol (F) values were evaluated together as continuous variables (average cortisol values) and as percentage changes. Outlier evaluation followed the limit of 1.5 times the interquartile range (IQR). Multiple regression analysis was performed using a random effect model for average cortisol values. The analysis was performed using R (version 3.6.1, Core Team, 2019), Minitab Statistical (version 17.1.0, 2010), and Microsoft® Excel for Mac (version 16.32, 2019) software. Two time-effect models were built to study the average cortisol levels across the test compared to baseline for SV and SW subjects. Statistical significance was set as p<0.05.
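As a minimal illustration of the dose-uniformization step, one might write the following; the equivalence factors (×4, ×30) are the ones given above, while everything else (names, example doses) is invented:

```python
# Sketch of converting daily GC doses to hydrocortisone equivalents
# (prednisone x 4, dexamethasone x 30), as described above.

HC_EQUIVALENCE = {
    "hydrocortisone": 1,   # reference drug
    "prednisone": 4,
    "dexamethasone": 30,
}

def hydrocortisone_equivalent_mg(drug: str, dose_mg: float) -> float:
    """Express a daily GC dose as its hydrocortisone equivalent in mg."""
    try:
        return dose_mg * HC_EQUIVALENCE[drug]
    except KeyError:
        raise ValueError(f"no equivalence factor defined for {drug!r}")

print(hydrocortisone_equivalent_mg("prednisone", 5.0))      # 20.0 mg HC-eq
print(hydrocortisone_equivalent_mg("dexamethasone", 0.5))   # 15.0 mg HC-eq
```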
The subjects' clinical features are summarized in and their genotype distribution in (n = 28). An outlier (male SW child) was excluded from analysis. Baseline cortisol concentrations were lower among non-suppressors compared to suppressors (p<0.0039, KW) and were not statistically different between children, adolescents, and adults. The clinical variables biological sex, GC use (type and dose), and BMI did not affect the GC response. SW subjects presented less cortisol suppression than SV subjects. The Tth111I + 9β/Wild and Tth111I + ER22/23EK + 9β/Wild genotypes were associated with GC resistance. Six participants had no cortisol suppression at all. Non-suppressors were all SW subjects, with a median 17OHP of 3,448 ng/dL (range 1,443 to 10,000 ng/dL) on the eve of the challenge test. They did not differ from suppressors in GC replacement dose (p = 0.202, χ²). The most significant cortisol suppression was observed in a homozygous BclI carrier. No significant differences were observed at 2-hour testing (p = 0.121) in the comparison of cortisol variability between SW and SV subjects. However, significant differences were detected at 4-hour testing (p<0.001). These significant differences remained when data of fludrocortisone non-users (SW and SV subjects) were compared (p = 0.038, KW). The two time-effect models for cortisol suppression between SW and SV subjects are presented in . SV subjects exhibited a statistically significant decrease in cortisol levels at the 4-hour time-point of the test (p = 0.0064).
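As a minimal sketch of the Kruskal-Wallis comparison reported above (SW versus SV cortisol suppression at 4-hour testing), the snippet below uses scipy.stats.kruskal on hypothetical percentage-change values; the numbers are illustrative only and do not reproduce the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical 4-hour percentage changes in cortisol relative to baseline
# (negative values indicate suppression); illustrative only.
sv_pct_change = np.array([-55.0, -40.0, -62.0, -48.0, -35.0, -58.0])
sw_pct_change = np.array([-10.0, 5.0, -20.0, 0.0, -8.0, 12.0])

# Kruskal-Wallis comparison of the two clinical forms, as in the text
h_stat, p_value = stats.kruskal(sv_pct_change, sw_pct_change)
print(f"KW H = {h_stat:.2f}, p = {p_value:.4f}")
```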
The final multivariate model for SV subjects included age (p = 0.0155) and genotype (p = 0.0023). Subjects with increased GC sensitivity genotypes showed a decrease of 7.3 μg/dL in average cortisol levels compared to those with GC resistance genotypes. Most SW subjects did not present a statistically significant decrease in cortisol levels at the 4-hour time-point of the test. The genotype did not influence average cortisol concentrations among them (p = 0.4872). The final multivariate model included age (p = 0.011) and fludrocortisone use (p = 0.0086). Fludrocortisone users had mean cortisol concentrations 5.3 μg/dL higher than non-users. Baseline cortisol concentrations were not statistically different between SV and SW (p = 0.3326, KW), nor between SW fludrocortisone non-users and SV subjects (p = 0.416, KW). However, the levels were lower among SW fludrocortisone users when compared to SW non-users (p = 0.088, KW). This is a genotype–phenotype association study that revealed pharmacogenomic markers of GC resistance among subjects with classical CAH due to 21-OHD. We found two genotypes of interest that were associated with an impaired DST response at 4-hour testing: Tth111I + 9β/Wild and Tth111I + ER22/23EK + 9β/Wild. Both haplotypes have been associated with relative GC resistance in the literature. The Tth111I + ER22/23EK + 9β/Wild haplotype is rarely reported in most populations (~1%), but the Tth111I + 9β/Wild haplotype occurs more commonly (~10%). For diseases other than CAH, GR gene haplotype-phenotype association studies have been performed in various cohorts of children and adults receiving GC treatment: acute lymphoblastic leukaemia, cystic fibrosis, inflammatory bowel disease, psoriasis, multiple sclerosis, and Cushing's, metabolic and Guillain-Barré syndromes. The studies showed that GR genetic factors associated with GC resistance can affect not only the response to therapy but also disease pathophysiology. The GR variants influence the expression of molecular properties of the receptor protein, except for Tth111I and BclI, which are located outside protein-coding regions. The BclI variant is quite common in the human population. It is located in an intron, but may still have important regulatory roles, modifying the expression of mRNA of nearby genes and affecting RNA stability or the access of transcription factors to gene regulatory elements. However, the exact mechanism by which this polymorphism interferes with GC sensitivity is unclear. It has been widely associated with increased GC sensitivity, as we detected in our series among SV subjects, although this variant was less common in our cohort than in healthy controls. Nevertheless, homozygosity amplified the effect, since the most significant cortisol suppression was observed in the homozygous BclI carriers. Importantly, in addition to an increased GC response, this variant might play a role in obesity susceptibility among subjects with CAH. The main actions of the 9β polymorphism have been well explored in the literature. It is located in exon 9 of the GR gene, which encodes two alternative splicing variants, the α and β isoforms, of which only the first has functional effects. The polymorphism has been associated with increased mRNA expression and stabilization of the dominant-negative β splice variant. In our series, the 9β polymorphism was inherited in a block with both the ER22 and 23EK variants.
ER22 and 23EK variants have been associated with lower body weight, lower total and low-density lipoprotein cholesterol levels, as well as lower fasting insulin concentrations and better insulin sensitivity. They are located in codons 22 and 23, in the transactivation domain encoded by exon 2. Molecular studies point to a 15% increase in the proportion of the GRα-A translation sub-isoform relative to GRα-B. Altogether, these data might explain the physiology of the relative increase in GC resistance among subjects in our series. Among the variables that may have affected the GC response, the CAH clinical phenotype stood out: SW participants had consistently less cortisol suppression than SV participants. Moreover, thirty percent of the SW subjects did not present any cortisol suppression in the DST, and all of those exhibited GC resistance genotypes. Thus, as expected, some SW patients in our series showed a pattern of greater resistance to GC treatment than others, which might be attributed, in part, to GR variants. Fludrocortisone use on the eve of the test might itself have influenced some of these non-suppressive responses among SW subjects. It is well known that fludrocortisone crosses the blood-brain barrier and exerts an inhibitory effect on the HPA axis, mainly on serum cortisol levels during the nadir of the circadian rhythm. These effects reflect the complete binding of fludrocortisone to mineralocorticoid receptors (MR) in the hippocampus, since there are uncertainties about the occurrence of MRs in the hypothalamus and the pituitary. Finally, it seems that MRs in the hippocampus mediate the "proactive" feedback of GCs, in a dose-response manner, at doses above 0.05 mg. The GC response was also associated with age. Children younger than 10 years old presented more cortisol suppression during the 4-hour test compared with adolescents and adults. However, this subgroup of children contained the highest proportion of SV subjects, which may explain this finding, since SV subjects presented more cortisol suppression. Although our study has some limitations, including a relatively small sample size, which may have biased some results, the rigorous experimental protocol of cortisol measurements at three different time points allowed a comparative assessment and analysis of the overall GC response to the DST. To our knowledge, there are no other studies addressing the GR haplotype structure and the GC response to treatment within the CAH population. In the future, a population-specific pharmacogenomic landscape relevant to GC therapy would further contribute to understanding the inconsistency in therapy response and could help predict the risk of adverse reactions in patients receiving GCs. The genetic profiles of GC resistance might also be useful in identifying CAH-specific subgroups that would benefit from personalized treatment. Other mechanisms of inherited GC resistance have been identified and should be further explored in CAH subjects. In conclusion, the Tth111I + 9β/Wild and Tth111I + ER22/23EK + 9β/Wild genotypes are probably pharmacogenomic markers of GC resistance in CAH. These findings may be relevant given the challenges posed by GC therapeutic management in CAH and should be confirmed by further studies.
Screening for depression in the general population through lipid biomarkers
Blood lipids have demonstrated an unusually strong association with psychiatric disorders among various molecular phenotypes examined, such as polar metabolites, gene expression and genetic information. Studies directly investigating the blood plasma lipidome composition in depression and other psychiatric disorders have revealed significant alterations in patients when compared to healthy controls. In this study, we evaluate the association between self-reported anxiety and depression symptoms from an urban population and the blood lipidome. We show that depressive symptoms, in particular, are correlated with certain blood lipid levels, and that lipidome alterations in high-functioning individuals from the general population with pronounced depression scores mirror those of clinically depressed patients, which is an important step towards a clinically applicable molecular test for depression risk assessment. Blood lipids show consistent alterations in psychiatric disorders, and depression in particular, in both clinical patients and general population samples, making them promising candidates for molecular testing of mental disorders. Anxiety and depression contribute significantly to the global burden of mental disorders, with an estimated worldwide population prevalence of approximately 12% and 6%, respectively, for an episode within the past year. Depression, in particular, is one of the leading causes of disability worldwide. Although lifestyle and social factors are believed to influence the onset of these diseases, there is also an association with specific genetic risk factors and epigenetic molecular changes. Given the substantial evidence of intrinsic biological processes associated with psychiatric disorder development, scientists are exploring potential molecular markers to supplement current interview-based diagnostic methods. Promising results have been found for depression and, to a lesser extent, anxiety disorders. Several computational models based on blood biomarkers have been proposed for predicting depressive states. For instance, a study involving 897 subjects affected by the Great East Japan Earthquake suggested the potential for categorizing individuals with high levels of depressive symptoms based on their blood plasma metabolite profiles. Blood plasma metabolites have also been used to predict symptom severity in patients with clinical depression, and to differentiate between patients with depression and control individuals. Transcriptome studies have similarly revealed peripheral gene expression biomarkers with moderate predictive ability for depressive states. Notably, both genetic and gene-expression studies have consistently found evidence of lipid metabolism dysregulation associated with depressive symptoms. For example, a Mendelian randomization analysis of GWAS data on lipid traits (n = 188,577) and depression (n = 480,359) found a causal association between triglycerides and depressive symptoms, as well as deliberate self-harm and suicidal behaviour. Consistent with the findings of genetic and gene expression studies linking lipid metabolism to depression, lipids have demonstrated an unusually strong association with psychiatric disorders among various molecular phenotypes examined. Numerous studies directly investigating the blood plasma lipidome composition in depression and other psychiatric disorders have revealed significant alterations when compared to healthy controls.
Further, several of these studies proposed computational models for distinguishing depressed patients from healthy individuals based on blood lipid profiles, including a predictive model with an area under the receiver operating characteristic curve (ROC AUC) of up to 0.87 in validation analysis, an accuracy that studies of other types of molecular biomarker have not achieved. In our prior multi-cohort analysis of blood lipidome alterations in schizophrenia, bipolar disorder, and major depressive disorder, we similarly developed a model that differentiated psychiatric patients from controls with high accuracy (ROC AUC = 0.86–0.95). These outcomes indicate that lipids show promise as potential biochemical markers of psychiatric disorders, including depression. The transition from observed molecular differences between healthy individuals and patients with psychiatric conditions to a clinically applicable test for psychiatric risk assessment necessitates extensive validation and substantial effort. As an initial step towards this objective, our study explored the potential of using blood lipid profiles as a screening tool in the general population with the aim of detecting individuals exhibiting symptoms of anxiety or depression. We assessed blood plasma lipid levels in 604 volunteers using a high-throughput and reliable lipidome analysis method with prospective clinical applicability, direct-infusion mass spectrometry. These volunteers were sampled from the population of Moscow, Russia, and their lipid levels were investigated for any association with self-reported symptoms of anxiety and depression. To further substantiate our findings, we compared these associations with alterations observed in 32 patients with a clinical depression diagnosis. Accordingly, we investigated the possibility of generalizing lipid alterations from a clinical cohort to the detection of individuals with self-reported symptoms of mental disorders. Study participants Patients with a diagnosis of major depressive disorder ( n = 32; mean age ± std = 33 ± 13; 47% female) were recruited from the Mental Health Clinic no. 1 named after N.A. Alexeev of the Department of Health of Moscow. Inclusion criteria consisted of a diagnosis of major depressive disorder established during an inpatient examination according to the International Classification of Diseases (ICD-10, code F32 or F33). Matched controls without mental disorders ( n = 21; mean age ± std = 29 ± 8; 52% female), comprising individuals without a psychiatric disorder diagnosis, were recruited in parallel. Volunteers ( n = 604; mean age ± std = 30 ± 10; 72% female) were recruited from the Moscow population, and were additionally asked to complete HADS self-reporting questionnaires ( ). Records with missing HADS scale values or demographic data were not included in the study. The HADS scale consists of two subscales: subscale A (assessment of anxiety) and subscale D (assessment of depression). Each subscale includes 7 statements. Each statement has 4 response options reflecting the degree of severity of the symptom and is coded in increasing levels of symptom severity from 0 (absence) to 3 (maximum severity). The scale was validated for the Russian population. The following ranges of HADS-A/D scores were considered: healthy range HADS-A/D ≤ 7, mild symptoms of anxiety/depression 8 ≤ HADS-A/D ≤ 10, moderate symptoms 11 ≤ HADS-A/D ≤ 14, and severe symptoms HADS-A/D ≥ 15.
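As a minimal sketch of how the subscale scoring and severity bands described above fit together, consider the following Python snippet; the function names and example responses are hypothetical.

```python
def hads_subscale_score(item_scores) -> int:
    """Sum of the 7 item scores (each coded 0-3) for one HADS subscale."""
    assert len(item_scores) == 7 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores)

def hads_severity(score: int) -> str:
    """Map a HADS-A/D subscale score to the severity bands used in the text."""
    if score <= 7:
        return "healthy range"
    if score <= 10:
        return "mild symptoms"
    if score <= 14:
        return "moderate symptoms"
    return "severe symptoms"

if __name__ == "__main__":
    answers_d = [2, 1, 2, 3, 1, 2, 2]       # hypothetical HADS-D responses
    score = hads_subscale_score(answers_d)  # 13
    print(score, hads_severity(score))      # 13 moderate symptoms
```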
Volunteer participants were selected according to a consecutive sampling scheme: the study sample was formed in 2023 from the 747 volunteers who responded to a social media announcement. The announcement included information about the study, the location of the study, and the inclusion and exclusion criteria. A number of volunteers filled out questionnaires but failed to show up for blood collection and were not included in the study, giving a final total of blood samples with lipidome measurements from 604 individuals ( ). The volunteer sample size was chosen so that the expected number of individuals with depression-like symptoms would be >30, matching the clinical depression sample size, assuming an approximate 5% incidence of depressive symptoms in the population. Exclusion criteria for all groups were age (<18 or >70 years old), substance abuse, intellectual disability, and severe somatic or neurological diseases that may affect a diagnosis of mental disorder, in line with ICD diagnostic criteria. Ethics Informed consent was obtained from all participants. The study was conducted according to the guidelines of the Declaration of Helsinki formulating ethical principles for medical research involving human subjects. The protocol of this study was reviewed and approved by the local ethical committee (Protocol No. 2/25.01.2022). Plasma collection Plasma was obtained from peripheral venous blood in the morning. Plasma samples were collected in 4 ml Vacutainer tubes containing the chelating agent ethylenediaminetetraacetic acid (EDTA) (Vacuette, Greiner bio-one, Austria). Tubes were centrifuged at room temperature at 1500 g for 15 min. The supernatant was stored in 500 μl aliquots at −80 °C. Lipid extraction Plasma samples were randomized before the extraction. Extraction blanks, representing empty samples, were added at the end of the extraction batch. An aliquot of 250 μl of water was added to 20 μl of plasma, followed by the addition of 1300 μl of a cold MTBE:MeOH mixture (7:2, v:v). After 10 min of ultrasound exposure (50/60 Hz, Bandelin Electronic, Berlin, Germany) in an ice bath, samples were vigorously shaken for 40 minutes at 4 °C (Vortex Genie, Scientific Industries, New York, USA) and afterwards centrifuged for 10 min at 12,700 rpm, 4 °C (Centrifuge 5427R, Eppendorf, Germany). Then 1000 μl of the upper phase was collected into a separate tube and dried under reduced pressure (20 Pa) at 30 °C (Concentrator plus, Eppendorf, Germany). Dried pellets were stored at −80 °C until the analysis. On the day of mass spectrometry measurements, pellets were reconstituted in 200 μl of isopropanol:methanol:chloroform (4:2:1; v:v:v) and diluted 5-fold with the same mixture containing 9.5 mM ammonium formate as an additive. Mass spectrometry data acquisition Mass spectrometry analysis was performed in positive mode using a QExactive mass spectrometer (Thermo Fisher Scientific, USA) equipped with a heated electrospray ionization source. Samples were introduced by flow injection using a Waters Acquity UPLC chromatograph (Waters, Manchester, UK). The mobile phase consisted of a 7.5 mM ammonium formate solution in isopropanol:methanol:chloroform (4:2:1; v:v:v). The flow rate was modulated during the analytical run.
The eluent flow rate was set to 0.8 ml/min in the intervals of 0–0.04 min and 2.01–3.0 min and maintained at 10 μl/min in the range of 0.04–2.01 min. The increased flow rate was used for sample introduction at the beginning and loop flushing at the end of the run. Each run lasted 3 min. The injection volume was set to 20 μl, and the autosampler temperature was maintained at 4 °C. The source settings were established as follows for the 10 μl/min flow: sheath gas, 15 a.u.; aux gas, 5 a.u.; sweep gas, 0 a.u.; spray voltage, 3.5 kV; capillary temperature, 250 °C; S-lens RF level, 50; aux heater temperature, 250 °C. For 800 μl/min, the source parameters changed accordingly: sheath gas, 60 a.u.; aux gas, 20 a.u.; sweep gas, 4 a.u.; spray voltage, 2.5 kV; capillary temperature, 300 °C; S-lens RF level, 50; aux heater temperature, 300 °C. Each mass spectrometry experiment consisted of full scan events and subsequent data-independent fragmentation (DIA). First, the full scan range of interest ( m/z 200–1050) was split into narrow windows to avoid C-trap overload. The ion acquisition program was set as follows: 0.12–0.17 s: 200–652 Da, 0.17–0.22 s: 652–684 Da, 0.22–0.27 s: 684–716 Da, 0.27–0.32 s: 716–764 Da, 0.32–0.37 s: 764–812 Da, 0.37–0.42 s: 812–876 Da, 0.42–0.47 s: 876–908 Da, 0.47–0.52 s: 908–1051 Da. Splitting intervals were selected based on the spectral ion population. The DIA event consisted of 1 Da-wide windows within the range m/z 200.5–1050.5 with a resolution of 17,500 (FWHM at m/z 200). Collision energy was applied in a stepwise manner (15-20-25), the AGC target was set at 2×10^5, the isolation window was 1.2 Da, and the fixed first mass equalled m/z 80. All spectra were recorded in profile mode. Pierce LTQ Velos ESI Positive Ion Calibration Solution was used for external calibration in positive mode. Resolution was set at 140,000 (FWHM at m/z 200) with an AGC setting of 5×10^6 and a maximum IT time of 100 ms. Quality control (QC) samples were made from pooled aliquots of the first 96 samples in the batch. QC samples were inserted after every 12 samples to account for batch effects and technical reproducibility, and at the beginning of each batch to allow for system equilibration. Long-term-reference (LTR) samples were inserted every 12 samples as well, to account for possible intensity drifts between experiments. Data pre-processing and lipid identification Raw files were converted to .mzXML format with PeakStrainer software, keeping only MS1 information for biological samples and extraction blanks (time range of 0–32.5 s) and MS1 and MS2 information for QC samples. The .mzXML files were then loaded into LipidXplorer software (v. 1.2.8.1), using the import settings taken from , with an MS2 threshold of 20,000 abs. Lipid identification was based on MFQL (molecular fragmentation query language) scripts downloaded from the article, along with customized MFQL files for isotopically labelled standards. The following lipid classes were included in the analysis: CAR, LPC, LPC O-, LPE, SM, TAG, DAG, PC, PC O-, PE, PE P-, PI, CE, Cer. The lipid identification strategy was based on high-resolution MS1 precursor information and MS2 fragmentation data. Data post-processing Features with more than 10% of zero values across plasma samples were removed from the analysis. The remaining zero values were replaced by 0.9 of the minimum non-zero value across plasma samples in each feature. Feature intensities were transformed with base-2 logarithm (log2). Contaminants were removed using extraction blank samples according to the following rule: mean log2 intensity of plasma samples − mean log2 intensity of blank samples <1.
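The pre-processing rules above translate directly into a few pandas operations; the sketch below assumes samples in rows and lipid features in columns, and is a simplified illustration rather than the authors' actual pipeline.

```python
import numpy as np
import pandas as pd

def preprocess(plasma: pd.DataFrame, blanks: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the filtering and transformation steps described above."""
    # 1. Drop features with more than 10% zero values across plasma samples
    keep = (plasma == 0).mean(axis=0) <= 0.10
    plasma, blanks = plasma.loc[:, keep].copy(), blanks.loc[:, keep]

    # 2. Replace remaining zeros with 0.9 x the minimum non-zero value per feature
    for col in plasma.columns:
        nonzero_min = plasma.loc[plasma[col] > 0, col].min()
        plasma.loc[plasma[col] == 0, col] = 0.9 * nonzero_min

    # 3. log2 transform
    plasma_log = np.log2(plasma)
    blanks_log = np.log2(blanks.where(blanks > 0))  # zeros in blanks become NaN

    # 4. Remove contaminants: mean plasma log2 minus mean blank log2 below 1
    signal_over_blank = plasma_log.mean(axis=0) - blanks_log.mean(axis=0)
    return plasma_log.loc[:, signal_over_blank >= 1]
```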
For each feature, the measurement batch effect (each batch consisting of 96 plasma samples) was corrected by subtracting the median intensity of QC samples in that experimental batch. The feature intensities were then returned to their original scale by adding the corresponding median value across all batches. Features retaining high technical variability after batch correction were removed using QC samples according to the following rule: features with a standard deviation across QC samples >0.5 (in log2 scale) were removed from the analysis. Plasma sample measurements were conducted in two large temporal batches, and LTR samples were used to align the intensities of the two batches. For each feature, the difference of median log2 intensity values in LTR samples between the second and first experimental batches was added to the log2 intensity value of the second experimental batch. Statistics Python version 3.7.3 was used for statistical analysis. To adjust lipid abundances for age, sex, and BMI prior to conducting the association analysis with the HADS scales, we used a linear model regressing age, sex, and BMI on each lipid feature (Python package sklearn.linear_model.LinearRegression). The corresponding residual values were used for Pearson correlation analysis with HADS-A/D values (scipy.stats.pearsonr). To investigate whether there was a significant association between corrected blood plasma lipid levels and the HADS-A/HADS-D scores, we performed a permutation test, by randomly shuffling the HADS-A/HADS-D scores across individuals and calculating the number of lipids with a Benjamini-Hochberg corrected p-value less than 0.1. The permutation p-value was calculated as the proportion of permutations, out of 1000, for which this number was equal to or larger than the same number calculated for the original data. A hypergeometric test (one-sided Fisher test) was used to test for overrepresentation of lipid classes among significant lipids (scipy.stats.hypergeom). The enrichment ratio for the over-representation analysis of ether phospholipids among the eight lipids displaying the strongest associations with the HADS-D scale (significant lipids) was calculated by dividing the number of lipids from the particular biochemical group among the significant lipids by the corresponding expected number, estimated as N_sign × N_group / N_total (where N_sign is the number of significant lipids, N_group is the total number of lipids from the particular biochemical group, and N_total is the total number of lipids). The double bond index for the PUFA over-representation analysis was calculated as the total number of double bonds divided by the number k of side chains in the lipid structure ( k = 3 for TAG, k = 1 for lyso-species, CE, and CAR, and k = 2 for the rest of the lipid classes). The Mann–Whitney U test was used to test whether significant lipids showed a difference in the number of double bonds compared to the rest of the lipids (scipy.stats.mannwhitneyu). The Python package sklearn.linear_model.LogisticRegression with penalty = 'l1' was used for predictive modelling. Of note, the regularization parameter C is sensitive to training sample size; hence, the number of features chosen by the lasso logistic model was considered when choosing the appropriate parameter value.
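Before the modelling details below, the covariate adjustment and permutation test described above can be sketched compactly; the snippet assumes a lipid matrix of shape (n_samples, n_lipids) and a covariate matrix with age, sex, and BMI columns, and is an illustration of the described procedure rather than the original code.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

def adjust_for_covariates(lipids: np.ndarray, covariates: np.ndarray) -> np.ndarray:
    """Regress age/sex/BMI out of each lipid feature and return residuals."""
    model = LinearRegression().fit(covariates, lipids)
    return lipids - model.predict(covariates)

def n_bh_significant(residuals: np.ndarray, scores: np.ndarray, fdr: float = 0.10) -> int:
    """Count lipids whose Pearson correlation with the HADS score passes
    Benjamini-Hochberg correction at the given FDR."""
    pvals = np.array([stats.pearsonr(residuals[:, j], scores)[1]
                      for j in range(residuals.shape[1])])
    ranked = np.sort(pvals)
    m = len(pvals)
    below = np.nonzero(ranked <= fdr * np.arange(1, m + 1) / m)[0]
    return 0 if below.size == 0 else int(below.max()) + 1

def permutation_p(residuals: np.ndarray, scores: np.ndarray, n_perm: int = 1000) -> float:
    """Proportion of random score permutations yielding at least as many
    BH-significant lipids as the observed data."""
    rng = np.random.default_rng(0)
    observed = n_bh_significant(residuals, scores)
    hits = sum(n_bh_significant(residuals, rng.permutation(scores)) >= observed
               for _ in range(n_perm))
    return hits / n_perm
```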
First, model performance was estimated for different parameters, C = 0.01, 0.1, 0.5, 10, 100, 500, 1000 ( ), in randomized cross-validation: 1000 random test-train splits were performed, for which k = 10 control and k = 10 disease samples were chosen at random from the n = 68 samples ( ), and the rest ( k = 48) were used for training a lasso logistic regression model. The data was normalized by the mean and standard deviation for each feature across the 68 samples. We chose C = 0.5 ( ), corresponding to 14.9 ± 2.4 predictors on average (mean ± std), for reporting model performance in separating healthy controls from depression patients using randomized cross-validation (the same train-test split approach described above). For predicting risk scores of volunteers, we used all n = 68 samples in model training, with parameter C = 0.3, corresponding to 14 predictors chosen by the model ( ). The data was also normalized by the mean and standard deviation for each feature across the 68 samples. A standard cutoff of 0.5 on the predicted scores was used for defining positive and negative classes. For the subsequent correlation assessment between predicted risk scores and HADS scales (Spearman correlation for all values; Pearson correlation for averaged prediction scores across discrete HADS-A/D values; scipy.stats.spearmanr and scipy.stats.pearsonr), as well as ROC AUC value estimation for the detection of volunteers with increased HADS-D scores, the n = 15 volunteers used in model training were excluded from the analysis. The 95% confidence intervals for correlation coefficients were estimated using 10,000 bootstrap resamples and calculating the (2.5%, 97.5%) quantile values. For predictive modelling performance estimation, we report the 95% subsampling interval, calculated as the (2.5%, 97.5%) quantile values of the performance values in test subsamples in the train/test splitting used during randomized cross-validation. Role of funders The funders had a supporting role in data collection, and no role in the study design, data analyses, interpretation, or writing of the report.
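The randomized cross-validation procedure described above can be sketched as follows; the solver choice and random seed are assumptions, and the normalization across all samples mirrors the description in the text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def randomized_cv(X: np.ndarray, y: np.ndarray, C: float = 0.5,
                  n_splits: int = 1000, k_test: int = 10):
    """Hold out k_test control and k_test patient samples per split and train
    a lasso (L1) logistic regression on the rest; return the mean test ROC AUC
    and its 95% subsampling interval."""
    # Mean/std normalization per feature across all samples, as in the text
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    rng = np.random.default_rng(0)
    controls, cases = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
    aucs = []
    for _ in range(n_splits):
        test = np.concatenate([rng.choice(controls, k_test, replace=False),
                               rng.choice(cases, k_test, replace=False)])
        train = np.setdiff1d(np.arange(len(y)), test)
        clf = LogisticRegression(penalty="l1", C=C, solver="liblinear")
        clf.fit(X[train], y[train])
        aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))
    return float(np.mean(aucs)), tuple(np.quantile(aucs, [0.025, 0.975]))
```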
Cohort description To evaluate the potential of using blood lipid profiles as indicators of mental disorders in the general population, we conducted a study of 604 urban population representatives with exclusion criteria limited to age (>70 years) and severe and decompensated somatic or neurological diseases (age 29.9 ± 9.6 years; 72% female; a; ). For each of the volunteers we collected blood plasma samples and anxiety and depression scores assessed using the Hospital Anxiety and Depression Scale (HADS) questionnaire, with the HADS-A and HADS-D subscales providing the anxiety and depression scores, respectively. As anticipated for a cohort representative of the general population, most individuals scored within the healthy range for both scales (65% and 81% respectively for HADS-A and HADS-D ≤ 7). A minor percentage reported severe (3% and 1.5% respectively for HADS-A/D ≥ 15) and moderate (12% and 6.5% for 11 ≤ HADS-A/D ≤ 14) symptoms of anxiety or depression, with the rest of the cases classified as having mild symptoms (8 ≤ HADS-A/D ≤ 10). Consistent with prior observations, the HADS-A and HADS-D scores were significantly correlated with each other (Pearson correlation c = 0.55, p < 0.0001, 95% CI = (0.49, 0.61); b, ). Further, among the three examined demographic factors (age, sex, and BMI), HADS-A scores demonstrated weak yet significant correlations with age and sex (linear regression, p < 0.0001, R² = 6% and 3%, respectively; c, ; ).
The HADS-D scores showed an even weaker association with age and none with sex or BMI ( p = 0.010, R² = 1% for age; c, ; ). Lipid associations We then moved to examining the association between blood plasma lipid levels and self-reported anxiety and depression scores. Direct-infusion mass spectrometry measurements of 604 blood plasma samples, followed by data processing and quality control filtration, yielded the abundance levels for 186 lipids representing 14 chemical classes ( a; ). Recognizing that demographic factors can have a substantial impact on plasma lipid levels, we adjusted the lipid abundances for age, sex, and BMI prior to conducting the association analysis with the HADS scales. A significant correlation with corrected blood plasma lipid levels was found for the HADS-D scale (Pearson correlation, permutation test p = 0.026; b; ), whereas no statistically significant association was detected for the HADS-A scale (Pearson correlation, permutation test p = 1; b; ). Among the eight lipids that displayed the strongest associations with the HADS-D scale (PC 38:6, PC O-34:3, PC O-36:5, PC O-36:6, PC O-38:6, PC O-38:7, PE P-38:6, TAG 58:8; Pearson correlation, FDR = 10%; , ), five represented the PC O- lipid class and, more generally, six represented ether lipids. This prevalence of ether lipids was far above the chance expectation, indicating biochemical specificity of the altered compounds (hypergeometric test, one-sided Fisher test, enrichment ratio = 5.8, p = 0.00040 for PC O-, enrichment ratio = 5.8, p < 0.0001 for PC O- and PE P- merged; c). All eight lipids also displayed a significantly higher incidence of polyunsaturated fatty acids (PUFAs) in their structure, with a median of 3.0 double bonds per side chain compared to 1.3 for the remaining quantified lipids (Mann–Whitney U test, p = 0.0014; d). Several studies have reported substantial alterations in blood lipidome profiles of patients with clinical depression. If the results we found for the volunteer cohort reflect depression-associated metabolic alteration, they would presumably align with those observed in patients with a major depression diagnosis. To assess the validity of this notion, we collected blood samples and measured blood plasma lipid levels in 32 psychiatric inpatients diagnosed with clinical depression, using the same procedure as for the volunteer cohort ( a; ). The alterations found in these psychiatric patients were consistent with the results of the HADS-D analysis. Specifically, the alterations in lipid abundances of depression patients compared to the volunteer cohort, which represented the general population, were congruent with the associations between lipids and HADS-D scores identified using the volunteer cohort alone (Spearman correlation, c = 0.49, p < 0.0001, 95% CI = (0.36, 0.60); b). Further, when considering only the ether-phospholipid classes, PC O- and PE P-, which showed the strongest association with depression symptoms for volunteers, the correlation increased (Spearman correlation, c = 0.67, p = 0.00035, 95% CI = (0.35, 0.85); b). The congruence of these lipid alterations between the two analyses is further supported by consistent reports of decreased ether phospholipids in the blood of depressed individuals.
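The enrichment and double-bond calculations described in the Statistics section can be sketched as below; the group size used in the example call is a hypothetical count, since the exact number of ether lipids among the 186 quantified features is not restated here.

```python
from scipy import stats

def enrichment_ratio(n_sig_in_group: int, n_sig: int, n_group: int, n_total: int) -> float:
    """Observed / expected count of a lipid group among significant lipids."""
    expected = n_sig * n_group / n_total
    return n_sig_in_group / expected

def hypergeom_p(n_sig_in_group: int, n_sig: int, n_group: int, n_total: int) -> float:
    """One-sided over-representation p-value, P(X >= observed)."""
    return stats.hypergeom.sf(n_sig_in_group - 1, n_total, n_group, n_sig)

def double_bond_index(n_double_bonds: int, lipid_class: str) -> float:
    """Double bonds per side chain: k = 3 for TAG; 1 for lyso species, CE, CAR;
    2 for the remaining classes."""
    if lipid_class == "TAG":
        k = 3
    elif lipid_class in {"LPC", "LPC O-", "LPE", "CE", "CAR"}:
        k = 1
    else:
        k = 2
    return n_double_bonds / k

if __name__ == "__main__":
    # 6 ether lipids among 8 significant ones, out of 186 quantified lipids;
    # n_group = 24 ether lipids overall is a hypothetical count.
    print(enrichment_ratio(6, 8, 24, 186))  # ~5.8
    print(hypergeom_p(6, 8, 24, 186))
```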
While lipids showed parallel alterations in high HADS-D volunteers and patients with clinical depression, one class, triglycerides, substantially contributed to the discrepancy between these two groups (excluding triglycerides, Spearman correlation, c = 0.57, p < 0.0001, 95% CI = (0.43, 0.68); b). Among triglycerides, we noted a clear relationship between the occurrence of double bonds in the fatty acid residues and the extent of disagreement. Specifically, the most saturated and shorter-chained triglycerides showed the greatest discrepancy, while triglycerides containing polyunsaturated fatty acids (PUFAs) followed the overall positive correlation trend ( b, ). Predictive modelling After observing the alignment of blood plasma lipidome alterations in individuals with clinical depression and the volunteers with elevated depression scores, we considered training a mathematical model to separate clinical depression from healthy individuals, with the aim of extending its predictions to the volunteer individuals. Accordingly, we first used lipid measurements for 32 patients with depression and 36 individuals with no signs of mental illness ( n = 21 matched controls, n = 15 individuals from the volunteer cohort with HADS-D and HADS-A ≤ 7; ) to train a lasso logistic regression predictive model ( ). In randomized cross-validation, the model successfully differentiated the two groups: patients with clinical depression and controls showing no depression symptoms (mean ROC AUC = 0.91, 95% subsampling interval = (0.77, 1); mean accuracy = 0.83, 95% subsampling interval = (0.70, 0.95); mean sensitivity = 0.89, 95% subsampling interval = (0.70, 1); mean specificity = 0.77, 95% subsampling interval = (0.60, 1); a, ). Using this model trained to separate patients with clinical depression from healthy controls, we then calculated the depression probability scores for the volunteer cohort. In agreement with our predictions, depression probability scores calculated by the model correlated positively and significantly with volunteers' HADS-D scores (Spearman correlation c = 0.15, p = 0.00023, 95% CI = (0.06, 0.24) for all values; Pearson correlation c = 0.70, p = 0.0027, 95% CI = (0.51, 0.85) for averaged prediction scores across discrete HADS-D values; b, ). Further, despite the substantial correlation between the HADS-D and HADS-A scales, the association between the predictive model scores and HADS-A values was notably weaker, suggesting a degree of specificity of the model in detecting depression-associated alterations (Spearman correlation c = 0.10, p = 0.021, 95% CI = (0.01, 0.18) for all values; Pearson correlation c = 0.47, p = 0.041, 95% CI = (−0.01, 0.77) for averaged prediction scores across discrete HADS-A values; ). The ability of the model to distinguish between individuals without depressive symptoms (HADS-D ≤ 7) and those displaying moderate signs of depression, although consistently better than expected by chance, improved substantially with a greater HADS-D threshold defining the risk group. Specifically, model ROC AUC values increased from 0.64 to 0.84 for individuals selected using HADS-D ≥ 11 ( n = 48) and HADS-D ≥ 14 ( n = 13), respectively ( c, ). For volunteers with severe signs of depression (HADS-D ≥ 15, n = 9), the accuracy of their differentiation from control individuals was similar to that of individuals with clinical depression (ROC AUC = 0.89 and 0.92 for depressed volunteers and clinical depression, respectively; c, ).
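A short sketch of the threshold-based evaluation described above: for a given HADS-D cutoff, volunteers in the healthy range (HADS-D ≤ 7) are compared against those at or above the cutoff using the model's predicted scores; the array names and toy data are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def risk_group_auc(pred_scores: np.ndarray, hads_d: np.ndarray, threshold: int) -> float:
    """ROC AUC for separating volunteers without depressive symptoms
    (HADS-D <= 7) from those with HADS-D >= threshold."""
    mask = (hads_d <= 7) | (hads_d >= threshold)
    labels = (hads_d[mask] >= threshold).astype(int)
    return roc_auc_score(labels, pred_scores[mask])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    hads_d = rng.integers(0, 21, size=200)                  # toy HADS-D values
    scores = 0.03 * hads_d + rng.normal(0, 0.2, size=200)   # toy model scores
    for thr in (11, 14, 15):
        print(thr, round(risk_group_auc(scores, hads_d, thr), 2))
```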
Previous studies indicated a potential influence of antidepressant medication on blood plasma lipid composition. Given that undisclosed medication use could confound results for volunteers with depressive symptoms, the nine volunteers with severe signs of depression were invited for a follow-up visit. Of the five individuals who responded, two were currently receiving treatment for depressive symptoms, while three were medication-naïve ( ). There were no observable differences in the model prediction score distributions between medicated and medication-naïve individuals ( d). Although based on a small sample size, this observation nonetheless indicates a lack of association between model prediction scores and medication status.
In randomized cross-validation, the model successfully differentiated the two groups: patients with clinical depression and controls showing no depression symptoms (mean ROC AUC = 0.91, 95% subsampling interval = (0.77, 1); mean accuracy = 0.83, 95% subsampling interval = (0.70, 0.95); mean sensitivity = 0.89, 95% subsampling interval = (0.70, 1); mean specificity = 0.77, 95% subsampling interval = (0.60, 1); a, ). Using this model trained to separate patients with clinical depression from healthy controls, we then calculated depression probability scores for the volunteer cohort. In agreement with our predictions, depression probability scores calculated by the model correlated positively and significantly with volunteers’ HADS-D scores (Spearman correlation c = 0.15, p = 0.00023, 95% CI = (0.06, 0.24) for all values; Pearson correlation c = 0.70, p = 0.0027, 95% CI = (0.51, 0.85) for averaged prediction scores across discrete HADS-D values; b, ). Further, despite the substantial correlation between the HADS-D and HADS-A scales, the association between the predictive model scores and HADS-A values was notably weaker, suggesting a degree of specificity of the model in detecting depression-associated alterations (Spearman correlation c = 0.10, p = 0.021, 95% CI = (0.01, 0.18) for all values; Pearson correlation c = 0.47, p = 0.041, 95% CI = (−0.01, 0.77) for averaged prediction scores across discrete HADS-A values; ). The ability of the model to distinguish between individuals without depressive symptoms (HADS-D ≤ 7) and those displaying moderate signs of depression, although consistently better than expected by chance, improved substantially with a greater HADS-D threshold defining the risk group. Specifically, model ROC AUC values increased from 0.64 to 0.84 for individuals selected using HADS-D ≥ 11 ( n = 48) and HADS-D ≥ 14 ( n = 13), respectively ( c, ). For volunteers with severe signs of depression (HADS-D ≥ 15, n = 9), the accuracy of their differentiation from control individuals was similar to that achieved for individuals with clinical depression (ROC AUC = 0.89 and 0.92 for depressed volunteers and clinical depression, respectively; c, ). Previous studies have indicated a potential influence of antidepressant medication on blood plasma lipid composition. , , Given that undisclosed medication use could confound the results for volunteers with depressive symptoms, the nine volunteers with severe signs of depression were invited for a follow-up visit. Of the five individuals who responded, two were currently receiving treatment for depressive symptoms, while three were medication-naïve ( ). There were no observable differences in the model prediction score distributions between medicated and medication-naïve individuals ( d). Although based on a small sample size, this observation nonetheless indicates a lack of association between model prediction scores and medication status. Our results demonstrate a consistent and reliable association between blood plasma lipid levels and self-reported mental health symptoms within a cohort of volunteers representing the general urban population. Despite a significant overlap between self-reported depression and anxiety symptoms, assessed using HADS-D and HADS-A scores, the observed lipidome alterations showed a degree of specificity to manifestations of depression. This relationship was further supported by the congruence of the lipidome alterations found in volunteers and those found in the blood plasma of patients diagnosed with major depressive disorder.
Biochemically, lipids associated with the severity of depressive symptoms in the volunteer cohort were predominantly found in specific groups, namely ether phospholipids and lipids containing polyunsaturated fatty acids, both of which have been previously linked to depression. For example, a decrease in ether phospholipids in the blood has been noted in a family-based depression study and in an investigation of HADS-D associations including both healthy and depressed individuals. , Similarly, there is strong evidence for polyunsaturated fatty acid deficiency being related to depression. , , , , , Our results strongly suggest congruent lipidome alterations in the blood plasma of both psychiatric patients diagnosed with clinical depression and individuals with depressive symptoms within a general population cohort. Notably, more saturated, shorter-chained triglycerides were the main biochemical group to display a distinctly inconsistent alteration profile between volunteers and patients with depression. One possible explanation for this inconsistency could be related to differences in metabolic health, considering that this particular triglyceride signature has been shown to be predictive of insulin resistance, diabetes, and non-alcoholic fatty liver disease, and possibly even superior in this regard to standard lipid profiling. , Such differences between patients with depression and volunteers would be in line with the expected increased instances of antidepressant usage and related metabolic changes among patients with psychiatric conditions. Alternatively, it is intriguing to speculate that these triglyceride alterations might reflect the symptom severity distinction between functioning volunteers with high depression scores and individuals hospitalized for their depressive state. Lending support to this hypothesis, we have previously shown that the same triglyceride signature was associated with impaired treatment response in schizophrenia patients. The general agreement between alterations in the blood plasma lipidome observed in clinical depression patients and volunteers with higher depression scores suggests the potential of using a predictive model based on psychiatric patients' lipid level data as a screening tool for depression. Prior studies have demonstrated the feasibility of developing predictive models that could effectively distinguish patients diagnosed with major depressive disorder from healthy controls. , , Likewise, in our analysis, we successfully developed a predictive model that differentiated between patients with clinical depression and healthy individuals based on the abundance levels of specific blood plasma lipids (ROC AUC = 0.91, 95% subsampling interval = (0.84, 1)). Despite the small number of individuals used in the model training ( n = 32 and 36), its application to the lipidome data from the volunteer cohort revealed a significant positive correlation between the model's depression probability scores and HADS-D values. This outcome demonstrates both the robust association of the observed lipidome alterations with depressive states specifically and the possibility of generalizing lipid alterations from a clinical cohort to the general population. The relatively low correlation strength in the association analysis between model prediction scores and the HADS-D scale resulted from the inherent imbalance in the dataset's symptom severity distribution.
Most volunteers (81%) reported no signs of depression (HADS-D scores 0–7), while only 6.5% and 1.5% reported moderate or severe symptoms of depression, respectively. Therefore, applying the model to individuals within increasingly restricted HADS-D score brackets substantially enhanced model accuracy, ultimately distinguishing volunteers with the most severe depressive symptoms from healthy individuals with reliability comparable to that achieved for clinical depression patients (ROC AUC = 0.89–0.92). Although this result requires verification by further studies involving larger cohort sizes, it nonetheless suggests that our approach could lead to accurate detection of individuals afflicted with depressive conditions based on their blood lipid profiles. Our study has several limitations. One such limitation is the restricted range of blood plasma lipids that we were able to assess due to our reliance on a direct-infusion mass spectrometry protocol. Although this method facilitates the rapid screening of large sample cohorts, it compromises sensitivity, particularly for specific lipid classes like ceramides, which have been reported to be consistently associated with depression levels. Another limitation is the relatively small number of individuals diagnosed with clinical depression used for the predictive model training, as well as the low number of volunteers with high HADS-D scores available for model testing. While the congruence between the lipidome alterations we find in our study for clinically diagnosed depression patients and volunteers with elevated HADS-D scores partially mitigates this concern, since the likelihood of such congruence is low regardless of sample size, caution should nevertheless be exercised when generalizing the statistical model, owing to possible model misspecification as well as unmeasured confounding factors, such as medication use. One further limitation is the reliance of our model's performance measure in the volunteer cohort on self-reported HADS-D scores rather than a clinical assessment of depressive symptoms. The correlation between clinician-rated severity of depressive symptoms and HADS-D scores has been shown to be limited. Moreover, the high HADS-D group may encompass a diverse population in terms of diagnoses, potentially including not only individuals suffering from affective mental disorders, but also subjects exhibiting post-traumatic stress disorder (PTSD) symptoms or those afflicted with psychotic disorders. Of note, our model demonstrated a certain degree of specificity to the detection of individuals with elevated depression scores in particular, despite the well-documented association between the HADS-D and HADS-A scales. Nevertheless, it is difficult to presume the specificity of the observed lipid alterations to any one mental disorder without additional evidence. Likewise, due to the cross-sectional design of the study, only symptoms at baseline were collected, which precludes the evaluation of the lipid-based risk scores as potential prognostic markers. These latter two points would constitute the main objectives of future investigations. In conclusion, despite certain limitations, our study utilizing a high-throughput method for blood lipidome measurements has revealed a significant association between lipid abundances and self-reported depressive symptoms. We have presented a lipid-based model that shows promising reliability in identifying individuals from the general population with severe self-reported depressive symptoms.
Although the precise implications of these findings are currently challenging to predict, they undeniably highlight the potential utility of lipid-based panels in detecting individuals at heightened risk of psychiatric disorders. GK and PK—conceptualization, supervision, writing review & editing. AM, YZ, DA, DR, EG, AO, and VS—investigation (data collection), methodology. ES—methodology, validation, investigation (experiments), writing review & editing. AGol and AS—investigation (experiments). AGon, DP, IA—project administration, resources. AT—formal analysis, visualization, writing original draft, writing review & editing. AT and ES have accessed and verified the data; PK was responsible for the decision to submit the manuscript. All authors have read and approved the final version of the manuscript. Lipid data used in statistical model construction, including normalized log2 lipid abundances for patients with depression and controls, are available as supplementary tables (S1, S8). The authors declare no conflict of interest.
Application of Negative-Pressure Wound Therapy in Patients with Wound Complications after Flap Repair for Vulvar Cancer: A Retrospective Study
bf369f44-4a06-43fd-b539-3368f370f412
11939101
Surgery[mh]
Vulvar cancer is a relatively rare disease affecting 45,000 women worldwide each year and accounting for approximately 5% of all gynecologic malignancies. It usually occurs after menopause and increases in incidence with age. The incidence of vulvar cancer has shown an upward trend in southwest China in the past 10 years. Historically, surgeons would perform an en bloc resection of the vulvar tumor and complete bilateral inguinofemoral lymphadenectomy, but this approach was associated with significant morbidity and high complication rates. The current standard technique involves resection of the vulvar tumor and lymph node dissections through separate incisions. However, despite surgical advancements, postoperative wound complication (WC) rates remain high. The incidence and prevalence of WCs vary depending on the definition and assessment period used, the sample studied, the setting, and the country. The prevalence of WCs after vulvar surgery may be as high as 45.4%, with an average incidence of 10.4%. One study from Germany reported that 17.7% of patients developed a lymphocele, 18.1% of patients developed wound dehiscence, and 20.4% of patients developed chronic lymphedema. A study from Northeast India revealed that 6.25% of patients developed a lymphocyst, and 18.75% of patients developed wound necrosis. Surgery for vulvar cancer results in a significant soft tissue defect that requires repair with a flap; such procedures are characterized by a wide surgical area, large tissue defects, and high wound tension. A higher incidence of postsurgical WCs has been reported after flap repair (30.6%) than without flap repair (10.4%). Wound complications after flap repair are associated with high morbidity and have negative impacts on vulvar morphology, urinary function, and quality of life. , In addition, delayed wound healing leads to a delay in follow-up adjuvant therapy, which in turn leads to a poorer survival prognosis, longer hospitalization, and greater medical expenses. Since its introduction in 1997 by Argenta and Morykwas, , negative-pressure wound therapy (NPWT) has been widely used for complex wounds, having demonstrated efficacy for wound control and treatment. It has also been studied as an adjuvant to aid in wound healing in patients undergoing vulvectomy. , However, the application of NPWT in this context is complicated by the anatomic location of the vulva and anus, which poses a challenge for achieving an airtight seal, especially around the urinary catheter. In this study, the authors retrospectively analyzed the characteristics of WCs after flap repair in vulvar cancer and evaluated the efficacy of NPWT in wound healing. In this single-center study, the researchers retrospectively selected 17 inpatients with postoperative WCs after flap repair for primary vulvar cancer in the gynecologic oncology ward of a tertiary comprehensive hospital from January 2016 to December 2022. This study was conducted in accordance with the Declaration of Helsinki and approved by the Research Ethics Board (no. SYSKY-2022-131-01). Patients provided informed consent for the publication of clinical images. Eligibility Criteria Patients were included in this retrospective study if they were diagnosed with vulvar cancer, underwent vulvectomy and flap repair, and developed postoperative WCs. Patients with other types of tumors were excluded. Wound Assessment and Care Wounds were categorized into inguinal wounds and flap wounds according to the wound location.
Inguinal wounds were deep and adjacent to the femoral artery, so wound probing was gentle, and the surgeon avoided vertical probing with forceps. If the wound appeared tight and shiny or the skin edge was purplish red with blistering, the surgeon removed some of the stitches at the site of the swelling and squeezed out the accumulated fluid using gauze on both sides of the incision or along the incision from top to bottom. In cases of deep infection, the epidermal stitches were removed to fully open the wound. Application of NPWT and the “Sandwich” Adhesive Method Clinicians administered pain medication to patients because of the potential for procedural pain and evaluated patients’ pain using a visual analog scale. Nurses certified in wound, ostomy, and continence care applied NPWT (Disposable Negative Pressure Drainage Wound Protecting Dressing; Shandong Medilogy Co, Ltd) with a hydrophobic black foam sponge made of polyurethane ether (pore size of 400-500 μm); these products are similar to standard dressings used in other countries. , For infected wounds with opaque or thick exudate, the wound was irrigated with 0.9% sodium chloride. The wound was covered with a cut-to-fit foam sponge and overlaid with a semiocclusive transparent dressing. The edge of the transparent dressing extended at least 3 to 5 cm beyond the wound edge to ensure a watertight/airtight seal. A fenestrated evacuation tube was fixed in the foam sponge and connected to the wall central negative-pressure suction system for continuous NPWT at 100 to 150 mm Hg, depending on the wound size, exudate volume, and patient tolerance. A deflated appearance of the foam sponge with no accumulation of fluid under the film indicated that the NPWT was effective. The foam was changed every 2 to 3 days or if contaminated. Nurses in charge inspected the effectiveness and airtightness of NPWT every shift, and inpatients were also educated to monitor for leakage. If leakage occurred, the wound care nurse would be notified, and the dressing reinforced or changed. Due to the complex anatomy of the vulva, it was difficult to maintain a local seal when applying NPWT. Film adhesion required a two-step process. First, with the patient lying supine, the nursing assistant would open the skin folds between the vulva and inner thigh, and the nurse would apply the film over the wound. For patients with an indwelling urinary catheter, nurses used a “sandwich” adhesive method to preserve the normal function of the urinary catheter. Two pieces of transparent dressing were folded at 90°, and the vertical pieces were attached to each other with the urinary catheter in the middle (Figure A). Second, with the patient lying in a lateral position, the nurse would open the folds between the buttocks and cover the remaining wound with a transparent dressing. Because the edge of the perianal area is prone to leakage, nurses would use ostomy paste (Brava ostomy paste; Coloplast A/S) to fill the skin folds and seal around the perianal area; another piece of hydrocolloid dressing (DuoDerm Extra Thin; ConvaTec Limited) would be placed to improve adherence when the patient moved (Figure B). Figure illustrates NPWT application using the sandwich fixation method around a urinary catheter. Specific Positioning During NPWT Application Patients at risk of pressure injury development used an air suspension bed or a foam mattress; a bed support rack (Figure ) was routinely used to keep the lower body ventilated.
Inpatients were instructed to turn over with the assistance of nurses or caregivers, and nurses positioned patients in the supine position, 30° semirecumbent position, and lateral position alternately on a 2-hour turning schedule. To mitigate wound tension and the risk of leakage, nurses placed a soft pillow over the patient’s outer thighs when the patient was in a supine position to prevent excessive abduction of the lower limbs. When the patient was in the lateral position, it was essential to maintain bent legs, and nurses placed a pillow between the patient’s knees for further support and comfort. Considering the prolonged immobilization and hospitalization during the application of NPWT, patients were instructed to mobilize early. Those who had difficulty moving wore elastic stockings or used intermittent inflatable compression devices to prevent deep vein thrombosis. Low-molecular-weight heparin therapy was provided if necessary. Observation Indices The main indices were the healing rate (%) and the healing time (days). The researchers defined complete wound healing as complete epithelization of the wound. The healing rate was calculated as the number of healed cases divided by the total number of cases in the group, multiplied by 100%. The secondary outcome measured was the incidence of NPWT-related complications (%), including wound bleeding and pain (procedural/resting). The incidence of NPWT-related complications was calculated as the number of cases with NPWT-related complications divided by the total number of cases in the group, multiplied by 100%. Statistical Analysis Continuous variables are reported as mean ± SD (range). Categorical variables are presented as numbers and percentages. All analyses were performed using SPSS version 25 (IBM Corp). The statistical significance level for all tests was set at P < .05, two-tailed.
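For clarity, the two outcome definitions above reduce to simple proportions. The short Python sketch below is our illustration, not part of the study; it applies those definitions to the counts reported in the Results (16 of 17 patients healed, counting the one patient who halted treatment as unhealed, and 3 of 17 reported procedural pain).

```python
def rate_percent(cases_with_outcome: int, total_cases: int) -> float:
    """Outcome rate as defined in the study: cases / total x 100%."""
    return cases_with_outcome / total_cases * 100

# Counts reported in this study; among the 16 patients who completed
# treatment, all healed, so the rate among completers would be 100%.
healing_rate = rate_percent(16, 17)           # ~94.1%
complication_incidence = rate_percent(3, 17)  # ~17.6%, matching the 17.65% reported

print(f"healing rate = {healing_rate:.1f}%")
print(f"NPWT-related complication incidence = {complication_incidence:.1f}%")
```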
A total of 17 female inpatients met the inclusion criteria and were included in the study. Their mean age was 59.53 ± 8.99 years (range, 38–71 years), and all of them were married. Most of the patients had squamous cell carcinoma (82.35%), and the plurality was FIGO (International Federation of Gynecology and Obstetrics) stage II (35.29%), followed by stage IV (23.53%) and stage III (17.65%). Further, 11.76% of the patients had diabetes, and 23.53% had hypertension. Surgical and Perioperative Results Seven patients (41.18%) underwent extensive excision, and all patients underwent flap repair, with femoral and abdominal wall flaps in 52.94% and local flaps in 41.18% of patients. Moreover, 58.83% of patients underwent bilateral or unilateral inguinal lymph node dissection, and seven cases (41.18%) had recurred prior to surgery (Table ). Results of WCs for Vulvar Cancer The 17 patients in this study had a total of 25 wounds. The wounds ranged from 1 to 52 cm in length and from 0.5 to 9.0 cm in depth. The undermining/sinus of the wounds ranged from 4 to 22 cm. Seroma/lymphorrhea was the most common WC, present in nine cases (52.94%), followed by wound infection (six cases, 35.29%), fat liquefaction (four cases, 23.53%), wound dehiscence (three cases, 17.65%), and wound ischemia (one case, 5.88%; Table ). The main locations of flap wounds were the vulva (n = 11, 64.71%), thigh (n = 6, 35.29%), and pubic symphysis (n = 4, 23.53%; Table ). Among wounds located on the vulva, seroma/lymphorrhea was the most common WC (n = 6/11, 54.55%). Fat liquefaction was the most common WC among thigh wounds (n = 5/6, 83.33%) and pubic symphysis wounds (n = 2/4, 50.00%), and wound infection was the most frequent WC among wounds in the presacral area (n = 3/3, 100%; Table ). Wound Healing after Flap Repair for Vulvar Cancer One patient with severe flap ischemia halted treatment. The remaining 16 patients all healed within 17 to 65 days (mean, 43.50 ± 17.93 days). NPWT-Related Complications Three patients (17.65%) reported procedural pain during the application of NPWT (Table ).
With advancements in modern surgery, the prognosis for vulvar cancer has gradually improved. However, the incidence of postoperative WCs remains high. Proper wound management in vulvar cancer is crucial because severe WCs may delay therapy, worsen prognosis, and increase the risk of postoperative recurrence. Vulvar cancer is relatively rare in clinical practice, and conventional wound care may result in prolonged wound healing and additional pain. In this retrospective study, 17 patients with postoperative WCs subsequent to flap repair in vulvar cancer received NPWT. Among the 16 patients who completed treatment, all wounds healed. Three patients (17.65%) reported procedural pain during application of the NPWT. The current findings are consistent with the results of Xu et al and Quercia et al. However, in the present research, the mean wound healing time was 43.5 ± 17.93 days, which was longer than the 31.5 ± 11.1 days reported by Quercia et al. This longer healing time may be attributed to the poor systemic condition of the participants in this study, who had higher FIGO stages. A history of preoperative chemotherapy and radiotherapy could also have negatively impacted the regenerative capacity of the surrounding tissues. Overall, NPWT provides the advantages of enhancing flap survival, promoting wound healing, and shortening hospitalization in postoperative patients with vulvar cancer. , , This study provides more substantial evidence on the efficacy of NPWT and uniquely concentrates on the critical challenge of maintaining an airtight seal during NPWT treatment. The authors introduce a comprehensive management strategy, including the “sandwich” fixation method and positioning skills, to prevent leakage. Further, this study also provides additional insights into the characteristics of WCs based on anatomy, categorizing postvulvectomy wounds as either flap or inguinal, and provides protective suggestions for applying NPWT to inguinal wounds. In the present research, postoperative WCs of vulvar cancer with flap repair exhibited specific characteristics that depended on the wound location and anatomic features. In the groin region, a lymphocele/lymphocyst may easily occur as an abnormal collection of lymphatic fluid in the dead space created by surgical dissection. The inguinal wound would appear swollen, with pale yellow translucent fluid coming out upon incision probing. If the lymphatic fluid is not fully drained, it may lead to infection, edema, discomfort, and deep vein thrombosis. , The flap incision extends from the pubic symphysis to the presacral or sacrococcygeal area, encompassing the bilateral buttocks and thighs. Certain procedures require abdominal flaps to repair defects. Flap repair involves a wide resection area and a resultant large wound, leading to high wound tension that may be impacted by patient position and activity, creating challenges for primary healing.
Vulvar wounds are located inferior to the groin; lymph may track under the skin bridge and exit through the vulvar wound, and the flap is susceptible to dehiscence due to high wound tension. Wounds in the thigh area are prone to fat liquefaction as a result of the thick layer of fat and the large amount of tissue required for flap repair. Within the presacral region, wound infection can be a significant problem due to the moist, warm environment of the perineum, which is conducive to bacterial growth; this region is also prone to contamination by urine, feces, and vaginal secretions. For flap wounds, it is crucial to fully drain the exudate to prevent infection, which may result from the large tissue defect created during surgery. Negative-pressure wound therapy is an effective method for draining exudate and promoting wound healing. In cases in which the exudate is thick or opaque or a severe infection is present, clinicians should irrigate the wound with saline to help maintain debridement, reduce the bacterial load, and control infection. Vulvar and presacral wounds are under high tension and susceptible to contamination, and NPWT can effectively alleviate lateral tension, eliminate subcutaneous dead space, enhance blood perfusion, reduce edema and tension, and promote flap survival. Transparent dressings are permeable to water vapor and oxygen but impermeable to microorganisms, thus decreasing the risk of wound infection , and effectively reducing the frequency of dressing changes and related procedural pain. However, Narducci et al reported vestibular stenosis and partial necrosis of the musculocutaneous flap among older adult patients undergoing NPWT; careful consideration is necessary for older patients with frailty and systemic conditions. In addition to performing the “sandwich” adhesive method to reduce the risk of leakage when implementing NPWT on the flap wound, managing defecation is also important for healing. Improper management of urination and defecation can contaminate the wound and compromise the seal of the NPWT dressing. Patients should fast in the early postoperative period to delay defecation and, after resuming eating, avoid foods that can cause diarrhea. Some patients with a high risk of WCs required a prophylactic enterostomy/cystostomy. Other investigators have reported using a rectal pressure catheter with a silicone balloon to drain feces , or combining antimotility drugs with total parenteral nutrition to decrease defecation, thus reducing the risk of the incision being contaminated by fecal matter. Clinicians should choose the appropriate method according to the surgical approach and the patient’s risk of postoperative WCs. For inguinal wounds, the conventional approach involves inserting a subcutaneous drainage tube attached to a disposable drainage bottle. However, the skin and soft tissue in the groin region are irregular and uneven, making it difficult to apply well-distributed and effective pressure in the inguinal area; this may cause dead space formation leading to lymphorrhea/lymphocele. Negative-pressure wound therapy facilitates wound healing in the inguinal region by applying uniform pressure on the wound, continuously draining exudate, and decreasing edema. Because inguinal lymph node dissection requires the removal of the entire inguinal lymphatic fat pad and most of the subcutaneous fat, it is essential that NPWT be used with caution to avoid harming the femoral blood vessels.
The black polyurethane foam sponge used in this study has the disadvantage of foam retention when used over a long period, which could cause damage during removal, resulting in bleeding and pain. As a protective measure for the wound, it is suggested that the foam be changed every 2 to 3 days and that a nonadherent lipidocolloid dressing (hydrocolloid particles in a lipophilic substance; UrgoTul, Urgo Medical) be applied to wrap the foam to reduce the risk of damage during foam removal and relieve procedural pain. Riebe et al also reported using polypropylene mesh implantation in combination with NPWT to protect the exposed vessels. One of the concerns in the present study is that 35.29% of patients did not have tumor-free pathological resection margins, which contradicts the current consensus that NPWT is contraindicated in the presence of malignancy. However, because 41.18% of the patients in this study had preoperative recurrence, tumor-free surgical resection margins were difficult to achieve, and clinicians and patients were more concerned about wound healing to enable discharge. Further, there was no evidence of recurrence during the application of NPWT. Future studies should engage in long-term follow-up of patients to better investigate the safety of NPWT in wounds in the presence of malignancy. Narducci et al reported that the immediate application of NPWT after vulvectomy could promote postoperative wound healing, reduce wound odor, and enhance patient comfort. Recent clinical practice suggests that the early initiation of NPWT has the potential for better clinical benefits. Thus, the findings indicate that healthcare providers should apply NPWT as early as possible to prevent WCs in patients with vulvar cancer; confirming this recommendation should be a key component of future research. Limitations This study was conducted at a single hospital, which limits its external validity, and because vulvar cancer is rare, the sample size was small. In addition, the retrospective study design may be prone to confounding due to temporal changes, which might affect the results. The authors noticed that the definitions and assessment periods of WCs differed among previous studies, making it difficult to compare the current findings with other research. The authors propose the establishment of a standardized wound recording and reporting template to facilitate large, well-designed randomized trials. Wound complications are common following vulvectomy, particularly with flap repair and extensive groin surgery. Negative-pressure wound therapy is an effective strategy for managing postoperative WCs in patients with vulvar cancer because it reduces wound volume, swelling, and exudate and ultimately promotes wound healing, thereby improving the patient’s quality of life. Further prospective studies are needed to confirm these preliminary results.
Advances in pediatrics in 2023: choices in allergy, analgesia, cardiology, endocrinology, gastroenterology, genetics, global health, hematology, infectious diseases, neonatology, neurology, pulmonology
5ccf48b1-1732-4225-9536-57d4e26794d5
11562862
Internal Medicine[mh]
The most important papers from distinct specialties published in the Italian Journal of Pediatrics in the first half of 2023 have been included in this review. We have selected key information based on the articles that were most cited or accessed on our website. The aim is to provide an overview of the most influential published papers of the past year in the fields of allergy, analgesia, cardiology, endocrinology, gastroenterology, genetics, global health, infectious diseases, neonatology, neurology and pulmonology. The papers in our analysis covered a variety of novel insights into risk factors, mechanisms, diagnosis, treatment options and prevention. The advances most relevant to clinical practice are commented upon with a view to the future. Eosinophilic gastrointestinal disorders Eosinophilic gastrointestinal disorders (EGID) are a group of disorders characterized by pathological eosinophilic infiltration of the esophagus, stomach, small intestine or colon, leading to organ dysfunction and clinical symptoms. In recent years there has been an increase in reports, above all of Eosinophilic Esophagitis (EoE) . Votto et al. studied 60 patients with EGIDs. EGID diagnosis was made approximately 12 months after symptom onset, which was shorter than the delay observed in other studies . However, the diagnosis was more delayed in children with EoE who had failure to thrive and feeding problems than in children without growth and feeding problems. A prompt diagnosis is therefore crucial to prevent failure to thrive. Furthermore, they observed an increased frequency of coexisting allergic diseases, especially food allergy. An elimination diet is beneficial in most children with EoE. These elements may indicate that a mixed IgE-mediated/non-IgE-mediated mechanism is involved in the pathogenesis. An oral food challenge to the food in question would be necessary to reach the diagnosis. Mastocytosis Pediatric mastocytosis is a rare and heterogeneous group of disorders characterized by an abnormal clonal expansion of mast cells that accumulate in the skin (Cutaneous Mastocytosis) and/or, less frequently, in other organs or tissues (Systemic Mastocytosis). The release of mast cell mediators, including histamine and other vasoactive substances, is responsible for the clinical manifestations. Cutaneous Mastocytosis is defined by typical skin lesions with a positive Darier’s sign. Diagnosis of systemic mastocytosis is based on organ enlargements, elevated serum tryptase levels, cytoreduction and characteristic histopathological findings in biopsies of affected tissue . Children with systemic mastocytosis are at risk of severe reactions due to mediator release, mainly induced by allergens such as hymenoptera venom and foods , by non-IgE-mediated stimuli, or occurring spontaneously. Management is based on identifying triggers with IgE tests . It is aimed at preventing the release of mast cell mediators and controlling symptoms with second-generation anti-H1 antihistamines, systemic corticosteroids and organ-specific drugs. Bossi et al. highlighted that a child affected by systemic mastocytosis, with persistent rash, diarrhea, abdominal pain, palpitations, musculoskeletal symptoms and fatigue refractory to anti-H1 antihistamines and oral steroids, quickly became asymptomatic following administration of omalizumab, a monoclonal antibody against IgE. Symptoms recurred when omalizumab was suspended, and the child responded when omalizumab was restarted. No side effects of omalizumab were recorded.
Adverse reactions to ibuprofen or paracetamol Type 1 (or type “A”, Augmented) adverse reactions to drugs are dose-dependent, related to the pharmacologic mechanism and occur in normal subjects. Type 2 (or type “B”, Bizarre) reactions are not dose-dependent, are unrelated to the pharmacologic mechanism and occur in predisposed subjects; they include anaphylaxis and severe cutaneous reactions . In children, first-line treatment for mild-to-moderate pain and fever is either ibuprofen or paracetamol, which have similar safety and tolerability profiles . Marano et al. analyzed 351 patients who contacted the hospital’s pediatric poison control center (PPCC) for exposure to ibuprofen and paracetamol from January 1, 2018 to September 30, 2022, to assess the incidence of any adverse reactions.
Misuse or accidental ingestion was the most common reason for inappropriate oral use of paracetamol or ibuprofen, with a fifth of patients taking the drug for suicidal purposes. Most patients were not intoxicated, and hospitalization was necessary for 30.5% of children. Type 1 adverse reactions were recorded in 10.8% of patients taking paracetamol and in 10.1% of cases after ibuprofen. The most common adverse reactions to paracetamol were vomiting, hypertransaminasemia, coagulopathy and headache; those to ibuprofen were nausea, vomiting, abdominal pain, increased serum creatinine and dizziness. Pain in emergency department Pain is one of the most frequent reasons for referral to the pediatric emergency department, especially in younger children and those with special needs, a category in which undertreatment of pain (so-called “oligoanalgesia”) is very common. Oligoanalgesia is related to long-term negative behavioral and psychological consequences. Management of pain and anxiety is of fundamental importance, and good pain control could help the entire medical team in the evaluation and treatment of a child . Several studies have shown that the treatment of pain in children is very often inadequate and have highlighted the importance of adequate pain treatment for the immediate as well as the future well-being and neurological development of the patient . Bevacqua et al. report the current state of the art of pediatric sedation and analgesia in Italian emergency rooms and identify existing gaps that need to be addressed. The survey proposed a case vignette and questions addressing different domains, such as pain management, availability of medications, protocols and safety aspects, staff training, and availability of human resources regarding sedation and procedural analgesia. Eighteen Italian sites participated in the study, 66% of which were University Hospitals and/or Tertiary Care Centres. It was found that 27% of patients received inadequate sedation. In many emergency departments some drugs, such as nitrous oxide, are unavailable; intranasal fentanyl and topical anesthetics are not used at triage; the use of safety protocols and pre-procedural checklists is rare; and there is a lack of staff training and of space. Moreover, the availability of child life specialists and of hypnosis as a non-pharmacological practice of sedation and analgesia is insufficient. The study highlights that, although much progress has been made in recent years in the treatment of pain in the pediatric emergency department setting, there is still much work to be done because of the complexity of pediatric patients and, sometimes, the need for adequate instruments/medicines as well as training of health personnel. Pain in surgery, oncology and hematology Pain control is universally recognized as a human right, and the correct assessment of pain is now one of the standards for the accreditation of health institutions. Proper pain management can reduce the incidence of complications, reduce hospital stays, achieve faster discharges and decrease the use of hospital resources. On the contrary, inadequate pain management can lead to persistent or chronic pain, alterations in nociception and emotional and psychological complications; pain can have negative effects on the physical and mental condition of hospitalized patients, worsening quality of life and increasing costs . However, the assessment and especially the treatment of pain are still important health problems in hospitalized patients .
Marchetti et al. compared a one-day survey conducted in 2016, which analyzed the prevalence of pain, pain intensity and pain therapy and showed suboptimal pain management in the surgery and oncohematology departments, with the same survey conducted in 2020. They found a higher prevalence of moderate/severe pain in the 2020 survey compared with the previous 2016 survey, both during hospitalization and in the 24 h preceding the day of the survey, despite hospital training initiatives on pain therapy aimed at doctors and nurses. On the other hand, the daily prescription of pain therapy improved significantly, both for scheduled administration and for as-needed use. There were fewer children who were not prescribed any pain therapy compared with the 2016 survey. However, the quality of analgesic therapy in 2020 was low, also compared with 2016. Indeed, the therapy administered resulted in statistically significant undertreatment of pain and was unable to alleviate moderate/severe pain. In essence, many steps forward still need to be taken, not so much in assessment, which most of the time appears correct, but in the correct use of drugs, also in relation to the type of pain .
Intravenous immunoglobulin in Kawasaki disease
Kawasaki disease (KD), although typically a self-limited condition lasting an average of 12 days without therapy, is the main cause of acquired heart disease in Western countries, since patients may develop cardiovascular complications, mainly coronary artery aneurysms, with life-threatening sequelae such as coronary occlusion and sudden cardiac death. Treatment with intravenous immunoglobulin (IVIG) has dramatically changed the outcome of KD, owing to its effectiveness in preventing coronary artery abnormalities and decreasing the frequency of coronary artery aneurysm development. Timely diagnosis and treatment are critical for the clinical outcome but, despite a general consensus on IVIG as first-line treatment, the optimal timing, with or without adjunctive therapy, is still debated, and IVIG resistance remains a matter of concern because of its correlation with earlier therapy (within 4 days of disease onset) and with coronary artery aneurysm development, as shown in a large review and meta-analysis. Several studies have aimed to determine the optimal window for IVIG therapy and, despite some controversy, starting within 7 days of illness appears to be best.
Acute myocarditis
Pediatric myocarditis is a challenging inflammatory disease because of its wide spectrum of clinical signs and symptoms, its multiple etiologies, and complications and sequelae ranging from hemodynamic instability, ventricular dysfunction and dilated cardiomyopathy to life-threatening arrhythmias and sudden cardiac death. Despite improved understanding of its pathogenesis, and despite several studies and attempted meta-analyses, optimal treatment remains controversial and debated because of small sample sizes and the limited quality of the available studies. In addition to standard supportive care for heart failure and arrhythmias, current therapeutic strategies aim for etiologically oriented treatment. Anti-inflammatory and immune-response-modulating agents have been considered beneficial, in particular corticosteroids and IVIG, for their broad and overlapping effects. While no treatment has demonstrated a significant reduction in the risk of mortality, corticosteroids seem to produce significant effects on left ventricular ejection, as shown in a meta-analysis, even though treatment effects are difficult to ascertain, since ventricular function recovers fully in many patients.
Prevention of respiratory syncytial virus infection in infants with congenital heart disease
Respiratory syncytial virus bronchiolitis is the leading cause of hospitalization for infants and children under 2 years of age. Patients with hemodynamically significant congenital heart disease have a higher rate of hospitalization and a greater need for intensive care and ventilator support. Passive immunoprophylaxis with palivizumab has been shown to be effective against respiratory syncytial virus (RSV) infection, reducing RSV-related hospitalization rates, morbidity and mortality, and avoiding delays in interventional and surgical procedures in this category of patients. Although its cost-effectiveness is still debated, it may impact healthcare resource availability and utilization.
Diabetic ketoacidosis
Rates of diabetic ketoacidosis (DKA) at diagnosis vary from 11 to 80% depending on the region, even in developed countries. The risk of DKA in children after diagnosis of type 1 diabetes mellitus (T1D) is 1–10 per 100 person-years. DKA is usually provoked by intentional or inadvertent insulin omission, sometimes associated with intercurrent illness and increased insulin requirement.
It has been shown that in Croatia more than 50% of diabetic children were treated in the pediatric intensive care unit (PICU) because of DKA. Passanisi et al. showed that 51.5% of 103 children and adolescents with a new diagnosis of T1D had DKA, and 10 subjects with T1D onset needed to be treated in the PICU for severe clinical manifestations; among these, four children were younger than 5 years of age. Acute kidney injury was the most common complication of DKA, followed by cerebral oedema, papilledema and acute esophageal necrosis. The authors emphasize that public awareness campaigns should be promoted to facilitate the recognition of early symptoms of diabetes and to reduce the morbidity and mortality associated with DKA.
Vitamin D
Galeazzi et al. report that in children aged between 5 and 10 years, living in a coastal area of Central Italy (Ancona) and screened for celiac disease, blood values of 25-hydroxyvitamin D (25(OH)D) were sufficient in 36% of subjects, according to the classification proposed by a recent Italian Consensus, which considers values ≥ 30 ng/ml (≥ 75 nmol/L) sufficient; 21% had values classifiable as deficient (10–20 ng/ml) and 6% as severely deficient (< 10 ng/ml). It should be remembered that, in general, values < 12 ng/ml are considered to carry a risk of rickets, as confirmed by an extensive review of children under 4 years of age with radiological signs of rickets, of whom over 60% had values below this limit. The prevalence data found in the study are substantially in line with epidemiological studies carried out in various European countries regardless of latitude. This confirms that other factors, in addition to sun exposure, which differs across latitudes, can play a significant role; in particular, socioeconomic conditions, lifestyles and eating habits must be taken into account. Furthermore, the study reports a higher percentage of deficient values in subjects of non-Caucasian ethnicity and in obese subjects. The latter have been, and remain, the subject of various studies aimed at explaining this phenomenon. The causes are not yet fully clarified, and various factors are thought to contribute, such as reduced sun exposure in this group due to decreased outdoor activity, vitamin D sequestration in or uptake by adipose tissue, and impaired hepatic vitamin D synthesis in the fatty liver of severely obese subjects. The high prevalence of children with deficient vitamin D levels is stimulating a broad discussion on the modalities of a prophylaxis that is today recommended on an empirical basis, owing to the absence of strong scientific evidence, but which, ideally, should be personalized at the level of the individual or of at-risk subpopulations, taking into account specific individual needs and the type of pathology, especially extra-skeletal, that one wants to prevent.
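As a point of reference for the two unit scales quoted above, serum 25(OH)D concentrations expressed in ng/ml and in nmol/L are related through the molar mass of 25(OH)D (approximately 400.6 g/mol), which gives the familiar conversion factor of about 2.5:

$$c_{\mathrm{nmol/L}} = c_{\mathrm{ng/ml}} \times \frac{1000}{400.6} \approx 2.5 \times c_{\mathrm{ng/ml}}, \qquad 30\ \mathrm{ng/ml} \times 2.5 \approx 75\ \mathrm{nmol/L}$$

The same factor links the other cut-offs cited: 20 ng/ml corresponds to about 50 nmol/L and 10 ng/ml to about 25 nmol/L.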
Treatment and prevention of obesity
Obesity is today considered a chronic disease of primary public health interest, owing to its prevalence, which continues to increase in various populations, and to the known risk of complications such as cardiometabolic and psychosocial comorbidity and premature mortality. In 2023, a joint task force of the Italian Society of Pediatric Endocrinology and Diabetology, the Italian Society of Pediatrics and the Italian Society of Pediatric Surgery developed a consensus position statement on the treatment of obesity in children and adolescents. Lifestyle intervention is the first step in treatment. In children over the age of 12, pharmacotherapy is the second step and bariatric surgery the third, in selected cases. There are new developments in the medical treatment of obesity; in particular, new medicines have demonstrated efficacy and safety and have been approved for use in adolescents. The Food and Drug Administration has approved once-daily liraglutide, orlistat, and phentermine–topiramate for adolescents at least 12 years of age; only liraglutide is approved by the European Medicines Agency. On the other hand, the origin of obesity is multifactorial, and the various therapeutic approaches have proven not particularly effective, especially in the long term, partly because of the various barriers that can stand in the way of therapeutic success. For a long time, therefore, attempts have been made to develop prevention strategies, with a particular focus on pre-school age and the first school cycles. A recent review that considers only randomized controlled trials (RCTs) in the 5–11 age group suggests "that a range of activity interventions, and interventions that combine diet with activity, can have a modest beneficial effect on developing obesity". Very little research has considered the cost/benefit ratio of these interventions even if, as shown by the work of Guarino et al., the economic analysis would be positive in a high percentage of them. However, the great heterogeneity of data and methodological settings makes a proper comparison difficult. There is a need for global approaches (school, family, environment, society, etc.), for agreement on measurable outcomes, and for longitudinal data. It would also be useful to distinguish between true prevention of obesity (in subjects initially neither overweight nor obese) and prevention of the worsening of obesity.
Trisomy 3q syndrome
Serra et al. report on a female preterm newborn with a de novo 3q27.1-q29 duplication. The article provides interesting insights into the clinical and diagnostic management of a newborn carrying a genetic pathology and into the related individual and multidimensional follow-up strategies. The correlations between the genes involved in the duplication and the phenotypic manifestations are discussed, with a comparative review of previously described patients. The presence of risk factors related to advanced parental age, responsible for potential chromosomal and/or genomic anomalies, and to assisted reproduction techniques (ART), responsible for epigenomic defects, is emphasized. The whole diagnostic pathway allowing the diagnosis of this contiguous gene syndrome (non-invasive prenatal diagnosis, karyotype and a-CGH) is well outlined. In the clinical approach, 3q27.1-q29 duplication should be included in the differential diagnosis of hypergrowth syndromes.
Telemedicine for pediatric care
The use of telemedicine for pediatric care is increasing worldwide. According to the recent guidelines issued in 2020 by the Italian Ministry of Health, telemedicine has been recognized as an integral part of the services of the National Health Service. Its adoption had a significant impact during the COVID-19 pandemic and made it possible to continue and implement effective clinical care processes, improving the quality of health care, increasing access to treatments, diagnostic services and remote medical advice, and bringing a positive economic impact as well. Zuccotti et al. report the peculiarities of a regional operational center for telemedicine designed to ensure continuity of care in pediatrics. The services included routine pediatric hospital activities and innovative programs, such as early discharge, telecardiology, online supervised exercise training and preventive healthcare.
The proposed telemedicine platform can be a useful model for other experiences in this field.
Thiol disulfide balance and vitamin B12 deficiency
Several studies support a relationship between, on the one hand, increased use of cell phones and technological devices with high specific absorption rate (SAR) values, fast-food consumption, and smoking of cigarettes and other tobacco products and, on the other, an increase in oxidative stress levels (OSL). An increase in OSL has been linked to negative functional consequences on the central and peripheral nervous system. Demirtas et al. conducted a case-control observational study on adolescents with symptoms attributable to headache, evaluating oxidation markers and vitamin B12 levels, which were lower in affected subjects. The statistically significant results showed that in the group with vitamin B12 deficiency, native thiol levels were lower, while disulfide and homocysteine (HCY) levels were higher. Interestingly, unlike in previous studies, B12 deficiency did not correlate with significant differences in mean corpuscular volume (MCV) or with identifiable macrocytic anemia. Thus, central nervous system findings can be prominent in children with vitamin B12 deficiency who have normal hematological findings.
Bronchiolitis
Bronchiolitis is one of the most worrying causes of hospitalization for infants under two years of age.
Seasonally, bronchiolitis hospitalization mainly correlates with RSV and mostly affects young infants under 3 months of age, who are not eligible for the currently available prophylaxis with palivizumab. During the CoronaVIrus Disease – 2019 (COVID-19) pandemic, a significant decrease in respiratory infections, including bronchiolitis, was reported globally. In line with the decrease in respiratory tract infections, there was also a significant decrease in antibiotic prescriptions, which are too often inappropriate in children, owing to a lack of readily available tests or the wish to limit parental anxiety. In the following season, epidemiological reports described an earlier peak with an increase in the overall number of cases. They confirmed that most cases of bronchiolitis are caused by RSV and more frequently affect infants under three months of age. According to the "UPDATE − 2022 Italian guidelines on the management of bronchiolitis in infants", the diagnosis is made by anamnestic and clinical evaluation, and management is supportive. Since no specific etiological treatment is available, the authors suggest fluid and/or respiratory support, avoiding salbutamol, glucocorticosteroids and antibiotics. Oxygen therapy should be provided in case of respiratory distress and hypoxemia and may be discontinued when saturation levels are equal to or greater than 93–94%. Recent epidemiological reports highlight that oxygen support, as well as sub-intensive or even intensive care hospitalization, was more frequently required than in previous seasons.
Long COVID-19/post-COVID condition
In the COVID-19 era we have encountered a new disease, named "Long COVID", which may also affect children. To meet the diagnostic criteria, young people with a history of confirmed Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection should present with at least one persisting physical symptom for a minimum duration of 12 weeks, in the absence of an alternative diagnosis. Symptoms may vary and include fatigue, hemicrania, dizziness or disequilibrium, asthenia or weakness, chest pain, cough and respiratory distress on exertion.
Aggregatibacter actinomycetemcomitans infection
Aggregatibacter actinomycetemcomitans is a colonizing bacterium of the oral flora which may cause dental caries and periodontitis. In the literature it has been associated with severe extra-oral infections, including endocarditis and soft tissue abscesses and, more rarely, osteomyelitis, brain abscess and pneumonia. Long-term antibiotic treatment is suggested to achieve complete eradication. Nevertheless, the optimal duration of therapy is not known and depends on multiple variables, including the patient's clinical response and the extent of tissue involvement.
Acute otitis media and facial nerve palsy
Acute mastoiditis is the most frequent complication of acute otitis media, while meningitis, subperiosteal and brain abscesses, and facial nerve paralysis are more severe but rarely reported in childhood. Since the advent of widespread antibiotic use, the prognosis of acute otitis media complications has generally been good after appropriate therapy, even though residual dysfunction may occur.
Metagenomic next-generation sequencing for the detection of pathogens
Recently, metagenomic next-generation sequencing (mNGS) has started to be used in the detection of bacteria, to clarify the etiology of infection and guide anti-infective treatment.
Its benefits are the rapid and accurate identification of pathogens, including pathogens not commonly identifiable with conventional techniques. Evidence suggests that in most cases treatment may be changed on the basis of mNGS results, with faster clinical improvement.
Vitamin D level and neonatal respiratory distress syndrome
The results of studies on the association between vitamin D levels and respiratory distress are not consistent, with differences in the several maternal and fetal variables involved and in the cord blood 25(OH)D3 levels considered normal for gestational age. Liu W et al. address the potential relationship between cord blood 25(OH)D3 levels and the onset of neonatal respiratory distress syndrome (NRDS). This retrospective study was conducted on infants (gestational age 28–36 weeks) diagnosed with NRDS, with non-NRDS preterm infants as the control group. The results of a monofactor analysis showed a correlation between lower cord blood 25(OH)D3 levels and NRDS. In addition, a multivariate logistic regression analysis identified the following as independent risk factors for NRDS: 25(OH)D3 cord blood levels < 57.69 nmol/L (24 ng/ml), gestational age < 31 weeks, birth weight < 1.86 kg, Apgar score (1 min) < 7 and Apgar score (5 min) < 8. The authors conclude that the 25(OH)D3 level is an independent risk factor for NRDS in preterm infants.
Neurodevelopmental outcomes of very low birth weight preterms
Battajon et al. conducted a single tertiary center prospective cohort study enrolling all infants of gestational age (GA) < 30 weeks and birthweight < 1500 g admitted to the neonatal intensive care unit (NICU) over a period of three years. The preterm baby is at risk of neurodevelopmental disorders whose early identification allows targeted treatment. The study adopted up-to-date child development evaluation tools and a valid methodology of statistical analysis, providing a useful model for further research in this field. The evaluations at 2 and 4 years differed in the percentages of subjects with developmental abnormalities, the related risk factors and the areas of development involved. At two years, Bayley motor scale scores were worse in the lowest GA groups (p = 0.0282). No disability was present in 59.6%, a minor one in 31.1% and a major disability in 9.3%. Risk factors associated with disability were early neonatal sepsis (p = 0.0377), grade ≥ 3 intraventricular hemorrhage (p = 0.0245), bronchopulmonary dysplasia (BPD) (p = 0.0130), retinopathy of prematurity (ROP) (p = 0.0342), late neonatal sepsis (p = 0.0180), and length of hospitalization (p < 0.0001). Assessment at four years, using the WPPSI scale and Movement Assessment Battery for Children (mABC-2) scores, showed a major disability in 19.7%, a minor one in 47.2%, and no disability in 33.1%. Disability was associated only with BPD (p = 0.0441) and length of hospitalization (p = 0.0077). Progressively worse performance was noted with decreasing GA, while in multivariate analysis only the length of stay was predictive. At both ages there was no difference in the incidence of disabilities between the AGA and SGA groups (p = 0.2689).
The analysis of the joint distribution of disability at the ages of two and four years revealed that, of the children without disabilities at the age of two (62.1%), 58.4% had developed impairments by the age of four (p < 0.0001), with a significant correlation between processing speed and manual dexterity (Spearman's coefficient = 0.47, p < 0.0001) and between processing speed and aiming and grasping (Spearman's coefficient = 0.27, p < 0.0001). The study thus demonstrated a clear shift in the incidence of disabilities, since about half of the children completely free from disability at two years of age showed, at four years, a disability related to fine motor skills that affected processing speed. The authors suggest that attentional capacity may not be the primary cognitive problem, but rather a motor impairment and a difficulty with oculo-motor coordination. Children with oculo-motor impairment obtain lower cognitive scores, which do not reflect their true cognitive abilities. Therefore, for a proper assessment of school learning problems, it is necessary to conduct careful follow-up of all cognitive, motor and behavioral aspects as early as possible to detect the real problem. This allows intervention with appropriate neuropsychological techniques and thus improves school performance.
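For readers less familiar with the statistic reported above, Spearman's rank correlation coefficient measures the strength of a monotonic association between two variables; in the absence of tied ranks it reduces to

$$\rho = 1 - \frac{6\sum_{i=1}^{n} d_i^{2}}{n\,(n^{2}-1)}$$

where $d_i$ is the difference between the ranks assigned to subject $i$ on the two measures and $n$ is the number of subjects. Values of 0.47 and 0.27 therefore indicate, respectively, a moderate and a weak positive monotonic association; how ties were handled in the study is not detailed here, so the formula is given only as a general reminder.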
Psycho-emotional distress in relation to COVID-19 confinement
Since its appearance in Wuhan in mid-December 2019, COVID-19 has spread dramatically worldwide. The pandemic forced the population to face unprecedented changes, such as social isolation and the closure of schools and public areas, and significantly impacted the well-being of children and adolescents. Compared with adults, children with COVID-19 usually had a milder or moderate course of the disease, but they were more susceptible to its psychological effects, suggesting that the pediatric population is more vulnerable to mental health problems. García-Rodríguez et al. conducted a systematic review to assess the impact of the lockdown measures associated with the COVID-19 pandemic on children (from 2 to 12 years) and adolescents (from 13 to 18 years). The authors felt it essential to conduct this systematic literature review, since children and adolescents are a fragile group at a critical stage of physical and mental development. The reviewed studies focused on a population of children and adolescents evaluated during COVID-19 and the quarantine period. The main results can be summarized as follows. Lifestyle changes and psycho-emotional manifestations: school closures and social isolation increased the use of screens and technologies, leaving children and adolescents with weaker social skills and less socialization. Psycho-emotional manifestations according to age: in the adolescent population, higher levels of stress, depression and anxiety were found, while among children the most common symptoms were irritability, arguments with the rest of the family and rebellious behavior.
Effects of confinement from a cross-cultural approach: focusing on young people from three different countries (Spain, Italy and Portugal), the authors observed that Italian children had the lowest levels of anxiety and fewer nutritional, cognitive and sleep disorders than their Spanish or Portuguese peers; children from Portugal and from Spain reported more mood disturbances and more behavioral disturbances, respectively. Strategies for promoting resilience: the most common and successful strategies included spending a lot of time together in a limited space and improving communication between parents and children. Mental health at pediatric age is a source of constant concern for clinicians. Improving knowledge of the impact the pandemic has had on children will allow clinicians to identify young people who need specialized help and to intervene before irremediable repercussions or long-term effects occur.
Children with autism spectrum disorder and their care-givers
Autism Spectrum Disorder (ASD) refers to a group of pervasive neurodevelopmental disorders that involve moderately to severely disrupted functioning in areas such as social skills and socialization, expressive and receptive communication, and repetitive or stereotyped behaviors and interests. Caring for children with ASD is a stressful process that depends heavily on the abilities of caregivers. The stress associated with raising a sick or disabled child creates a burden of care, defined as the physical, psychological, social or economic reactions experienced by caregivers during the caregiving process. Rasoulpoor et al. designed a descriptive-analytical study to determine the relationship between care burden, coping styles and resilience among mothers of children with ASD. The authors assessed caregiving burden, coping styles and mothers' resilience by contacting 80 volunteer mothers of autistic children, who responded to a questionnaire consisting of three parts: (a) the Caregiver Burden Inventory, to measure the objective and subjective burden of care; (b) the Connor-Davidson Resilience Scale, to measure the ability to deal with pressure and threats; and (c) the Coping Strategies Questionnaire, to study how people cope with stress; demographic information was also collected. The questionnaires completed correctly and comprehensively by 69 mothers of children with ASD were analyzed. Mothers were recruited among the parents of patients at an Autism Center who met the predefined inclusion criteria (having a child aged between 3 and 15 years with a diagnosis of autism, and the mother's psychological and physical well-being). Data analysis revealed that the average age of the participating mothers was 38.4 ± 7 years. Of these women, 94.2% were married and 50.7% had only one child; 56.5% had received a university education, but only 30.4% were employed. The average age of the children was 3.3 ± 1 years. Cross-referencing the demographic information with the questionnaire results revealed a significant correlation between maternal age, number of children, maternal employment status, child's gender, and economic status. Mothers with more than one child, lower economic status, or a daughter with ASD exhibited an increased burden of care. Although the average levels of resilience and coping styles were moderate, the average burden of care of the mothers participating in the study was 95.5 ± 9.1, indicating a severe care load.
Additionally, an inverse relationship was observed between caregiving burden and the resilience of mothers of children affected by ASD. The findings of this study therefore indicate that mothers of children with autism carry an increased caregiving load and show only moderate adaptation capabilities in response to the stress they face, which can be physical, emotional, social and economic in nature.
ADHD in children and adolescents
Attention Deficit Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder. The main symptoms of ADHD, i.e., lack of attention and concentration, disorganization, difficulty completing tasks, being forgetful, and losing things, usually occur before the age of 12 years and interfere with daily life activities in more than one setting (home and school, or school and after-school time). ADHD can result in abnormal social interactions, increased risky behaviors, loss of jobs, and difficulties in school performance. Boys are more likely to manifest symptoms and to be diagnosed with ADHD. The Diagnostic and Statistical Manual of Mental Disorders IV distinguishes among inattentive (ADHD-I), hyperactive–impulsive (ADHD-H) and combined (ADHD-C) subtypes of ADHD; the diagnosis of ADHD-C requires the presence of symptoms across both the inattention and the hyperactivity–impulsivity domains. Salari et al. reported a higher prevalence of ADHD in children aged 3 to 12 years than in adolescents aged 12 to 18 years (7.6% versus 5.6%, respectively), with more cases among males than females, whereas previous research had pointed to a lower prevalence in young children (2–7%). In this systematic review, 1167 studies were analyzed and the prevalence of the different forms of ADHD was also measured. The results show that the prevalences of ADHD-I, ADHD-H and ADHD-C are nearly equal among children. The prevalence of ADHD was higher when the DSM-V diagnostic criteria were used than with other criteria. According to these findings, while ADHD appears less common in childhood than in adulthood, its prevalence is increasing.
Vascular rings
Vascular rings (VR) account for < 1% of all congenital cardiac defects. Abnormalities in the position and/or branching of the aortic arch can lead to a complete or incomplete VR that encircles and compresses the trachea, the bronchi and/or the oesophagus. In recent years, there has been an increase in the detection of VR due to the increased rate of fetal diagnosis. A right aortic arch with an aberrant left subclavian artery is the most common complete VR, followed by double aortic arch (DAA). Aberrant innominate artery (AIA) compression accounts for 3 to 20% of cases of incomplete VR, followed by left pulmonary artery sling. Respiratory symptoms associated with VR often occur early in life (at age 1–6 months). The severity of clinical manifestations depends on the encroachment on the trachea, bronchus or oesophagus by the abnormal vascular structures. Common symptoms range from apnoea and cyanosis to stridor, barky cough, wheezing, shortness of breath and dysphagia for solid food. A history of chronic cough, recurrent bronchopneumonia and fatigue during physical exertion is also frequently reported. Clinical presentation can vary, but disease severity does not appear to be strictly related to the degree of anatomical obstruction.
A higher prevalence of severe symptoms, such as reflex apnoea and stridor, has been reported in young children. Computed tomography (CT) with angiography is an important diagnostic tool, as it allows careful simultaneous assessment of the vascular abnormalities and of airway involvement. Flexible laryngotracheobronchoscopy, performed under light sedation with spontaneous breathing, allows dynamic evaluation of the tracheobronchial tree, revealing the localization and extension of airway malacia and an estimation of its severity. Spirometry is recommended in children over 6 years of age to document abnormalities in the shape of the flow-volume curve. The exercise challenge test is helpful to reproduce the exercise-induced symptoms frequently reported by patients. As for treatment, the presence of a VR is not in itself an indication for early surgical intervention. Corcione et al. have proposed a management algorithm for patients with suspected AIA, based on a literature review of 20 original articles on 2166 patients with various vascular anomalies, including 1092 patients with AIA. Rapid clinical improvement has been reported in AIA children treated with aortopexy, supporting the role of AIA-induced tracheal compression in the pathogenesis of recurrent/chronic dry cough. Gardella et al. studied a population of 28 AIA children, 16 of whom underwent surgical correction. All patients with a clinical presentation severe enough to justify surgical correction showed tracheal narrowing of 70% or greater at endoscopy, a finding not observed in any of the patients in the conservative management group. Porcaro et al. conducted a review of 14 articles whose endpoint was symptom management of several types of VR after treatment. Overall, the reviewed studies showed a positive trend toward resolution of patients' symptoms after surgical correction. Nevertheless, the differences in the percentage of symptom resolution likely reflect discrepancies among the cohorts in terms of timing of intervention, anatomical variants of the VR, and prevalence of associated lesions. Based on the available literature, the authors proposed an algorithm including the investigations required for diagnosis, the indications for surgical treatment, and the evaluations needed to monitor both treated and untreated patients during follow-up. Treatment is recommended in all symptomatic patients, particularly those with DAA or with a marked Kommerell diverticulum, in cases with anterior or posterior tracheal compression greater than 50% of the lumen, or in the presence of concomitant congenital heart disease necessitating surgical repair. Conservative treatment may indeed be reasonable in asymptomatic or mildly symptomatic cases.
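Purely as an illustration, the surgical-indication criteria summarized above can be read as a simple decision rule. The sketch below is a schematic restatement of those criteria and nothing more: it is not a validated clinical tool, and the function and parameter names are hypothetical.

def surgery_indicated(symptomatic: bool,
                      tracheal_compression_pct: float,
                      chd_needing_repair: bool) -> bool:
    """Schematic restatement of the reviewed indications for VR surgery.

    Per the criteria summarized above, treatment is recommended in all
    symptomatic patients (the indication being particularly strong with a
    double aortic arch or a marked Kommerell diverticulum), in cases of
    anterior or posterior tracheal compression > 50% of the lumen, or with
    concomitant congenital heart disease necessitating surgical repair.
    """
    return (symptomatic
            or tracheal_compression_pct > 50
            or chd_needing_repair)

# Example: an asymptomatic infant with 60% tracheal narrowing
# still meets the compression criterion.
print(surgery_indicated(False, 60.0, False))  # True

Note that the "particularly" qualifiers in the source do not change the Boolean outcome, which is why DAA and Kommerell diverticulum appear only in the docstring.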
Allergic bronchopulmonary aspergillosis affects asthmatics with poor symptom control and/or children with cystic fibrosis. It requires chest CT to assess for bronchiectasis. A prompt diagnosis of chronic pulmonary aspergillosis is difficult but necessary, since it may evolve into idiopathic pulmonary fibrosis. In patients with aspergillus-associated hypersensitivity pneumonitis, a reduced pH value in exhaled breath condensate, which is also observed in acute asthma, may be helpful in interpreting the specific inhalation challenge. Other conditions include acute community-acquired aspergillus pneumonia and aspergillus bronchitis. Bronchoscopy and severe pneumonia The use of fiberoptic bronchoscopy (FOB) and bronchoalveolar lavage (BAL) is increasingly prevalent in pediatric settings, as an aid in diagnosing numerous pulmonary diseases and as a therapeutic tool in specific conditions, particularly those affecting the small airways . The capability provided by endoscopy to identify the etiology of severe pneumonia at an early stage represents an undeniable advantage for the clinical management and prognosis of the disorder. Wu et al. analyzed 229 patients admitted with severe pneumonia to the Pediatric Intensive Care Unit (PICU) at Xinxiang Hospital, China, between November 2018 and December 2021. Patients were divided into two groups based on the necessity of invasive ventilation (invasive ventilation group and non-invasive ventilation group) and further stratified according to the timing of BAL (early BAL group: BAL within one day of admission; late BAL group: BAL two or more days after admission). For each patient, the following information was collected: demographic data, duration of symptoms prior to PICU admission, reason for PICU admission, APACHE II score (assessing illness severity in the PICU), SOFA score (evaluating organ failure), and overall and PICU length of stay. Additionally, data on patients’ clinical presentation, laboratory test results (especially microbiology of the BAL specimens by PCR and culture), and endoscopic score assessment were evaluated. Notably, the most frequently isolated etiological agent in the study was Mycoplasma pneumoniae (36.67%), followed by Staphylococcus aureus (26.11%), Haemophilus pneumoniae (23.33%), and Streptococcus spp. (16.67%). Viral identification was less frequent, with RSV the most prevalent (27.22%), followed by Influenza B virus (17.22%) and Influenza A virus (4.44%). A small portion of the pneumonias were due to fungal infections, with Candida albicans identified in 5.56% of cases. Comparison of endoscopic scores revealed a significantly higher score, indicating greater severity, in patients who required invasive ventilation. Moreover, a shorter PICU stay was observed in patients who underwent early BAL compared with those who had BAL two or more days after ICU admission. The study also demonstrated that patients in the invasive ventilation group had higher SOFA and APACHE II scores and a longer PICU stay. Among the patients examined, 9.61% succumbed to their illness, although no statistically significant differences in mortality rates were observed between the various groups and subgroups. Wu et al.
have strengthened the growing body of evidence on the role of FOB and BAL in diagnosing and prognostically stratifying patients with pneumopathy, both in acute forms, as seen in the study patients, and in the management of pediatric patients with prolonged/recurrent disease, such as recurrent pneumonia , and refractory disease, where, along with CT, they represent an indispensable tool for the modern pediatric pulmonologist .
Relevant publications in the field of pediatrics appeared in the first semester of the last year. Important findings have improved our understanding of the pathogenic mechanisms leading to disease development. Newly described biomarkers may be assessed to link laboratory results with clinical applications. In parallel, several recommendations have shed light on the management of diseases. Finally, interesting and promising results for developing personalized interventions have been reported. We believe the published papers offer new findings that may have a significant effect on healthcare practice.
Left ventricular ejection fraction is a determinant of cardiac performance after long-term conduction system pacing in patients with left bundle branch block?
Patients with heart failure (HF) and significantly reduced left ventricular ejection fraction (LVEF) have become more prevalent . Many of these patients experience higher rates of rehospitalization and mortality associated with end-stage HF. Biventricular pacing (BiVP) has been shown to reduce mortality and improve cardiac performance in patients with HF and left bundle branch block (LBBB) . However, individual outcomes vary substantially, with more than one-third of patients not responding to BiVP . BiVP has been reported to benefit patients with LBBB and severely reduced LVEF, but challenges such as prolonged procedure duration, acute perioperative heart failure, and complications are particularly significant in those with severe heart failure. Although many CRT-BiVP studies included patients with LVEF of 20–25%, data on the feasibility, safety, and clinical outcomes of BiVP in patients with LVEF < 25% and LBBB are very limited [ – ]. Conduction system pacing (CSP), which includes His bundle pacing (HBP) and left bundle branch pacing (LBBP), has been proven to be a feasible alternative to BiVP [ – ]. CSP represents a promising pacing modality for patients with HF and LBBB owing to its favorable response rates and procedural tolerance [ , , ]. However, the long-term benefits of CSP in patients with severe cardiac dysfunction remain unknown . This study aims to explore the feasibility, safety, and clinical outcomes of CSP in patients with LBBB and varying LVEF values. Study population Patients with LBBB and LVEF ≤ 35% who underwent CSP from January 2018 to December 2021 were consecutively enrolled at our center. LBBP was considered an alternative therapy when HBP failed. In cases where CSP was unsuccessful, the left ventricular (LV) lead was implanted using a conventional coronary venous approach. The hospital's ethics committee approved the study. Clinical outcomes were compared between patients with LVEF < 25% and those with LVEF between 25 and 35%. All patients received guideline-directed optimized medical therapy for at least three months prior to the procedure. Implant procedure The procedure utilized the Select Secure lead (3830–69 cm, Medtronic, Minneapolis, Minnesota, USA) . The unipolar-paced QRS configuration and pacing impedance were continuously monitored. His bundle electrograms and left bundle branch electrograms were recorded in a unipolar configuration using the Prucka Cardiolab system (GE Healthcare, Waukesha, WI, USA). Additionally, the unipolar configuration and pacing impedance were monitored alongside the left ventricular activation time (LVAT). Patient follow-up All patients were followed up regularly after the procedure at one month, three months, and every six months thereafter. Data collected included 12-lead electrocardiograms (ECG), six-minute walk distance (6MWD), echocardiography (assessing LVEF, left ventricular end-diastolic diameter (LVEDD), and left ventricular end-systolic volume (LVESV)), and pacemaker parameters. Adverse events such as thromboembolism, infection, stroke, rehospitalization due to HF, or death were recorded. Furthermore, pacemaker-related complications were documented, including a significant increase in capture threshold (defined as an increase of more than 2 V/0.4 ms after implantation or more than 5 V/0.4 ms at any visit), lead dislodgement, and cardiac perforation.
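The capture-threshold rule above lends itself to a direct check. The following is a minimal illustrative sketch (a hypothetical helper, not part of the study's device software; thresholds are in volts at a 0.4 ms pulse width):

```python
def significant_threshold_increase(implant_threshold_v, followup_thresholds_v):
    """Apply the study's definition of a significant capture-threshold rise:
    an increase of more than 2 V (at 0.4 ms) over the post-implant value,
    or an absolute threshold above 5 V (at 0.4 ms) at any follow-up visit."""
    worst = max(followup_thresholds_v)
    return (worst - implant_threshold_v) > 2.0 or worst > 5.0

# Example: implant threshold 1.5 V; follow-up visits at 1.6, 1.8, 2.2 V.
print(significant_threshold_increase(1.5, [1.6, 1.8, 2.2]))  # False
```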
Criteria and definition LBBB was defined according to the Strauss criteria . Response to CSP was defined as a decrease in LVESV of ≥ 15%, or an increase in LVEF of ≥ 5% accompanied by an increase in 6MWD of ≥ 25%, an improvement in NYHA class of ≥ 1, or NYHA class I at last follow-up. Super-response to CSP was defined as a reduction in LVESV of ≥ 30% or a ≥ 15% improvement in LVEF accompanied by clinical improvement after six months of follow-up [ – ]. An LVEF greater than 50% and an LVEDD less than 50 mm were considered complete reverse remodeling of the left ventricle . HBP was deemed acceptable when the correcting threshold was lower than 3.0 V/0.4 ms in patients exhibiting acceptable His–ventricular conduction. LBBP was defined as pacing with a stimulus-to-left ventricular activation time (S-LVAT) of less than 85 ms in lead V5, a sudden drop in LVAT greater than 10 ms, and the presence of Qr, qR, or rSR’ morphologies in lead V1 . Statistical analysis Data analysis was conducted using SPSS 26.0 (SPSS Inc., Chicago, USA). Continuous variables with a normal distribution were expressed as mean ± standard deviation and compared with t-tests. Continuous variables without a normal distribution were expressed as median (P25, P75) and compared with nonparametric tests. Categorical data were expressed as n (%) and compared with the chi-square test. Independent predictors of complete left ventricular reverse remodeling after CSP were identified through univariate and multivariate logistic regression analysis. A two-tailed P value of ≤ 0.05 was considered statistically significant.
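To make the composite outcome definitions above concrete, here is a minimal sketch (a hypothetical helper written for illustration; the grouping of the "or" clauses follows one plausible reading of the published definition):

```python
def classify_csp_outcome(base, last):
    """Classify CSP outcome from baseline and last-follow-up data.
    `base`/`last` are dicts with keys: lvef (%), lvesv (ml), lvedd (mm),
    six_mwd (m), nyha (1-4)."""
    lvesv_drop_pct = (base["lvesv"] - last["lvesv"]) / base["lvesv"] * 100
    lvef_gain = last["lvef"] - base["lvef"]  # absolute percentage points
    clinical_improvement = (
        (last["six_mwd"] - base["six_mwd"]) / base["six_mwd"] * 100 >= 25
        or base["nyha"] - last["nyha"] >= 1
        or last["nyha"] == 1
    )
    return {
        "response": lvesv_drop_pct >= 15
                    or (lvef_gain >= 5 and clinical_improvement),
        "super_response": lvesv_drop_pct >= 30
                          or (lvef_gain >= 15 and clinical_improvement),
        "complete_reverse_remodeling": last["lvef"] > 50 and last["lvedd"] < 50,
    }

# Example using this cohort's mean values for the LVEF < 25% group:
base = {"lvef": 20.5, "lvesv": 224.8, "lvedd": 69.6, "six_mwd": 137.8, "nyha": 4}
last = {"lvef": 37.8, "lvesv": 134.0, "lvedd": 59.4, "six_mwd": 324.1, "nyha": 2}
print(classify_csp_outcome(base, last))  # response and super-response True
```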
Baseline characteristics of the study population Eighty patients were enrolled in this study, and CSP was successfully deployed in 74 (92.50%), comprising 60 patients with HBP and 14 with LBBP. Among these, thirty-two patients (mean age 67.59 ± 9.05 years, 56.3% male) had LVEF < 25%, including 25 with HBP and 7 with LBBP (Fig. ). The average follow-up duration was 40.81 ± 11.93 months. No complications such as thrombosis, infection, lead dislodgement, perforation, or stroke were detected during the follow-up period. During the follow-up period, a total of 24 patients (24/74, 32.40%) were re-hospitalized. The rate of re-hospitalization for HF among patients with LVEF < 25% was significantly higher than that among patients with LVEF between 25 and 35% (46.90% vs. 21.40%, P = 0.021). Notably, no patients in the study died. There were no significant differences in sex, age, comorbidities, duration of HF, or QRS duration among all patients (all P > 0.05). Baseline measurements revealed statistically significant differences between the two groups in B-type natriuretic peptide (BNP) levels ( P < 0.001), LVEF ( P < 0.001), LVEDD ( P < 0.001), LVESV ( P < 0.001), and digoxin usage ( P = 0.006), as detailed in Table . Lead outcomes following conduction system pacing The threshold for correcting LBBB was 1.55 ± 0.90 V/0.4 ms during the procedure, with no significant increase during the follow-up period (1.55 ± 0.90 V/0.4 ms vs. 1.60 ± 0.89 V/0.4 ms, P = 0.544) (Supplementary Fig. 1A). Both the initial correcting threshold ( P = 0.003) and the final correcting threshold ( P = 0.013) were significantly lower in the LBBP group than in the HBP group. Impedance measurements showed no significant difference between one month post-operation and the last follow-up (415.64 ± 105.95 Ω vs. 412.76 ± 109.48 Ω, P = 0.648) (Supplementary Fig. 1B).
An increase in the correcting threshold greater than 1 V/0.4 ms was observed in 3 of 74 patients (4.05%; 2 in HBP and 1 in LBBP), and lead revision was performed in two of these cases because the correcting threshold exceeded 5 V/0.4 ms. No instances of lead dislodgement, breakage, or infection were reported during the follow-up period. Detailed lead outcomes are presented in Supplementary Table 1. Cardiac performance after CSP Significant improvements were observed in LVEF (25.15 ± 5.26% vs. 42.55 ± 11.84%, P < 0.001), LVEDD (65.81 ± 8.04 mm vs. 56.26 ± 9.63 mm, P < 0.001), LVESV (196.87 ± 58.03 ml vs. 103.02 ± 72.22 ml, P < 0.001), and QRS duration (165.78 ± 19.73 ms vs. 113.16 ± 18.64 ms, P < 0.001) after CSP (Table ). Additionally, NYHA class (3.50 ± 0.52 vs. 1.55 ± 0.62, P < 0.001) and 6MWD (140.41 ± 18.09 m vs. 373.51 ± 119.22 m, P < 0.001) also improved significantly. All details are presented in Table . In patients with LVEF < 25%, significant improvements were noted in LVEF (20.50 ± 2.75% vs. 37.78 ± 13.04%, P < 0.001), LVEDD (69.56 ± 6.77 mm vs. 59.41 ± 11.00 mm, P < 0.001), LVESV (224.81 ± 50.65 ml vs. 134.00 ± 83.35 ml, P < 0.001), and QRS duration (168.75 ± 21.52 ms vs. 117.81 ± 17.09 ms, P < 0.001) (Fig. ). Furthermore, NYHA class (3.59 ± 0.48 vs. 1.78 ± 0.66, P < 0.001) and 6MWD (137.81 ± 15.40 m vs. 324.06 ± 128.34 m, P < 0.001) also improved significantly. Clinical outcomes between patients with different LVEF The super-response ratio (62.50% vs. 78.60%, P = 0.129), response ratio (71.90% vs. 90.50%, P = 0.076), and rate of complete LV reverse remodeling (21.90% vs. 42.90%, P = 0.059) were similar in patients with LVEF < 25% and those with LVEF 25–35%. A total of 25 patients (33.80%) met the criteria for complete LV reverse remodeling, including 7 patients (21.90%) with LVEF < 25% and 18 patients (42.90%) with LVEF 25–35%, with no significant difference between the two groups ( P = 0.059). Although the magnitudes of improvement in LVEF (17.28 ± 13.26% vs. 17.50 ± 9.20%, P = 0.937), LVEDD (10.16 ± 9.46 mm vs. 9.10 ± 6.41 mm, P = 0.587), LVESV (94.38 ± 62.19 ml vs. 97.74 ± 46.54 ml, P = 0.825), and QRS duration (50.94 ± 26.91 ms vs. 53.90 ± 19.86 ms, P = 0.587) did not differ significantly between the two groups, the final 6MWD (324.06 ± 128.34 m vs. 411.19 ± 97.41 m, P = 0.002), NYHA class (1.78 ± 0.66 vs. 1.38 ± 0.54, P = 0.005), LVEF (37.78 ± 13.04% vs. 46.19 ± 9.47%, P = 0.003), LVESV (134.00 ± 83.35 ml vs. 70.89 ± 38.89 ml, P = 0.001), LVEDD (59.41 ± 11.00 mm vs. 53.86 ± 7.75 mm, P = 0.019), and re-hospitalization rate (46.90% vs. 21.40%, P = 0.021) in patients with LVEF < 25% were inferior to those in patients with LVEF 25–35% at follow-up (Table ). The super-response ratio (71.40% vs. 71.70%, P = 1.000), response ratio (85.70% vs. 81.70%, P = 1.000), and complete LV reverse remodeling ratio (21.40% vs. 36.70%, P = 0.440) were similar in LBBP and HBP. In terms of predictors of complete LV reverse remodeling, univariate logistic regression analysis indicated that digoxin ( P = 0.004), LVESV ( P = 0.001), and LVEDD ( P = 0.006) prior to CSP were associated with complete LV reverse remodeling.
Further multivariate logistic regression analysis revealed that LVESV prior to CSP (OR 0.977, 95% CI 0.961–0.994, P = 0.007) was an independent predictor of complete LV reverse remodeling in patients with LBBB and HF, with a cutoff value of 106.5 mL and an area under the curve (AUC) of 0.858, demonstrating a sensitivity of 94.10% and a specificity of 73.00%. The results are presented in Supplementary Table 2.
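The cutoff of 106.5 mL was presumably obtained from an ROC analysis. For readers unfamiliar with the procedure, the following generic sketch (synthetic data; not the study's actual SPSS analysis) fits a single-predictor logistic model and selects the threshold that maximizes Youden's J:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc

# Synthetic data shaped like this cohort: 25 remodeled vs. 49 non-remodeled.
rng = np.random.default_rng(0)
lvesv = np.concatenate([rng.normal(95, 20, 25), rng.normal(180, 45, 49)])
remodeled = np.concatenate([np.ones(25), np.zeros(49)])

X = lvesv.reshape(-1, 1)
prob = LogisticRegression().fit(X, remodeled).predict_proba(X)[:, 1]

fpr, tpr, thresholds = roc_curve(remodeled, prob)
best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
# Map the winning probability threshold back to an LVESV value for reporting.
cutoff = lvesv[np.argmin(np.abs(prob - thresholds[best]))]
print(f"AUC={auc(fpr, tpr):.3f}  sens={tpr[best]:.2f}  "
      f"spec={1 - fpr[best]:.2f}  LVESV cutoff ~ {cutoff:.1f} ml")
```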
This study is the first to demonstrate that improvements in LVEF, LVESV, and NYHA class after CSP were comparable in patients with LBBB and severely reduced LVEF (< 25%) and in those with LVEF between 25 and 35%. However, the final LVEF and LVESV were inferior in patients with LVEF < 25%. Safety of CSP in patients with severe cardiac dysfunction Patients with severely depressed LVEF present significant challenges during BiVP owing to severe symptoms, prolonged procedure duration, perioperative acute heart dysfunction, and operation-related complications. Several prognostic models, incorporating multiple risk factors, have been developed to predict response to CRT . The well-established EAARN (Ejection fraction, Age, Atrial fibrillation, Renal dysfunction, NYHA class IV) score indicates that an LVEF < 22% predicts mortality during BiVP . Although Rickard et al. reported no procedure-related deaths in patients with very low LVEF (less than 15%) during the BiVP procedure, a machine learning-based score for predicting all-cause mortality in CRT patients identified LVEF as a significant predictor of all-cause death .
CSP demonstrated a shorter procedure duration than BiVP, which is advantageous for improving procedural tolerance and reducing the risk of complications . In this study, complications such as thrombosis, infection, lead dislodgement, and perforation were not observed in patients with LVEF < 25%. The pacing thresholds remained stable during follow-up, with only two patients requiring lead revision post-operation. The high success rate (92.50%) and low thresholds may be attributed to distal HBP and proximal LBBP . Thus, the safety of CSP has been established in patients with significantly reduced ejection fraction. Feasibility of CSP in patients with severe cardiac dysfunction Several studies have demonstrated that CSP serves as an effective alternative to BiVP. Additionally, numerous investigations have confirmed that HBP is superior to BiVP in enhancing ventricular electrical synchronization; however, failure of the HBP procedure remains a significant concern . Abdelrahman et al. reported that only 4.2% of patients (14 out of 332) required lead replacement in BiVP . In contrast, approximately 20% of patients with BiVP were found to have leads in suboptimal positions, which could potentially impair cardiac performance . The LBBP-RESYNC trial indicated a more substantial improvement in LVEF with CSP than with BiVP in patients with non-ischemic cardiomyopathy and LBBB, along with a significant reduction in LVESV . Our previous study also reported a more pronounced enhancement in LVEF with CSP than with BiVP in patients with HFrEF and permanent AF . It is well established that the overall response rate to BiVP is only 70% . In our study, we observed notable improvements in LVEF (from 20.50 ± 2.75% to 37.78 ± 13.04%) and LVEDD (from 69.56 ± 6.77 mm to 59.41 ± 11.00 mm), even among patients with LVEF < 25%. A higher response rate for CSP than for BiVP (74% vs. 60%) was also identified in a multicenter retrospective study . Similarly, our research revealed a response ratio of 71.90% and a super-response ratio of 62.50% after CSP in patients with LVEF < 25%. These findings suggest that therapy may be tailored toward CSP or BiVP based on individual patient characteristics . Cardiac performance at different LVEF values after CSP The relationship between improvements in cardiac performance and baseline LVEF values following CRT remains to be thoroughly elucidated. The REVERSE study compared the effects of BiVP in patients with LVEF greater than 30% to those with LVEF of 30% or less, revealing no significant benefits of CRT that varied with LVEF . However, numerous studies have indicated that the severity of left ventricular (LV) dysfunction correlates inversely with the benefits derived from BiVP. Kutyifa et al. found that patients with a baseline LVEF of 25% or lower faced an increased risk of subsequent HF or death compared with those with LVEF between 26 and 30% or greater than 30% . Notably, the clinical benefits of BiVP were evident regardless of baseline LVEF in the MADIT-CRT sub-study . Additionally, Rickard et al. reported that patients with severe cardiac dysfunction, defined as LVEF of 15% or lower, exhibited a diminished response ratio . This study demonstrates that CSP can yield promising clinical outcomes in patients with severely reduced LVEF (< 25%) over long-term follow-up, with correction of LBBB significantly enhancing cardiac function. There were no statistically significant differences in CSP response (71.90% vs. 90.50%, P = 0.076) or super-response (62.50% vs. 78.60%, P = 0.129) among patients with varying LVEF values.
These findings indicate that patients with severely reduced LVEF can also benefit from CSP. However, the rate of hospitalization for HF in these patients was higher than in patients with LVEF 25–35%, and the ratio of complete LV reverse remodeling between patients with LVEF < 25% and LVEF 25–35% showed a trend toward significance (21.90% vs. 42.90%, P = 0.059). Furthermore, more favorable LVEF and LVESV levels, along with a lower rate of heart failure-related rehospitalization, were observed in patients with LVEF between 25 and 35% during follow-up. Collectively, these results suggest that timely CSP may improve the clinical prognosis of patients with LBBB and CRT indications. Numerous studies have demonstrated that LBBB is a robust predictor of CRT response and super-response in patients with heart failure. Specifically, correcting LBBB may resolve heart failure if the underlying cause is attributable to LBBB. However, the progression of heart failure is a critical factor influencing LV reverse remodeling . It is important to acknowledge that severe, irreversible cardiac remodeling, coupled with significantly reduced LVEF, can adversely affect cardiac outcomes following CRT, particularly if heart failure has persisted for an extended duration. While LVEF is not a definitive prognostic determinant of heart failure in patients with LBBB, it plays a vital role in determining ultimate cardiac performance. Limitations This study has several limitations, including a small sample size and its design as a single-center retrospective analysis. The findings require validation in larger, multicenter, randomized controlled studies. Medical therapy adjustments after CSP are important and could affect cardiac function; owing to the inherent limitations of real-world retrospective studies, we cannot entirely exclude the possibility that dynamic, often short-term postoperative medication adjustments contributed to the observed improvement in cardiac function. The large proportion of patients with non-ischemic cardiomyopathy in this study may have led to an overestimation of the benefit of CSP. Additionally, many patient-level factors could make conventional CRT a better choice for certain patient groups; thus, the study did not fully address whether CSP is preferable to conventional CRT in all cases. A comparison with conventional CRT, the current evidence-based standard for this patient population, would provide more meaningful insights.
CSP has demonstrated feasibility and safety in enhancing clinical outcomes for patients with severely reduced LVEF. A smaller LVESV prior to CSP may predict complete LV reverse remodeling. Furthermore, the final LVEF and LVESV outcomes were more favorable in patients with LVEF between 25 and 35% than in those with LVEF less than 25%. This suggests that timely initiation of CSP may be beneficial for patients with LBBB and HF.
Comparison of asymmetric and symmetric offset ablation in myopic astigmatism patients
There are controversies regarding the optimal centration for laser vision correction. Owing to imperfections in the optical system of the human eye, the line of sight does not coincide with the visual axis. Decentration of the laser ablation may result in higher-order aberrations (HOAs) and decreased visual quality. Therefore, clinicians must opt for the best centration strategy. The three most popular centration techniques applied by various devices are pupil centration, symmetric pupil offset centration, and asymmetric pupil offset centration . Pupil centration (PC) has been shown to yield acceptable results in myopic patients , but the instability of the pupil under changing light conditions disfavors this method . An advantage of this technique is the reduction of total ablation volume . The symmetric offset (SO), or coaxial light, method uses the corneal vertex (the first Purkinje image on the anterior corneal surface) as the center of ablation, and excellent results are anticipated because this centration is closer to the true visual axis – . The asymmetric pupil offset (AO) also takes the corneal vertex as the centration point, but the edges of the optical zone cover the pupil boundaries . Several studies have compared the visual and refractive outcomes of AO and SO in laser-assisted in situ keratomileusis (LASIK) and transepithelial photorefractive keratectomy (TransPRK) in hyperopic patients since the introduction of AO in 2012 . However, the effectiveness of AO has not been properly evaluated in patients with myopic astigmatism. There is only one study in the literature comparing the visual and ablation outcomes of AO and SO in LASIK for myopic and myopic-astigmatism patients . To the best of our knowledge, there is no previous report comparing the outcomes of AO and SO in patients with myopia and/or astigmatism treated with PRK. Hence, this prospective clinical trial was designed to report the safety and efficacy of the AO centration strategy and compare its outcomes with those of the SO ablation profile. This randomized, contralateral, single-blind study compared the visual, refractive, and topographic outcomes of asymmetric versus symmetric centration strategies in patients with myopia and/or astigmatism at Farabi Eye Hospital. The study adhered to the tenets of the Declaration of Helsinki, and the protocol was approved by the Medical Ethics Committee of Tehran University of Medical Sciences (IR.TUMS.FARABIH.REC.1400.067). Forty patients (14 males, 26 females) were recruited between May and August 2022. All included patients signed an informed consent. Inclusion criteria were age 18–45 years, preoperative manifest refraction spherical equivalent (MRSE) of less than −10.00 D, refractive cylinder of −6.00 D or less, and preoperative CDVA of 20/30 or better. Patients with anisometropia of more than 2.0 diopters of spherical or cylindrical refractive error, antimetropia, intraocular pressure (IOP) > 21 mmHg, a history of recent hard or soft contact lens wear, a history of systemic disease, or a history of ophthalmic surgery or trauma were excluded. Two surgeons (GL, PA) collaborated in this study and considered their patients for enrolment. At the first visit, patients underwent Sirius topography, aberrometry, pupillometry, and manifest and full cycloplegic refraction. The included patients were scheduled for correction of their refractive error by photorefractive keratectomy (PRK) using the SCHWIND AMARIS 1050RS.
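As one literal encoding of the eligibility rules above (a hypothetical screening helper for illustration; parameter names and the decimal-acuity convention are assumptions, not the authors' software):

```python
def eligible(age, mrse_d, cyl_d, cdva_decimal, iop_mmhg,
             aniso_sphere_d, aniso_cyl_d, antimetropia,
             recent_cl_wear, systemic_disease, prior_surgery_or_trauma):
    """Screen a candidate against the stated inclusion/exclusion criteria.
    CDVA is in decimal notation (20/30 is about 0.67)."""
    included = (18 <= age <= 45
                and abs(mrse_d) < 10.0        # MRSE of less than -10.00 D
                and abs(cyl_d) <= 6.0         # refractive cylinder <= 6.00 D
                and cdva_decimal >= 20 / 30)  # CDVA 20/30 or better
    excluded = (aniso_sphere_d > 2.0 or aniso_cyl_d > 2.0 or antimetropia
                or iop_mmhg > 21 or recent_cl_wear or systemic_disease
                or prior_surgery_or_trauma)
    return included and not excluded
```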
On the day of surgery, one eye of each patient was randomly assigned to the AO group, using random blocks, by the operator of the SCHWIND platform, while the fellow eye was allocated to the SO group. Patients were blinded to the allocation. Patients were examined on the first postoperative day and at one week, one month, and four months. Surgical technique The target refraction was individualized for all included patients. The planned optical zone was 7 mm; however, an OZ of less than 7 mm was used in three patients because of limitations imposed by corneal thickness. Both eyes of each patient were ablated with the same optical zone. The surgical procedure was as follows. After topical anesthesia with 0.5% tetracaine eye drops (Sina Darou, Tehran, Iran), the epithelium was removed by the alcohol debridement technique (alcohol 20% for 20 s in a central 9-mm zone). Laser ablation was conducted with the excimer laser (SCHWIND AMARIS 1050, SCHWIND eye-tech-solutions GmbH, Kleinostheim, Germany), which operates at a 1050 Hz pulse repetition rate. After laser application, mitomycin-C (MMC) 0.02% was applied in all cases (5 s for each diopter of spherical equivalent) , followed by irrigation with a balanced sterile solution. Post-operative data, including total ablation volume, maximum ablation depth, minimum ablation depth, central ablation depth, and total ablation zone, were extracted directly from the SCHWIND device. At each follow-up, uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), and manifest refraction were collected. At the 4-month visit, Sirius wavefront aberrometry and topography were performed. Corneal wavefront aberrations were measured over the central 6-mm corneal zone and reported as Zernike coefficients. Changes in postoperative aberrations relative to preoperative values were calculated and analyzed. We calculated angle lambda based on anterior chamber depth, central corneal thickness, and the pupil center offset using the method proposed by Gharieb et al. . As there is little difference between angle kappa (the angle between the pupillary and visual axes) and angle lambda (the angle between the pupillary axis and the line of sight), to avoid confusion we use the term angle kappa throughout this paper . We utilized preoperative and follow-up anterior tangential maps to calculate the effective optical zone (EOZ). The EOZ represents the corneal surface area marked by a change of 0.25 D on the anterior tangential curvature difference map .
For this purpose, we employed MATLAB software, version 2021b (The MathWorks). Preoperative and postoperative anterior tangential map CSV (comma-separated values) files generated by the Sirius topography system were exported to MATLAB to compute the tangential curvature difference map. The region with a difference of more than ± 0.25 diopters was regarded as the cutting boundary. All boundaries were rechecked manually and, in cases of multiple zero crossings, corrected manually by an expert. Images from 6 eyes were excluded because of the low quality of the reconstructed images. Subsequently, we created a mask from the region enclosed by this boundary and computed its area in square millimeters (mm²) (Fig. ). The EOZ diameter was calculated as the mean diameter measured at every 1 degree around the EOZ. Additionally, we determined the circularity of the EOZ, as a measure of roundness, using Equation 1:

Circularity = 4πA / P²   (Equation 1)

where A is the area and P is the perimeter of the region. Circularity depends on the shape of the region but remains the same for regions of the same shape, irrespective of size; this property implies that a small circle is as round as a large one. The maximum circularity value is 1, representing a perfect circle; it is less than 1 for any other shape. Statistical analysis Results are expressed as mean ± standard deviation. A sample size of 35 was calculated with power = 90% and significance level = 0.05 to detect a difference in higher-order aberrations based on the results of a previous study by Li et al. . The t-test was applied to compare mean preoperative values between the eyes of each patient. Postoperative values were compared using the paired t-test with Bonferroni correction of P values. Statistical significance was set at P < 0.05. All data analyses were performed using SPSS software (version 27, IBM Corp).
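The authors' implementation was in MATLAB; as a language-neutral illustration of the EOZ computation described above, here is a minimal numpy sketch (the uniform pixel spacing, the crude boundary-pixel perimeter estimate, and the equivalent-circle diameter are simplifying assumptions; the paper instead averaged diameters at 1-degree steps and corrected boundaries manually):

```python
import numpy as np

def eoz_metrics(pre_map, post_map, px_mm=0.05):
    """Estimate EOZ area, circularity, and diameter from pre/post anterior
    tangential curvature maps (2-D arrays in diopters on the same grid;
    px_mm is the pixel size in millimeters)."""
    diff = post_map - pre_map
    mask = np.abs(diff) > 0.25                 # curvature change beyond +/-0.25 D
    area_mm2 = mask.sum() * px_mm ** 2         # A
    # Crude perimeter: count mask pixels with at least one 4-neighbor outside.
    interior = (mask
                & np.roll(mask, 1, 0) & np.roll(mask, -1, 0)
                & np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
    perimeter_mm = (mask & ~interior).sum() * px_mm         # P
    circularity = 4 * np.pi * area_mm2 / perimeter_mm ** 2  # Equation 1
    diameter_mm = 2 * np.sqrt(area_mm2 / np.pi)             # equivalent circle
    return area_mm2, circularity, diameter_mm
```

Both maps must be sampled on the same registered grid; the manual boundary-correction step described in the paper is omitted here.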
The total ablation volume was not significantly different between the two groups (P = 0.59, paired t-test). There was also no significant difference in the maximum or central ablation depth between the two groups (all P > 0.05, Table ). The two groups differed significantly in minimum ablation depth, with a higher value in the SO group than in the AO group (0.03 ± 0.03 vs. 0.01 ± 0.01; P-value < 0.001). The median follow-up time was 114 days (interquartile range, 100–147). The mean postoperative MRSE was 0.08 ± 0.24 D in the SO group and 0.08 ± 0.32 D in the AO group (P-value = 0.81, paired t-test). There was no significant difference in the postoperative mean RMS of higher-order aberrations (HOAs) between the AO and SO groups (all P > 0.05, Table ). The EOZ diameter was 6.05 ± 0.64 mm in the SO group and 6.05 ± 0.68 mm in the AO group (P-value = 0.99, paired t-test). The EOZ center offset was 0.22 ± 0.14 mm in the SO group and 0.27 ± 0.15 mm in the AO group (P-value = 0.04, paired t-test). Figures and show the visual outcomes in the SO and AO groups, respectively. At four months postoperatively, 95% and 93% of eyes in the AO and SO groups, respectively, achieved a UDVA of 20/20 or better (P-value = 0.78, Chi-square). In both groups, 5% of eyes had worse postoperative CDVA than preoperative CDVA, while 5% and 3% of eyes in the AO and SO groups, respectively, had better postoperative CDVA (P-value = 0.55, Chi-square). Figures and show the refractive outcomes in the SO and AO groups, respectively. The proportion of eyes with a postoperative MRSE between +0.14 and +0.50 D was 18% in the AO group and 20% in the SO group (P-value = 0.77, Chi-square); the proportion with a postoperative MRSE between −0.14 and −0.50 D was 8% in both groups. Postoperative astigmatism of 0.5 D or less was achieved in 90% and 88% of eyes in the SO and AO groups, respectively (P-value = 0.72, Chi-square).

Discussion

Approaches to centering laser vision correction include using the pupil center, the corneal vertex, or an intermediate point that takes both into account – . The asymmetric offset ablation profile of the SCHWIND excimer laser is a combined centration approach that uses an oversized ablation zone centered on the corneal vertex and then fits the outer boundary of the ablation area, by a backward method with tilt removal, to an area centered on the pupil center. The alternative in the SCHWIND excimer laser is the symmetric offset method, which uses the corneal vertex as the centration point. Both symmetric offset (SO) and asymmetric offset (AO) centration strategies have been shown to yield acceptable visual and refractive outcomes in LASIK surgery for hyperopic patients , . In a retrospective study by Ortueta et al., the safety and efficacy of the AO centration strategy were demonstrated in TransPRK surgery for hyperopic and mixed hyperopic patients . Although angle kappa has been shown to be less significant in myopic eyes , evidence on the safety and efficacy of AO versus SO in myopic eyes is still needed. Moreover, angle kappa can be substantial in myopic eyes, as demonstrated by our findings: it ranged from 0.01 to 8.4 degrees, with a mean of 2.91 ± 2.1 degrees (Table ). Both asymmetric and symmetric offset strategies showed similar safety and efficacy (Fig. 2). Consistent with our report, a study by Ortueta et al. of 47 eyes in the SO group and 51 eyes in the AO group found that both centration techniques produced similar safety and efficacy profiles .
The postoperative accuracy of the manifest spherical equivalent was similar between the two groups (93% and 98% of eyes had a postoperative MRSE within ± 0.50 D in the AO and SO groups, respectively). Previous studies achieved a similar postoperative MRSE accuracy of 95% using a SCHWIND excimer laser . We could not demonstrate clinical implications of the theoretically suggested superiority of AO over SO in terms of higher-order aberrations (HOAs) in patients with myopia and/or astigmatism: our results showed no significant differences between the two groups in clinically important HOAs, including coma, spherical aberration, trefoil, and total RMS. The results of this study also revealed no significant difference in the maximum or central ablation depth between the SO and AO groups. Moreover, although the total ablation volume was higher in the AO group, the difference did not reach statistical significance. Interestingly, the minimum ablation depth differed significantly between the two groups. In contrast with our findings, Li et al. showed that the maximum depth was significantly higher in the SO group (44 eyes) than in the AO group (46 eyes) (90.6 ± 24.8 µm vs. 89.6 ± 23.7 µm, P = 0.005) . The absence of a significant difference in maximum and central ablation depth between the SO and AO groups in our results can be explained by the Munnerlyn formula , according to which the maximum ablation depth is proportional to the square of the optical zone diameter times the attempted refractive correction (a worked example is given below). Because there was no significant difference between the planned optical zones (POZ) of the two groups in this study, the slight difference introduced by the different centration strategies did not produce any significant difference in final ablation depth or volume. Additionally, in their theoretical description of the asymmetric offset technique, Arba-Mosquera et al. claimed that this method is identical to SO except that it reduces the ablation depth by removing the tilt component . The current study demonstrated that AO laser vision correction did not result in lower central or maximum ablation depth; the significant difference in minimum ablation depth instead reflects the peripheral area added in the AO method to cover the pupil diameter. Therefore, in myopic refractive correction, where the maximum ablation occurs on the central cornea and the peripheral optical zone is ablated minimally, significant differences between the visual and refractive outcomes of the two methods should not be expected. In hyperopic laser ablation, however, the results may differ, and further research is required to compare the outcomes of these two centration strategies. The optical zone (OZ) is the segment of the cornea that light traverses to project images onto the retina. For optimal visual results after refractive surgery, the laser treats an area encompassing both the POZ and a neighboring transition zone (TZ): the POZ produces the main refractive adjustment, while the TZ mitigates abrupt shifts in corneal shape. Ideally, the POZ would coincide exactly with the effective optical zone (EOZ), the corneal area actually reshaped during the procedure. In practice this is often not the case, and EOZs are reported to be smaller than POZs to a clinically relevant degree. The discrepancy between the POZ and EOZ can have a significant impact on the success of the surgery.
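For concreteness, the Munnerlyn approximation for myopic ablation referenced above can be stated as follows; this is the standard textbook relation, and the numerical example is illustrative rather than a value reported in this study. With maximum ablation depth t in µm, optical zone diameter S in mm, and attempted correction D in diopters,

$$t \approx \frac{S^{2} D}{3}$$

so a 7-mm optical zone and a 2.7 D myopic correction (close to this study's mean MRSE) imply t ≈ 49 × 2.7 / 3 ≈ 44 µm. Because S was the same in both groups, the centration strategy alone would not be expected to change this predicted depth.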
The reduction in the EOZ compared with the POZ has been reported to be around 20% and may be attributed to various factors – . These include changes in corneal biomechanics, as discussed by Damgaard , wound healing processes , alterations in corneal topography , and, for excimer-based procedures, decreased laser energy efficiency in the peripheral cornea , . These associations suggest that the reduction in EOZ is not merely a measurement variance but is potentially indicative of underlying corneal changes after the procedure. Understanding these relationships is critical for enhancing the precision of corneal procedures and for tailoring postoperative care to optimize patient outcomes. With the automation of the EOZ determination process, it is pertinent to reflect on the methods previous research has used to define the EOZ. Historically, studies used the anterior tangential difference map together with manual mouse tracking across 12 half-meridians (spaced every 30°) centered on the corneal vertex; by manually setting a point on each half-meridian, the distance between opposite points was calculated, and the mean across the six resulting meridians provided the EOZ diameter . In a different approach, one study introduced a region-growing algorithm to define the EOZ . Another approach presented an automatic calculation of the EOZ from the tangential curvature map: the EOZ area is counted in image pixels, specifically along the boundary between two colors, converted to square millimeters, and the diameter is then derived from the area (for a circular region, d = 2√(A/π)); a minimal sketch of such a pixel-based computation is given at the end of this section. In our series, we found no significant difference between the EOZs of the AO and SO groups, nor between the postoperative higher-order aberrations of the two groups. We also calculated the circularity index of the EOZ; the similarity between the circularity indices of the two ablation profiles may explain the similarity of their higher-order aberrations. In contrast with our findings, Li et al. found that the EOZ diameter was significantly larger in the SO group than in the AO group (5.01 ± 0.22 mm vs. 4.96 ± 0.15 mm; P-value = 0.01) . They also found that coma and trefoil were higher in AO than in SO. This discrepancy could be explained by the fact that in our series all patients had similar optical zones (Table ); additionally, Li et al. reported only the diameter of a manually calculated EOZ, which could be imprecise . As Arba-Mosquera et al. explained, the main difference between the AO and SO centration strategies lies in the peripheral ablation zone, which is ablated with respect to the pupil boundaries . In patients with myopic astigmatism, however, these two centration strategies did not show a significant difference, and optical zone adjustment appears to be the key variable. Further comparative studies are required to evaluate the results of AO and SO in patients with hyperopia and mixed astigmatism.
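To make the pixel-based EOZ computation discussed above concrete, the following minimal MATLAB sketch (requiring the Image Processing Toolbox) derives the EOZ area, an equivalent-circle diameter, and the circularity of Equation 1 from exported curvature maps. The file names and grid spacing are hypothetical, and the equivalent-circle diameter stands in for the per-degree mean diameter computed in this study; this is an illustrative sketch, not the authors' exact script.

% Minimal MATLAB sketch of a pixel-based EOZ computation (illustrative only)
pre  = readmatrix('pre_tangential.csv');     % preoperative anterior tangential map (D)
post = readmatrix('post_tangential.csv');    % postoperative map on the same grid
diffMap = post - pre;                        % tangential curvature difference map
mask = abs(diffMap) > 0.25;                  % pixels changed by more than +/-0.25 D
mask = imfill(bwareafilt(mask, 1), 'holes'); % keep the largest region and fill holes
px = 0.05;                                   % assumed grid spacing, mm per pixel
s  = regionprops(mask, 'Area', 'Perimeter');
A  = s.Area * px^2;                          % EOZ area in mm^2
P  = s.Perimeter * px;                       % boundary perimeter in mm
circularity = 4*pi*A / P^2;                  % Equation 1; equals 1 for a perfect circle
eozDiameter = 2*sqrt(A/pi);                  % equivalent-circle diameter in mm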
A strength of the current study is that interindividual confounding factors, such as age and sex, were minimized by the contralateral-eye allocation. However, the study had several limitations. First, the potential confounding effects of epithelial changes were not investigated. Second, subjective symptoms were not evaluated in this contralateral-eye study, and we recommend that future studies address this aspect in patients with different centration profiles of ablation. Third, refractive targets were set individually based on patient age, which introduced variability; to address this, one eye of each patient was randomly assigned to each group, and pairwise analysis was performed. Finally, the follow-up period was limited to 4 months, highlighting the need for future studies with longer follow-up durations. In conclusion, both symmetric and asymmetric offset centration strategies provide safe and effective refractive correction in patients with myopic astigmatism. We found no significant difference in postoperative total ablation volume between the two groups, and higher-order aberrations, visual acuity, and refraction were similar over the mean 4-month follow-up period.